Step 1: Set Up a Python Virtual Environment on Win7
1. Install Python 3.8.10
- Download page: https://www.python.org/downloads/release/python-3810/
- Choose the Windows x86-64 executable installer, and make sure to check Add Python 3.8 to PATH during installation.
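A quick way to confirm the installation and the PATH entry is to open a new cmd window and check the reported versions (a minimal check, assuming the default installer options):

```bash
# Should print Python 3.8.10 if the PATH entry was added correctly
python --version
# pip ships with the installer; this confirms it is on PATH as well
python -m pip --version
```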
2. Create and activate the virtual environment
- Open a cmd prompt and run the following commands to create the project folder and the virtual environment:
```bash
# Create the project root directory
mkdir fruit_ncnn_project && cd fruit_ncnn_project
# Create the virtual environment (venv_fruit is the environment name)
python -m venv venv_fruit
```

- Activate the virtual environment:
```bash
# Win7 cmd command
venv_fruit\Scripts\activate.bat
```

After activation the prompt is prefixed with (venv_fruit), which indicates you are now inside the isolated environment.
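To double-check that the isolated interpreter is the one being picked up, you can list the resolved Python executables; the venv copy should appear first (a minimal check, assuming the default venv layout):

```bash
# The first entry should end in \fruit_ncnn_project\venv_fruit\Scripts\python.exe
where python
```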
3. Install Win7-compatible dependencies
With the virtual environment activated, run the following commands to install the pinned dependency versions and avoid compatibility problems:
```bash
# Upgrade pip to a compatible version
python -m pip install --upgrade pip==21.3.1
# Install the CPU builds of PyTorch + TorchVision (the latest versions that still support Win7)
pip install torch==1.12.1+cpu torchvision==0.13.1+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
# Install the remaining required libraries
pip install pillow==9.5.0 numpy==1.23.5 opencv-python==4.5.5.64
```
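After the installs finish, a one-line import check catches broken wheels early; the versions printed should match the pinned ones above:

```bash
# Prints the installed versions; an ImportError here means the install needs to be redone
python -c "import torch, torchvision, cv2, numpy, PIL; print(torch.__version__, torchvision.__version__, cv2.__version__, numpy.__version__)"
```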
4. Deactivate the virtual environment (run this once training is finished)

```bash
deactivate
```

Step 2: Train the Model and Generate the .pth File
Note: before running, place the Fruits-360 dataset under the ./datasets directory and make sure the Training and Test subfolders exist. Everything below is run inside the activated virtual environment.
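If you want to verify the dataset layout before launching training, a short check like the one below will do; this helper script (e.g. check_dataset.py) is illustrative and not part of the original tutorial:

```python
import os

DATASET_ROOT = "./datasets"  # same location assumed by fruit_train.py

for split in ("Training", "Test"):
    split_dir = os.path.join(DATASET_ROOT, split)
    if not os.path.isdir(split_dir):
        raise SystemExit(f"Missing folder: {split_dir} - check the dataset location")
    # each immediate subfolder is one fruit class
    classes = [d for d in os.listdir(split_dir)
               if os.path.isdir(os.path.join(split_dir, d))]
    print(f"{split}: {len(classes)} class folders found")
```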
Training code (fruit_train.py)
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms, models
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# ==================== Configuration ====================
TRAIN_DIR = "./datasets/Training"          # training split of the dataset
TEST_DIR = "./datasets/Test"               # test split of the dataset
MODEL_SAVE_PATH = "fruit_mobilenetv2.pth"  # where the trained model is saved
NUM_EPOCHS = 15                            # number of training epochs
BATCH_SIZE = 32                            # batch size
# ========================================================

# 1. Data preprocessing (matches the Fruits-360 image size)
transform = transforms.Compose([
    transforms.Resize((100, 100)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

# 2. Load the datasets (num_workers must be 0 on Win7)
train_dataset = ImageFolder(TRAIN_DIR, transform=transform)
test_dataset = ImageFolder(TEST_DIR, transform=transform)
num_classes = len(train_dataset.classes)  # adapts automatically to the 208 classes

train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=0)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=0)

# 3. Load pretrained MobileNetV2 and replace the classifier head (transfer learning)
model = models.mobilenet_v2(pretrained=True)
model.classifier[1] = nn.Linear(model.last_channel, num_classes)
model = model.to("cpu")  # force CPU training

# 4. Training setup
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# 5. Training + validation loop
print(f"Starting training of the {num_classes}-class fruit classifier...")
for epoch in range(NUM_EPOCHS):
    # Training phase
    model.train()
    train_loss = 0.0
    for imgs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(imgs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * imgs.size(0)

    # Validation phase
    model.eval()
    test_acc = 0.0
    with torch.no_grad():
        for imgs, labels in test_loader:
            outputs = model(imgs)
            _, preds = torch.max(outputs, 1)
            test_acc += torch.sum(preds == labels.data)

    # Log progress
    avg_loss = train_loss / len(train_dataset)
    avg_acc = test_acc.double() / len(test_dataset)
    print(f"Epoch [{epoch+1}/{NUM_EPOCHS}] | Loss: {avg_loss:.4f} | Test Acc: {avg_acc:.4f}")

# 6. Save the trained model
torch.save(model.state_dict(), MODEL_SAVE_PATH)
print(f"\nTraining finished, model saved to {MODEL_SAVE_PATH}")
```
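Once the .pth file exists, a quick way to sanity-check it is to reload the weights and classify a single image with the same preprocessing used during training. The sketch below is an illustrative assumption, not part of the original tutorial; the script name (fruit_predict.py) and the sample image path are placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from PIL import Image

MODEL_PATH = "fruit_mobilenetv2.pth"
TRAIN_DIR = "./datasets/Training"  # only used to recover the class names
IMAGE_PATH = "./datasets/Test/Apple Braeburn/3_100.jpg"  # illustrative sample image

# Recover the class list in the same order ImageFolder used during training
classes = ImageFolder(TRAIN_DIR).classes

# Rebuild the architecture and load the trained weights (CPU only, as in training)
model = models.mobilenet_v2(pretrained=False)
model.classifier[1] = nn.Linear(model.last_channel, len(classes))
model.load_state_dict(torch.load(MODEL_PATH, map_location="cpu"))
model.eval()

# Same preprocessing as fruit_train.py
transform = transforms.Compose([
    transforms.Resize((100, 100)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

img = transform(Image.open(IMAGE_PATH).convert("RGB")).unsqueeze(0)
with torch.no_grad():
    pred = model(img).argmax(dim=1).item()
print(f"Predicted class: {classes[pred]}")
```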
阿雪's Tech Perspective

Amid the wave of technological progress, we should take an active part in sharing technology. Rather than being content as beneficiaries, we should step up as contributors. Whether it is sharing code, writing technical blog posts, or helping maintain and improve open-source projects, every small act can carry enormous power to push technology forward. 东方仙盟 (the Eastern FairyAlliance) is a place where that energy comes together; there we join hands to explore silicon-based life and contribute to the advance of technology.