Transfer learning with a PyTorch model on Ubuntu typically involves the following steps:
Environment preparation: install PyTorch and torchvision, for example with pip install torch torchvision (use the install command from the official PyTorch website if you need a specific CUDA version).
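To verify the installation, you can check the library versions and GPU availability from Python (a minimal check; it assumes torch and torchvision are already installed):
import torch
import torchvision
print(torch.__version__)           # installed PyTorch version
print(torchvision.__version__)     # installed torchvision version
print(torch.cuda.is_available())   # True if a CUDA-capable GPU can be used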
Obtain a pretrained model: torchvision ships common architectures with ImageNet weights, for example ResNet-18:
import torch
import torchvision.models as models
model = models.resnet18(pretrained=True)
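Note that in torchvision 0.13 and later the pretrained argument is deprecated in favor of weights; the equivalent call is:
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)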
Prepare the dataset: organize your images (for example in the ImageFolder layout, one subdirectory per class) and use torchvision.transforms to preprocess them.
Modify the model structure: replace the final fully connected layer so its output size matches the number of classes in your task.
model.fc = torch.nn.Linear(model.fc.in_features, 10)
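A common variant is to freeze the pretrained backbone and train only the new classification head, which helps when the new dataset is small. A minimal sketch (the 10-class head mirrors the line above; whether to freeze depends on your data):
# Freeze all pretrained parameters before replacing the head
for param in model.parameters():
    param.requires_grad = False

# The newly created layer has requires_grad=True by default
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Pass only the trainable parameters to the optimizer
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)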
Define the loss function and optimizer: for a classification task, CrossEntropyLoss paired with SGD or Adam is a common choice.
Train the model: loop over the training data for several epochs, running the forward pass, computing the loss, backpropagating, and stepping the optimizer.
Evaluate the model: switch to eval mode and measure metrics such as accuracy on a held-out validation set.
Save and load the model: use torch.save(model.state_dict(), ...) to save the weights and model.load_state_dict(...) to restore them.
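If you also want to resume training later, a common pattern is to save the optimizer state and current epoch together with the weights (a sketch; the file name checkpoint.pth is arbitrary):
# Save a training checkpoint
checkpoint = {
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}
torch.save(checkpoint, 'checkpoint.pth')

# Restore it later
checkpoint = torch.load('checkpoint.pth', map_location=device)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
start_epoch = checkpoint['epoch'] + 1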
Below is a simple end-to-end transfer learning example:
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
# Data preprocessing: resize, crop, and normalize with ImageNet statistics
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Load the datasets (ImageFolder expects one subdirectory per class)
train_dataset = datasets.ImageFolder('path_to_train_dataset', transform=transform)
val_dataset = datasets.ImageFolder('path_to_val_dataset', transform=transform)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)
# Load the pretrained model
model = models.resnet18(pretrained=True)
# Replace the final fully connected layer to match the new task
num_classes = 10  # set this to the number of classes in your dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)
# Move the model to the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
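# (Optional) When fine-tuning, the learning rate is often decayed over time.
# StepLR, for example, multiplies it by gamma every step_size epochs (values below are illustrative);
# if you enable it, call scheduler.step() once at the end of each epoch.
# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)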
# Train the model
num_epochs = 10  # number of training epochs; adjust to your task
for epoch in range(num_epochs):
    model.train()
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        # Forward pass
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Evaluate the model on the validation set
    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for inputs, labels in val_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            # Compute accuracy on the validation set (add other metrics as needed)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print(f'Epoch {epoch + 1}/{num_epochs}, validation accuracy: {correct / total:.4f}')
# Save the trained weights
torch.save(model.state_dict(), 'model.pth')
# Load the weights later (the model must be constructed with the same architecture first)
model.load_state_dict(torch.load('model.pth', map_location=device))
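After training, the saved weights can be reused for inference on a single image. A minimal sketch (the image path is a placeholder; it reuses the transform, model, and device defined above):
from PIL import Image

model.eval()
image = Image.open('path_to_image.jpg').convert('RGB')    # placeholder path
input_tensor = transform(image).unsqueeze(0).to(device)   # add a batch dimension
with torch.no_grad():
    output = model(input_tensor)
    predicted_class = output.argmax(dim=1).item()
print(predicted_class)  # index of the predicted class; train_dataset.classes[predicted_class] gives its name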
Adjust the code above to match your specific task and dataset. The key idea of transfer learning is to reuse the features a pretrained model has already learned and apply them to a new task.