Deep Learning in Practice: A Cloud Image Classification Model Based on ResNet18
By GTY_Web · December 18, 2025

Source: https://www.guyuehome.com/detail?id=1983071833190068226

1. Project Background

Clouds are among the most active components of Earth's atmospheric system; their types, coverage, and dynamics profoundly influence global weather processes, climate patterns, and the surface energy balance. Accurate and efficient cloud identification and classification is key to improving short-term forecast accuracy and warning of severe convective weather, and it is also an indispensable scientific foundation for studying long-term climate trends and improving solar power generation forecasts. Traditionally, cloud observation has relied on manual ground-based visual observation and the interpretation of meteorological satellite imagery. The former is limited by observer subjectivity and sparse spatio-temporal coverage, while the latter faces the challenge of processing massive volumes of data quickly. As new-generation meteorological satellites and ground monitoring equipment continuously produce terabytes to petabytes of imagery, relying entirely on human experts for interpretation has become increasingly impractical, wasting valuable data and delaying information extraction, hence the urgent need for automated, intelligent cloud recognition techniques.
The revolutionary success of deep convolutional neural networks in general computer vision offers a powerful tool for this geoscience problem. CNNs can automatically learn hierarchical features from pixels, capturing complex patterns in cloud imagery ranging from texture and shape to spatial structure, and in principle can distinguish the subtle visual differences between major cloud classes such as cirrus, cumulus, and stratus. In meteorological remote sensing, however, a core consideration is balancing model performance against computational efficiency and deployability: very large, complex models may have a higher performance ceiling, but their resource requirements and slower inference make them hard to apply to near-real-time satellite data streams or resource-constrained edge devices.
This study therefore sits at the intersection of meteorology and artificial intelligence: a hands-on deep learning project that builds a cloud image classification model based on ResNet18. ResNet18, a classic lightweight deep network, was chosen precisely to balance model performance and practical efficiency: its residual structure mitigates the network degradation problem and keeps feature extraction effective, while its relatively small parameter count makes it easy to train and fast to deploy. Beyond verifying the technical feasibility of deep learning for cloud classification, we aim to explore a path toward operational use, laying the groundwork for efficient cloud-detection modules integrated into intelligent weather forecasting systems and renewable-energy management platforms, and ultimately pushing meteorological observation and forecasting toward greater intelligence and automation.
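The residual structure mentioned above can be illustrated with a minimal sketch of a ResNet basic block. This is a simplified illustration (same-channel case, no downsampling), not the exact torchvision implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Simplified ResNet basic block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Identity shortcut: gradients can flow directly through "+ x",
        # which is what mitigates the degradation problem in deep nets.
        return F.relu(out + x)

x = torch.randn(2, 64, 56, 56)
y = BasicBlock(64)(x)
print(y.shape)  # torch.Size([2, 64, 56, 56])
```

Because the shortcut is an identity, stacking such blocks cannot easily make the network worse than a shallower one, which is the intuition behind ResNet18's stability.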

2. Dataset

The dataset comes from Kaggle and contains 961 labeled cloud images divided into 7 cloud types.
Data type: photographs (JPG format). Structure: images are organized into subfolders, each representing one cloud class.
1. Altocumulus
2. Cumulus
3. Cirriform
4. Stratocumulus
5. Clear sky
6. Stratiform
7. Cumulonimbus
Class balance: the dataset is imbalanced; some classes (e.g., cumulonimbus) have fewer samples than others (e.g., altocumulus).
Applications:
Image classification (cloud type recognition)
Deep learning model training (e.g., CNNs)
Weather and atmospheric pattern analysis
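Since the dataset is imbalanced, one common remedy (not used in the pipeline below, shown here only as an option) is to oversample rare classes with PyTorch's WeightedRandomSampler. A minimal sketch on hypothetical labels; in practice the labels come from `dataset.targets` of an ImageFolder:

```python
from collections import Counter
import torch
from torch.utils.data import WeightedRandomSampler

# Hypothetical per-image class labels (class 2 is rare)
targets = [0, 0, 0, 0, 1, 1, 2]
counts = Counter(targets)

# Weight each sample by the inverse frequency of its class,
# so rare classes are drawn as often as common ones on average.
weights = [1.0 / counts[t] for t in targets]
sampler = WeightedRandomSampler(weights, num_samples=len(targets),
                                replacement=True)

# Usage: pass sampler to DataLoader *instead of* shuffle=True, e.g.
# DataLoader(train_dataset, batch_size=32, sampler=sampler)
print(sorted(counts.items()))  # [(0, 4), (1, 2), (2, 1)]
```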

3. Tools

Python version: 3.9
Editor: Jupyter Notebook

4. Experiment

4.1 Importing the Data

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm.notebook import tqdm
from collections import Counter
import copy
import time
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F

import torch.optim as optim
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader, random_split, Subset
from torchvision.datasets import ImageFolder
import torchvision.transforms as transforms
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from torch.cuda import amp
from sklearn.model_selection import train_test_split , StratifiedKFold
Load the dataset and define the transforms
Train_path = './cloiud-dataset/clouds_train'
Test_path = './cloiud-dataset/clouds_test'

IMG_SIZE = 224
BATCH_SIZE = 32
NUM_WORKERS = 4 
SEED = 42
N_FOLDS = 5
EPOCHS = 20
patience, bad = 5, 0

train_tfms = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

eval_tfms = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
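The Normalize step above maps each channel to roughly zero mean and unit variance using the ImageNet statistics. To display augmented tensors with matplotlib later, the operation has to be inverted; a small helper (an addition for illustration, not part of the original code):

```python
import torch

MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD  = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def denormalize(t):
    """Undo Normalize(mean, std) so a CHW tensor can be shown with imshow."""
    return (t * STD + MEAN).clamp(0, 1)

x = torch.rand(3, 224, 224)          # stand-in for an image in [0, 1]
norm = (x - MEAN) / STD              # what transforms.Normalize does
restored = denormalize(norm)
print(torch.allclose(restored, x, atol=1e-5))  # True
```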

4.2 Data Preprocessing

base = datasets.ImageFolder(Train_path)
y = np.array(base.targets)

from sklearn.model_selection import StratifiedShuffleSplit
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=SEED)  
tr_idx, va_idx = next(sss.split(np.zeros(len(y)), y))

train_dataset = torch.utils.data.Subset(
    datasets.ImageFolder(Train_path, transform=train_tfms), tr_idx
)
val_dataset = torch.utils.data.Subset(
    datasets.ImageFolder(Train_path, transform=eval_tfms), va_idx
)
classes = base.classes
train_loader = DataLoader(
    train_dataset, batch_size=BATCH_SIZE, shuffle=True,
    num_workers=NUM_WORKERS, pin_memory=torch.cuda.is_available()
)

val_loader = DataLoader(
    val_dataset, batch_size=BATCH_SIZE, shuffle=False,
    num_workers=NUM_WORKERS, pin_memory=torch.cuda.is_available()
)
images, labels = next(iter(train_loader))
print("classes:", base.classes,'\n')
print("class_to_idx:", base.class_to_idx)


4.3 Data Visualization

Class distribution

dataset = train_dataset
classes = base.classes

# A Subset stores the selected indices; fall back to the full range otherwise
idxs = dataset.indices if hasattr(dataset, "indices") else range(len(dataset))

labels = [base.targets[i] for i in idxs]
label_counts = Counter(labels)
counts = [label_counts[i] for i in range(len(classes))]

plt.figure(figsize=(8,5))
sns.barplot(x=classes, y=counts)
plt.title("Class Distribution")
plt.xlabel("Label")
plt.ylabel("Number of Images")
plt.xticks(rotation=90)
plt.show()

Sample images per class

import random
import os
from PIL import Image

n_per_class = 5
plt.figure(figsize=(12, 10))

for idx, cls in enumerate(classes):
    cls_path = os.path.join(Train_path, cls)
    img_files = os.listdir(cls_path)
    samples = random.sample(img_files, min(n_per_class, len(img_files)))  # guard against small classes
    
    for i, file in enumerate(samples):
        img_path = os.path.join(cls_path, file)
        img = Image.open(img_path).convert("RGB")
        
        plt.subplot(len(classes), n_per_class, idx*n_per_class + i + 1)
        plt.imshow(img)
        plt.axis("off")
        if i == 2:
            plt.title(cls)
plt.tight_layout()
plt.show()

PCA visualization

from torchvision.models import resnet18
from sklearn.decomposition import PCA
import torch

model = resnet18(weights='IMAGENET1K_V1')
model = torch.nn.Sequential(*(list(model.children())[:-1]))  
model.eval()

features, labels = [], []
for img, label in torch.utils.data.Subset(dataset, range(200)):
    x = transforms.Resize((224,224))(img).unsqueeze(0)
    with torch.no_grad():
        feat = model(x).squeeze().numpy()
    features.append(feat)
    labels.append(label)

pca = PCA(n_components=2)
proj = pca.fit_transform(features)

plt.figure(figsize=(6,5))
sns.scatterplot(x=proj[:,0], y=proj[:,1], hue=[classes[l] for l in labels])
plt.title("Feature Space Visualization (PCA)")
plt.legend(
    bbox_to_anchor=(1.05, 1),  
    loc='upper left',         
    borderaxespad=0.
)
plt.show()

4.4 Building the Model

def accuracy(logits, y):
    return (logits.argmax(1) == y).float().mean().item()

def run_epoch(loader, model, train=True, scaler=None):
    model.train(train)
    total_loss, total_acc, n = 0.0, 0.0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        if train:
            optimizer.zero_grad(set_to_none=True)
            with amp.autocast():
                logits = model(x)
                loss = criterion(logits, y)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
        else:
            with torch.no_grad():
                logits = model(x)
                loss = criterion(logits, y)

        bs = y.size(0)
        total_loss += loss.item() * bs
        total_acc  += accuracy(logits, y) * bs
        n += bs

    return total_loss / n, total_acc / n

K-fold cross-validation

base = datasets.ImageFolder(Train_path)  
classes = base.classes
y = np.array(base.targets)

skf = StratifiedKFold(n_splits=N_FOLDS, shuffle=True, random_state=SEED)
fold_results = []

for fold, (tr_idx, va_idx) in enumerate(skf.split(np.zeros(len(y)), y), start=1):
    print(f"\n===== Fold {fold}/{N_FOLDS} =====")

    # Datasets / Loaders
    train_set = Subset(datasets.ImageFolder(Train_path, transform=train_tfms), tr_idx)
    val_set   = Subset(datasets.ImageFolder(Train_path, transform=eval_tfms),  va_idx)

    train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True,
                              num_workers=NUM_WORKERS, pin_memory=torch.cuda.is_available())
    val_loader   = DataLoader(val_set,   batch_size=BATCH_SIZE, shuffle=False,
                              num_workers=NUM_WORKERS, pin_memory=torch.cuda.is_available())

    # Model / Opt / Sched
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, len(classes))
    
    # Freeze the backbone; train only the classifier head (linear probing)
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith("fc.")
    model = model.to(device)

    criterion = nn.CrossEntropyLoss(label_smoothing=0.05)
    optimizer = optim.AdamW(filter(lambda p: p.requires_grad, model.parameters()),
                            lr=1e-3, weight_decay=1e-4)
    scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
    scaler = amp.GradScaler()

    history = {'Train_Loss': [], 'Validation_Loss': [],
               'Train_Accuracy': [], 'Validation_Accuracy': []}

    best_wts = copy.deepcopy(model.state_dict())
    best_val = -1.0
    bad = 0

    # ---- Epoch loop ----
    for epoch in range(1, EPOCHS+1):
        t0 = time.time()
        train_loss, train_acc = run_epoch(train_loader, model, train=True,  scaler=scaler)
        val_loss,   val_acc   = run_epoch(val_loader,   model, train=False, scaler=None)

        scheduler.step()
        elapsed = time.time() - t0

        print(f"[Fold {fold}][{epoch:02d}] "
              f"train_loss={train_loss:.4f} acc={train_acc:.4f} | "
              f"val_loss={val_loss:.4f} acc={val_acc:.4f} | time={elapsed:.1f}s")

        history['Train_Loss'].append(train_loss)
        history['Validation_Loss'].append(val_loss)
        history['Train_Accuracy'].append(train_acc)
        history['Validation_Accuracy'].append(val_acc)

        # Early stopping
        if val_acc > best_val:
            best_val = val_acc
            best_wts = copy.deepcopy(model.state_dict())
            bad = 0
            torch.save(best_wts, f"best_fold{fold}.pt")
        else:
            bad += 1
            if bad >= patience:
                print(f"Early stopping (fold {fold})")
                break

    
    model.load_state_dict(best_wts)
    fold_results.append(best_val)


print("\nK-Fold Val Acc mean:", np.mean(fold_results), "std:", np.std(fold_results))

The K-fold cross-validation results show that the model performs consistently across different data splits. The high mean validation accuracy (≈0.89) and small standard deviation (≈0.02) indicate that performance does not depend heavily on any particular subset of the training data. In other words, the model has learned generalizable patterns rather than memorizing specific samples, demonstrating stable generalization.
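The quoted mean and standard deviation can be turned into a rough confidence interval for the validation accuracy. A quick sketch consistent with the ≈0.89 / ≈0.02 figures above; the individual fold scores here are made up for illustration:

```python
import numpy as np

# Hypothetical 5-fold validation accuracies (illustrative values)
fold_results = [0.91, 0.88, 0.87, 0.90, 0.89]
mean, std = np.mean(fold_results), np.std(fold_results)

# Rough 95% interval for the mean, assuming approximate normality
half = 1.96 * std / np.sqrt(len(fold_results))
print(f"val acc = {mean:.3f} +/- {half:.3f}")
```

A small half-width relative to the gap between classes' difficulty is what justifies the "stable generalization" reading.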

4.5 Model Training

base = datasets.ImageFolder(Train_path)
y = np.array(base.targets)

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=SEED)  
tr_idx, va_idx = next(sss.split(np.zeros(len(y)), y))

train_dataset = torch.utils.data.Subset(
    datasets.ImageFolder(Train_path, transform=train_tfms), tr_idx
)
val_dataset = torch.utils.data.Subset(
    datasets.ImageFolder(Train_path, transform=eval_tfms), va_idx
)
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True,
                          num_workers=NUM_WORKERS, pin_memory=torch.cuda.is_available())
val_loader   = DataLoader(val_dataset,   batch_size=BATCH_SIZE, shuffle=False,
                          num_workers=NUM_WORKERS, pin_memory=torch.cuda.is_available())
test_dataset = datasets.ImageFolder(Test_path, transform=eval_tfms)
test_loader  = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=False,
                          num_workers=NUM_WORKERS, pin_memory=torch.cuda.is_available())
classes = base.classes

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(classes))
model = model.to(device)  # move the model to GPU/CPU before training

criterion = nn.CrossEntropyLoss(label_smoothing=0.05)
optimizer = optim.AdamW(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, weight_decay=1e-4)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
scaler = amp.GradScaler()
best_wts = copy.deepcopy(model.state_dict())
best_val = -np.inf
bad = 0  # reset the early-stopping counter left over from the K-fold loop

history = {'Train_Loss': [], 'Validation_Loss': [],
           'Train_Accuracy': [], 'Validation_Accuracy': []}

for epoch in tqdm(range(1, EPOCHS+1)):
    t0 = time.time()
    train_loss, train_acc = run_epoch(train_loader, model, train=True, scaler=scaler)
    val_loss,   val_acc   = run_epoch(val_loader,   model, train=False)

    scheduler.step()
    elapsed = time.time()-t0

    print(f"[{epoch:02d}] "
          f"train_loss={train_loss:.4f} acc={train_acc:.4f} | "
          f"val_loss={val_loss:.4f} acc={val_acc:.4f} | "
          f"time={elapsed:.1f}s")

    history['Train_Loss'].append(train_loss)
    history['Validation_Loss'].append(val_loss)
    history['Train_Accuracy'].append(train_acc)
    history['Validation_Accuracy'].append(val_acc)

    # early stopping
    if val_acc > best_val:
        best_val = val_acc
        best_wts = copy.deepcopy(model.state_dict())
        bad = 0
        torch.save(best_wts, "best.pt")
    else:
        bad += 1
        if bad >= patience:
            print("Early stopping triggered.")
            break

model.load_state_dict(best_wts)


4.6 Model Evaluation

def evaluate_model(model, data_loader, criterion):
    model.eval()
    correct = 0
    total = 0
    running_loss = 0.0
    with torch.no_grad():
        for inputs, labels in data_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            running_loss += loss.item()
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    
    accuracy = 100 * correct / total
    return running_loss/len(data_loader), accuracy

test_loss, test_acc = evaluate_model(model, test_loader, criterion)
print(f"Test Loss: {test_loss:.4f}, Test Accuracy: {test_acc:.2f}%")

Classification report and confusion matrix (computed on the validation set)

from sklearn.metrics import classification_report, confusion_matrix

model.eval()
all_y, all_p = [], []
with torch.no_grad():
    for x, y in val_loader:
        x = x.to(device)
        logits = model(x)
        all_p += logits.argmax(1).cpu().tolist()
        all_y += y.tolist()

cm = confusion_matrix(all_y, all_p)
print(classification_report(all_y, all_p, target_names=base.classes))
print(cm)

Confusion matrix visualization

plt.figure(figsize=(6,5))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=classes, yticklabels=classes)
plt.xlabel('Predicted Label')
plt.ylabel('True Label')
plt.title('Confusion Matrix')
plt.show()
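Per-class accuracy (the recall column of the classification report) can also be read directly off the confusion matrix diagonal. A small numpy sketch on a toy 3-class matrix; the values are illustrative, not results from this model:

```python
import numpy as np

# Toy confusion matrix: rows = true class, columns = predicted class
cm = np.array([[8, 1, 1],
               [0, 9, 1],
               [2, 0, 8]])

# Diagonal = correct predictions; row sums = true samples per class
per_class_acc = cm.diagonal() / cm.sum(axis=1)
print(per_class_acc)  # [0.8 0.9 0.8]
```

On an imbalanced dataset like this one, per-class accuracy is more informative than overall accuracy, since a rare class can be misclassified entirely with little effect on the global score.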

Training curves (loss and accuracy)

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(history['Train_Loss'], label='Training Loss')
plt.plot(history['Validation_Loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.title('Loss History')

plt.subplot(1, 2, 2)
plt.plot(history['Train_Accuracy'], label='Training Accuracy')
plt.plot(history['Validation_Accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Accuracy History')

plt.tight_layout()
plt.show()

Sample predictions

