
Transfer Learning with CNNs

Training a large CNN from scratch requires massive data and computation. Transfer learning reuses a pre‑trained network (e.g., on ImageNet) and adapts it to your task, often with very little data.

Transfer learning: take a model trained on a large dataset, freeze early layers (generic features), retrain later layers (task‑specific).

Why Transfer Learning Works

Early CNN layers detect universal features: edges, corners, colors. These are useful for any vision task. Only the final layers need to adapt to your specific classes.

Two Common Approaches

  • Feature extraction: Freeze all pre‑trained layers, add new classifier on top. Train only the new layers.
  • Fine‑tuning: Unfreeze some top layers and retrain them along with the new classifier. Often yields better accuracy.

Example with PyTorch (ResNet18)

import torch.nn as nn
import torchvision.models as models

# Load ResNet18 with ImageNet weights (newer torchvision releases use the
# weights argument; pretrained=True is deprecated)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pre-trained layers
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer (512 is ResNet18's feature size;
# num_classes is the number of classes in your task)
model.fc = nn.Linear(512, num_classes)

When to Use Transfer Learning

  • Your dataset is small (hundreds to a few thousand images).
  • Your domain is similar to the pre‑training domain (e.g., natural images; even natural → medical images can still work).
  • You have limited compute resources.


Two Minute Drill
  • Transfer learning reuses pre‑trained models.
  • Early layers learn universal features; later layers learn task‑specific ones.
  • Feature extraction: freeze pre‑trained layers.
  • Fine‑tuning: retrain some top layers.

Need more clarification?

Drop us an email at career@quipoinfotech.com