Transfer Learning Part II: When to Use It in Image Processing

Transfer learning allows you to leverage an existing model by modifying and retraining it to fulfill a new use case. In this blog, we explore some key scenarios for why and when you should choose transfer learning over building a new machine learning model from scratch for image processing.


Currently, two of the most popular domains for employing transfer learning are computer vision and natural language processing (NLP), the latter made famous lately by GPT-2 and GPT-3. If you need a refresher on transfer learning – a method for reusing and retraining an existing machine learning (ML) model on new data rather than creating a new model from scratch – check out our blog from last month, Don't Start your Model from Scratch - Use Transfer Learning.

When considering transfer learning, there are three important questions to answer: when to transfer, what to transfer, and how to transfer. So far we've talked about what to transfer and how to do it, but when should you transfer?

In this blog, we'll take a closer look at when it makes sense to use transfer learning in computer vision and we'll review some use cases where transfer learning has been used successfully.

When to use Transfer Learning

There are four general scenarios where it makes sense to leverage an existing model instead of creating a new one.

Modifying and/or Retraining an Existing Model Where the Source and Target Have Similar Domains

The first and most obvious scenario is when modifying and/or retraining an existing model would be quicker than creating and training one from scratch. It also applies when you can't, or don't want to, know all of the details of how the source model makes its predictions, especially when the source and target domains are the same.

Consider the use case of retraining an existing image classification model (the source) whose task is to classify images of different types of cars, so that it classifies trucks instead. The model has already been trained to identify features common to all vehicles, such as wheels, tires, and doors (the domain). You can leverage this by leaving (freezing) some or all of the model's layers as is, and then replacing and retraining the classifier to classify images of trucks (the target) from a new set of training data.
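In Keras, the freeze-and-replace approach above can be sketched as follows. This is a minimal, hypothetical example: the truck class count is a placeholder, and we pass weights=None to keep the sketch offline-friendly (in practice you would load pretrained weights, e.g. weights="imagenet").

```python
import tensorflow as tf

NUM_TRUCK_CLASSES = 5  # illustrative placeholder, e.g. pickup, semi, dump, tow, box

# Load the source backbone without its original classifier head.
# In practice, use weights="imagenet" (or your car model's weights).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze every layer of the source model

# Replace the classifier: pool the frozen features, then add a new
# head to be trained on the truck dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_TRUCK_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

With the base frozen, only the new Dense head's weights are updated during training, which is what makes retraining so much faster than starting from scratch.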

Since the bulk of the work involves classifying varying levels of vehicular features, why reinvent the wheel (no pun intended)? In this example, both models ultimately classify images of vehicles, so much of the model can probably be frozen. And although both the source and target data (cars and trucks respectively) fall in the same domain (vehicles), this approach can certainly apply where the source and target domains differ as well.

Repurpose an Existing Model to a Different Domain

Similarly, transfer learning can be useful to repurpose an existing model, potentially from another domain. You might want to do this when no model exists to do the task at hand (or to do it adequately) and you don't want to start from scratch. For example, say you want to do image segmentation on medical images of specific anomalies like tumours but don't have the time or resources to create and train a model from scratch. Chances are you can find, modify, and retrain an existing medical image segmentation model (e.g., one which was trained on a different anomaly such as bone fractures) to serve your purpose.

Small Dataset

Another scenario is when your target dataset is relatively small. For example, if labelled data samples for medical images depicting a rare medical condition are hard to come by and you only have a small dataset, it might make sense to leverage and even build upon an existing model which has already been trained on a larger dataset, such as that created by another medical researcher. This is a great example of how the ML community can use transfer learning to work together to solve problems, advance their research, and share their work. In fact, after you update the model, the original creator may then leverage your work using transfer learning to further enhance the model!
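When labelled samples are scarce, a common pattern is to keep the pretrained base frozen and add light data augmentation so the few images you do have go further. A minimal sketch, assuming TensorFlow/Keras; the two-class output (rare condition vs. normal) is illustrative, and weights=None is used here only to keep the sketch offline:

```python
import tensorflow as tf

# Light augmentation stretches a small labelled dataset.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# In practice, load the other researcher's pretrained weights here.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights=None)
base.trainable = False  # with little data, train only the new head

inputs = tf.keras.Input(shape=(160, 160, 3))
x = augment(inputs)
x = base(x, training=False)  # keep the base in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)  # rare vs. normal
model = tf.keras.Model(inputs, outputs)
```

Because only the small head is trained, the model is far less likely to overfit the handful of labelled medical images.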

Similar Architecture

Finally, you might also investigate transfer learning when creating a new model would result in the same or similar architecture as an existing one. For example, image classification, segmentation, and object detection models can be quite sophisticated but are often constructed using common model design patterns such as U-Nets, ResNets, CNNs, etc. So if you're going to end up re-creating the same topology, why not start with an existing, proven model? Even if the architecture needs to change and the source domain is completely different, it's likely that most of the layers can be reused, even if many of them need to be unfrozen (e.g., to handle the new domain).
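Partial unfreezing of a reused topology can be sketched like this in Keras. The cut-off of 20 layers and the 10-class head are illustrative choices, not prescriptions, and weights=None again keeps the sketch offline:

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)

# Unfreeze the whole base, then re-freeze the early layers, which tend
# to capture generic features (edges, textures) that transfer well.
base.trainable = True
for layer in base.layers[:-20]:  # illustrative cut-off
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new-domain classes
])

# A low learning rate adjusts the transferred weights gently rather
# than wiping out what the source model learned.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy")
```

How many layers to unfreeze is a judgment call: the further the target domain is from the source, the deeper into the network you typically need to retrain.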

In their paper Understanding the Mechanisms of Deep Transfer Learning for Medical Images, Ravishankar et al. showed how transferring a model trained to perform image classification of kidney problems depicted in ultrasound images can outperform a state-of-the-art feature-engineered pipeline. Another interesting case study is described in Neural Style Transfer in 10 Minutes with VGG-19 and Transfer Learning, where a VGG-19 CNN and neural style transfer are used as the basis for generating new images of people that look as if they were painted.

And while we're on the topic of use cases, be sure to check out our ODSC East 2021 blog where we discuss our upcoming ODSC demo. In the demo we will retrain a model to classify a group of microscopy images (e.g., for healthcare) using transfer learning. Here, we'll take a more generalized MobileNetV2 trained with ImageNet weights (i.e., a model trained with a larger dataset), and repurpose it for use on a smaller, more specialized set of microscopy images. This will also bypass the need to create an image classification model from scratch.

Transfer Learning in Practice

The use of transfer learning today is growing by leaps and bounds, driven by the fact that most ML models (e.g., DNNs) are highly configurable (e.g., via hyperparameters) by their very nature. In addition, there are now a lot of open source frameworks, models, and datasets freely available to leverage.

Check out ImageNet, an image database that currently contains more than 14 million hand-annotated images across over 20,000 categories. In development since 2006, ImageNet was built to facilitate research and development for computer vision, and many ML practitioners use the data to provide a baseline of weights when training their models.

Our previous blog on transfer learning – see the demo on YouTube here – showed how you can easily take advantage of these resources by creating a Custom component in PerceptiLabs to leverage an existing model for transfer learning. In that example, we used a MobileNetV2 model from Keras Applications, pre-loaded with weights from ImageNet, but there are myriad other models to choose from. We also showed how you can freeze the model in PerceptiLabs for better explainability, and how its visualization features provide insight into how the transferred model transforms data and ultimately how it performs during training.

Screenshot of a model in PerceptiLabs being retrained.

Soon we'll be adding more functionality to support transfer learning, including our upcoming model hub feature where you can create and share models with the ML community to use as the basis for new models. So be sure to stay tuned as we announce this and many other cool new features.


Transfer learning can save you time by leveraging existing models, especially for image processing and computer vision use cases. When starting out on an ML project, be sure to analyze whether it's possible to build upon an existing model, using one of the four scenarios above, before setting out to create one from scratch. And don't forget to take advantage of all of those great open source models available out there, as well as datasets like ImageNet and the other top five datasets we recently wrote about.