The objective of transfer learning is to transfer learning experience from one domain to another, and it usually involves at least two tasks. The task that provides the learning experience is called the source task, and the task we want to apply the learned experience to is called the target task.
A learning task consists of four components:
- Feature space: This is the space the input data comes from.
- Marginal distribution: This is the distribution of the input data.
- Label space: This is the space of possible labels.
- Prediction distribution: This is essentially the modeling part. We want to know the conditional distribution of labels given the input data.
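To make these four components concrete, here is a toy sentiment-classification example. The dataset, the choice of the first word as the feature, and the empirical conditional are all illustrative assumptions, not part of any particular method:

```python
from collections import Counter

# Hypothetical toy sentiment dataset: (input text, label) pairs.
data = [("good movie", 1), ("bad plot", 0), ("good acting", 1), ("bad movie", 0)]

# Feature space: the set of possible inputs (here, short strings).
X = [x for x, _ in data]

# Label space: the set of possible labels.
Y = sorted({y for _, y in data})  # [0, 1]

# Marginal distribution P(x): how the inputs are distributed (empirical here).
p_x = {x: c / len(X) for x, c in Counter(X).items()}

# Prediction distribution P(y | x): what a model estimates. Here we use a
# crude empirical conditional keyed on the first word of each input.
counts = {}
for x, y in data:
    counts.setdefault(x.split()[0], Counter())[y] += 1
cond = {w: {y: c[y] / sum(c.values()) for y in Y} for w, c in counts.items()}
```

In this toy setup, `cond["good"]` puts all its mass on label 1 and `cond["bad"]` on label 0, which is exactly the conditional distribution a classifier tries to approximate on real data.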
There are different techniques of transfer learning:
- Self-taught learning
- Multi-task learning
- Domain adaptation
- Zero-shot learning
- One-shot learning
They correspond to the following learning scenarios:
- Learning with different feature space
- Learning with different marginal distribution
- Learning with different label space
- Learning with different prediction distribution
In this post, we will talk about self-taught learning.
Self-taught learning can be used when we have a large volume of unlabeled source data and a relatively small amount of labeled target data.
When we train a model (for the target task), the model first needs to "understand" the data. In technical terms, the model needs to learn a latent representation of the input data. This latent representation is then used to perform tasks such as classification. Notice that learning a latent representation of the input data does not require labels, and there are many unsupervised learning algorithms, such as autoencoders, designed for exactly this. The idea of self-taught learning is to first apply an unsupervised learning algorithm to the unlabeled source data to obtain a latent representation of the input, and then fine-tune the network to perform the target task.
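The two-step recipe above can be sketched end to end. Everything in this snippet is an illustrative assumption: the synthetic data, the dimensions, and the use of PCA as a stand-in for the unsupervised pretraining step (the optimum of a linear autoencoder spans the same subspace that PCA finds):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 20-dim inputs generated from 3 latent factors,
# so the unsupervised learner has real structure to discover.
d, k = 20, 3
A = rng.normal(size=(k, d))                              # factor loadings
H_src = rng.normal(size=(500, k))
X_source = H_src @ A + 0.1 * rng.normal(size=(500, d))   # large, unlabeled

H_tgt = rng.normal(size=(40, k))                         # small labeled target set
X_target = H_tgt @ A + 0.1 * rng.normal(size=(40, d))
y_target = (H_tgt[:, 0] > 0).astype(float)               # label depends on factor 0

# Step 1: unsupervised representation learning on the unlabeled source data.
# PCA via SVD serves as a simple stand-in for autoencoder pretraining.
mu = X_source.mean(axis=0)
_, _, Vt = np.linalg.svd(X_source - mu, full_matrices=False)

def encode(X):
    """Map raw inputs to the k-dim latent representation learned on source data."""
    return (X - mu) @ Vt[:k].T

# Step 2: train the target classifier on the latent codes
# (plain logistic regression fitted by gradient descent).
Z = encode(X_target)
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z @ w + b)))                   # predicted P(y=1 | z)
    w -= 0.1 * Z.T @ (p - y_target) / len(Z)
    b -= 0.1 * (p - y_target).mean()

acc = ((1 / (1 + np.exp(-(Z @ w + b))) > 0.5) == (y_target == 1)).mean()
print(f"train accuracy on the small labeled target set: {acc:.2f}")
```

The key point is that step 1 never touches a label: the representation is learned purely from the abundant source inputs, and only the final, small classifier is trained on the scarce labeled target data.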