2. A psychological point of view
Transfer learning is the dependency of human conduct, learning, or performance on prior experience.
3. Machine Learning community point of view
Transfer learning attempts to develop methods to transfer knowledge learned in one or more source tasks and use it to improve learning in a related target task.
[Diagram: source-task knowledge (given) is combined with target-task data to learn the target task.]
4. Learn a new model:
1. Collect new labeled data
2. Build a new model
Or: reuse and adapt an already learned model!
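A minimal sketch of "reuse and adapt" versus building from scratch, using a toy one-weight linear model (all numbers here are hypothetical; the warm-start weight 1.8 stands in for a source-task model):

```python
# Sketch: adapting an already-learned model instead of training from scratch.
# The "source model" weight of 1.8 is a hypothetical stand-in for knowledge
# learned on a related source task; the target task's true slope is 2.0.

def train(w, data, steps, lr=0.05):
    """Fit y = w * x by gradient descent on mean squared error."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

target_data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0]]

w_scratch = train(0.0, target_data, steps=3)   # learn a brand-new model
w_transfer = train(1.8, target_data, steps=3)  # adapt the source model

# With the same small training budget, the warm-started model ends up
# closer to the true slope of 2.0 than the model trained from scratch.
```

The design choice is exactly the slide's contrast: the same learning procedure, but a starting point supplied by prior experience instead of a blank slate.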
7. Transfer Learning Goal
Improve learning in the target task by leveraging knowledge from the source task, judged by three common measures:
1- initial performance
2- amount of learning time
3- final performance
[Figure: learning curves with and without transfer; transfer can yield a higher start, a higher slope, and a higher asymptote.]
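The three measures above can be read directly off two learning curves. A small sketch, with entirely made-up accuracy numbers:

```python
# Sketch: the three common measures of transfer benefit, computed on two
# hypothetical learning curves (accuracy per training episode).

with_transfer    = [0.55, 0.70, 0.80, 0.88, 0.90]
without_transfer = [0.30, 0.45, 0.60, 0.72, 0.80]

def time_to_reach(curve, threshold):
    """Episodes needed to reach a target accuracy (None if never reached)."""
    for t, acc in enumerate(curve):
        if acc >= threshold:
            return t
    return None

# 1- initial performance: higher start
assert with_transfer[0] > without_transfer[0]
# 2- amount of time to a fixed level: fewer episodes to reach 0.70
assert time_to_reach(with_transfer, 0.70) < time_to_reach(without_transfer, 0.70)
# 3- final performance: higher asymptote
assert with_transfer[-1] > without_transfer[-1]
```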
8. Transfer in Inductive Learning
Transfer works by allowing source-task knowledge to affect the target task's inductive bias (the set of assumptions about the true distribution of the training data). Such methods may be concerned with improving the speed with which the target model is learned, or with improving its generalization capability.
9. Transfer in Inductive Learning
Inductive transfer:
◦ The target-task inductive bias is chosen or adjusted based on the source-task knowledge.
◦ How this is done depends on which inductive learning algorithm is used to learn the source and target tasks.
[Diagram: plain inductive learning searches the full space of allowed hypotheses; inductive transfer first narrows the allowed hypotheses, then searches.]
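One way to picture inductive transfer is as a smaller allowed-hypothesis set. A toy sketch, where the hypotheses are threshold classifiers and the source-task threshold of 1.0 is an assumed piece of transferred knowledge:

```python
# Sketch: inductive transfer as a narrowed hypothesis space. Hypotheses are
# one-dimensional rules "predict 1 when x >= t"; the source-task threshold
# (1.0) is hypothetical transferred knowledge that restricts the search.

data = [(0.2, 0), (0.8, 0), (1.3, 1), (1.9, 1)]  # toy target-task examples

def accuracy(t):
    return sum((x >= t) == bool(y) for x, y in data) / len(data)

all_hypotheses = [i / 10 for i in range(-20, 40)]  # the full search space
near_source    = [i / 10 for i in range(7, 14)]    # only thresholds near 1.0

best = max(near_source, key=accuracy)

# The narrowed search evaluates far fewer hypotheses, yet still contains a
# perfect separator for this data.
```

The point is the shape of the bias, not the numbers: a good source task shrinks the search without excluding the right answer.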
10. Transfer in Inductive Learning
Bayesian transfer:
◦ Bayesian learning uses a prior distribution to smooth the estimates from training data.
◦ Bayesian transfer may provide a more informative prior from source-task knowledge.
[Diagram: prior distribution + data = posterior distribution; Bayesian transfer replaces the generic prior with one built from the source task.]
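A concrete instance of "prior + data = posterior" is the Beta-Bernoulli model. In this sketch the source task is assumed to have produced 8 successes and 2 failures, and those counts become an informative prior for the target task:

```python
# Sketch: Bayesian transfer with a Beta-Bernoulli model. The source-task
# counts (8 successes, 2 failures) are hypothetical; they turn the flat
# Beta(1, 1) prior into an informative Beta(9, 3) prior.

def posterior_mean(prior_a, prior_b, successes, failures):
    """Mean of the Beta posterior after observing the target-task data."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

target_successes, target_failures = 2, 1   # very little target-task data

flat = posterior_mean(1, 1, target_successes, target_failures)           # no transfer
informed = posterior_mean(1 + 8, 1 + 2, target_successes, target_failures)  # transfer

# With only 3 target observations, the transferred prior dominates the
# estimate; as target data accumulates, the data term takes over.
```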
11. Transfer in Inductive Learning
Hierarchical transfer:
◦ Solutions to simple tasks are combined or provided as tools to produce a solution to a more complex task.
◦ Can involve many tasks.
◦ The target task might use entire source-task solutions as parts of its own.
[Diagram: line and curve solutions build circle and surface solutions, which in turn build a pipe solution.]
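The line/curve → circle/surface → pipe hierarchy can be mimicked with ordinary geometry, where the complex task reuses the simple solutions whole (the shapes here are only stand-in "tasks"):

```python
import math

# Sketch: hierarchical transfer as reuse of entire sub-task solutions.
# Simple "source tasks":
def circle_circumference(r):
    return 2 * math.pi * r

def circle_area(r):
    return math.pi * r ** 2

# Complex "target task", built entirely from the circle solutions above:
def pipe_surface_area(r, length):
    """Closed cylinder: curved wall plus two circular ends."""
    return circle_circumference(r) * length + 2 * circle_area(r)
```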
19. AVOIDING NEGATIVE TRANSFER
Rejecting bad information: reject harmful source-task knowledge while learning the target task. The goal is to minimize the impact of bad information, so that transfer performance is at least no worse than learning the target task without transfer.
Choosing a source task: the problem becomes choosing the best source task. Transfer methods without much protection may still be effective, as long as the best source task is at least a decent match.
Modeling task similarity: explicitly model relationships between tasks and include this information in the transfer method. This can lead to better use of source-task knowledge and decrease the risk of negative transfer.
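The "rejecting bad information" idea reduces to a performance floor. A minimal sketch, with hypothetical held-out validation scores standing in for the two learned models:

```python
# Sketch: a performance floor against negative transfer. The scores are
# hypothetical accuracies on a held-out target-task validation set; keep the
# transferred model only when it is at least as good as learning from scratch.

def choose_model(score_transfer, score_scratch):
    """Reject the source-task knowledge when it hurts target performance."""
    if score_transfer >= score_scratch:
        return "transfer"
    return "scratch"

# Helpful source task: the transferred model is kept.
assert choose_model(0.85, 0.80) == "transfer"
# Harmful source task (negative transfer): fall back to no-transfer learning,
# so performance is never worse than learning the target task alone.
assert choose_model(0.60, 0.80) == "scratch"
```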
20. AUTOMATICALLY MAPPING TASKS
When an agent applies knowledge from one task in another, it is often necessary to map the characteristics of one task onto those of the other to specify correspondences.
[Diagram: properties of the source task mapped onto corresponding properties of the target task.]
21. AUTOMATICALLY MAPPING TASKS
Mapping by analogy: some methods construct a mapping by analogy, examining the characteristics of the source and target tasks and finding elements that correspond.
Trying multiple mappings: one straightforward way of solving the mapping problem is to generate several possible mappings and allow the target-task agent to try them all.
Equalizing task representations: it may be possible to avoid the mapping problem altogether by ensuring that the source and target tasks have the same representation.
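"Trying multiple mappings" can be sketched in a few lines: enumerate candidate property correspondences, score each on target-task data, and keep the best. All names and the scoring rule here are hypothetical:

```python
# Sketch: trying multiple mappings of source properties onto target
# properties. The source task is assumed to have taught the rule
# "label = 1 when property A > 0.5"; each candidate mapping decides which
# target property plays the role of A.
from itertools import permutations

source_props = ["A", "B"]
target_props = ["speed", "angle"]

# Hypothetical target-task examples: (property values, label).
examples = [({"speed": 0.9, "angle": 0.1}, 1),
            ({"speed": 0.2, "angle": 0.8}, 0)]

def score(mapping):
    a_target = mapping["A"]   # which target property stands in for A
    return sum((ex[a_target] > 0.5) == bool(y) for ex, y in examples)

candidates = [dict(zip(source_props, perm)) for perm in permutations(target_props)]
best = max(candidates, key=score)

# The winning mapping is the one under which the source-learned rule
# still predicts the target labels correctly.
```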
22. Conclusion
Transfer learning:
◦ has become a sizeable subfield of machine learning.
◦ is seen as an important aspect of human learning.
◦ can make machine learning more efficient.
◦ still faces challenges that should be addressed.