Beyond function approximators for
batch mode reinforcement learning:
rebuilding trajectories
Damien Ernst
University of Liège, Belgium
2010 NIPS Workshop on
“Learning and Planning from Batch Time Series Data”
Batch mode Reinforcement Learning ≃
Learning a high-performance policy for a sequential decision
problem where:
• a numerical criterion is used to define the performance of
a policy. (An optimal policy is the policy that maximizes
this numerical criterion.)
• “the only” (or “most of the”) information available on the
sequential decision problem is contained in a set of
trajectories.
Batch mode RL stands at the intersection of three worlds:
optimization (maximization of the numerical criterion),
system/control theory (sequential decision problem) and
machine learning (inference from a set of trajectories).
A typical batch mode RL problem
Discrete-time dynamics:
$x_{t+1} = f(x_t, u_t, w_t)$, $t = 0, 1, \ldots, T-1$, where $x_t \in X$, $u_t \in U$
and $w_t \in W$. $w_t$ is drawn at every time step according to $P_w(\cdot)$.
Reward observed after each system transition:
$r_t = \rho(x_t, u_t, w_t)$ where $\rho : X \times U \times W \rightarrow \mathbb{R}$ is the reward function.
Type of policies considered: $h : \{0, 1, \ldots, T-1\} \times X \rightarrow U$.
Performance criterion: Expected sum of the rewards observed over the $T$-length horizon
$$PC^h(x) = J^h(x) = \mathbb{E}_{w_0, \ldots, w_{T-1}}\Big[\sum_{t=0}^{T-1} \rho(x_t, h(t, x_t), w_t)\Big] \text{ with } x_0 = x \text{ and } x_{t+1} = f(x_t, h(t, x_t), w_t).$$
Available information: A set of elementary pieces of trajectories
$\mathcal{F}_n = \{(x^l, u^l, r^l, y^l)\}_{l=1}^{n}$ where $y^l$ is the state reached after taking action $u^l$
in state $x^l$ and $r^l$ is the instantaneous reward associated with the transition. The
functions $f$, $\rho$ and $P_w$ are unknown. 2
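To fix ideas, here is a minimal Python sketch (names are illustrative) of how such a set of elementary pieces of trajectories can be represented; the later sketches below reuse this plain-tuple representation:

```python
from typing import List, Tuple

# One elementary piece of trajectory (x^l, u^l, r^l, y^l): the state where the
# action was taken, the action, the reward observed and the next state reached.
Transition = Tuple[float, float, float, float]

# F_n is simply a collection of such one-step transitions; the functions
# f, rho and P_w themselves are never accessed by a batch mode RL algorithm.
F_n: List[Transition] = [
    # e.g. (x, u, r, y) tuples harvested from previously observed trajectories
]
```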
Batch mode RL and function approximators
Training function approximators (radial basis functions, neural
nets, trees, etc) using the information contained in the set of
trajectories is a key element of most of the resolution schemes
for batch mode RL problems with state-action spaces having a
large (infinite) number of elements.
Two typical uses of FAs for batch mode RL:
• the FAs model the sequential decision problem (in
our typical problem, f, ρ and Pw). The model is afterwards
exploited as if it was the real problem to compute a
high-performance policy.
• the FAs represent (state-action) value functions which
are used in iterative schemes so as to converge to a
(state-action) value function from which a
high-performance policy can be computed. Iterative
schemes based on the dynamic programming principle
(e.g., LSPI, FQI, Q-learning). 3
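For concreteness, here is a minimal sketch of the second use, in the spirit of FQI with tree-based regressors for a finite action set; the function names, the number of iterations and the choice of scikit-learn's ExtraTreesRegressor are illustrative and not a description of the exact algorithm configuration used in the experiments shown later:

```python
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, n_iterations):
    """transitions: list of (x, u, r, y) tuples; actions: finite set of actions."""
    Q = None
    for _ in range(n_iterations):
        inputs, targets = [], []
        for (x, u, r, y) in transitions:
            if Q is None:
                target = r                       # Q_1(x, u): immediate reward
            else:
                # Q_N(x, u) = r + max_{u'} Q_{N-1}(y, u')
                target = r + max(Q.predict([[y, a]])[0] for a in actions)
            inputs.append([x, u])
            targets.append(target)
        Q = ExtraTreesRegressor(n_estimators=50).fit(inputs, targets)
    return Q   # a greedy policy then picks argmax_u Q(x, u)
```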
Why look beyond function approximators ?
FA-based techniques: mature, can successfully solve many
real-life problems, but:
1. not well adapted to risk sensitive performance criteria
2. may lead to unsafe policies - poor performance
guarantees
3. may make suboptimal use of near-optimal trajectories
4. offer few clues about how to generate new experiments in
an optimal way
4
1. not well adapted to risk sensitive performance criteria
An example of risk sensitive performance criterion:
$$PC^h(x) = \begin{cases} -\infty & \text{if } P\Big(\sum_{t=0}^{T-1} \rho(x_t, h(t, x_t), w_t) < b\Big) > c \\ J^h(x) & \text{otherwise.} \end{cases}$$
FAs with dynamic programming: very problematic because
(state-action) value functions need to become functions that
take as values “probability distributions of future rewards” and
not “expected rewards”.
FAs with model learning: more likely to succeed; but what
about the challenges of fitting the FAs to model the distribution
of future states reached (rewards collected) by policies and not
only an average behavior ?
5
2. may lead to unsafe policies - poor performance
guarantees
• Benchmark: puddle world. RL algorithm: FQI with trees.
• Trajectory set covering the puddle ⇒ optimal policy.
• Trajectory set not covering the puddle ⇒ suboptimal (unsafe) policy.
Typical performance guarantee in the deterministic case for
FQI = (estimated return by FQI of the policy it outputs minus
constant×(’size’ of the largest area of the state space not
covered by the sample)).
6
3. may make suboptimal use of near-optimal
trajectories
Suppose a deterministic batch mode RL problem and that, in
the set of trajectories, there is a trajectory:
$$(x_0^{\text{opt. traj.}}, u_0, r_0, x_1, u_1, r_1, x_2, \ldots, x_{T-2}, u_{T-2}, r_{T-2}, x_{T-1}, u_{T-1}, r_{T-1}, x_T)$$
where the $u_t$'s have been selected by an optimal policy.
Question: Which batch mode RL algorithms will output a
policy which is optimal for the initial state $x_0^{\text{opt. traj.}}$ whatever the
other trajectories in the set? Answer: Not that many, and
certainly not those using parametric FAs.
In my opinion: batch mode RL algorithms can only be
successful on large-scale problems if (i) in the set of
trajectories, many trajectories have been generated by
(near-)optimal policies (ii) the algorithms exploit very well the
information contained in those (near-)optimal trajectories.
7
4. offer few clues about how to generate new
experiments in an optimal way
Many real-life problems are variants of batch mode RL
problems for which (a limited number of) additional trajectories
can be generated (under various constraints) to enrich the
initial set of trajectories.
Question: How should these new trajectories be generated ?
Many approaches based on the analysis of the FAs produced
by batch mode RL methods have been proposed; results are
mixed.
8
Rebuilding trajectories
We conjecture that mapping the set of trajectories into FAs
generally leads to the loss of information that is essential for
addressing these four issues ⇒ We have developed a new line
of research for solving batch mode RL problems that does not
use FAs at all.
This line of research is articulated around the rebuilding of
artificial (likely “broken”) trajectories using the set of
trajectories given as input to the batch mode RL problem; a rebuilt
trajectory is defined by the elementary pieces of trajectory it is
made of.
The rebuilt trajectories are analysed to compute various things:
a high-performance policy, performance guarantees, where to
sample, etc.
9
[Figure: BLUE ARROW = elementary piece of trajectory. Left: set of trajectories given as input to the batch RL problem. Right: examples of 5-length rebuilt trajectories made from elements of this set.]
10
Model-Free Monte Carlo Estimator
Building an oracle that estimates the performance of a policy:
important problem in batch mode RL.
Indeed, if an oracle is available, problem of estimating a
high-performance policy can be reduced to an optimization
problem over a set of candidate policies.
If a model of the sequential decision problem is available, a Monte
Carlo estimator (i.e., rollouts) can be used to estimate the
performance of a policy.
We detail an approach that estimates the performance of a
policy by rebuilding trajectories so as to mimic the behavior of
the Monte Carlo estimator.
11
Context in which the approach is presented
Discrete-time dynamics:
$x_{t+1} = f(x_t, u_t, w_t)$, $t = 0, 1, \ldots, T-1$, where $x_t \in X$, $u_t \in U$
and $w_t \in W$. $w_t$ is drawn at every time step according to $P_w(\cdot)$.
Reward observed after each system transition:
$r_t = \rho(x_t, u_t, w_t)$ where $\rho : X \times U \times W \rightarrow \mathbb{R}$ is the reward function.
Type of policies considered: $h : \{0, 1, \ldots, T-1\} \times X \rightarrow U$.
Performance criterion: Expected sum of the rewards observed over the $T$-length horizon
$$PC^h(x) = J^h(x) = \mathbb{E}_{w_0, \ldots, w_{T-1}}\Big[\sum_{t=0}^{T-1} \rho(x_t, h(t, x_t), w_t)\Big] \text{ with } x_0 = x \text{ and } x_{t+1} = f(x_t, h(t, x_t), w_t).$$
Available information: A set of elementary pieces of trajectories
$\mathcal{F}_n = \{(x^l, u^l, r^l, y^l)\}_{l=1}^{n}$. $f$, $\rho$ and $P_w$ are unknown.
Approach aimed at estimating $J^h(x)$ from $\mathcal{F}_n$.
12
Monte Carlo Estimator
Generate nbTraj $T$-length trajectories by simulating the system
starting from the initial state $x_0$; for every trajectory compute the
sum of rewards collected; average these sums of rewards over
the nbTraj trajectories to get an estimate $MCE^h(x_0)$ of $J^h(x_0)$.

[Figure: illustration with nbTraj = 3 and $T = 5$. Three simulated trajectories start from $x_0$; for example $r_3 = \rho(x_3, h(3, x_3), w_3)$ and $x_4 = f(x_3, h(3, x_3), w_3)$ with $w_3 \sim P_w(\cdot)$; sum rew. traj. 1 $= \sum_{i=0}^{4} r_i$ and $MCE^h(x_0) = \frac{1}{3}\sum_{i=1}^{3}$ sum rew. traj. $i$.]

$$\text{Bias } MCE^h(x_0) = \mathbb{E}_{nbTraj \cdot T \text{ rand. var. } w \sim P_w(\cdot)}\big[MCE^h(x_0) - J^h(x_0)\big] = 0$$
$$\text{Var. } MCE^h(x_0) = \frac{1}{nbTraj} \cdot (\text{variance of the sum of rewards along a trajectory})$$
13
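For reference, a minimal Python sketch of this Monte Carlo estimator; it assumes access to hypothetical simulator functions f, rho and sample_w, i.e. exactly the knowledge that the batch mode setting does not provide:

```python
def monte_carlo_estimate(f, rho, sample_w, h, x0, T, nb_traj):
    """Average the T-step return of policy h over nb_traj simulated rollouts."""
    returns = []
    for _ in range(nb_traj):
        x, total = x0, 0.0
        for t in range(T):
            w = sample_w()          # w_t ~ P_w(.)
            u = h(t, x)             # action chosen by the policy
            total += rho(x, u, w)   # r_t = rho(x_t, u_t, w_t)
            x = f(x, u, w)          # x_{t+1} = f(x_t, u_t, w_t)
        returns.append(total)
    return sum(returns) / nb_traj   # MCE^h(x0)
```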
Description of Model-free Monte Carlo Estimator
(MFMC)
Principle: Rebuild nbTraj $T$-length trajectories using the
elements of the set $\mathcal{F}_n$ and average the sums of rewards
collected along the rebuilt trajectories to get an estimate
$MFMCE^h(\mathcal{F}_n, x_0)$ of $J^h(x_0)$.
Trajectories rebuilding algorithm: Trajectories are
sequentially rebuilt; an elementary piece of trajectory can only
be used once; trajectories are grown in length by selecting at
every instant $t = 0, 1, \ldots, T-1$ the elementary piece of
trajectory $(x, u, r, y)$ that minimizes the distance
$$\Delta\big((x, u), (x^{end}, h(t, x^{end}))\big)$$
where $x^{end}$ is the ending state of the already rebuilt part of the
trajectory ($x^{end} = x_0$ if $t = 0$).
14
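A minimal Python sketch of this rebuilding scheme (illustrative names; transitions are the (x, u, r, y) tuples introduced earlier, dist is the distance ∆ over state-action pairs, and the set is assumed to contain at least nb_traj · T pieces):

```python
def mfmc_estimate(transitions, h, x0, T, nb_traj, dist):
    """Model-free Monte Carlo estimate of J^h(x0) from one-step transitions."""
    pool = list(transitions)        # each elementary piece is used at most once
    returns = []
    for _ in range(nb_traj):
        x_end, total = x0, 0.0
        for t in range(T):
            u_target = h(t, x_end)
            # pick the piece whose (x, u) is closest to (x_end, h(t, x_end))
            best = min(pool, key=lambda tr: dist((tr[0], tr[1]), (x_end, u_target)))
            pool.remove(best)
            x, u, r, y = best
            total += r
            x_end = y               # grow the rebuilt trajectory
        returns.append(total)
    return sum(returns) / nb_traj   # MFMCE^h(F_n, x0)
```

With the distance ∆ of the assumptions given later, dist could simply be `lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])` for scalar states and actions.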
Remark: When sequentially selecting the pieces of trajectories,
knowing only $(x, u)$ and the previously selected elementary pieces
of trajectories gives no information on the value of the disturbance $w$
“behind” the new elementary piece of trajectory
$(x, u, r = \rho(x, u, w), y = f(x, u, w))$ that is about to be selected.
This is important for having a meaningful estimator!
[Figure: illustration with nbTraj = 3 and $T = 5$, using a set $\mathcal{F}_{24} = \{(x^l, u^l, r^l, y^l)\}_{l=1}^{24}$. Three rebuilt trajectories start from $x_0$; an elementary piece such as $(x^{18}, u^{18}, r^{18}, y^{18})$ contributes its reward $r_{18}$; e.g. sum rew. re. traj. 1 $= r_3 + r_{18} + r_{21} + r_7 + r_9$ and $MFMCE^h(\mathcal{F}_n, x_0) = \frac{1}{3}\sum_{i=1}^{3}$ sum rew. re. traj. $i$.]
15
Analysis of the MFMC
The random set $\tilde{\mathcal{F}}_n$ is defined as follows:
it is made of $n$ elementary pieces of trajectory where the first two
components of an element, $(x^l, u^l)$, are given by the first two
elements of the $l$th element of $\mathcal{F}_n$, and the last two are generated
by drawing for each $l$ a disturbance signal $w^l$ at random from
$P_w(\cdot)$ and taking $r^l = \rho(x^l, u^l, w^l)$ and $y^l = f(x^l, u^l, w^l)$.
$\mathcal{F}_n$ is a realization of the random set $\tilde{\mathcal{F}}_n$.
Bias and variance of the MFMCE are defined as:
$$\text{Bias } MFMCE^h(\tilde{\mathcal{F}}_n, x_0) = \mathbb{E}_{w^1, \ldots, w^n \sim P_w}\big[MFMCE^h(\tilde{\mathcal{F}}_n, x_0) - J^h(x_0)\big]$$
$$\text{Var. } MFMCE^h(\tilde{\mathcal{F}}_n, x_0) = \mathbb{E}_{w^1, \ldots, w^n \sim P_w}\Big[\big(MFMCE^h(\tilde{\mathcal{F}}_n, x_0) - \mathbb{E}_{w^1, \ldots, w^n \sim P_w}[MFMCE^h(\tilde{\mathcal{F}}_n, x_0)]\big)^2\Big]$$
We provide bounds on the bias and variance of this
estimator.
16
Assumptions
1] The functions $f$, $\rho$ and $h$ are Lipschitz continuous:
$\exists L_f, L_\rho, L_h \in \mathbb{R}^+ : \forall (x, x', u, u', w) \in X^2 \times U^2 \times W$,
$$\|f(x, u, w) - f(x', u', w)\|_X \leq L_f (\|x - x'\|_X + \|u - u'\|_U)$$
$$|\rho(x, u, w) - \rho(x', u', w)| \leq L_\rho (\|x - x'\|_X + \|u - u'\|_U)$$
$$\|h(t, x) - h(t, x')\|_U \leq L_h \|x - x'\|_X \quad \forall t \in \{0, 1, \ldots, T-1\}.$$
2] The distance $\Delta$ is chosen such that:
$$\Delta((x, u), (x', u')) = \|x - x'\|_X + \|u - u'\|_U.$$
17
Characterization of the bias and the variance
Theorem.
$$\big|\text{Bias } MFMCE^h(\tilde{\mathcal{F}}_n, x_0)\big| \leq C \cdot \text{sparsity of } \mathcal{F}_n(nbTraj \cdot T)$$
$$\text{Var. } MFMCE^h(\tilde{\mathcal{F}}_n, x_0) \leq \Big(\sqrt{\text{Var. } MCE^h(x_0)} + 2C \cdot \text{sparsity of } \mathcal{F}_n(nbTraj \cdot T)\Big)^2$$
with $C = L_\rho \sum_{t=0}^{T-1} \sum_{i=0}^{T-t-1} [L_f(1 + L_h)]^i$ and with the
sparsity of $\mathcal{F}_n(k)$ defined as the minimal radius $r$ such that all
balls in $X \times U$ of radius $r$ contain at least $k$ state-action pairs
$(x^l, u^l)$ of the set $\mathcal{F}_n = \{(x^l, u^l, r^l, y^l)\}_{l=1}^{n}$.
18
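The sparsity term can be approximated numerically; below is a minimal sketch, assuming scalar states and actions and approximating the supremum over X × U by a finite grid of query points (both assumptions are ours, not part of the theorem):

```python
def sparsity(samples, queries, k):
    """Approximate sparsity_{F_n}(k): largest distance, over query points of
    X x U, to the k-th nearest sampled state-action pair (x^l, u^l)."""
    worst = 0.0
    for (xq, uq) in queries:
        d = sorted(abs(x - xq) + abs(u - uq) for (x, u) in samples)
        worst = max(worst, d[k - 1])   # distance to the k-th nearest pair
    return worst
```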
Test system
Discrete-time dynamics: $x_{t+1} = \sin\big(\frac{\pi}{2}(x_t + u_t + w_t)\big)$ with
$X = [-1, 1]$, $U = [-\frac{1}{2}, \frac{1}{2}]$, $W = [-\frac{0.1}{2}, \frac{0.1}{2}]$ and $P_w(\cdot)$ a uniform pdf.
Reward observed after each system transition:
$$r_t = \frac{1}{2\pi} e^{-\frac{1}{2}(x_t^2 + u_t^2)} + w_t$$
Performance criterion: Expected sum of the rewards
observed over a 15-length horizon ($T = 15$).
We want to evaluate the performance of the policy
$h(t, x) = -\frac{x}{2}$ when $x_0 = -0.5$.
19
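Assuming the description above is complete, a minimal sketch that instantiates this benchmark, builds a set F_n of one-step transitions sampled uniformly in X × U (our own sampling choice, for illustration only), and compares the two estimators by reusing the monte_carlo_estimate and mfmc_estimate sketches given earlier:

```python
import math, random

def f(x, u, w): return math.sin(math.pi / 2 * (x + u + w))
def rho(x, u, w): return 1 / (2 * math.pi) * math.exp(-0.5 * (x**2 + u**2)) + w
def sample_w(): return random.uniform(-0.05, 0.05)
def h(t, x): return -x / 2

# Build F_n from n one-step transitions sampled uniformly in X x U.
n, T, x0 = 10000, 15, -0.5
F_n = []
for _ in range(n):
    x, u, w = random.uniform(-1, 1), random.uniform(-0.5, 0.5), sample_w()
    F_n.append((x, u, rho(x, u, w), f(x, u, w)))

dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(monte_carlo_estimate(f, rho, sample_w, h, x0, T, nb_traj=10))
print(mfmc_estimate(F_n, h, x0, T, nb_traj=10, dist=dist))
```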
[Figure: simulations for nbTraj = 10 and $|\mathcal{F}_n| = 100, \ldots, 10000$. Left panel: Model-free Monte Carlo Estimator; right panel: Monte Carlo Estimator.]
20
[Figure: simulations for nbTraj $= 1, \ldots, 100$ and $|\mathcal{F}_n| = 10{,}000$. Left panel: Model-free Monte Carlo Estimator; right panel: Monte Carlo Estimator.]
21
Remember what was said about RL + FAs:
1. not well adapted to risk sensitive performance criteria
Suppose the risk sensitive performance criterion:
$$PC^h(x) = \begin{cases} -\infty & \text{if } P(J^h(x) < b) > c \\ J^h(x) & \text{otherwise} \end{cases}$$
where $J^h(x) = \mathbb{E}\big[\sum_{t=0}^{T-1} \rho(x_t, h(t, x_t), w_t)\big]$.
MFMCE adapted to this performance criterion:
Rebuild nbTraj trajectories starting from $x_0$ using the set $\mathcal{F}_n$ as done with the
MFMCE estimator. Let sum rew traj $i$ be the sum of rewards
collected along the $i$th trajectory. Output as estimation of
$PC^h(x_0)$:
$$\begin{cases} -\infty & \text{if } \dfrac{\sum_{i=1}^{nbTraj} I\{\text{sum rew traj } i < b\}}{nbTraj} > c \\[3mm] \dfrac{\sum_{i=1}^{nbTraj} \text{sum rew traj } i}{nbTraj} & \text{otherwise.} \end{cases}$$
22
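A minimal sketch of this risk-sensitive estimate, assuming the per-trajectory sums of rewards of the nbTraj rebuilt trajectories have already been collected (e.g., by returning the list of returns from the mfmc_estimate sketch instead of its average):

```python
def risk_sensitive_mfmc(returns, b, c):
    """returns: list of sums of rewards of the nbTraj rebuilt trajectories."""
    nb_traj = len(returns)
    frac_below_b = sum(1 for s in returns if s < b) / nb_traj
    if frac_below_b > c:
        return float("-inf")            # policy judged too risky
    return sum(returns) / nb_traj       # otherwise: plain MFMCE estimate
```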
MFMCE in the deterministic case
We consider from now on that $x_{t+1} = f(x_t, u_t)$ and
$r_t = \rho(x_t, u_t)$.
A single trajectory is sufficient to compute $J^h(x_0)$ exactly by
Monte Carlo estimation.
Theorem. Let $[(x^{l_t}, u^{l_t}, r^{l_t}, y^{l_t})]_{t=0}^{T-1}$ be the trajectory rebuilt by
the MFMCE when using the distance measure
$\Delta((x, u), (x', u')) = \|x - x'\| + \|u - u'\|$. If $f$, $\rho$ and $h$ are
Lipschitz continuous, we have
$$\big|MFMCE^h(x_0) - J^h(x_0)\big| \leq \sum_{t=0}^{T-1} L_{Q_{T-t}} \Delta\big((y^{l_{t-1}}, h(t, y^{l_{t-1}})), (x^{l_t}, u^{l_t})\big)$$
where $y^{l_{-1}} = x_0$ and $L_{Q_N} = L_\rho\big(\sum_{t=0}^{N-1} [L_f(1 + L_h)]^t\big)$.
23
The previous theorem extends to any rebuilt trajectory:
Theorem. Let $[(x^{l_t}, u^{l_t}, r^{l_t}, y^{l_t})]_{t=0}^{T-1}$ be any rebuilt trajectory. If $f$,
$\rho$ and $h$ are Lipschitz continuous, we have
$$\Big|\sum_{t=0}^{T-1} r^{l_t} - J^h(x_0)\Big| \leq \sum_{t=0}^{T-1} L_{Q_{T-t}} \Delta\big((y^{l_{t-1}}, h(t, y^{l_{t-1}})), (x^{l_t}, u^{l_t})\big)$$
where $\Delta((x, u), (x', u')) = \|x - x'\| + \|u - u'\|$, $y^{l_{-1}} = x_0$ and
$L_{Q_N} = L_\rho\big(\sum_{t=0}^{N-1} [L_f(1 + L_h)]^t\big)$.

[Figure: a trajectory generated by policy $h$ from $x_0$ compared with a rebuilt trajectory ($T = 5$); e.g. $r_2 = \rho(x_2, h(2, x_2))$, $\Delta_2 = L_{Q_3}(\|y^{l_1} - x^{l_2}\| + \|h(2, y^{l_1}) - u^{l_2}\|)$, and $\big|\sum_{t=0}^{4} r_t - \sum_{t=0}^{4} r^{l_t}\big| \leq \sum_{t=0}^{4} \Delta_t$.]
24
Computing a lower bound on a policy
From the previous theorem, we have for any rebuilt trajectory
$[(x^{l_t}, u^{l_t}, r^{l_t}, y^{l_t})]_{t=0}^{T-1}$:
$$J^h(x_0) \geq \sum_{t=0}^{T-1} r^{l_t} - \sum_{t=0}^{T-1} L_{Q_{T-t}} \Delta\big((y^{l_{t-1}}, h(t, y^{l_{t-1}})), (x^{l_t}, u^{l_t})\big)$$
This suggests finding the rebuilt trajectory that maximizes the
right-hand side of the inequality in order to compute a tight lower bound
on the return of $h$. Let:
$$\text{lower bound}(h, x_0, \mathcal{F}_n) \triangleq \max_{[(x^{l_t}, u^{l_t}, r^{l_t}, y^{l_t})]_{t=0}^{T-1}} \sum_{t=0}^{T-1} r^{l_t} - \sum_{t=0}^{T-1} L_{Q_{T-t}} \Delta\big((y^{l_{t-1}}, h(t, y^{l_{t-1}})), (x^{l_t}, u^{l_t})\big)$$
25
A tight upper bound on $J^h(x)$ can be defined and computed in
a similar way:
$$\text{upper bound}(h, x_0, \mathcal{F}_n) \triangleq \min_{[(x^{l_t}, u^{l_t}, r^{l_t}, y^{l_t})]_{t=0}^{T-1}} \sum_{t=0}^{T-1} r^{l_t} + \sum_{t=0}^{T-1} L_{Q_{T-t}} \Delta\big((y^{l_{t-1}}, h(t, y^{l_{t-1}})), (x^{l_t}, u^{l_t})\big)$$
Why are these bounds tight? Because:
$$\exists C \in \mathbb{R}^+ : J^h(x) - \text{lower bound}(h, x, \mathcal{F}_n) \leq C \cdot \text{sparsity of } \mathcal{F}_n(1)$$
$$\text{upper bound}(h, x, \mathcal{F}_n) - J^h(x) \leq C \cdot \text{sparsity of } \mathcal{F}_n(1)$$
The functions lower bound$(h, x_0, \mathcal{F}_n)$ and upper bound$(h, x_0, \mathcal{F}_n)$
can be implemented in a “smart way” by seeing the problem as
one of finding the shortest path in a graph. Complexity is
linear in $T$ and quadratic in $|\mathcal{F}_n|$.
26
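A minimal dynamic-programming sketch of this shortest-path view for lower bound(h, x0, Fn) (illustrative names; L_Q is assumed to be a precomputed list with L_Q[N] equal to $L_{Q_N}$, and transitions are the (x, u, r, y) tuples used before):

```python
def lower_bound(transitions, h, x0, T, dist, L_Q):
    """Max over rebuilt trajectories of sum(r) - sum(L_{Q_{T-t}} * Delta_t),
    computed stage by stage; O(T * |F_n|^2)."""
    # value[l] = best partial score of a rebuilt trajectory ending with piece l
    value = [r - L_Q[T] * dist((x0, h(0, x0)), (x, u))
             for (x, u, r, y) in transitions]
    for t in range(1, T):
        new_value = []
        for (x, u, r, y) in transitions:
            best_prev = max(
                value[j] - L_Q[T - t] * dist((yj, h(t, yj)), (x, u))
                for j, (xj, uj, rj, yj) in enumerate(transitions))
            new_value.append(best_prev + r)
        value = new_value
    return max(value)
```

The upper bound is obtained symmetrically by adding the penalty term and taking minima; keeping the arg max information recovers the “optimal” rebuilt trajectory used in the next slides.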
Remember what was said about RL + FAs:
2. may lead to unsafe policies - poor performance
guarantees
Let $H$ be a set of candidate high-performance policies. To
obtain a policy with good performance guarantees, we suggest
solving the following problem:
$$h^* \in \arg\max_{h \in H} \text{lower bound}(h, x_0, \mathcal{F}_n)$$
If $H$ is the set of open-loop policies, solving the above
optimization problem can be seen as identifying an “optimal”
rebuilt trajectory and outputting as open-loop policy the
sequence of actions taken along this rebuilt trajectory.
27
[Figure: puddle world results. Top: trajectory set covering the puddle; policies computed by FQI with trees and by $h^* \in \arg\max_{h \in H} \text{lower bound}(h, x_0, \mathcal{F}_n)$. Bottom: trajectory set not covering the puddle; same two policies.]
28
Remember what was said about RL + FAs:
3. may make suboptimal use of near-optimal
trajectories
Suppose a deterministic batch mode RL problem and that
$\mathcal{F}_n$ contains the elements of the trajectory:
$$(x_0^{\text{opt. traj.}}, u_0, r_0, x_1, u_1, r_1, x_2, \ldots, x_{T-2}, u_{T-2}, r_{T-2}, x_{T-1}, u_{T-1}, r_{T-1}, x_T)$$
where the $u_t$'s have been selected by an optimal policy.
Let $H$ be the set of open-loop policies. Then the sequence of
actions $h^* \in \arg\max_{h \in H} \text{lower bound}(h, x_0^{\text{opt. traj.}}, \mathcal{F}_n)$ is an optimal
one, whatever the other trajectories in the set.
Actually, the sequence of actions $h^*$ output by this algorithm
tends to be a concatenation of subsequences of actions
belonging to optimal trajectories.
29
Remember what was said about RL + FAs:
4. offer few clues about how to generate new
experiments in an optimal way
The functions lower bound(h, x0, Fn) and
upper bound(h, x0, Fn) can be exploited for generating new
trajectories.
For example, suppose that you can sample the state-action
space several times so as to generate m new elementary
pieces of trajectories to enrich your initial set Fn. We have
proposed a technique to determine m “interesting” sampling
locations based on these bounds.
This technique - which is still preliminary - targets sampling
locations that lead to the largest bound width decrease for
candidate optimal policies.
30
Closure
Rebuilding trajectories: interesting concept for solving many
problems related to batch mode RL.
Actually, the solution output by many RL algorithms (e.g.,
model learning with kNN, fitted Q iteration with trees) can be
characterized by a set of “rebuilt trajectories”.
⇒ I suspect that this concept of rebuilt trajectories could lead
to a general paradigm for analyzing and designing RL
algorithms.
31
Presentation based on (in order of appearance):
“Model-free Monte Carlo-like policy evaluation”. R. Fonteneau, S.A. Murphy, L.
Wehenkel and D. Ernst. In Proceedings of The Thirteenth International
Conference on Artificial Intelligence and Statistics (AISTATS 2010), JMLR
W&CP Volume 9, pages 217-224, Chia Laguna, Sardinia, Italy, May 2010.
“Inferring bounds on the performance of a control policy from a sample of
trajectories”. R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. In
Proceedings of the IEEE International Symposium on Adaptive Dynamic
Programming and Reinforcement Learning (ADPRL-09), pages 117-123.
Nashville, United States, March 30 - April 2, 2009.
“A cautious approach to generalization in reinforcement learning”. R.
Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. In Proceedings of the 2nd
International Conference on Agents and Artificial Intelligence (ICAART 2010),
Valencia, Spain, January 2010. (10 pages).
“Generating informative trajectories by using bounds on the return of control
policies”. R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. In
Proceedings of the Workshop on Active Learning and Experimental Design
2010 (in conjunction with AISTATS 2010), Italy, May 2010. (2 pages).
32
