100 things I know.
Part I of III

Reinaldo Uribe M

Mar. 4, 2012
SMDP Problem Description.

  1. In a Markov Decision Process, a (learning) agent is embedded
  in an environment and takes actions that affect that environment.

       States: $s \in S$.
       Actions: $a \in A_s$; $A = \bigcup_{s \in S} A_s$.
       (Stationary) system dynamics: transition from $s$ to $s'$ after
       taking $a$, with probability $P^a_{ss'} = p(s' \mid s, a)$.
       Rewards: $R^a_{ss'}$. Def. $r(s, a) = E\left[R^a_{ss'} \mid s, a\right]$.


  At time $t$, the agent is in state $s_t$, takes action $a_t$, transitions to
  state $s_{t+1}$ and observes reinforcement $r_{t+1}$ with expectation
  $r(s_t, a_t)$.
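
  To make the objects above concrete, here is a minimal tabular-MDP sketch in
  Python (illustrative only; the dictionary layout and the names `P`, `step`
  and `expected_reward` are assumptions, not from the slides):

```python
import random

# P[s][a] is a list of (next_state, probability, reward) triples,
# encoding the stationary dynamics P^a_{ss'} and the rewards R^a_{ss'}.
P = {
    "s0": {"a0": [("s0", 0.5, 1.0), ("s1", 0.5, 0.0)],
           "a1": [("s1", 1.0, 2.0)]},
    "s1": {"a0": [("s0", 1.0, 0.0)]},
}

def step(s, a):
    """Sample (s', r): transition according to P^a_{ss'} and observe the reward."""
    u, acc = random.random(), 0.0
    for s_next, prob, reward in P[s][a]:
        acc += prob
        if u <= acc:
            return s_next, reward
    return P[s][a][-1][0], P[s][a][-1][2]

def expected_reward(s, a):
    """r(s, a) = E[R^a_{ss'} | s, a]."""
    return sum(prob * reward for _, prob, reward in P[s][a])
```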
SMDP Problem Description.

  2. Policies, value and optimal policies.
      An element π of the policy space Π indicates what action,
      π(s), to take at each state.
      The value of a policy from a given state, $v^\pi(s)$, is the expected
      cumulative reward received starting in $s$ and following $\pi$:

                 $$v^\pi(s) = E\left[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, \pi(s_t)) \,\middle|\, s_0 = s, \pi\right]$$

      $0 < \gamma \le 1$ is a discount factor.
      An optimal policy, $\pi^*$, has maximum value at every state:

                 $$\pi^*(s) \in \operatorname*{argmax}_{\pi \in \Pi} v^\pi(s) \quad \forall s$$
                 $$v^*(s) = v^{\pi^*}(s) \ge v^\pi(s) \quad \forall \pi \in \Pi$$
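
      As a concrete illustration of the definition, a rough Monte Carlo
      estimate of $v^\pi(s)$, reusing the hypothetical `step` sampler from the
      earlier sketch and truncating at a finite horizon (both are assumptions
      made only for illustration):

```python
def mc_value(policy, s0, gamma=1.0, horizon=1000, episodes=500):
    """Monte Carlo estimate of v^pi(s0): average (truncated) discounted return."""
    total = 0.0
    for _ in range(episodes):
        s, ret, discount = s0, 0.0, 1.0
        for _ in range(horizon):
            s_next, r = step(s, policy[s])   # follow pi, sample the dynamics
            ret += discount * r
            discount *= gamma
            s = s_next
        total += ret
    return total / episodes
```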
SMDP Problem Description.



  3. Discount
      Makes the infinite-horizon value bounded when rewards are
      bounded (for γ < 1).
      Ostensibly makes rewards received sooner more desirable than
      those received later.
      But the exponential terms make analysis awkward and harder...
      ... and γ has unexpected, undesirable effects, as shown in
      Uribe et al. 2011.
      Therefore, from here on, γ = 1.
      See section Discount, at the end, for discussion.
SMDP Problem Description.


  4. Average reward models.
          A more natural long term measure of optimality exists
      for such cyclical tasks, based on maximizing the average
      reward per action. Mahadevan 1996

                 $$\rho^\pi(s) = \lim_{n\to\infty} \frac{1}{n}\, E\left[\sum_{t=0}^{n-1} r(s_t, \pi(s_t)) \,\middle|\, s_0 = s, \pi\right]$$

  Optimal policy:

                 $$\rho^*(s) \ge \rho^\pi(s) \quad \forall s,\ \pi \in \Pi$$

  Remark: All actions equally costly.
SMDP Problem Description

  5. Semi-Markov Decision Process: usual approach, transition
  times.
      The agent is in state $s_t$ and takes action $\pi(s_t)$ at decision epoch $t$.
      After an average of $N_t$ units of time, the system evolves to
      state $s_{t+1}$ and the agent observes $r_{t+1}$ with expectation
      $r(s_t, \pi(s_t))$.
      In general, $N_t(s_t, a_t, s_{t+1})$.
      Gain (of a policy at a state):

                 $$\rho^\pi(s) = \lim_{n\to\infty} \frac{E\left[\sum_{t=0}^{n-1} r(s_t, \pi(s_t)) \,\middle|\, s_0 = s, \pi\right]}{E\left[\sum_{t=0}^{n-1} N_t \,\middle|\, s_0 = s, \pi\right]}$$

      Optimizing gain still maximizes average reward per action, but
      actions are no longer equally weighted. (Unless all $N_t = 1$.)
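
      A sketch of estimating this gain empirically from one long run, reusing
      the hypothetical `step` sampler and adding a `duration` function to stand
      in for $N_t(s_t, a_t, s_{t+1})$ (an assumption for illustration; with
      durations of 1 this reduces to the average-reward case):

```python
def estimate_gain(policy, s0, n=100_000, duration=lambda s, a, s_next: 1.0):
    """Empirical gain: accumulated reward / accumulated transition time."""
    s, total_r, total_time = s0, 0.0, 0.0
    for _ in range(n):
        a = policy[s]
        s_next, r = step(s, a)
        total_r += r
        total_time += duration(s, a, s_next)   # N_t(s_t, a_t, s_{t+1})
        s = s_next
    return total_r / total_time
```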
SMDP Problem Description

  6.a Semi-Markov Decision Process: explicit action costs.
      Taking an action takes time, costs money, or consumes
      energy. (Or any combination thereof)
      Either way, a real-valued cost $k_{t+1}$, not necessarily related to the
      process rewards.
      Cost can depend on $a$, $s$ and (less commonly in practice) $s'$.
      Generally, actions have positive cost. We simply require all
      policies to have positive expected cost.
      Without loss of generality, the magnitude of the smallest nonzero
      average action cost is forced to be unity:

                 $$|k(a, s)| \ge 1 \quad \forall\, k(a, s) \ne 0$$
SMDP Problem Description

  6.b Semi-Markov Decision Process: explicit action costs.
      Cost of a policy from a state:

                 $$c^\pi(s) = \lim_{n\to\infty} E\left[\sum_{t=0}^{n-1} k(s_t, \pi(s_t)) \,\middle|\, s_0 = s, \pi\right]$$

      So $c^\pi(s) > 0 \quad \forall \pi \in \Pi,\ s$.

      $N_t = k(s_t, \pi(s_t))$. Only their definition/interpretation
      changes.
      Gain:

                 $$\rho^\pi(s) = \frac{v^\pi(s)/n}{c^\pi(s)/n}$$
SMDP Problem Description

  7. Optimality of π ∗ :

  $\pi^* \in \Pi$ with gain

     $$\rho^{\pi^*}(s) = \rho^*(s) = \lim_{n\to\infty} \frac{E\left[\sum_{t=0}^{n-1} r(s_t, \pi^*(s_t)) \,\middle|\, s_0 = s, \pi^*\right]}{E\left[\sum_{t=0}^{n-1} k(s_t, \pi^*(s_t)) \,\middle|\, s_0 = s, \pi^*\right]} = \frac{v^{\pi^*}(s)}{c^{\pi^*}(s)}$$

  is optimal if

                 $$\rho^*(s) \ge \rho^\pi(s) \quad \forall s,\ \pi \in \Pi,$$

  as it was in ARRL.

  Notice that the optimal policy does not necessarily maximize $v^\pi$ or
  minimize $c^\pi$; it only optimizes their ratio.
SMDP Problem Description

  8. Policies in ARRL and SMDPs are evaluated using the
  average-adjusted sum of rewards:
           $$H^\pi(s) = \lim_{n\to\infty} E\left[\sum_{t=0}^{n-1} \bigl(r(s_t, \pi(s_t)) - \rho^\pi(s)\bigr) \,\middle|\, s_0 = s, \pi\right]$$

  Puterman 1994, Abounadi et al. 2001, Ghavamzadeh & Mahadevan 2007




         This signals the existence of bias optimal policies that, while
         gain optimal, also maximize the transitory rewards received
         before entering recurrence.
         We are interested in gain-optimal policies only.
         (It is hard enough...)
SMDP Problem Description


  9. The Unichain Property
      A process is unichain if every policy has a single recurrent
      class.
      I.e., if for every policy, all recurrent states communicate with
      each other.
      All methods rely on the unichain property, because, if it holds:
      $\rho^\pi(s)$ is constant for all $s$: $\rho^\pi(s) = \rho^\pi$.
      Gain and value expressions simplify. (See next.)
      However, deciding if a problem is unichain is NP-Hard.
      Tsitsiklis 2003
SMDP Problem Description

  10. Unichain property under recurrent states.   Feinberg & Yang, 2010

      A state is recurrent if it belongs to a recurrent class of every
      policy.
      A recurrent state can be found, or proven not to exist, in
      polynomial time.
      If a recurrent state exists, determining whether the unichain
      property holds can be done in polynomial time.
      (We are not going to actually do it, since it requires knowledge
      of the system dynamics, but it is good to know!)
      Recurrent states seem useful. In fact, the existence of a recurrent
      state is more critical to our purposes than the unichain
      property.
      Both will be required in principle for our methods/analysis,
      until their necessity is further qualified in section Unichain
      Considerations below.
Intermission
Generic Learning Algorithm
   11. The relevant expressions under our assumptions simplify, losing
   dependence on $s_0$.

   The following Bellman equation holds for average-adjusted state
   value:

              $$H^\pi(s) = r(s, \pi(s)) - k(s, \pi(s))\,\rho^\pi + E_\pi\left[H^\pi(s')\right] \qquad (1)$$

   Ghavamzadeh & Mahadevan 2007



   Reinforcement Learning methods exploit Eq. (1), running the
   process and replacing:
         state values with state-action pair values,
         expected rewards and costs with observed ones,
         $\rho^\pi$ with an estimate,
         $H^\pi(s')$ with its current estimate.
Generic Learning Algorithm



   12.

   Algorithm 1 Generic SMDP solver
     Initialize
     repeat forever
         Act
         Do RL to find value of current π   Usually 1-step Q-learning
         Update ρ.
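
   A rough Python skeleton of Algorithm 1. The environment interface
   (`env.reset`, `env.step` returning reward and cost) and the callback names
   are hypothetical placeholders, not part of the slides:

```python
def generic_smdp_solver(env, policy, q_update, rho_update, steps=100_000):
    """Skeleton of Algorithm 1: act, do one RL value step, then update rho."""
    rho = 0.0
    s = env.reset()
    for _ in range(steps):
        a = policy(s)                                  # Act
        s_next, r, cost = env.step(a)                  # observe reward and action cost
        q_update(s, a, r, cost, s_next, rho)           # RL step (usually 1-step Q-learning)
        rho = rho_update(rho, s, a, r, cost, s_next)   # gain update (see tables 14.a/14.b)
        s = s_next
    return rho
```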
Generic Learning Algorithm

   13.
         Model-based state value update:

                    $$H^{t+1}(s_t) \leftarrow \max_a \left[ r(s_t, a) + E_a\!\left[H^t(s_{t+1})\right] \right]$$

             $E_a$ emphasizes that the expected value of the next state
             depends on the action chosen/taken.


         Model-free state-action pair value update:

                $$Q^{t+1}(s_t, a_t) \leftarrow (1 - \gamma_t)\, Q^t(s_t, a_t) + \gamma_t \left[ r_{t+1} - \rho^t c_{t+1} + \max_a Q^t(s_{t+1}, a) \right]$$

             In ARRL, $c_t = 1\ \forall t$.
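
         The model-free update, transcribed literally into Python over a
         tabular Q (here $\gamma_t$ plays the role of a step size, as on the
         slide; the `Q` dictionary and `ACTIONS` list are hypothetical):

```python
from collections import defaultdict

Q = defaultdict(float)      # Q[(state, action)] -> estimated value
ACTIONS = ["a0", "a1"]      # hypothetical action set

def q_update(s, a, r, cost, s_next, rho, step_size=0.1):
    """Q <- (1 - gamma_t) Q + gamma_t [ r - rho * c + max_a' Q(s', a') ]."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    target = r - rho * cost + best_next
    Q[(s, a)] = (1.0 - step_size) * Q[(s, a)] + step_size * target
```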
Generic Learning Algorithm
    14.a Table of algorithms. ARRL

     Algorithm (reference)                    Gain update

     AAC                                      $\rho^{t+1} \leftarrow \dfrac{\sum_{i=0}^{t} r(s_i, \pi^i(s_i))}{t+1}$
     (Jalali and Ferguson 1989)

     R-Learning                               $\rho^{t+1} \leftarrow (1-\alpha)\rho^t + \alpha\left[ r_{t+1} + \max_a Q^t(s_{t+1}, a) - \max_a Q^t(s_t, a) \right]$
     (Schwartz 1993)

     H-Learning                               $\rho^{t+1} \leftarrow (1-\alpha_t)\rho^t + \alpha_t\left[ r_{t+1} - H^t(s_t) + H^t(s_{t+1}) \right]$,
     (Tadepalli and Ok 1998)                  with $\alpha_{t+1} \leftarrow \dfrac{\alpha_t}{\alpha_t + 1}$

     SSP Q-Learning                           $\rho^{t+1} \leftarrow \rho^t + \alpha_t \min_a Q^t(\hat{s}, a)$
     (Abounadi et al. 2001)

     HAR                                      $\rho^{t+1} \leftarrow \dfrac{\sum_{i=0}^{t} r(s_i, \pi^i(s_i))}{t+1}$
     (Ghavamzadeh and Mahadevan 2007)
Generic Learning Algorithm


    14.b Table of algorithms. SMDPRL

     Algorithm (reference)                    Gain update

     SMART                                    $\rho^{t+1} \leftarrow \dfrac{\sum_{i=0}^{t} r(s_i, \pi^i(s_i))}{\sum_{i=0}^{t} c(s_i, \pi^i(s_i))}$
     (Das et al. 1999)

     MAX-Q                                    (same update)
     (Ghavamzadeh and Mahadevan 2001)
SSP Q-Learning
  15. Stochastic Shortest Path Q-Learning
      Most interesting. ARRL
      If unichain and there exists a recurrent state $\hat{s}$ (Assumption 2.1):

               SSP Q-learning is based on the observation that
           the average cost under any stationary policy is
           simply the ratio of expected total cost and expected
           time between two successive visits to the reference
           state [$\hat{s}$].

      Thus, they propose (after Bertsekas 1998) making the process
      episodic, splitting $\hat{s}$ into the (unique) initial and terminal
      states.
      If the Assumption holds, termination has probability 1.
      Only the value/cost of the initial state are important.
      The optimal solution “can be shown to happen” when $H(\hat{s}) = 0$.
      (See next section.)
SSP Q-Learning
  16. SSPQ ρ update.

                 $$\rho^{t+1} \leftarrow \rho^t + \alpha_t \min_a Q^t(\hat{s}, a),$$

  where

                 $$\sum_t \alpha_t \to \infty; \qquad \sum_t \alpha_t^2 < \infty.$$

  But it is hard to prove boundedness of $\{\rho^t\}$, so instead they suggest

                 $$\rho^{t+1} \leftarrow \Gamma\!\left( \rho^t + \alpha_t \min_a Q^t(\hat{s}, a) \right),$$

  with $\Gamma(\cdot)$ a projection onto $[-K, K]$ and $\rho^* \in (-K, K)$.
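
  A minimal sketch of this projected update over the tabular Q from the earlier
  sketch (the bound `K` and the step size are placeholder values):

```python
def sspq_rho_update(rho, Q, s_hat, actions, alpha, K=100.0):
    """rho <- Gamma( rho + alpha * min_a Q(s_hat, a) ), Gamma = projection onto [-K, K]."""
    candidate = rho + alpha * min(Q[(s_hat, a)] for a in actions)
    return max(-K, min(K, candidate))
```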
A Critique


   17. Complexity.
       Unknown.
       Whereas RL has PAC guarantees.


   18. Convergence.
       Not always guaranteed (e.g., R-Learning).
       When proven, it is asymptotic:
       convergence to the optimal policy/value if all state-action
       pairs are visited infinitely often.
       Usually proven under decaying learning rates, which make
       learning even slower.
A Critique


   19. Convergence of ρ updates.
             ... while the second “slow” iteration gradually guides
        [$\rho^t$] to the desired value.
           Abounadi et al. 2001




        It is the slow one!
        It must be, so that the current policy's value is approximated
        well enough for improvement.
        It is initially biased towards the (likely poor) returns observed
        at the start.
        A long time must probably pass following the optimal policy
        for ρ to converge to its actual value.
Our method

  20.
        Favours an understanding of the −ρ term, either alone in
        ARRL or as a factor of costs in SMDPs, not so much as an
        approximation to average rewards but as a punishment for
        taking actions, which must be made “worth it” by the rewards.
        I.e. nudging.
        Exploits the splitting of SSP Q-Learning, in order to focus on
        the value/cost of a single state, $\hat{s}$.
        Thus, it also assumes the existence of a recurrent state, and
        that the unichain property holds. (For the time being.)

        Attempts to ensure an accelerated convergence of ρ updates,
        in a context in which certain, efficient convergence can be
        easily introduced.
Intermission
Fractional programming

   21. So, ‘Bertsekas splitting’ of $\hat{s}$ into initial $s_I$ and terminal $s_T$.
   Then, from $s_I$:
        Any policy $\pi \in \Pi$ has an expected return until termination,
        $v^\pi(s_I)$,
        and an expected cost until termination, $c^\pi(s_I)$.
        The ARRL problem, then, becomes $\max_{\pi\in\Pi} \dfrac{v^\pi(s_I)}{c^\pi(s_I)}$.


   Lemma

        $$\operatorname*{argmax}_{\pi\in\Pi} \frac{v^\pi(s_I)}{c^\pi(s_I)} = \operatorname*{argmax}_{\pi\in\Pi}\; v^\pi(s_I) + \rho^*\bigl(-c^\pi(s_I)\bigr)$$

   for $\rho^*$ such that $\max_{\pi\in\Pi}\; v^\pi(s_I) + \rho^*\bigl(-c^\pi(s_I)\bigr) = 0$.
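
   A sketch of the reasoning behind the lemma, in the fractional-programming
   (Dinkelbach-style) spirit the slide appeals to, using the fact established
   earlier that $c^\pi(s_I) > 0$ for every policy:

   Let $\rho^* = \max_{\pi\in\Pi} \dfrac{v^\pi(s_I)}{c^\pi(s_I)}$. Then, for every $\pi$,

        $$v^\pi(s_I) - \rho^*\, c^\pi(s_I) \le 0,$$

   with equality exactly for the maximizers of the ratio (divide through by
   $c^\pi(s_I) > 0$). Hence $\max_{\pi\in\Pi}\bigl[v^\pi(s_I) - \rho^*\, c^\pi(s_I)\bigr] = 0$,
   and the two argmax sets coincide.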
Fractional programming




    22. Implications.
        Assume the gain $\rho^*$ is known.
        Then, the nonlinear SMDP problem reduces to RL,
        which is better studied, well understood, simpler, and for
        which sophisticated, efficient algorithms exist.
        We only need to use the reward $(r - \rho^* c)(s, a, s')$.
        Problem: $\rho^*$ is usually not known.
Nudging

  23. Idea:
      Separate reinforcement learning (leave it to the pros) from
      updating ρ.
      Thus, value-learning becomes method-free.
      We can use any old RL method.

      Gain update is actually the most critical step.
      Punish too little, and the agent will not care about hurrying,
      only collecting reward.
      Punish too much, and the agent will only care about finishing
      already.

      In that sense, (r − ρc) is like picking fruit inside a maze.
Nudging



  24. The problem reduces to a sequence of RL problems.
      For a sequence of (temporarily fixed) $\rho^k$.
      Some of the methods already provide an indication of the sign
      of ρ updates.
      We just don’t hurry to update ρ after taking a single action.

      Plus, the method comes armed with a termination condition:
      as soon as $H^k(s_I) = 0$, then $\pi^k = \pi^*$.
Nudging



  25.

  Algorithm 2 Nudged SSP Learning
    Initialize
    repeat
        Set reward scheme to (r − cρ)
        Solve by any RL method.
        Update ρ                        From current H π (sI )
    until H π (sI ) = 0
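
  A rough skeleton of Algorithm 2 in Python. `rl_solve` is assumed to solve the
  episodic task under the nudged reward $(r - \rho c)$ with any RL method and to
  return the learned policy together with its $H^\pi(s_I)$; `update_rho` stands
  in for the gain update developed in the following sections (all names are
  placeholders):

```python
def nudged_ssp_learning(env, rl_solve, update_rho, rho0=0.0, tol=1e-6, max_rounds=100):
    """Skeleton of Algorithm 2: alternate a full RL solve with a rho update."""
    rho, policy = rho0, None
    for _ in range(max_rounds):
        # Set the reward scheme to (r - rho * c) and solve by any RL method.
        policy, H_sI = rl_solve(env, rho)
        if abs(H_sI) <= tol:          # termination condition: H^pi(s_I) = 0
            break
        rho = update_rho(rho, H_sI)   # update rho from the current H^pi(s_I)
    return policy, rho
```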
w − l space


    26. D
    We will propose a method for updating ρ and show that it
    minimizes uncertainty between steps. For that, we will use a
    transformation that extends the work of our CIG paper. But first:

    Let D be a bound on the magnitude of the unnudged reward:

        $$D \ge \sup_{\pi\in\Pi}\, \{H^\pi(s_I) \mid \rho = 0\}$$
        $$-D \le \inf_{\pi\in\Pi}\, \{H^\pi(s_I) \mid \rho = 0\}$$

    Observe that the interval $(-D, D)$ bounds $\rho^*$, but the upper bound is
    tight only in ARRL, and only if all of the D reward is received in a single
    step from $s_I$.
w − l space



    27. All policies $\pi \in \Pi$, from (that is, at) $s_I$, have:
        real expected value $|v^\pi(s_I)| \le D$,
        positive cost $c^\pi(s_I) \ge 1$.



    28.a $w$–$l$ transformation:

        $$w = \frac{D + v^\pi(s_I)}{2\,c^\pi(s_I)} \qquad l = \frac{D - v^\pi(s_I)}{2\,c^\pi(s_I)}$$
w − l space

   28.b The w–l plane.

   [Figure: the (w, l) plane; both the w-axis and the l-axis run from 0 to D.]
w − l space

   29. Properties:
       $w, l \ge 0$
       $w, l \le D$
       $w + l = \dfrac{D}{c^\pi(s_I)} \le D$
       $v^\pi(s_I) = D \;\Rightarrow\; l = 0$
       $v^\pi(s_I) = -D \;\Rightarrow\; w = 0$
       $\lim_{c^\pi(s_I)\to\infty} (w, l) = (0, 0)$


   30. Inverse transformation:

       $$v^\pi(s_I) = D\,\frac{w^\pi - l^\pi}{w^\pi + l^\pi} \qquad c^\pi(s_I) = D\,\frac{1}{w^\pi + l^\pi}$$
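
   A small numerical check of the transformation and its inverse, for a
   hypothetical policy with $D = 10$, $v^\pi(s_I) = 4$ and $c^\pi(s_I) = 2$:

```python
D = 10.0
v, c = 4.0, 2.0                    # hypothetical v^pi(s_I), c^pi(s_I)

# Forward transformation.
w = (D + v) / (2.0 * c)            # 3.5
l = (D - v) / (2.0 * c)            # 1.5

# Inverse transformation recovers (v, c).
v_back = D * (w - l) / (w + l)     # 4.0
c_back = D / (w + l)               # 2.0

assert abs(v_back - v) < 1e-12 and abs(c_back - c) < 1e-12
assert 0.0 <= w <= D and 0.0 <= l <= D and w + l == D / c
```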
Intermission
w − l space

 31. Value.

        $$v^\pi(s_I) = D\,\frac{w^\pi - l^\pi}{w^\pi + l^\pi}$$

     Level sets are lines.
     w-axis: expected value D.
     l-axis: expected value −D.
     w = l: expected value 0.
     Optimization → fanning from l = 0.
     Not convex, but splits the space.
     So optimizers are vertices of the convex hull of policies.

 [Figure: value level sets in the (w, l) plane, labeled −D, −0.5D, 0, 0.5D and D.]
w − l space

 32. Cost.

        $$c^\pi(s_I) = D\,\frac{1}{w^\pi + l^\pi}$$

     Level sets are lines with slope −1.
     w + l = D: expected cost 1.
     Cost decreases with distance to the origin.
     Cost optimizers (both max and min) are also vertices.

 [Figure: cost level sets in the (w, l) plane, labeled 1, 2, 4 and 8.]
w − l space




    33. The origin.
        Policies of infinite expected cost.
        They mean the problem is not unichain or $s_I$ is not recurrent,
        and they are troublesome for optimizing value.

        So, under our assumptions, the origin does not belong to the
        space.
Nudged value in the w − l space
   34. SMDP problem in w − l.
        $$\operatorname*{argmax}_{\pi\in\Pi} \frac{v^\pi(s_I)}{c^\pi(s_I)} = \operatorname*{argmax}_{\pi\in\Pi} \frac{D\,\frac{w^\pi - l^\pi}{w^\pi + l^\pi}}{D\,\frac{1}{w^\pi + l^\pi}} = \operatorname*{argmax}_{\pi\in\Pi}\; w^\pi - l^\pi$$


   [Figure: a cloud of policies in the (w, l) plane, with level sets of w − l labeled −D/2, 0 and D/2.]
Nudged value in the w − l space



   35. Nudged value, for some ρ.

                 $$\operatorname*{argmax}_{\pi\in\Pi}\; v^\pi(s_I) - \rho\, c^\pi(s_I)
                 = \operatorname*{argmax}_{\pi\in\Pi}\; D\,\frac{w^\pi - l^\pi}{w^\pi + l^\pi} - \rho\, D\,\frac{1}{w^\pi + l^\pi}
                 = \operatorname*{argmax}_{\pi\in\Pi}\; D\,\frac{w^\pi - l^\pi - \rho}{w^\pi + l^\pi}$$
Nudged value in the w − l space



    36. Nudged value level sets.
    (For a set ρ and all policies $\hat\pi$ with a given $\hat h$:)

                 $$l^{\hat\pi} = \frac{D - \hat h}{D + \hat h}\, w^{\hat\pi} - \frac{D}{D + \hat h}\,\rho$$

    Lines!

    Slope depends only on $\hat h$ (i.e., not on ρ).
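
    The algebra behind that line, spelled out (just rearranging the nudged
    value from item 35 for a fixed level $\hat h$):

                 $$\hat h = D\,\frac{w^{\hat\pi} - l^{\hat\pi} - \rho}{w^{\hat\pi} + l^{\hat\pi}}
                 \;\Longleftrightarrow\;
                 \hat h\,(w^{\hat\pi} + l^{\hat\pi}) = D\,(w^{\hat\pi} - l^{\hat\pi} - \rho)
                 \;\Longleftrightarrow\;
                 l^{\hat\pi} = \frac{D - \hat h}{D + \hat h}\, w^{\hat\pi} - \frac{D}{D + \hat h}\,\rho.$$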
Nudged value in the w − l space




    37. Pencil of lines.
    For a set ρ, any two level-set lines, for $\hat h$ and $\check h$, have intersection

                 $$\left( \frac{\rho}{2},\; -\frac{\rho}{2} \right)$$

    Pencil of lines with that vertex.
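
    A quick check that this point lies on every level-set line: substituting
    $w = \rho/2$ into the line for an arbitrary $\hat h$ gives

                 $$\frac{D - \hat h}{D + \hat h}\cdot\frac{\rho}{2} - \frac{D}{D + \hat h}\,\rho
                 = \frac{(D - \hat h)\,\rho - 2D\rho}{2\,(D + \hat h)}
                 = -\frac{\rho}{2},$$

    independently of $\hat h$.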
Nudged value in the w − l space


      38. Zero nudged value.

                 $$l^{\hat\pi} = \frac{D - 0}{D + 0}\, w^{\hat\pi} - \frac{D}{D + 0}\,\rho
                 \qquad\Rightarrow\qquad l^{\hat\pi} = w^{\hat\pi} - \rho$$

          Unity slope.
          Negative values above, positive below.
          If the whole cloud is above w = l, some negative nudging is
          the optimizer. (Encouragement)

      [Figure: the zero-level line of slope 1, passing through (ρ/2, −ρ/2), in the (w, l) plane.]
Nudged value in the w − l space


   [Figure-only slide: the policy cloud in the (w, l) plane.]
Nudged value in the w − l space




   40. Initial bounds on ρ∗ .

                                −D ≤ ρ∗ ≤ D

   (Duh! but nice geometry)
Enclosing triangle

 41. Definition.
   Triangle ABC such that:
       ABC ⊂ w − l space.
       (w∗ , l∗ ) ∈ ABC.
       Slope of AB segment, unity.
       wA ≤ wB
       wA ≤ wC

 42. Nomenclature
   [Figure: triangle ABC in the w − l plane; mα (= 1) is the slope of segment AB,
   mβ the slope of the side through B and C, mγ the slope of the side through A
   and C; the w axis is bounded by D.]

Enclosing Triangle


   43. (New) bounds on ρ.
   Def. Slope-mζ projection of a point X(wX , lX ) onto the line w = −l:

                           Xζ = (mζ wX − lX ) / (mζ + 1)

   Bounds:

                            Aα = Bα ≤ ρ∗ /2 ≤ Cα
                  wA − lA = wB − lB ≤ ρ∗ ≤ wC − lC


   44. So, collinearity (of A, B and C) implies optimality.
   (Even if there are multiple optima)
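
To make the geometry concrete, here is a minimal Python sketch of the slope-mζ projection (item 43) and the bound/collinearity check (items 43-44); the triangle vertices are made-up numbers, not results from the slides.

# Minimal sketch of the slope-m projection (item 43) and the resulting
# bounds on rho* (items 43-44). Triangle vertices are made-up numbers.

def proj(w, l, m):
    """w-coordinate where the slope-m line through (w, l) meets l = -w."""
    return (m * w - l) / (m + 1.0)

# Hypothetical enclosing-triangle vertices in the w-l plane.
A = (0.2, 0.15)   # w_A <= w_B, w_A <= w_C
B = (0.7, 0.65)   # segment AB has unity slope
C = (0.9, 0.30)

assert abs(proj(*A, 1.0) - proj(*B, 1.0)) < 1e-12   # A_alpha = B_alpha

lo = A[0] - A[1]   # w_A - l_A
hi = C[0] - C[1]   # w_C - l_C
print(f"{lo:.3f} <= rho* <= {hi:.3f}")

# Item 44: collinear A, B, C collapse the bounds, so the optimum is found.
print("collinear, optimal" if abs(hi - lo) < 1e-12 else "keep iterating")
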
Right and left uncertainty



   45. Iterating inside an enclosing triangle.
     1   Set ρ̂ to some value within the bounds (wA − lA ≤ ρ̂ ≤ wC − lC ).
     2   Solve the problem with rewards (r − ρ̂c).


   46. Optimality.
   If h(sI ) = 0:
   Done!
   The optimal policy found for the current problem solves the SMDP, and
   the termination condition has been met.
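
A schematic sketch of this loop, item 45's two steps plus item 46's check; solve_rl and pick_rho are hypothetical placeholders for a black-box RL solver and the ρ̂ update, not part of the slides.

# Schematic nudging loop for items 45-46. solve_rl() stands in for any RL
# method run on the reshaped rewards (r - rho_hat * c); it is assumed to
# return the learned policy and h(s_I), the nudged value of the split
# initial state.

def pick_rho(lo, hi):
    # Placeholder: midpoint. The deck replaces this with optimal nudging.
    return 0.5 * (lo + hi)

def nudged_iteration(solve_rl, lo, hi, tol=1e-6):
    """lo, hi: current bounds, w_A - l_A <= rho* <= w_C - l_C."""
    policy = None
    while hi - lo > tol:
        rho_hat = pick_rho(lo, hi)
        policy, h_sI = solve_rl(rho_hat)   # RL on rewards (r - rho_hat * c)
        if abs(h_sI) < tol:                # item 46: h(s_I) = 0, done
            return policy, rho_hat
        if h_sI > 0:                       # item 47.a: rho* lies to the right
            lo = rho_hat
        else:                              # item 48.a: rho* lies to the left
            hi = rho_hat
    return policy, 0.5 * (lo + hi)
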
Right and left uncertainty

   47.a If h(sI ) > 0
   Right uncertainty.


                  [Figure: right uncertainty in the w − l plane. Points S and T lie on
                  the enclosing triangle ABC, and the label y1 marks the right
                  uncertainty, y1 = Sα − Tα.]
Right and left uncertainty

   47.b Right uncertainty.
   Derivation:

          y1 = Sα − Tα
             = (1/2) ((1 − mβ )wS − (1 − mγ )wT − (mγ − mβ )wC )

   Maximization:

    y1∗ = ( 2s√(ab(ρ/2 − Cβ )(ρ/2 − Cγ )) + a(ρ/2 − Cβ ) + b(ρ/2 − Cγ ) ) / c

    s = sign(mβ − mγ )
    a = (1 − mγ )(mβ + 1)
    b = (1 − mβ )(mγ + 1)
    c = (b − a) = 2(mγ − mβ )
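
The same maximization as a small Python helper. The radical over ab(ρ/2 − Cβ)(ρ/2 − Cγ) is reconstructed from the slide layout (it is consistent with the conic of item 57 below), and the numbers in the example call are illustrative only.

import math

# Maximal right uncertainty of 47.b as a function of rho_hat and the
# triangle data. The square-root term is reconstructed from the slides.

def max_right_uncertainty(rho, m_beta, m_gamma, C_beta, C_gamma):
    s = math.copysign(1.0, m_beta - m_gamma)
    a = (1 - m_gamma) * (m_beta + 1)
    b = (1 - m_beta) * (m_gamma + 1)
    c = b - a                                    # = 2 (m_gamma - m_beta)
    rad = a * b * (rho / 2 - C_beta) * (rho / 2 - C_gamma)
    return (2 * s * math.sqrt(max(rad, 0.0))     # clamp guards roundoff only
            + a * (rho / 2 - C_beta) + b * (rho / 2 - C_gamma)) / c

# Illustrative inputs only (not a geometrically consistent triangle):
print(max_right_uncertainty(rho=0.3, m_beta=-0.5, m_gamma=0.2,
                            C_beta=0.4, C_gamma=0.35))
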
Right and left uncertainty

   48.a If h(sI ) < 0
   Left uncertainty.


                  [Figure: left uncertainty in the w − l plane. Points R and Q lie on
                  the enclosing triangle ABC near A, and the label y2 marks the left
                  uncertainty, y2 = Rα − Qα (cf. item 48).]
Right and left uncertainty




   48.b Left uncertainty.
   Its maximum is where expected.
   (When the value level set crosses B)

                     y2 = Rα − Qα
                     y2∗ = ((ρ/2 − Bα ) / (ρ/2 − Bγ )) (Bα − Bγ )
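
And the corresponding helper for the left uncertainty, following the printed formula of item 48; the projections passed in are made-up values, and ρ/2 ≠ Bγ is assumed.

# Maximal left uncertainty of item 48, as printed on the slide.

def max_left_uncertainty(rho, B_alpha, B_gamma):
    return (rho / 2 - B_alpha) / (rho / 2 - B_gamma) * (B_alpha - B_gamma)

# Illustrative (made-up) projections of B:
print(max_left_uncertainty(rho=0.3, B_alpha=0.05, B_gamma=0.25))
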
Right and left uncertainty




   49. Fundamental lemma.

   As ρ̂ grows, the maximal right uncertainty is monotonically decreasing and
   the maximal left uncertainty is monotonically increasing; both are
   non-negative with minimum 0.
Optimal nudging

   50.
         Find ρ̂ (between the bounds, obviously) such that the maximum
         resulting uncertainty, either left or right, is minimized.
         Since both are monotonic and have minimum 0, the min max is
         attained when the maximal left and right uncertainties are equal.
         Remark: bear in mind this (↑) is the worst case. It can
         terminate immediately.
         ρ̂ estimates the gain, but it is neither biased towards observations
         (initial or otherwise) nor slowly updated.

         Optimal nudging is “optimal” in the sense that with this
         update the maximum uncertainty range of resulting ρ values is
         minimum.
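
A minimal sketch of this min-max choice: by item 49 the two maximal uncertainties are monotone with minimum 0, so the balancing ρ̂ can be found by bisection. The right(·) and left(·) functions below are simple monotone stand-ins, not the actual formulas of 47.b and 48.

# Choose rho_hat where the maximal right uncertainty (decreasing in
# rho_hat) meets the maximal left uncertainty (increasing), by bisection.

def balance_rho(right, left, lo, hi, iters=60):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if right(mid) > left(mid):   # too much right uncertainty: raise rho
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

right = lambda r: max(0.8 - r, 0.0)   # decreasing, minimum 0 (item 49)
left  = lambda r: max(r - 0.1, 0.0)   # increasing, minimum 0
print(balance_rho(right, left, lo=0.1, hi=0.8))   # about 0.45
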
Optimal nudging




   51. Each iteration maps the current enclosing triangle into a new
   enclosing triangle.


   52. Strictly smaller (both in area and, importantly, in resulting
   uncertainty).
Obtaining an initial enclosing triangle


   53. Setting ρ(0) = 0 and solving.
       Maximizes reward irrespective of cost. (The usual RL problem.)
       Can be interpreted geometrically as fanning from the w axis
       to find the policy whose w, l coordinates subtend the
       smallest angle.
       The resulting optimizer maps to a point somewhere along a
       line with intercept at the origin.

   54. The optimum of the SMDP problem is on or beyond that line, not
   behind it.
   Otherwise, contradiction.
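
A small sketch of iteration 0: solve the plain (ρ̂ = 0) RL problem, then map the resulting value and cost at sI into the w − l plane with the transformation used throughout the deck; v0, c0 and D here are assumed numbers, not computed results.

# Iteration 0 (item 53): solve with rho_hat = 0, then map the value v0
# and cost c0 of the returned policy at s_I into the w-l plane.

D = 10.0               # bound on the magnitude of unnudged value at s_I
v0, c0 = 6.0, 4.0      # assumed outputs of the rho_hat = 0 solve

w0 = (D + v0) / (2 * c0)   # w-l transformation used throughout the deck
l0 = (D - v0) / (2 * c0)

# The rho_hat = 0 optimizer lies on the ray through the origin and (w0, l0);
# by item 54 the SMDP optimum is not behind that ray, and w0 - l0 (this
# policy's gain) is already a valid lower bound on rho*.
print(w0, l0, w0 - l0)
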
Obtaining an initial enclosing triangle




   56. Either way, after iteration 0, the uncertainty is reduced by at least half.
Conic intersection


   57. Maximum right uncertainty is a conic!

                   ⎡     c         −(b + a)       −Cα c      ⎤ ⎡  r  ⎤
    [ r  y1∗  1 ]  ⎢  −(b + a)         c        (Cβ a + Cγ b) ⎥ ⎢ y1∗ ⎥ = 0
                   ⎣   −Cα c      (Cβ a + Cγ b)     Cα² c     ⎦ ⎣  1  ⎦


   58. Maximum left uncertainty is a conic!

                   ⎡     0            1           Bγ − Cγ     ⎤ ⎡  r  ⎤
    [ r  y2∗  1 ]  ⎢     1            0             −Bγ       ⎥ ⎢ y2∗ ⎥ = 0
                   ⎣  Bγ − Cγ        −Bγ      −2Bα (Bγ − Cγ ) ⎦ ⎣  1  ⎦
Conic intersection




   59. Intersecting them is easy.

   60. And cheap. (Requires in principle constant time and simple
   matrix operations)

   61. So plug it in!
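
As an illustration of how cheap the intersection is, a minimal sympy sketch that intersects two conics written as symmetric 3 × 3 matrices; the matrices below are simple stand-ins (a parabola and a circle), not the uncertainty conics of items 57-58.

import sympy as sp

# Intersect two conics x^T M x = 0 with x = (r, y, 1). M1 and M2 are
# stand-in matrices (a parabola and a circle), not the ones of items 57-58.
r, y = sp.symbols('r y')
x = sp.Matrix([r, y, 1])

M1 = sp.Matrix([[1, 0, 0],
                [0, 0, sp.Rational(-1, 2)],
                [0, sp.Rational(-1, 2), 0]])   # x^T M1 x = r**2 - y
M2 = sp.diag(1, 1, -4)                         # x^T M2 x = r**2 + y**2 - 4

q1 = sp.expand((x.T * M1 * x)[0])
q2 = sp.expand((x.T * M2 * x)[0])

for sol in sp.solve([q1, q2], [r, y], dict=True):
    print(sol[r].evalf(4), sol[y].evalf(4))    # two real and two complex points
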
Termination Criteria



   62.
         We want to reduce uncertainty to ε.
         Because it is a good idea. (Right?)
         So there’s your termination condition right there.

   63. Alternatively, stop when |h(k) (sI )| < ε.

   64. In any case, if the same policy remains optimal and the sign of
   its nudged value changes between iterations, stop:
   It is the optimal solution of the SMDP problem.
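
A compact sketch combining the three stopping rules of items 62-64; every argument is a placeholder supplied by the surrounding nudging loop.

# Stopping rules of items 62-64.

def should_stop(lo, hi, h_sI, prev_h_sI, same_policy, eps):
    if hi - lo <= eps:                        # item 62: uncertainty down to eps
        return True
    if abs(h_sI) < eps:                       # item 63: nudged value of s_I ~ 0
        return True
    if same_policy and h_sI * prev_h_sI < 0:  # item 64: same policy, sign flip
        return True
    return False
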
Finding D



  65. A quick and dirty method:
    1   Maximize cost (or episode length, if all costs equal 1).
    2   Multiply by the largest unsigned reinforcement.
  66. So, at most one more RL problem.

  67. If D is estimated too large: wider initial bounds and longer
  computation, but OK.
  68. If D is estimated too small (by other methods, of course), some
  points fall outside the triangle in w − l space. (But where?)
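
The quick and dirty bound as two lines of Python; both numbers are illustrative, the first standing in for the result of that one extra RL problem.

# Quick and dirty D (items 65-66): maximize cost (or episode length when
# all costs equal 1), then scale by the largest unsigned reinforcement.

max_episode_cost = 37.0   # assumed result of the one extra RL problem
max_abs_reward = 2.5      # largest |r| in the model or in the samples

D = max_episode_cost * max_abs_reward
print(D)   # an overestimate only widens the initial bounds (item 67)
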
Recurrent state + unichain considerations

   69. Feinberg and Yang: deciding whether the unichain condition
   holds can be done in polynomial time if a recurrent state exists.

   70. Existence of a recurrent state is common in practice.

   71. (Future work) It can maybe be induced using ε-MDPs.
   (Maybe).

   72. At least one case in which lacking the unichain property is no
   problem: games.
       Certainty of positive policies.
       Non-positive chains.

   73. Happens! (See experiments)
Complexity



   74. Discounted RL is PAC (–efficient).

   75. In the problem size parameters (|S|, |A|) and 1/(1 − γ).

   76. Episodic undiscounted RL is also PAC.
   (Following similar arguments, but slightly more intricate
   derivations)

   77. So we call a PAC (–efficient) method a number of times.
Complexity



   78. The worst possible case when choosing ρ(k) is not reducing the
   uncertainty at all.

   79. Reducing it by half is a better bound for our method.

   80. ... and it is a tight bound...

   81. ... in cases that are nearly optimal from the outset.

   82. So, at worst, log(1/ε) calls to PAC:
   PAC!
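
A one-line sanity check of the call count under the halving argument of items 79-82; D and ε are illustrative.

import math

# Halving the uncertainty on every call means roughly log2(D / eps) PAC
# calls shrink the initial uncertainty (order D) down to eps.
D, eps = 10.0, 1e-3
print(math.ceil(math.log2(D / eps)))   # 14 calls for these illustrative numbers
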
Complexity




   83. Whoops, we proved complexity! That’s a first for SMDP
   (or ARRL, for that matter).


   84. And we inherit convergence from invoked RL, so there’s
   also that.
Typically much faster



   85. The worst case happens when we are “already there”.

   86. Otherwise, depends, but certainly better.

   87. The multi-iteration reduction in uncertainty is much better than a
   factor of 0.5 per step, because the reductions compound geometrically.

   88. Empirical complexity better than the already very good upper
   bound.
Bibliography I




   S. Mahadevan. Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine
       Learning, 22(1):159–195, 1996.
   Reinaldo Uribe, Fernando Lozano, Katsunari Shibata, and Charles Anderson. Discount and speed/execution
       tradeoffs in Markov decision process games. In Computational Intelligence and Games (CIG), 2011 IEEE
       Conference on, pages 79–86. IEEE, 2011.

• 39. Nudged value in the w − l space
35. Nudged value, for some ρ.

$$\operatorname*{argmax}_{\pi\in\Pi}\; v^\pi(s_I) - \rho\, c^\pi(s_I)
 = \operatorname*{argmax}_{\pi\in\Pi} \left[ D\,\frac{w^\pi - l^\pi}{w^\pi + l^\pi} - \rho D\,\frac{1}{w^\pi + l^\pi} \right]
 = \operatorname*{argmax}_{\pi\in\Pi}\; D\,\frac{w^\pi - l^\pi - \rho}{w^\pi + l^\pi}$$
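A minimal Python illustration of the last identity (the policy cloud, D and ρ below are made-up placeholders): given the (w, l) coordinates of a set of policies, the gain of a policy is w − l and its nudged value is D(w − l − ρ)/(w + l), so the nudged argmax can be read directly off the cloud.

    import random

    def nudged_value(w, l, rho, D):
        # Nudged value of a policy at (w, l) in the w-l space (item 35):
        # h = D * (w - l - rho) / (w + l).
        return D * (w - l - rho) / (w + l)

    D = 10.0
    rho = 1.5  # some candidate gain inside the initial bounds -D <= rho <= D
    # Hypothetical policy cloud (placeholder points with w + l > 0).
    cloud = [(random.uniform(0.1, D), random.uniform(0.1, D)) for _ in range(500)]

    w_best, l_best = max(cloud, key=lambda p: nudged_value(p[0], p[1], rho, D))
    print("gain of the nudged optimizer:", w_best - l_best)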
• 40. Nudged value in the w − l space
36. Nudged value level sets.
(For a fixed ρ̂ and all policies π̂ with a given nudged value ĥ:)

$$l^{\hat\pi} = \frac{D - \hat h}{D + \hat h}\, w^{\hat\pi} - \hat\rho\,\frac{D}{D + \hat h}$$

Lines! The slope depends only on ĥ (i.e., not on ρ̂).
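The line form follows directly from the nudged-value expression of item 35; a short derivation using only the quantities defined above:

$$\hat h = D\,\frac{w - l - \hat\rho}{w + l}
\;\Longleftrightarrow\;
\hat h\,(w + l) = D\,(w - l - \hat\rho)
\;\Longleftrightarrow\;
l\,(D + \hat h) = (D - \hat h)\,w - D\hat\rho
\;\Longleftrightarrow\;
l = \frac{D - \hat h}{D + \hat h}\,w - \hat\rho\,\frac{D}{D + \hat h}.$$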
• 41. Nudged value in the w − l space
37. Pencil of lines.
For a fixed ρ̂, the level-set lines of any two nudged values ĥ and ȟ intersect at

$$\left( \frac{\hat\rho}{2},\; -\frac{\hat\rho}{2} \right).$$

Pencil of lines with that vertex.
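This can be verified by substituting w = ρ̂/2 into the level-set line of item 36; the result does not depend on ĥ, so every level-set line passes through the same point:

$$l\Big|_{w = \hat\rho/2}
= \frac{D - \hat h}{D + \hat h}\cdot\frac{\hat\rho}{2} - \hat\rho\,\frac{D}{D + \hat h}
= \hat\rho\,\frac{(D - \hat h) - 2D}{2\,(D + \hat h)}
= -\frac{\hat\rho}{2}.$$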
• 42. Nudged value in the w − l space
38. Zero nudged value. Setting ĥ = 0 in the level-set line:

$$l^{\hat\pi} = \frac{D - 0}{D + 0}\, w^{\hat\pi} - \hat\rho\,\frac{D}{D + 0}
\quad\Longrightarrow\quad
l^{\hat\pi} = w^{\hat\pi} - \hat\rho$$

Unity slope. Negative nudged values above the line, positive below.
If the whole policy cloud lies above w = l, some negative nudging is the optimizer. (Encouragement.)
[Figure: the zero-nudged-value line of unit slope in the w − l plane, passing through the pencil vertex (ρ̂/2, −ρ̂/2).]
• 43. Nudged value in the w − l space
[Figure: policy cloud in the w − l plane; axes w and l from 0 to D.]
  • 44. Nudged value in the w − l space 40. Initial bounds on ρ∗ . −D ≤ ρ∗ ≤ D (Duh! but nice geometry)
• 45. Enclosing triangle
41. Definition. A triangle ABC such that:
  ABC ⊂ w − l space.
  (w∗, l∗) ∈ ABC.
  Slope of the AB segment: unity.
  wA ≤ wB and wA ≤ wC.
42. Nomenclature (see figure): vertices A, B, C and slopes mα, mβ, mγ.
[Figure: an enclosing triangle ABC in the w − l plane, with slopes mα, mβ, mγ labelled.]
• 46. Enclosing Triangle
43. (New) bounds on ρ.
Def. The slope-mζ projection of a point X(wX, lX) onto the w = −l line:

$$X_\zeta = \frac{m_\zeta\, w_X - l_X}{m_\zeta + 1}$$

Bounds:

$$A_\alpha = B_\alpha \le \rho^* \le C_\alpha$$
$$w_A - l_A = w_B - l_B \le \rho^* \le w_C - l_C$$

44. So collinearity (of A, B and C) implies optimality. (Even if there are multiple optima.)
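A small Python helper transcribing the projection formula and the triangle bounds above (the triangle used in the example is a made-up placeholder whose AB segment has unit slope):

    def project(m_zeta, w_x, l_x):
        # Slope-m_zeta projection of the point (w_x, l_x) onto the w = -l line (item 43).
        return (m_zeta * w_x - l_x) / (m_zeta + 1.0)

    def rho_bounds(A, B, C):
        # Bounds on rho* from an enclosing triangle ABC (item 43):
        # w_A - l_A = w_B - l_B <= rho* <= w_C - l_C.
        (wA, lA), (wB, lB), (wC, lC) = A, B, C
        return wA - lA, wC - lC

    A, B, C = (1.0, 0.5), (3.0, 2.5), (6.0, 1.0)   # placeholder vertices
    print(rho_bounds(A, B, C))    # (0.5, 5.0); equal bounds <=> collinear vertices (item 44)
    print(project(-2.0, *C))      # e.g. a slope -2 projection of C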
• 47. Right and left uncertainty
45. Iterating inside an enclosing triangle (sketched in code below):
1. Set ρ̂ to some value within the bounds (wA − lA ≤ ρ̂ ≤ wC − lC).
2. Solve the problem with rewards (r − ρ̂ c).
46. Optimality. If h(sI) = 0: done! The optimal policy found for the current problem solves the SMDP, and the termination condition has been met.
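A minimal sketch of that iteration, under two loud assumptions: solve_nudged stands for any (assumed exact) episodic RL solver run with rewards r − ρ̂c, returning the optimal nudged value h(sI) together with the (w, l) coordinates of the policy it found, and the candidate ρ̂ here is simply the midpoint of the current bounds rather than the min–max choice of item 50.

    def nudging_iterations(solve_nudged, D, eps=1e-3):
        # [lo, hi] tracks the current bounds on rho*; initially -D <= rho* <= D (item 40).
        lo, hi = -D, D
        while hi - lo > eps:                 # stop once the uncertainty is small (item 62)
            rho = 0.5 * (lo + hi)            # simplified choice of rho-hat (cf. item 50)
            h, w, l = solve_nudged(rho)      # item 45, step 2: solve with rewards r - rho * c
            if abs(h) < eps:                 # items 46 and 63: zero nudged value => done
                return rho
            if h > 0:
                lo = max(lo, w - l)          # some policy has gain w - l > rho
            else:
                hi = rho                     # no policy has gain above rho
                lo = max(lo, w - l)          # the found policy still lower-bounds rho*
        return 0.5 * (lo + hi)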
• 48. Right and left uncertainty
47.a If h(sI) > 0: right uncertainty.
[Figure: enclosing triangle ABC with points S and T; the right uncertainty y1 = Sα − Tα is marked.]
• 49. Right and left uncertainty
47.b Right uncertainty. Derivation:

$$y_1 = S_\alpha - T_\alpha = \tfrac12\left((1 - m_\beta)\,w_S - (1 - m_\gamma)\,w_T - (m_\gamma - m_\beta)\,w_C\right)$$

Maximization:

$$y_1^* = \frac{2s\sqrt{ab\,(\rho/2 - C_\beta)(\rho/2 - C_\gamma)} + a\,(\rho/2 - C_\beta) + b\,(\rho/2 - C_\gamma)}{c}$$

with
s = sign(mβ − mγ),
a = (1 − mγ)(mβ + 1),
b = (1 − mβ)(mγ + 1),
c = b − a = 2(mγ − mβ).
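Transcribed into Python, assuming the square-root reading of the maximization above (the reading consistent with the conic form of item 57); the clamp on the radicand is only a numerical safeguard:

    import math

    def max_right_uncertainty(rho, m_beta, m_gamma, C_beta, C_gamma):
        # y1* of item 47.b as a function of the candidate rho and the triangle data.
        # Assumes m_beta != m_gamma, so that c != 0.
        s = math.copysign(1.0, m_beta - m_gamma)
        a = (1.0 - m_gamma) * (m_beta + 1.0)
        b = (1.0 - m_beta) * (m_gamma + 1.0)
        c = b - a                               # = 2 * (m_gamma - m_beta)
        radicand = a * b * (rho / 2.0 - C_beta) * (rho / 2.0 - C_gamma)
        root = math.sqrt(max(radicand, 0.0))    # guard against tiny negative round-off
        return (2.0 * s * root
                + a * (rho / 2.0 - C_beta)
                + b * (rho / 2.0 - C_gamma)) / c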
• 50. Right and left uncertainty
48.a If h(sI) < 0: left uncertainty.
[Figure: enclosing triangle ABC with point R; the left uncertainty y2 is marked.]
• 51. Right and left uncertainty
48.b Left uncertainty. It is maximal where expected (when the value level set crosses B).

$$y_2 = R_\alpha - Q_\alpha$$
$$y_2^* = (B_\alpha - B_\gamma)\,\frac{\rho/2 - B_\alpha}{\rho/2 - B_\gamma}$$
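The corresponding transcription for the left uncertainty (a direct reading of the y2* expression above; Bα and Bγ are the projections of vertex B from item 43):

    def max_left_uncertainty(rho, B_alpha, B_gamma):
        # y2* of item 48.b as a function of the candidate rho and the projections of B.
        return (B_alpha - B_gamma) * (rho / 2.0 - B_alpha) / (rho / 2.0 - B_gamma)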
• 52. Right and left uncertainty
49. Fundamental lemma. As ρ̂ grows, the maximal right uncertainty is monotonically decreasing and the maximal left uncertainty is monotonically increasing; both are non-negative, with minimum 0.
• 53. Optimal nudging
50. Find ρ̂ (between the bounds, obviously) such that the maximum resulting uncertainty, either left or right, is minimal. Since both are monotonic and have minimum 0, the min–max is attained when the maximal left and right uncertainties are equal. (A bisection sketch follows this slide.)
Remark: bear in mind this (↑) is the worst case; the method can terminate immediately.
ρ̂ is a gain estimate, but it is neither biased towards observations (initial or otherwise) nor slowly updated.
Optimal nudging is “optimal” in the sense that, with this update, the maximum uncertainty range of the resulting ρ values is minimal.
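A minimal sketch of this min–max choice, reusing the two uncertainty functions sketched after items 47.b and 48.b. Because the right uncertainty decreases and the left one increases in ρ̂ (item 49), their difference is monotone and a plain bisection finds the equalizing ρ̂; the closed-form conic intersection of items 57–61 would replace this loop in the actual method.

    def choose_rho(lo, hi, m_beta, m_gamma, C_beta, C_gamma, B_alpha, B_gamma, tol=1e-9):
        # Bisection on g(rho) = maximal right uncertainty - maximal left uncertainty,
        # which is monotonically decreasing on [lo, hi] by the fundamental lemma (item 49).
        def g(rho):
            return (max_right_uncertainty(rho, m_beta, m_gamma, C_beta, C_gamma)
                    - max_left_uncertainty(rho, B_alpha, B_gamma))
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if g(mid) > 0.0:
                lo = mid        # right uncertainty still dominates: move rho-hat up
            else:
                hi = mid        # left uncertainty dominates: move rho-hat down
        return 0.5 * (lo + hi)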
  • 54. Optimal nudging 51. Enclosing triangle into enclosing triangle. 52. Strictly smaller (both area and, importantly, resulting uncertainty)
• 55. Obtaining an initial enclosing triangle
53. Set ρ(0) = 0 and solve. This maximizes reward irrespective of cost (the usual RL problem). Geometrically, it can be interpreted as fanning out from the w axis to find the policy whose (w, l) coordinates subtend the smallest angle. The resulting optimizer maps to a point somewhere along a line with intercept at the origin.
54. The optimum of the SMDP problem lies above, but not behind, that line. (Else, contradiction.)
• 56. Obtaining an initial enclosing triangle
56. Either way, after iteration 0, the uncertainty is reduced by at least half.
• 57. Conic intersection
57. Maximum right uncertainty is a conic!

$$\begin{pmatrix} r & y_1^* & 1 \end{pmatrix}
\begin{pmatrix}
 c & -(b + a) & -C_\alpha c \\
 -(b + a) & c & C_\beta a + C_\gamma b \\
 -C_\alpha c & C_\beta a + C_\gamma b & C_\alpha^{2} c
\end{pmatrix}
\begin{pmatrix} r \\ y_1^* \\ 1 \end{pmatrix} = 0$$

58. Maximum left uncertainty is a conic!

$$\begin{pmatrix} r & y_2^* & 1 \end{pmatrix}
\begin{pmatrix}
 0 & 1 & B_\gamma - C_\gamma \\
 1 & 0 & -B_\gamma \\
 B_\gamma - C_\gamma & -B_\gamma & -2B_\alpha (B_\gamma - C_\gamma)
\end{pmatrix}
\begin{pmatrix} r \\ y_2^* \\ 1 \end{pmatrix} = 0$$
  • 58. Conic intersection 59. Intersecting them is easy. 60. And cheap. (Requires in principle constant time and simple matrix operations) 61. So plug it in!
• 59. Termination criteria
62. We want to reduce the uncertainty to ε. Because it is a good idea. (Right?) So there's your termination condition right there.
63. Alternatively, stop when |h^(k)(sI)| < ε.
64. In any case, if the same policy remains optimal and the sign of its nudged value changes between iterations, stop: it is the optimal solution of the SMDP problem.
• 60. Finding D
65. A quick and dirty method (sketched in code after this slide):
1. Maximize cost (or episode length, if all costs equal 1).
2. Multiply by the largest unsigned reinforcement.
66. So, at most one more RL problem.
67. If D is estimated too large: wider initial bounds and longer computation, but OK.
68. If D is estimated too small (by other methods, of course): points fall outside the triangle in w − l space. (But where?)
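A minimal sketch of the quick-and-dirty estimate of item 65; max_episode_cost and max_abs_reward are hypothetical inputs standing for the result of the cost-maximizing run and the largest unsigned reinforcement of the task:

    def estimate_D(max_episode_cost, max_abs_reward):
        # Item 65: maximize cost (or episode length when all costs equal 1),
        # then multiply by the largest unsigned reinforcement.
        return max_episode_cost * max_abs_reward

    # Overestimating D only widens the initial bounds (item 67);
    # underestimating it can place points outside the triangle (item 68).
    print(estimate_D(max_episode_cost=200.0, max_abs_reward=1.0))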
• 61. Recurring state + unichain considerations
69. Feinberg and Yang: deciding whether the unichain condition holds can be done in polynomial time if a recurring state exists.
70. The existence of a recurring state is common in practice.
71. (Future work) It can maybe be induced using ε–MDPs. (Maybe.)
72. At least one case in which not being unichain is no problem: games. Certainty of positive policies. Non-positive chains.
73. Happens! (See experiments.)
• 62. Complexity
74. Discounted RL is PAC (–efficient).
75. In the problem size parameters (|S|, |A|) and 1/(1 − γ).
76. Episodic undiscounted RL is also PAC. (Following similar arguments, but with slightly more intricate derivations.)
77. So we call a PAC (–efficient) method a number of times.
• 63. Complexity
78. The absolute worst case when choosing ρ(k) is not reducing the uncertainty at all.
79. Reducing it by half is a better bound for our method.
80. ... and it is a tight bound...
81. ... in cases that are nearly optimal from the outset.
82. So, at worst, O(log(1/ε)) calls to a PAC method: PAC!
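One way to see the log(1/ε) count, assuming the halving bound of item 79 and the initial bounds −D ≤ ρ∗ ≤ D of item 40 (an uncertainty range of 2D): after k iterations the remaining uncertainty is at most 2D/2^k, so

$$\frac{2D}{2^{k}} \le \varepsilon
\quad\Longleftrightarrow\quad
k \ge \log_2 \frac{2D}{\varepsilon},$$

i.e. O(log(1/ε)) calls to the underlying PAC method for a fixed D.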
• 64. Complexity
83. Whoops, we proved complexity! That's a first for SMDP (or ARRL, for that matter).
84. And we inherit convergence from the invoked RL method, so there's also that.
• 65. Typically much faster
85. The worst case happens when we are "already there".
86. Otherwise, it depends, but it is certainly better.
87. The multi-iteration reduction in uncertainty is far better than 0.5 per iteration, because the factors accumulate geometrically.
88. Empirical complexity is better than the already very good upper bound.
• 66. Bibliography I
S. Mahadevan. Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine Learning, 22(1):159–195, 1996.
R. Uribe, F. Lozano, K. Shibata, and C. Anderson. Discount and speed/execution tradeoffs in Markov decision process games. In Computational Intelligence and Games (CIG), 2011 IEEE Conference on, pages 79–86. IEEE, 2011.