Solution Methods for DSGE Models 
and Applications using Linearization

        Lawrence J. Christiano
Overall Outline
• Perturbation and Projection Methods for DSGE 
  Models: an Overview

• Simple New Keynesian model
   – Formulation and log‐linear solution.
   – Ramsey‐optimal policy.
   – Using Dynare to solve the model by log‐linearization:
      • Taylor principle, implications of working capital, News shocks, 
        monetary policy with the long rate.

• Financial Frictions as in BGG
   – Risk shocks and the CKM critique of intertemporal shocks.
   – Dynare exercise.

• Ramsey Optimal Policy, Time Consistency, Timeless 
  Perspective.
Perturbation and Projection 
Methods for Solving DSGE Models
       Lawrence J. Christiano
Outline
• A Simple Example to Illustrate the basic ideas.
  – Functional form characterization of model 
    solution.
  – Use of Projections and Perturbations.

• Neoclassical model.
  – Projection methods
  – Perturbation methods
     • Make sense of the proposition, ‘to a first order 
       approximation, can replace equilibrium conditions with 
       linear expansion about nonstochastic steady state and 
       solve the resulting system using certainty equivalence’ 
Simple Example
• Suppose that x is some exogenous variable 
  and that the following equation implicitly 
  defines y:
               hx, y  0, for all x ∈ X
• Let the solution be defined by the ‘policy rule’, 
  g:
                     y  gx
                                      ‘Error function’
• satisfying
               Rx; g ≡ hx, gx  0
• for all  x ∈ X
The Need to Approximate
• Finding the policy rule, g, is a big problem 
  outside special cases

  – ‘Infinite number of unknowns (i.e., one value of g
    for each possible x) in an infinite number of 
    equations (i.e., one equation for each possible x).’


• Two approaches: 

  – projection and perturbation 
Projection
                                 ĝx;            
• Find a parametric function,             , where     is a 
  vector of parameters chosen so that it imitates 
                                               Rx; g  0
  the property of the exact solution, i.e.,                     
  for all x ∈ X , as well as possible. 
                       
• Choose values for     so that    
                 ̂
                 Rx;   hx, ĝx; 
                        x∈X
• is close to zero for             .

• The method is defined by how ‘close to zero’ is 
                                           ĝx; 
  defined and by the parametric function,              , 
  that is used.
Projection, continued
• Spectral and finite element approximations
                                   ĝx; 
  – Spectral functions: functions,            , in which 
                                      ĝx;             x∈X
    each parameter in     influences              for all            
    example:     
             n                  1
ĝx;     ∑  i Hi x,      
            i0
                                n
 H i x  x i ~ordinary polynominal (not computationaly efficient)
 H i x  T i x,
 T i z : −1, 1 → −1, 1, i th order Chebyshev polynomial
      : X → −1, 1
Projection, continued
                                               ĝx; 
– Finite element approximations: functions,             , 
                                           ĝx; 
  in which each parameter in     influences              
  over only a subinterval of  x ∈ X
ĝx;                    1 2 3 4 5 6 7
                               4

             2




                           X
Projection, continued 
• ‘Close to zero’: collocation and Galerkin
• Collocation: for n values of $x$: $x_1, x_2, \ldots, x_n \in X$, 
  choose the n elements of $\gamma = (\gamma_1, \ldots, \gamma_n)$ so that 
        $\hat{R}(x_i; \gamma) \equiv h(x_i, \hat{g}(x_i; \gamma)) = 0, \quad i = 1, \ldots, n$
   – how you choose the grid of x's matters…
• Galerkin: for m > n values of $x$: $x_1, x_2, \ldots, x_m \in X$, 
  choose the n elements of $\gamma = (\gamma_1, \ldots, \gamma_n)$ so that 
        $\sum_{j=1}^{m} w_{ij}\, h(x_j, \hat{g}(x_j; \gamma)) = 0, \quad i = 1, \ldots, n$
Perturbation
• Projection uses the 'global' behavior of the functional 
  equation to approximate the solution.
   – Problem: requires finding zeros of non‐linear equations. 
     Iterative methods for doing this are a pain.
   – Advantage: can easily adapt to situations where the policy rule is 
     not continuous or not differentiable (e.g., an 
     occasionally binding zero lower bound on the interest rate).

• The perturbation method uses local properties of the 
  functional equation and the Implicit Function/Taylor's 
  theorem to approximate the solution.
   – Advantage: can implement it using non‐iterative methods. 
   – Possible disadvantages: 
      • may require enormously high‐order derivatives to achieve a decent 
        global approximation.
      • does not work when there are important non‐differentiabilities 
        (e.g., an occasionally binding zero lower bound on the interest rate).
Perturbation, cnt’d
• Suppose there is a point, $x^* \in X$, where we 
  know the value taken on by the function, g, 
  that we wish to approximate:
                $g(x^*) = g^*$, for some $x^*$
• Use the implicit function theorem to 
  approximate g in a neighborhood of $x^*$.
• Note:
           $R(x; g) = 0$ for all $x \in X$
           $\Rightarrow$
           $R^{(j)}(x; g) \equiv \frac{d^j}{dx^j} R(x; g) = 0$ for all $j$, all $x \in X$.
Perturbation, cnt’d
• Differentiate R with respect to $x$ and evaluate 
  the result at $x^*$:
   $R^{(1)}(x^*) = \frac{d}{dx} h(x, g(x))\big|_{x = x^*} = h_1(x^*, g^*) + h_2(x^*, g^*)\, g'(x^*) = 0$

   $\Rightarrow\quad g'(x^*) = -\dfrac{h_1(x^*, g^*)}{h_2(x^*, g^*)}$
• Do it again!
   $R^{(2)}(x^*) = \frac{d^2}{dx^2} h(x, g(x))\big|_{x = x^*} = h_{11}(x^*, g^*) + 2 h_{12}(x^*, g^*)\, g'(x^*)$
   $\qquad\quad +\, h_{22}(x^*, g^*)\,[g'(x^*)]^2 + h_2(x^*, g^*)\, g''(x^*) = 0$

   $\Rightarrow$ Solve this linearly for $g''(x^*)$.
Perturbation, cnt’d
• Preceding calculations deliver (assuming 
  enough differentiability, appropriate 
  invertibility, and a high tolerance for painful 
  notation!), recursively:
                 $g'(x^*),\ g''(x^*),\ \ldots,\ g^{(n)}(x^*)$
• Then, have the following Taylor's series 
  approximation:
   $g(x) \approx \hat{g}(x)$
   $\hat{g}(x) = g^* + g'(x^*)(x - x^*) + \tfrac{1}{2} g''(x^*)(x - x^*)^2 + \ldots + \tfrac{1}{n!} g^{(n)}(x^*)(x - x^*)^n$
Perturbation, cnt’d
• Check….
• Study the graph of
                        $R(x; \hat{g})$
  – over $x \in X$ to verify that it is everywhere close 
    to zero (or, at least, in the region of interest). 
Example of Implicit Function Theorem
                $h(x, y) = \tfrac{1}{2}(x^2 + y^2) - 8 = 0$

  [Figure: the circle of radius 4 defined by $h(x, y) = 0$ in the $(x, y)$ plane, with the linear 
   approximation $g(x) \simeq g^* - \frac{x^*}{g^*}(x - x^*)$ drawn tangent to the upper branch at $(x^*, g^*)$]

                $g'(x^*) = -\dfrac{h_1(x^*, g^*)}{h_2(x^*, g^*)} = -\dfrac{x^*}{g^*}$

   $h_2$ had better not be zero!
Neoclassical Growth Model
• Objective:
        $E_0 \sum_{t=0}^{\infty} \beta^t u(c_t), \qquad u(c_t) = \dfrac{c_t^{1-\gamma} - 1}{1 - \gamma}$

• Constraints:
        $c_t + \exp(k_{t+1}) \le f(k_t, a_t), \quad t = 0, 1, 2, \ldots$
        $a_t = \rho a_{t-1} + \varepsilon_t$
        $f(k_t, a_t) = \exp(k_t)^{\alpha} \exp(a_t) + (1 - \delta)\exp(k_t)$
Efficiency Condition
                     ct1

E t u ′ fk t , a t  − expk t1 

                                ct1                period t1 marginal product of capital

    − u ′ fk t1 , a t   t1  − expk t2          f K k t1 , a t   t1       0.

 • Here,                    k t , a t ~given numbers
                             t1 ~ iid, mean zero variance V 
                            time t choice variable, k t1

 • Convenient to suppose the model is the limit 
                             →1                     
   of a sequence of models,             , indexed by   
                                 t1 ~ 2 V  ,   1.
Solution
   • A policy rule,
        $k_{t+1} = g(k_t, a_t, \sigma).$

   • With the property:
        $R(k_t, a_t, \sigma; g) \equiv E_t\big\{ u'(c_t) - \beta\, u'(c_{t+1})\, f_K(k_{t+1}, a_{t+1}) \big\} = 0,$
     where
        $k_{t+1} = g(k_t, a_t, \sigma), \qquad a_{t+1} = \rho a_t + \sigma\varepsilon_{t+1},$
        $c_t = f(k_t, a_t) - \exp(g(k_t, a_t, \sigma)), \qquad c_{t+1} = f(k_{t+1}, a_{t+1}) - \exp(g(k_{t+1}, a_{t+1}, \sigma)),$

   • for all $a_t, k_t$ and $0 \le \sigma \le 1$.
Projection Methods
• Let $\hat{g}(k_t, a_t, \sigma; \gamma)$
    – be a function with a finite number of parameters (could be either 
      spectral or finite element, as before).

• Choose the parameters, $\gamma$, to make
        $R(k_t, a_t, \sigma; \hat{g})$
    – as close to zero as possible, over a range of values of 
      the state.
    – use Galerkin or Collocation. 
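Below is a rough sketch of how such a projection could be coded for the growth model. Everything in it is an assumption for illustration: the parameter values (including β = 0.99, which the slides do not report), the degree-2 polynomial basis in (k − k*, a), the 3×3 collocation grid, and the use of Gauss-Hermite quadrature for the conditional expectation. It is not the authors' implementation and may need a better initial guess or grid to converge.

```python
# Projection (collocation) sketch for the growth model; all parameter values and
# numerical choices are illustrative assumptions (beta = 0.99 is not from the slides).
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import fsolve

beta, gam, alpha, delta, rho, sig_e = 0.99, 2.0, 0.36, 0.02, 0.95, 0.01

def f(k, a):       # resources: output plus undepreciated capital (K = exp(k))
    return np.exp(alpha * k + a) + (1.0 - delta) * np.exp(k)

def f_K(k, a):     # marginal product of capital
    return alpha * np.exp((alpha - 1.0) * k + a) + 1.0 - delta

def u_prime(c):
    return c ** (-gam)

k_ss = np.log(alpha * beta / (1.0 - beta * (1.0 - delta))) / (1.0 - alpha)

def basis(k, a):   # tensor polynomial basis of degree 2 in (k - k_ss) and a
    dk, a = np.broadcast_arrays(np.asarray(k - k_ss, float), np.asarray(a, float))
    return np.stack([dk**i * a**j for i in range(3) for j in range(3)], axis=-1)

def g_hat(k, a, coef):
    return k_ss + basis(k, a) @ coef

nodes, weights = hermgauss(7)          # Gauss-Hermite rule for E over eps' ~ N(0, sig_e^2)

def euler_residual(k, a, coef):
    kp = g_hat(k, a, coef)
    c = np.maximum(f(k, a) - np.exp(kp), 1e-8)             # crude guard off the solution
    ap = rho * a + np.sqrt(2.0) * sig_e * nodes
    cp = np.maximum(f(kp, ap) - np.exp(g_hat(kp, ap, coef)), 1e-8)
    rhs = beta * np.sum(weights * u_prime(cp) * f_K(kp, ap)) / np.sqrt(np.pi)
    return u_prime(c) - rhs

# Collocation grid: 3 x 3 points around the steady state
k_grid = k_ss + np.array([-0.3, 0.0, 0.3])
a_grid = np.array([-0.05, 0.0, 0.05])

def all_residuals(coef):
    return np.array([euler_residual(k, a, coef) for k in k_grid for a in a_grid])

coef0 = np.zeros(9)
coef0[3], coef0[1] = 0.9, 0.1          # guess: k' - k_ss ≈ 0.9 (k - k_ss) + 0.1 a
coef_star = fsolve(all_residuals, coef0)
print("slope on k:", coef_star[3], "  slope on a:", coef_star[1])
```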
Occasionally Binding Constraints
• Suppose we add the non‐negativity constraint on 
  investment:
          expgk t , a t ,  − 1 −  expk t  ≥ 0
• Express problem in Lagrangian form and optimum is 
  characterized in terms of equality conditions with a 
  multiplier and with a complementary slackness condition 
  associated with the constraint.

• Conceptually straightforward to apply preceding method. 
  For details, see Christiano‐Fisher, ‘Algorithms for Solving 
  Dynamic Models with Occasionally Binding Constraints’, 
  2000, Journal of Economic Dynamics and Control.
   – This paper describes alternative strategies, based on 
     parameterizing the expectation function, that may be easier 
     when constraints are occasionally binding.
Perturbation Approach
•   Straightforward application of the perturbation approach, as in the simple 
    example, requires knowing the value taken on by the policy rule at a point.

•   The overwhelming majority of models used in macro do have this 
    property. 

     – In these models, can compute the non‐stochastic steady state without any 
       knowledge of the policy rule, g.
     – The non‐stochastic steady state is $k^*$ such that
        $k^* = g(k^*, 0, 0)$
       (the second argument is $a = 0$; the third is $\sigma = 0$, the no‐uncertainty case)
     – and
        $k^* = \dfrac{1}{1-\alpha}\,\log\!\left(\dfrac{\alpha\beta}{1 - \beta(1-\delta)}\right).$
Perturbation
    • Error function:
        $R(k_t, a_t, \sigma; g) \equiv E_t\big\{ u'(c_t) - \beta\, u'(c_{t+1})\, f_K(k_{t+1}, a_{t+1}) \big\} = 0,$
  with $c_t$, $c_{t+1}$, $k_{t+1}$ and $a_{t+1}$ defined as on the previous slide,

          – for all values of $k_t, a_t, \sigma$.
    • So, all order derivatives of R with respect to its 
      arguments are zero (assuming they exist!).
Four (Easy to Show) Results About 
                        Perturbations
     • Taylor series expansion of the policy rule (linear component first, 
       then second and higher order terms):

        $g(k_t, a_t, \sigma) \simeq k + g_k(k_t - k) + g_a a_t + g_\sigma \sigma$
        $\qquad +\, \tfrac{1}{2}\big[ g_{kk}(k_t - k)^2 + g_{aa} a_t^2 + g_{\sigma\sigma}\sigma^2 \big] + g_{ka}(k_t - k)a_t + g_{k\sigma}(k_t - k)\sigma + g_{a\sigma} a_t \sigma + \ldots$

           – $g_\sigma = 0$: to a first order approximation, 'certainty equivalence' 
           – All terms found by solving linear equations, except the coefficient on the past 
             endogenous variable, $g_k$, which requires solving for eigenvalues
           – To a second order approximation the slope terms are certainty equivalent: 
             $g_{k\sigma} = g_{a\sigma} = 0$
           – Quadratic and higher order terms are computed recursively.
First Order Perturbation
   • Working out the following derivatives and 
     evaluating them at $k_t = k^*,\; a_t = \sigma = 0$:
        $R_k(k_t, a_t, \sigma; g) = R_a(k_t, a_t, \sigma; g) = R_\sigma(k_t, a_t, \sigma; g) = 0$
   • Implies:

$R_k = u''\big(f_k - e^g g_k\big) - \beta u' f_{Kk}\, g_k - \beta u''\big(f_k g_k - e^g g_k^2\big) f_K = 0$

$R_a = u''\big(f_a - e^g g_a\big) - \beta u'\big(f_{Kk}\, g_a + \rho f_{Ka}\big) - \beta u''\big(f_k g_a + \rho f_a - e^g(g_k g_a + \rho g_a)\big) f_K = 0$

$R_\sigma = \big[-u'' e^g + \beta u''\big(f_k - e^g g_k\big) f_K\big] g_\sigma = 0$

   (The 'problematic term' is the $g_k^2$ term in $R_k$, which makes that condition quadratic; 
   the $R_\sigma$ condition, linear and homogeneous in $g_\sigma$, is the source of certainty 
   equivalence in the linear approximation.)
Technical notes for following slide
               u ′′ f k − e g g k  − u ′ f Kk g k − u ′′ f k g k − e g g 2 f K
                                                                              k        0
                       1 f − e g g  − u ′ f Kk g − f g − e g g 2 f K               0
                        k            k
                                                 u ′′
                                                        k       k k           k

                       1 f − 1 e g  u ′ f Kk  f f K g  e g g 2 f K                  0
                        k                       u ′′
                                                             k       k          k


                  1 fk −               1  u ′ f Kk  f k g  g 2                      0
                                                                           k
                   eg fK             f K       u ′′ e g f K      eg              k


                                1 − 1  1  u ′ f Kk                     gk  g2  0
                                                                               k
                                          u ′′ e g f K

• Simplify this further using:
       f K  K−1 expa  1 − , K ≡ expk
            exp − 1k  a  1 − 
       f k   expk  a  1 −  expk  f K e g
      f Kk   − 1 exp − 1k  a
      f KK   − 1K−2 expa   − 1 exp − 2k  a  f Kk e −g


• to obtain polynomial on next slide. 
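The finite-difference check referred to above (a sketch; α = 0.36, δ = 0.02 and the evaluation point (k, a) are arbitrary illustrative choices):

```python
# Finite-difference check of the f-derivative formulas (illustrative values).
import numpy as np

alpha, delta = 0.36, 0.02
k, a, step = 3.0, 0.1, 1e-5

def f(k, a):
    return np.exp(alpha * k + a) + (1.0 - delta) * np.exp(k)

def f_K(k, a):
    return alpha * np.exp((alpha - 1.0) * k + a) + 1.0 - delta

# f_k (derivative w.r.t. log capital k) against the formula f_K * exp(k)
fd_fk = (f(k + step, a) - f(k - step, a)) / (2.0 * step)
print(fd_fk, f_K(k, a) * np.exp(k))          # f_K * exp(k) equals f_K * e^g at the steady state

# f_Kk against the formula alpha*(alpha - 1)*exp((alpha - 1)k + a)
fd_fKk = (f_K(k + step, a) - f_K(k - step, a)) / (2.0 * step)
print(fd_fKk, alpha * (alpha - 1.0) * np.exp((alpha - 1.0) * k + a))
```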
First Order, cont’d
• Rewriting the $R_k = 0$ term:
        $\dfrac{1}{\beta} - \Big[1 + \dfrac{1}{\beta} + \dfrac{u' f_{KK}}{u'' f_K}\Big] g_k + g_k^2 = 0$

• There are two solutions, $0 < g_k < 1$ and $g_k > 1$.
   – Theory (see Stokey‐Lucas) tells us to pick the smaller 
     one. 
   – In general, could be more than one eigenvalue less 
     than unity: multiple solutions.
• Conditional on the solution for $g_k$, $g_a$ is solved for 
  linearly using the $R_a = 0$ equation.
• These results all generalize to the multidimensional 
  case. (A numerical sketch of the root‐selection step follows below.)
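The root-selection sketch referred to above: build the quadratic's coefficient from steady-state objects, find both roots, and keep the stable one. The parameter values are assumptions (β = 0.99 is not reported on the slides; γ = 2 is one of the two curvature values in the numerical example that follows), and with them the stable root should come out close to the 0.98 reported there.

```python
# Sketch: solve the quadratic for g_k and keep the stable root (assumed values:
# beta = 0.99, gam = 2 (utility curvature), alpha = 0.36, delta = 0.02).
import numpy as np

beta, gam, alpha, delta = 0.99, 2.0, 0.36, 0.02

# Steady-state objects
k = np.log(alpha * beta / (1.0 - beta * (1.0 - delta))) / (1.0 - alpha)
K = np.exp(k)
fK = alpha * K ** (alpha - 1.0) + 1.0 - delta             # equals 1/beta
fKK = alpha * (alpha - 1.0) * K ** (alpha - 2.0)
c = K ** alpha + (1.0 - delta) * K - K                    # steady-state consumption
up, upp = c ** (-gam), -gam * c ** (-gam - 1.0)           # u'(c), u''(c)

# 1/beta - [1 + 1/beta + u' f_KK / (u'' f_K)] g_k + g_k^2 = 0
b1 = -(1.0 + 1.0 / beta + up * fKK / (upp * fK))
roots = np.roots([1.0, b1, 1.0 / beta])
g_k = roots[np.abs(roots) < 1.0][0]                       # pick the stable (smaller) root
print(roots, "->  g_k =", g_k)
```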
Numerical Example
• Parameters taken from Prescott (1986), with two settings of the utility curvature parameter:
        $\gamma = 2,\ 20; \quad \alpha = 0.36, \quad \delta = 0.02, \quad \rho = 0.95, \quad V_\varepsilon = 0.01^2$

• Second order approximation, with $a_t = \rho a_{t-1} + \sigma\varepsilon_t$ (in each pair of numbers, 
  the first is for $\gamma = 2$ and the second for $\gamma = 20$):
        $\hat{g}(k_t, a_{t-1}, \varepsilon_t, \sigma) = \overbrace{k^*}^{3.88} + \overbrace{g_k}^{0.98,\ 0.996}(k_t - k^*) + \overbrace{g_a}^{0.06,\ 0.07}\, a_t + \overbrace{g_\sigma}^{0}\, \sigma$
        $\qquad +\, \tfrac{1}{2}\Big[\overbrace{g_{kk}}^{0.014,\ 0.00017}(k_t - k)^2 + \overbrace{g_{aa}}^{0.067,\ 0.079}\, a_t^2 + \overbrace{g_{\sigma\sigma}}^{0.000024,\ 0.00068}\, \sigma^2\Big]$
        $\qquad +\, \overbrace{g_{ka}}^{-0.035,\ -0.028}(k_t - k)a_t + \overbrace{g_{k\sigma}}^{0}(k_t - k)\sigma + \overbrace{g_{a\sigma}}^{0}\, a_t\sigma$
  (The sketch below evaluates this rule at the $\gamma = 2$ coefficients.)
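Evaluating the second-order rule is then just arithmetic. The sketch below plugs in the γ = 2 column of coefficients; reading the paired numbers as (γ = 2, γ = 20) values is my interpretation of the slide, and k* = 3.88, ρ = 0.95 are taken from it.

```python
# Sketch: evaluate the reported second-order policy rule (gamma = 2 coefficients).
rho, k_ss = 0.95, 3.88
g_k, g_a, g_s = 0.98, 0.06, 0.0
g_kk, g_aa, g_ss = 0.014, 0.067, 0.000024
g_ka, g_ks, g_as = -0.035, 0.0, 0.0

def k_next(k, a_lag, eps, sigma=1.0):
    a = rho * a_lag + sigma * eps            # current technology
    dk = k - k_ss
    return (k_ss + g_k * dk + g_a * a + g_s * sigma
            + 0.5 * (g_kk * dk**2 + g_aa * a**2 + g_ss * sigma**2)
            + g_ka * dk * a + g_ks * dk * sigma + g_as * a * sigma)

# Response of k_{t+1} to a one-standard-deviation (1%) technology shock from steady state
print(k_next(k_ss, 0.0, 0.01))
```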
Conclusion
• For modest US‐sized fluctuations and for 
  aggregate quantities, it is reasonable to work 
  with first order perturbations.

• First order perturbation: linearize (or, log‐
  linearize) equilibrium conditions around non‐
  stochastic steady state and solve the resulting 
  system. 
   – This approach assumes ‘certainty equivalence’. Ok, as 
     a first order approximation.
               Solution by Linearization
 • (log) Linearized equilibrium conditions:
        $E_t\big[\alpha_0 z_{t+1} + \alpha_1 z_t + \alpha_2 z_{t-1} + \beta_0 s_{t+1} + \beta_1 s_t\big] = 0$

 • Posit a linear solution:
        $z_t = A z_{t-1} + B s_t, \qquad s_t - P s_{t-1} - \varepsilon_t = 0,$
   where $z_t$ is the list of endogenous variables determined at t and $s_t$ collects the exogenous shocks.

 • To satisfy the equilibrium conditions, A and B must satisfy:
        $\alpha_0 A^2 + \alpha_1 A + \alpha_2 I = 0, \qquad F \equiv (\beta_0 + \alpha_0 B)P + \beta_1 + (\alpha_0 A + \alpha_1)B = 0$

 • If there is exactly one A with eigenvalues less 
   than unity in absolute value, that's the solution. 
   Otherwise, multiple solutions.

 • Conditional on A, solve the linear system for B (a sketch follows below). 
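A compact sketch of this last step (the 2-variable matrices below are illustrative placeholders, not a real model): iterate on the matrix quadratic to obtain a stable A, then recover B from the linear restriction by vectorizing it.

```python
# Sketch of the linearization step: A solves alpha0 A^2 + alpha1 A + alpha2 = 0,
# found here by iterating A <- -(alpha0 A + alpha1)^{-1} alpha2; B then solves a
# linear system. The matrices are small illustrative placeholders.
import numpy as np

alpha0 = np.array([[0.9, 0.0], [0.1, 0.5]])
alpha1 = np.array([[-1.9, 0.0], [0.05, -1.2]])
alpha2 = np.array([[0.9, 0.0], [0.0, 0.3]])
beta0  = np.array([[0.1], [0.0]])
beta1  = np.array([[0.2], [0.1]])
P      = np.array([[0.95]])

A = np.zeros_like(alpha0)
for _ in range(500):
    A_new = -np.linalg.solve(alpha0 @ A + alpha1, alpha2)
    done = np.max(np.abs(A_new - A)) < 1e-12
    A = A_new
    if done:
        break
print("eigenvalues of A:", np.linalg.eigvals(A))    # should all be inside the unit circle

# B solves (beta0 + alpha0 B) P + beta1 + (alpha0 A + alpha1) B = 0. Vectorizing:
# [kron(P', alpha0) + kron(I, alpha0 A + alpha1)] vec(B) = -vec(beta0 P + beta1)
n, m = alpha0.shape[0], P.shape[0]
lhs = np.kron(P.T, alpha0) + np.kron(np.eye(m), alpha0 @ A + alpha1)
rhs = -(beta0 @ P + beta1).flatten(order="F")
B = np.linalg.solve(lhs, rhs).reshape((n, m), order="F")
print("A =", A, "B =", B, sep="\n")
```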
