Advanced Econometrics #1 : Nonlinear Transformations*
A. Charpentier (Université de Rennes 1)
Université de Rennes 1
Graduate Course, 2017.
Econometrics and ‘Regression’ ?
Galton (1870, Hereditary Genius; 1886, Regression towards mediocrity in hereditary stature) and Pearson & Lee (1896, On Telegony in Man; 1903, On the Laws of Inheritance in Man) studied the genetic transmission of characteristics, e.g. height.
On average, the child of tall parents is taller than other children, but less tall than his parents.
“I have called this peculiarity by the name of regression”, Francis Galton, 1886.
> library(HistData)
> attach(Galton)
> Galton$count <- 1
> df <- aggregate(Galton, by=list(parent, child), FUN=sum)[, c(1,2,5)]
> plot(df[,1:2], cex=sqrt(df[,3]/3))
> abline(a=0, b=1, lty=2)
> abline(lm(child ~ parent, data=Galton))
> coefficients(lm(child ~ parent, data=Galton))[2]
   parent
0.6462906
[Figure: height of the child against height of the mid-parent (Galton data), with the first diagonal and the fitted regression line]
It is more an autoregression issue here: if $Y_t = \varphi Y_{t-1} + \varepsilon_t$, then $\text{cor}[Y_t, Y_{t+h}] = \varphi^h \to 0$ as $h \to \infty$.
Regression is a correlation problem. Overall, children are not smaller than their parents.
[Figure: two scatterplots of child height against parent height, both on the 60–75 (inches) range]
Overview
◦ Linear Regression Model: $y_i = \beta_0 + \boldsymbol{x}_i^{\mathsf{T}}\boldsymbol{\beta} + \varepsilon_i = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \varepsilon_i$
• Nonlinear Transformations: smoothing techniques,
$h(y_i) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \varepsilon_i$ or $y_i = \beta_0 + \beta_1 x_{1,i} + h(x_{2,i}) + \varepsilon_i$
• Asymptotics vs. Finite Distance: bootstrap techniques
• Penalization: Parsimony, Complexity and Overfit
• From least squares to other regressions: quantiles, expectiles, distributional, ...
References
Motivation
Kopczuk, W. Tax bases, tax rates and the elasticity of reported income. JPE.
References
Eubank, R.L. (1999). Nonparametric Regression and Spline Smoothing. CRC Press.
Fan, J. & Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. CRC Press.
Hastie, T.J. & Tibshirani, R.J. (1990). Generalized Additive Models. CRC Press.
Wand, M.P. & Jones, M.C. (1994). Kernel Smoothing. CRC Press.
Deterministic or Parametric Transformations
Consider child mortality rate (y) as a function of GDP per capita (x).
[Figure: infant mortality rate against GDP per capita, one labelled point per country]
Logarithmic transformation: log(y) as a function of log(x).
[Figure: mortality rate against GDP per capita, both on log scales, one labelled point per country]
Reverse transformation
[Figure: mortality rate against GDP per capita, back on the original scale, one labelled point per country]
Box-Cox transformation
See Box & Cox (1964) An Analysis of Transformations,
$$h(y,\lambda)=\begin{cases}\dfrac{y^{\lambda}-1}{\lambda} & \text{if }\lambda\neq 0\\[2mm] \log(y) & \text{if }\lambda=0\end{cases}$$
or
$$h(y,\lambda,\mu)=\begin{cases}\dfrac{[y+\mu]^{\lambda}-1}{\lambda} & \text{if }\lambda\neq 0\\[2mm] \log(y+\mu) & \text{if }\lambda=0\end{cases}$$
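A minimal sketch of this transformation in R (not from the slides; boxcox_transform is a hypothetical helper name):

boxcox_transform <- function(y, lambda, mu = 0) {
  # h(y, lambda, mu): power transform, with log(y + mu) as the lambda = 0 limit
  y <- y + mu
  if (abs(lambda) < 1e-10) log(y) else (y^lambda - 1) / lambda
}
boxcox_transform(cars$dist, lambda = 0.5)[1:5]  # e.g. on braking distances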
[Figure: Box-Cox transformations $h(y,\lambda)$ for $\lambda\in\{-1,-0.5,0,0.5,1,1.5,2\}$]
Profile Likelihood
In a statistical context, suppose that the unknown parameter can be partitioned as $\theta=(\lambda,\beta)$, where $\lambda$ is the parameter of interest and $\beta$ is a nuisance parameter.
Consider $\{y_1,\cdots,y_n\}$, a sample from distribution $F_\theta$, so that the log-likelihood is
$$\log\mathcal{L}(\theta)=\sum_{i=1}^{n}\log f_\theta(y_i)$$
and $\widehat{\theta}^{\text{MLE}}$ is defined as $\widehat{\theta}^{\text{MLE}}=\underset{\theta}{\text{argmax}}\{\log\mathcal{L}(\theta)\}$.
Rewrite the log-likelihood as $\log\mathcal{L}(\theta)=\log\mathcal{L}_\lambda(\beta)$. Define
$$\widehat{\beta}^{\text{pMLE}}_{\lambda}=\underset{\beta}{\text{argmax}}\{\log\mathcal{L}_\lambda(\beta)\}$$
and then $\widehat{\lambda}^{\text{pMLE}}=\underset{\lambda}{\text{argmax}}\big\{\log\mathcal{L}_\lambda(\widehat{\beta}^{\text{pMLE}}_{\lambda})\big\}$. Observe that
$$\sqrt{n}\,(\widehat{\lambda}^{\text{pMLE}}-\lambda)\xrightarrow{\mathcal{L}}\mathcal{N}\big(0,[I_{\lambda,\lambda}-I_{\lambda,\beta}I_{\beta,\beta}^{-1}I_{\beta,\lambda}]^{-1}\big)$$
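As an illustration (a sketch, not from the slides), the Box-Cox parameter of the next slide is exactly a profile-likelihood problem: for each fixed λ, the nuisance parameters (β, σ²) are profiled out by an OLS fit on the transformed response; profile_loglik is a hypothetical helper name.

profile_loglik <- function(lambda, y, X) {
  # transform the response, then profile out (beta, sigma^2) by OLS
  z <- if (abs(lambda) < 1e-10) log(y) else (y^lambda - 1) / lambda
  fit <- lm.fit(X, z)
  n <- length(y)
  sigma2 <- mean(fit$residuals^2)
  # log-likelihood of the original y (up to constants), with the Jacobian term
  -n/2 * log(sigma2) + (lambda - 1) * sum(log(y))
}
X <- cbind(1, cars$speed)
lambdas <- seq(-0.5, 2, by = 0.05)
pll <- sapply(lambdas, profile_loglik, y = cars$dist, X = X)
lambdas[which.max(pll)]  # profile MLE of lambda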
Profile Likelihood and Likelihood Ratio Test
The (profile) likelihood ratio test is based on
$$2\Big[\max_{\lambda,\beta}\log\mathcal{L}(\lambda,\beta)-\max_{\beta}\log\mathcal{L}(\lambda_0,\beta)\Big]$$
If $(\lambda_0,\beta_0)$ are the true values, this difference can be written
$$2\Big[\max_{\lambda,\beta}\log\mathcal{L}(\lambda,\beta)-\log\mathcal{L}(\lambda_0,\beta_0)\Big]-2\Big[\max_{\beta}\log\mathcal{L}(\lambda_0,\beta)-\log\mathcal{L}(\lambda_0,\beta_0)\Big]$$
Using a Taylor expansion,
$$\frac{\partial\log\mathcal{L}(\lambda,\beta)}{\partial\lambda}\bigg|_{(\lambda_0,\widehat{\beta}_{\lambda_0})}\sim\frac{\partial\log\mathcal{L}(\lambda,\beta)}{\partial\lambda}\bigg|_{(\lambda_0,\beta_0)}-I_{\beta_0\lambda_0}I^{-1}_{\beta_0\beta_0}\frac{\partial\log\mathcal{L}(\lambda_0,\beta)}{\partial\beta}\bigg|_{(\lambda_0,\beta_0)}$$
Thus,
$$\frac{1}{\sqrt{n}}\frac{\partial\log\mathcal{L}(\lambda,\beta)}{\partial\lambda}\bigg|_{(\lambda_0,\widehat{\beta}_{\lambda_0})}\xrightarrow{\mathcal{L}}\mathcal{N}\big(0,\,I_{\lambda_0\lambda_0}-I_{\lambda_0\beta_0}I^{-1}_{\beta_0\beta_0}I_{\beta_0\lambda_0}\big)$$
and $2\big[\log\mathcal{L}(\widehat\lambda,\widehat\beta)-\log\mathcal{L}(\lambda_0,\widehat\beta_{\lambda_0})\big]\xrightarrow{\mathcal{L}}\chi^2(\dim(\lambda))$.
Box-Cox
> library(MASS)
> boxcox(lm(dist ~ speed, data=cars))
Here $\lambda^{\star}\sim 0.5$.
[Figure: profile log-likelihood as a function of $\lambda$, with the 95% confidence interval]
[Figure: braking distance against speed, cars dataset]
Uncertainty: Parameters vs. Prediction
Uncertainty on the regression parameters $(\beta_0,\beta_1)$: from the output of the regression we can derive confidence intervals for $\beta_0$ and $\beta_1$, usually
$$\beta_k\in\Big[\widehat{\beta}_k\pm u_{1-\alpha/2}\,\text{se}[\widehat{\beta}_k]\Big]$$
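In R, these intervals come directly from the fitted model (a quick illustration on the cars data used below):

reg <- lm(dist ~ speed, data = cars)
confint(reg, level = 0.95)   # beta_k +/- t_{1-alpha/2} * se[beta_k]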
[Figure: braking distance against vehicle speed, with the uncertainty on the fitted regression line]
Uncertainty: Parameters vs. Prediction
Uncertainty on a prediction, $\widehat{y}=\widehat{m}(x)$. Usually
$$m(x)\in\Big[\widehat{m}(x)\pm u_{1-\alpha/2}\,\text{se}[\widehat{m}(x)]\Big]$$
hence, for a linear model,
$$\boldsymbol{x}^{\mathsf{T}}\widehat{\boldsymbol{\beta}}\pm u_{1-\alpha/2}\,\widehat{\sigma}\sqrt{\boldsymbol{x}^{\mathsf{T}}[\boldsymbol{X}^{\mathsf{T}}\boldsymbol{X}]^{-1}\boldsymbol{x}}$$
i.e. (with one covariate)
$$\text{se}^2[\widehat{m}(x)]=\text{Var}[\widehat{\beta}_0+\widehat{\beta}_1x]=\text{se}^2[\widehat{\beta}_0]+2\,\text{cov}[\widehat{\beta}_0,\widehat{\beta}_1]\,x+\text{se}^2[\widehat{\beta}_1]\,x^2$$
[Figure: braking distance against vehicle speed, with a pointwise confidence interval for the prediction]
> predict(lm(dist ~ speed, data=cars), newdata=data.frame(speed=x), interval="confidence")
Least Squares and Expected Value (Orthogonal Projection Theorem)
Let $\boldsymbol{y}\in\mathbb{R}^n$,
$$\bar{y}=\underset{m\in\mathbb{R}}{\text{argmin}}\left\{\sum_{i=1}^{n}\frac{1}{n}\big(\underbrace{y_i-m}_{\varepsilon_i}\big)^2\right\}.$$
It is the empirical version of
$$\mathbb{E}[Y]=\underset{m\in\mathbb{R}}{\text{argmin}}\left\{\int\big(\underbrace{y-m}_{\varepsilon}\big)^2dF(y)\right\}=\underset{m\in\mathbb{R}}{\text{argmin}}\left\{\mathbb{E}\big[(\underbrace{Y-m}_{\varepsilon})^2\big]\right\}$$
where $Y$ is a random variable.
Thus, $\underset{m(\cdot):\mathbb{R}^k\to\mathbb{R}}{\text{argmin}}\left\{\sum_{i=1}^{n}\frac{1}{n}\big(\underbrace{y_i-m(\boldsymbol{x}_i)}_{\varepsilon_i}\big)^2\right\}$ is the empirical version of $\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]$.
The Histogram and the Regressogram
Connections between the estimation of $f(y)$ and of $\mathbb{E}[Y|X=x]$.
Assume that $y_i\in[a_1,a_{k+1})$, divided into $k$ classes $[a_j,a_{j+1})$. The histogram is
$$\widehat{f}_a(y)=\sum_{j=1}^{k}\frac{\mathbf{1}(y\in[a_j,a_{j+1}))}{a_{j+1}-a_j}\cdot\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}(y_i\in[a_j,a_{j+1}))$$
Assume that $a_{j+1}-a_j=h_n$, with $h_n\to 0$ as $n\to\infty$ and $nh_n\to\infty$; then
$$\mathbb{E}\big[(\widehat{f}_a(y)-f(y))^2\big]\sim O(n^{-2/3})$$
(for an optimal choice of $h_n$).
> hist(height)
[Figure: histogram of heights, 150–190]
Then a moving histogram was considered,
$$\widehat{f}(y)=\frac{1}{2nh_n}\sum_{i=1}^{n}\mathbf{1}(y_i\in[y\pm h_n))=\frac{1}{nh_n}\sum_{i=1}^{n}k\left(\frac{y_i-y}{h_n}\right)$$
with $k(x)=\frac{1}{2}\mathbf{1}(x\in[-1,1))$, which is a (flat) kernel estimator.
> density(height, kernel = "rectangular")
[Figure: moving-histogram (rectangular kernel) density estimates of heights]
From Tukey (1961) Curves as parameters, and touch estimation, the regressogram is defined as
$$\widehat{m}_a(x)=\frac{\sum_{i=1}^{n}\mathbf{1}(x_i\in[a_j,a_{j+1}))\,y_i}{\sum_{i=1}^{n}\mathbf{1}(x_i\in[a_j,a_{j+1}))}$$
and the moving regressogram is
$$\widehat{m}(x)=\frac{\sum_{i=1}^{n}\mathbf{1}(x_i\in[x\pm h_n])\,y_i}{\sum_{i=1}^{n}\mathbf{1}(x_i\in[x\pm h_n])}$$
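A minimal sketch of the moving regressogram (not from the slides; regressogram is a hypothetical helper name):

regressogram <- function(x0, x, y, h) {
  # average of the y_i such that x_i is in the window [x0 - h, x0 + h]
  inside <- abs(x - x0) <= h
  if (any(inside)) mean(y[inside]) else NA
}
xs <- seq(5, 25, by = 0.25)
plot(cars)
lines(xs, sapply(xs, regressogram, x = cars$speed, y = cars$dist, h = 2))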
[Figure: regressogram and moving regressogram of dist against speed, cars dataset]
Nadaraya-Watson and Kernels
Background: Kernel Density Estimator.
Consider a sample $\{y_1,\cdots,y_n\}$, and let $F_n$ be the empirical cumulative distribution function,
$$F_n(y)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}(y_i\le y)$$
The empirical measure $\mathbb{P}_n$ puts weight $1/n$ on each observation.
Idea: add (a little) continuous noise to smooth $F_n$.
Let $Y_n$ denote a random variable with distribution $F_n$, and define $\widetilde{Y}=Y_n+hU$ where $U\perp\!\!\!\perp Y_n$, with cdf $K$.
The cumulative distribution function of $\widetilde{Y}$ is
$$\widetilde{F}(y)=\mathbb{P}[\widetilde{Y}\le y]=\mathbb{E}\big[\mathbf{1}(\widetilde{Y}\le y)\big]=\mathbb{E}\Big[\mathbb{E}\big[\mathbf{1}(\widetilde{Y}\le y)\,\big|\,Y_n\big]\Big]=\mathbb{E}\left[\mathbf{1}\left(U\le\frac{y-Y_n}{h}\right)\right]=\sum_{i=1}^{n}\frac{1}{n}K\left(\frac{y-y_i}{h}\right)$$
If we differentiate,
$$\widetilde{f}(y)=\frac{1}{nh}\sum_{i=1}^{n}k\left(\frac{y-y_i}{h}\right)=\frac{1}{n}\sum_{i=1}^{n}k_h(y-y_i)\quad\text{with }k_h(u)=\frac{1}{h}k\left(\frac{u}{h}\right)$$
$\widetilde{f}$ is the kernel density estimator of $f$, with kernel $k$ and bandwidth $h$.
Rectangular: $k(u)=\frac{1}{2}\mathbf{1}(|u|\le 1)$
Epanechnikov: $k(u)=\frac{3}{4}(1-u^2)\mathbf{1}(|u|\le 1)$
Gaussian: $k(u)=\frac{1}{\sqrt{2\pi}}e^{-u^2/2}$
> density(height, kernel = "epanechnikov")
[Figure: the three kernels on [−2, 2], and the corresponding density estimates of heights]
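The estimator $\widetilde{f}$ can also be hand-rolled in a few lines (a sketch, assuming a Gaussian kernel, and using cars$dist since the height data is not attached here):

kde <- function(y0, y, h) mean(dnorm((y0 - y) / h)) / h  # (1/nh) sum k((y0 - y_i)/h)
ys <- seq(0, 130, length.out = 200)
fhat <- sapply(ys, kde, y = cars$dist, h = bw.nrd0(cars$dist))
plot(ys, fhat, type = "l")  # close to density(cars$dist, kernel = "gaussian")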
Kernels and Statistical Properties
Consider here an i.i.d. sample $\{Y_1,\cdots,Y_n\}$ with density $f$.
Given $y$, observe that $\mathbb{E}[\widetilde{f}(y)]=\int\frac{1}{h}k\left(\frac{y-t}{h}\right)f(t)\,dt=\int k(u)f(y-hu)\,du$. Use a Taylor expansion around $h=0$, $f(y-hu)\sim f(y)-f'(y)hu+\frac{1}{2}f''(y)h^2u^2$, so
$$\mathbb{E}[\widetilde{f}(y)]=f(y)\int k(u)du-f'(y)h\int uk(u)du+\frac{1}{2}h^2\int f''(y+hu)u^2k(u)du$$
$$=f(y)+0+h^2\frac{f''(y)}{2}\int k(u)u^2du+o(h^2)$$
Thus, if $f$ is twice continuously differentiable with bounded second derivative, and
$$\int k(u)du=1,\quad\int uk(u)du=0\quad\text{and}\quad\int u^2k(u)du<\infty,$$
then $\mathbb{E}[\widetilde{f}(y)]=f(y)+h^2\dfrac{f''(y)}{2}\displaystyle\int k(u)u^2du+o(h^2)$.
For the heuristics of that bias, consider a flat kernel, and set
$$f_h(y)=\frac{F(y+h)-F(y-h)}{2h}$$
then the natural estimate is
$$\widehat{f}_h(y)=\frac{\widehat{F}(y+h)-\widehat{F}(y-h)}{2h}=\frac{1}{2nh}\sum_{i=1}^{n}\underbrace{\mathbf{1}(y_i\in[y\pm h])}_{Z_i}$$
where the $Z_i$'s are i.i.d. Bernoulli $\mathcal{B}(p_y)$ variables with $p_y=\mathbb{P}[Y_i\in[y\pm h]]=2h\cdot f_h(y)$. Thus $\mathbb{E}[\widehat{f}_h(y)]=f_h(y)$, while
$$f_h(y)\sim f(y)+\frac{h^2}{6}f''(y)\quad\text{as }h\sim 0.$$
Similarly, as $h\to 0$ and $nh\to\infty$,
$$\text{Var}[\widetilde{f}(y)]=\frac{1}{n}\Big(\mathbb{E}[k_h(y-Y)^2]-\big(\mathbb{E}[k_h(y-Y)]\big)^2\Big)=\frac{f(y)}{nh}\int k(u)^2du+o\left(\frac{1}{nh}\right)$$
Hence
• if $h\to 0$, the bias goes to 0
• if $nh\to\infty$, the variance goes to 0
Extension in higher dimension:
$$\widetilde{f}(\boldsymbol{y})=\frac{1}{n|\boldsymbol{H}|^{1/2}}\sum_{i=1}^{n}k\big(\boldsymbol{H}^{-1/2}(\boldsymbol{y}-\boldsymbol{y}_i)\big)\quad\text{or}\quad\widetilde{f}(\boldsymbol{y})=\frac{1}{nh^d|\boldsymbol{\Sigma}|^{1/2}}\sum_{i=1}^{n}k\left(\frac{\boldsymbol{\Sigma}^{-1/2}(\boldsymbol{y}-\boldsymbol{y}_i)}{h}\right)$$
[Figure: bivariate kernel density estimate of (height, weight), with contour levels from 2e−04 to 0.0018]
Kernels and Convolution
Given $f$ and $g$, set
$$(f\star g)(x)=\int_{\mathbb{R}}f(x-y)g(y)\,dy$$
Then $\widetilde{f}_h=(\widehat{f}\star k_h)$, where $\widehat{f}$ is the (discrete) density of the empirical measure,
$$\widehat{f}(y)=\frac{d\widehat{F}(y)}{dy}=\frac{1}{n}\sum_{i=1}^{n}\delta_{y_i}(y)$$
Hence, $\widetilde{f}$ is the distribution of $Y+\varepsilon$ where $Y$ is uniform over $\{y_1,\cdots,y_n\}$ and $\varepsilon\sim k_h$ are independent.
[Figure: convolution of the empirical measure with a kernel]
Nadaraya-Watson and Kernels
Here $\mathbb{E}[Y|X=x]=m(x)$. Write $m$ as a ratio of densities,
$$m(x)=\int y\,f(y|x)\,dy=\frac{\int y\,f(y,x)\,dy}{\int f(y,x)\,dy}$$
Consider some bivariate kernel $k$, such that $\int t\,k(t,u)\,dt=0$, and set $\kappa(u)=\int k(t,u)\,dt$.
The numerator can be estimated using
$$\int y\,\widetilde{f}(y,x)\,dy=\frac{1}{nh^2}\sum_{i=1}^{n}\int y\,k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^{n}\int y_i\,k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^{n}y_i\,\kappa\left(\frac{x-x_i}{h}\right)$$
and for the denominator,
$$\int\widetilde{f}(y,x)\,dy=\frac{1}{nh^2}\sum_{i=1}^{n}\int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^{n}\kappa\left(\frac{x-x_i}{h}\right)$$
Therefore, plugging into the expression for $m(x)$ yields
$$\widetilde{m}(x)=\frac{\sum_{i=1}^{n}y_i\,\kappa_h(x-x_i)}{\sum_{i=1}^{n}\kappa_h(x-x_i)}$$
Observe that this regression estimator is a weighted average (see the linear predictor section),
$$\widetilde{m}(x)=\sum_{i=1}^{n}\omega_i(x)\,y_i\quad\text{with}\quad\omega_i(x)=\frac{\kappa_h(x-x_i)}{\sum_{j=1}^{n}\kappa_h(x-x_j)}$$
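A direct transcription of this weighted average (a sketch; base R's ksmooth() implements the same estimator, with its own bandwidth scaling, so the two curves differ slightly):

nw <- function(x0, x, y, h) {
  w <- dnorm((x0 - x) / h)   # kernel weights kappa_h(x0 - x_i)
  sum(w * y) / sum(w)        # weighted average of the y_i
}
xs <- seq(5, 25, by = 0.25)
plot(cars)
lines(xs, sapply(xs, nw, x = cars$speed, y = cars$dist, h = 2), col = "red")
lines(ksmooth(cars$speed, cars$dist, kernel = "normal", bandwidth = 5), col = "blue")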
[Figure: Nadaraya-Watson fit of dist against speed, with the kernel weights at one point]
One can prove that the kernel regression bias is given by
$$\mathbb{E}[\widetilde{m}(x)]=m(x)+C_1h^2\left(\frac{1}{2}m''(x)+m'(x)\frac{f'(x)}{f(x)}\right)$$
In the univariate case, one can get a kernel estimator of the derivative,
$$\frac{d\widetilde{m}(x)}{dx}=\frac{1}{nh^2}\sum_{i=1}^{n}k'\left(\frac{x-x_i}{h}\right)y_i$$
Actually, $\widetilde{m}$ is a function of the bandwidth $h$.
Note: this can be extended to multivariate $x$.
[Figure: Nadaraya-Watson fit, cars dataset]
From kernels to k-nearest neighbours
An alternative is to consider
$$\widetilde{m}_k(x)=\frac{1}{n}\sum_{i=1}^{n}\omega_{i,k}(x)\,y_i\quad\text{where}\quad\omega_{i,k}(x)=\frac{n}{k}\,\mathbf{1}(i\in\mathcal{I}^k_x)$$
with $\mathcal{I}^k_x=\{i:\ x_i\text{ one of the }k\text{ nearest observations to }x\}$.
From Lai (1977) Large sample properties of K-nearest neighbor procedures, if $k\to\infty$ and $k/n\to 0$ as $n\to\infty$, then
$$\mathbb{E}[\widetilde{m}_k(x)]\sim m(x)+\frac{1}{24f(x)^3}\big(m''f+2m'f'\big)(x)\left(\frac{k}{n}\right)^2\quad\text{while}\quad\text{Var}[\widetilde{m}_k(x)]\sim\frac{\sigma^2(x)}{k}$$
[Figure: k-nearest-neighbour fit of dist against speed]
Remark: Brent & John (1985) Finding the median requires 2n comparisons considered some median smoothing algorithm, where we consider the median over the k nearest neighbours (see section #4).
k-Nearest Neighbors and Curse of Dimensionality
The higher the dimension, the larger the distance to the closest neighbour, $\min_{i\in\{1,\cdots,n\}}\{d(a,\boldsymbol{x}_i)\}$, $\boldsymbol{x}_i\in\mathbb{R}^d$.
[Figure: boxplots of the distance to the closest neighbour, dimensions 1 to 5, for n = 10 and n = 100]
Bandwidth selection : MISE for Density
$$\text{MSE}[\widetilde{f}(y)]=\text{bias}[\widetilde{f}(y)]^2+\text{Var}[\widetilde{f}(y)]$$
$$\text{MSE}[\widetilde{f}(y)]=f(y)\frac{1}{nh}\int k(u)^2du+h^4\left(\frac{f''(y)}{2}\int k(u)u^2du\right)^2+o\left(h^4+\frac{1}{nh}\right)$$
The bandwidth choice is based on the minimization of the asymptotic integrated MSE (over $y$),
$$\text{MISE}(\widetilde{f})=\int\text{MSE}[\widetilde{f}(y)]\,dy\sim\frac{1}{nh}\int k(u)^2du+\frac{h^4}{4}\int f''(y)^2dy\left(\int k(u)u^2du\right)^2$$
Thus, the first-order condition yields
$$-\frac{C_1}{nh^2}+h^3C_2\int f''(y)^2dy=0$$
with $C_1=\int k(u)^2du$ and $C_2=\left(\int k(u)u^2du\right)^2$, and
$$h^{\star}=n^{-1/5}\left(\frac{C_1}{C_2\int f''(y)^2dy}\right)^{1/5}$$
$h^{\star}=1.06\,n^{-1/5}\sqrt{\text{Var}[Y]}$ from Silverman (1986) Density Estimation
> bw.nrd0(cars$speed)
[1] 2.150016
> bw.nrd(cars$speed)
[1] 2.532241
with the Scott correction, see Scott (1992) Multivariate Density Estimation
Bandwidth selection : MISE for Regression Model
One can prove that
$$\text{MISE}[\widehat{m}_h]\sim\underbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\left(m''(x)+2m'(x)\frac{f'(x)}{f(x)}\right)^2dx}_{\text{bias}^2}+\underbrace{\frac{\sigma^2}{nh}\int k^2(x)dx\cdot\int\frac{dx}{f(x)}}_{\text{variance}}$$
as $h\to 0$ and $nh\to\infty$.
The bias is sensitive to the position of the $x_i$'s.
$$h^{\star}=n^{-1/5}\left(\frac{C_1\int\frac{dx}{f(x)}}{C_2\int\left(m''(x)+2m'(x)\frac{f'(x)}{f(x)}\right)^2dx}\right)^{1/5}$$
Problem: this depends on the unknown $f(x)$ and $m(x)$.
Bandwidth Selection : Cross Validation
Let $R(h)=\mathbb{E}\big[(Y-\widehat{m}_h(X))^2\big]$.
A natural idea is $\widehat{R}(h)=\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\big(y_i-\widehat{m}_h(x_i)\big)^2$.
Instead, use leave-one-out cross validation,
$$\widehat{R}(h)=\frac{1}{n}\sum_{i=1}^{n}\big(y_i-\widehat{m}_h^{(i)}(x_i)\big)^2$$
where $\widehat{m}_h^{(i)}$ is the estimator obtained by omitting the $i$th pair $(y_i,x_i)$, or $k$-fold cross validation,
$$\widehat{R}(h)=\frac{1}{n}\sum_{j=1}^{k}\sum_{i\in\mathcal{I}_j}\big(y_i-\widehat{m}_h^{(j)}(x_i)\big)^2$$
where $\widehat{m}_h^{(j)}$ is the estimator obtained by omitting the pairs $(y_i,x_i)$ with $i\in\mathcal{I}_j$.
[Figure: kernel fits of dist against speed, for several bandwidths]
Then find (numerically)
$$\widehat{h}=\underset{h}{\text{argmin}}\{\widehat{R}(h)\}$$
In the context of density estimation, see Chiu (1991) Bandwidth Selection for Kernel Density Estimation.
[Figure: cross-validated risk as a function of the bandwidth]
This is the usual bias-variance tradeoff, or Goldilocks principle: $h$ should be neither too small, nor too large,
• undersmoothed: variance too large, bias too small
• oversmoothed: bias too large, variance too small
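A sketch of the leave-one-out search for $\widehat{h}$, on a Nadaraya-Watson fit with Gaussian kernel (helper names hypothetical):

nw <- function(x0, x, y, h) { w <- dnorm((x0 - x) / h); sum(w * y) / sum(w) }
loo_risk <- function(h, x, y) {
  # R_hat(h) = (1/n) sum (y_i - m_hat^(i)(x_i))^2, refitting without observation i
  mean(sapply(seq_along(y), function(i) (y[i] - nw(x[i], x[-i], y[-i], h))^2))
}
hs <- seq(0.5, 5, by = 0.1)
risk <- sapply(hs, loo_risk, x = cars$speed, y = cars$dist)
hs[which.min(risk)]  # h_hat = argmin R_hat(h)
plot(hs, risk, type = "l", xlab = "bandwidth", ylab = "LOO risk")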
Local Linear Regression
Consider $\widehat{m}(x)$ defined as $\widehat{m}(x)=\widehat{\beta}_0$ where $(\widehat{\beta}_0,\widehat{\boldsymbol{\beta}})$ is the solution of
$$\min_{(\beta_0,\boldsymbol{\beta})}\left\{\sum_{i=1}^{n}\omega_i^{(x)}\big(y_i-[\beta_0+(x-x_i)^{\mathsf{T}}\boldsymbol{\beta}]\big)^2\right\}$$
where $\omega_i^{(x)}=k_h(x-x_i)$, i.e. we seek the constant term in a weighted least squares regression of the $y_i$'s on the $x-x_i$'s. If $\boldsymbol{X}_x$ is the matrix $[\boldsymbol{1}\ \ (x-\boldsymbol{X})^{\mathsf{T}}]$, and if $\boldsymbol{W}_x$ is the matrix $\text{diag}[k_h(x-x_1),\cdots,k_h(x-x_n)]$, then
$$\widehat{m}(x)=\boldsymbol{1}^{\mathsf{T}}(\boldsymbol{X}_x^{\mathsf{T}}\boldsymbol{W}_x\boldsymbol{X}_x)^{-1}\boldsymbol{X}_x^{\mathsf{T}}\boldsymbol{W}_x\boldsymbol{y}$$
This estimator is also a linear predictor:
$$\widehat{m}(x)=\sum_{i=1}^{n}a_i(x)\,y_i$$
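Since $\widehat{m}(x)$ is just the intercept of a weighted least squares fit, a few lines of R suffice (a sketch; loess() and locfit(), used below, are the production versions):

loclin <- function(x0, x, y, h) {
  w <- dnorm((x0 - x) / h)                 # weights omega_i^(x) = k_h(x0 - x_i)
  coef(lm(y ~ I(x - x0), weights = w))[1]  # constant term = m_hat(x0)
}
xs <- seq(5, 25, by = 0.25)
plot(cars)
lines(xs, sapply(xs, loclin, x = cars$speed, y = cars$dist, h = 2), col = "red")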
where
$$a_i(x)=\frac{1}{n}k_h(x-x_i)\left[1-s_1(x)^{\mathsf{T}}s_2(x)^{-1}\frac{x-x_i}{h}\right]$$
with
$$s_1(x)=\frac{1}{n}\sum_{i=1}^{n}k_h(x-x_i)\frac{x-x_i}{h}\quad\text{and}\quad s_2(x)=\frac{1}{n}\sum_{i=1}^{n}k_h(x-x_i)\frac{x-x_i}{h}\left(\frac{x-x_i}{h}\right)^{\mathsf{T}}$$
Note that the Nadaraya-Watson estimator was simply the solution of
$$\min_{\beta_0}\left\{\sum_{i=1}^{n}\omega_i^{(x)}(y_i-\beta_0)^2\right\}\quad\text{where }\omega_i^{(x)}=k_h(x-x_i)$$
$$\mathbb{E}[\widehat{m}(x)]\sim m(x)+\frac{h^2}{2}m''(x)\,\mu_2\quad\text{where }\mu_2=\int k(u)u^2du,$$
$$\text{Var}[\widehat{m}(x)]\sim\frac{1}{nh}\frac{\nu\,\sigma_x^2}{f(x)}$$
where $\nu=\int k(u)^2du$.
Thus, the kernel regression MSE is
$$\frac{h^4}{4}\left(m''(x)+2m'(x)\frac{f'(x)}{f(x)}\right)^2\mu_2^2+\frac{1}{nh}\frac{\nu\,\sigma_x^2}{f(x)}$$
[Figure: local linear fits of braking distance against vehicle speed]
> REG <- loess(dist ~ speed, data=cars, span=0.75, degree=1)
> predict(REG, data.frame(speed = seq(5, 25, 0.25)), se = TRUE)
[Figure: loess fit (degree 1) with pointwise standard errors, cars dataset]
Local polynomials
One might assume that, locally, $m(u)\sim\mu_x(u)$ as $u\sim x$, with
$$\mu_x(u)=\beta_0^{(x)}+\beta_1^{(x)}[u-x]+\beta_2^{(x)}\frac{[u-x]^2}{2!}+\beta_3^{(x)}\frac{[u-x]^3}{3!}+\cdots$$
and we estimate $\boldsymbol{\beta}^{(x)}$ by minimizing $\displaystyle\sum_{i=1}^{n}\omega_i^{(x)}\big(y_i-\mu_x(x_i)\big)^2$.
If $\boldsymbol{X}_x$ is the design matrix with rows $\left[1\ \ x_i-x\ \ \frac{[x_i-x]^2}{2!}\ \ \frac{[x_i-x]^3}{3!}\ \cdots\right]$, then
$$\widehat{\boldsymbol{\beta}}^{(x)}=\big(\boldsymbol{X}_x^{\mathsf{T}}\boldsymbol{W}_x\boldsymbol{X}_x\big)^{-1}\boldsymbol{X}_x^{\mathsf{T}}\boldsymbol{W}_x\boldsymbol{y}$$
(weighted least squares estimators).
> library(locfit)
> locfit(dist ~ speed, data=cars)
Series Regression
Recall that $\mathbb{E}[Y|X=x]=m(x)$.
Why not approximate $m$ by a linear combination of approximating functions $h_1(x),\cdots,h_k(x)$?
Set $\boldsymbol{h}(x)=(h_1(x),\cdots,h_k(x))$, and consider the regression of the $y_i$'s on the $\boldsymbol{h}(x_i)$'s,
$$y_i=\boldsymbol{h}(x_i)^{\mathsf{T}}\boldsymbol{\beta}+\varepsilon_i$$
Then $\widehat{\boldsymbol{\beta}}=(\boldsymbol{H}^{\mathsf{T}}\boldsymbol{H})^{-1}\boldsymbol{H}^{\mathsf{T}}\boldsymbol{y}$.
[Figure: series regression fits of braking distance against vehicle speed]
Series Regression : polynomials
Even if $m(x)=\mathbb{E}(Y|X=x)$ is not a polynomial function, a polynomial can still be a good approximation.
From the Stone-Weierstrass theorem, if $m(\cdot)$ is continuous on some interval, then there is a uniform approximation of $m(\cdot)$ by polynomial functions.
> reg <- lm(y ~ x, data=db)
[Figure: linear fit on the simulated data, over [0, 10]]
Assume that $m(x)=\mathbb{E}(Y|X=x)=\displaystyle\sum_{i=0}^{k}\alpha_i x^i$, where the parameters $\alpha_0,\cdots,\alpha_k$ will be estimated (but not $k$).
> reg <- lm(y ~ poly(x, 5), data=db)
> reg <- lm(y ~ poly(x, 25), data=db)
[Figure: polynomial fits of degree 5 and degree 25 on the simulated data]
Series Regression : (Linear) Splines
Consider $m+1$ knots on $\mathcal{X}$, $\min\{x_i\}\le t_0\le t_1\le\cdots\le t_m\le\max\{x_i\}$; then define the (positive) linear (degree 1) spline functions
$$b_{j,1}(x)=(x-t_j)_+=\begin{cases}x-t_j & \text{if }x>t_j\\ 0 & \text{otherwise}\end{cases}$$
For linear splines, consider
$$Y_i=\beta_0+\beta_1X_i+\beta_2(X_i-s)_++\varepsilon_i$$
> positive_part <- function(x) ifelse(x>0, x, 0)
> reg <- lm(Y ~ X + positive_part(X-s), data=db)
[Figure: linear spline fit with one knot, simulated data]
For linear splines with two knots, consider
$$Y_i=\beta_0+\beta_1X_i+\beta_2(X_i-s_1)_++\beta_3(X_i-s_2)_++\varepsilon_i$$
> reg <- lm(Y ~ X + positive_part(X-s1) + positive_part(X-s2), data=db)
> library(splines)
A spline is a function defined by piecewise polynomials. b-splines are defined recursively.
[Figure: linear spline fits with two knots, simulated data and cars dataset]
b-Splines (in Practice)
> reg1 <- lm(dist ~ speed + positive_part(speed-15), data=cars)
> reg2 <- lm(dist ~ bs(speed, df=2, degree=1), data=cars)
Consider $m+1$ knots on $[0,1]$, $0\le t_0\le t_1\le\cdots\le t_m\le 1$; then define recursively the b-splines as
$$b_{j,0}(t)=\begin{cases}1 & \text{if }t_j\le t<t_{j+1}\\ 0 & \text{otherwise}\end{cases}$$
and
$$b_{j,n}(t)=\frac{t-t_j}{t_{j+n}-t_j}\,b_{j,n-1}(t)+\frac{t_{j+n+1}-t}{t_{j+n+1}-t_{j+1}}\,b_{j+1,n-1}(t)$$
[Figure: b-spline basis with knots at {0, 0.25, 0.5, 0.75, 1}, and the corresponding fits, cars dataset]
> summary(reg1)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   -7.6519    10.6254  -0.720    0.475
speed          3.0186     0.8627   3.499    0.001 **
(speed-15)     1.7562     1.4551   1.207    0.233

> summary(reg2)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)     4.423      7.343   0.602   0.5493
bs(speed)1     33.205      9.489   3.499   0.0012 **
bs(speed)2     80.954      8.788   9.211  4.2e-12 ***
[Figure: the two fits reg1 and reg2, cars dataset]
b and p-Splines
O’Sullivan (1986) A statistical perspective on ill-posed inverse problems suggested a penalty on the second derivative of the fitted curve (see #3),
$$\widehat{m}(x)=\underset{\boldsymbol{\beta}}{\text{argmin}}\left\{\sum_{i=1}^{n}\big(y_i-\boldsymbol{b}(x_i)^{\mathsf{T}}\boldsymbol{\beta}\big)^2+\lambda\int_{\mathbb{R}}\big[\boldsymbol{b}''(x)^{\mathsf{T}}\boldsymbol{\beta}\big]^2dx\right\}$$
[Figure: penalized (p-)spline fits, cars dataset]
Adding Constraints: Convex Regression
Assume that $y_i=m(\boldsymbol{x}_i)+\varepsilon_i$ where $m:\mathbb{R}^d\to\mathbb{R}$ is some convex function.
$m$ is convex if and only if, $\forall\boldsymbol{x}_1,\boldsymbol{x}_2\in\mathbb{R}^d$, $\forall t\in[0,1]$,
$$m(t\boldsymbol{x}_1+[1-t]\boldsymbol{x}_2)\le t\,m(\boldsymbol{x}_1)+[1-t]\,m(\boldsymbol{x}_2)$$
Proposition (Hildreth (1954) Point Estimates of Ordinates of Concave Functions):
$$\widehat{m}=\underset{m\text{ convex}}{\text{argmin}}\left\{\sum_{i=1}^{n}\big(y_i-m(\boldsymbol{x}_i)\big)^2\right\}$$
Then $\widehat{\boldsymbol{\theta}}=(\widehat{m}(\boldsymbol{x}_1),\cdots,\widehat{m}(\boldsymbol{x}_n))$ is unique.
Let $\boldsymbol{y}=\boldsymbol{\theta}+\boldsymbol{\varepsilon}$; then
$$\widehat{\boldsymbol{\theta}}=\underset{\boldsymbol{\theta}\in\mathcal{K}}{\text{argmin}}\left\{\sum_{i=1}^{n}(y_i-\theta_i)^2\right\}$$
where $\mathcal{K}=\{\boldsymbol{\theta}\in\mathbb{R}^n:\exists m\text{ convex},\ m(\boldsymbol{x}_i)=\theta_i\}$. I.e. $\widehat{\boldsymbol{\theta}}$ is the projection of $\boldsymbol{y}$ onto the (closed) convex cone $\mathcal{K}$. The projection theorem gives existence and uniqueness.
In dimension 1: $y_i=m(x_i)+\varepsilon_i$. Assume the observations are ordered, $x_1<x_2<\cdots<x_n$. Here
$$\mathcal{K}=\left\{\boldsymbol{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\le\frac{\theta_3-\theta_2}{x_3-x_2}\le\cdots\le\frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\}$$
Hence this is a quadratic program with $n-2$ linear constraints.
$\widehat{m}$ is a piecewise linear function (the interpolation of consecutive pairs $(x_i,\widehat{\theta}_i)$).
If $m$ is differentiable, $m$ is convex if
$$m(\boldsymbol{x})+\nabla m(\boldsymbol{x})\cdot[\boldsymbol{y}-\boldsymbol{x}]\le m(\boldsymbol{y})$$
[Figure: convex regression fit of dist against speed]
More generally: if $m$ is convex, then there exists $\boldsymbol{\xi}_x\in\mathbb{R}^d$ such that
$$m(\boldsymbol{x})+\boldsymbol{\xi}_x\cdot[\boldsymbol{y}-\boldsymbol{x}]\le m(\boldsymbol{y})$$
$\boldsymbol{\xi}_x$ is a subgradient of $m$ at $\boldsymbol{x}$, and
$$\partial m(\boldsymbol{x})=\big\{\boldsymbol{\xi}:m(\boldsymbol{x})+\boldsymbol{\xi}\cdot[\boldsymbol{y}-\boldsymbol{x}]\le m(\boldsymbol{y}),\ \forall\boldsymbol{y}\big\}$$
Hence, $\widehat{\boldsymbol{\theta}}$ is the solution of
$$\text{argmin}\ \|\boldsymbol{y}-\boldsymbol{\theta}\|^2\quad\text{subject to}\quad\theta_i+\boldsymbol{\xi}_i^{\mathsf{T}}[\boldsymbol{x}_j-\boldsymbol{x}_i]\le\theta_j,\ \forall i,j$$
with $\boldsymbol{\xi}_1,\cdots,\boldsymbol{\xi}_n\in\mathbb{R}^d$.
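In dimension 1, the quadratic program of the previous slide can be solved with quadprog (a sketch on synthetic data, assuming sorted, distinct x's; the constraint rows encode nondecreasing consecutive slopes):

library(quadprog)
set.seed(1)
x <- sort(runif(50, 0, 10)); y <- (x - 5)^2 + rnorm(50, sd = 3)
n <- length(y)
# row i: theta_i (x_{i+2}-x_{i+1}) - theta_{i+1}(x_{i+2}-x_i) + theta_{i+2}(x_{i+1}-x_i) >= 0
A <- matrix(0, n - 2, n)
for (i in 1:(n - 2)) {
  A[i, i]     <- x[i + 2] - x[i + 1]
  A[i, i + 1] <- -(x[i + 2] - x[i])
  A[i, i + 2] <- x[i + 1] - x[i]
}
# minimize ||y - theta||^2 subject to A theta >= 0
theta <- solve.QP(Dmat = diag(n), dvec = y, Amat = t(A), bvec = rep(0, n - 2))$solution
plot(x, y); lines(x, theta, col = "red")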
[Figure: convex regression via subgradient constraints, cars dataset]
Testing (Non-)Linearities
In the linear model,
$$\widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\boldsymbol{X}[\boldsymbol{X}^{\mathsf{T}}\boldsymbol{X}]^{-1}\boldsymbol{X}^{\mathsf{T}}}_{\boldsymbol{H}}\boldsymbol{y}$$
$H_{i,i}$ is the leverage of the $i$th element of this hat matrix.
Write
$$\widehat{y}_i=\sum_{j=1}^{n}\big[\boldsymbol{x}_i^{\mathsf{T}}[\boldsymbol{X}^{\mathsf{T}}\boldsymbol{X}]^{-1}\boldsymbol{X}^{\mathsf{T}}\big]_j\,y_j=\sum_{j=1}^{n}[\boldsymbol{H}(\boldsymbol{x}_i)]_j\,y_j$$
where $\boldsymbol{H}(\boldsymbol{x})=\boldsymbol{x}^{\mathsf{T}}[\boldsymbol{X}^{\mathsf{T}}\boldsymbol{X}]^{-1}\boldsymbol{X}^{\mathsf{T}}$.
The prediction is
$$\widehat{m}(\boldsymbol{x})=\widehat{\mathbb{E}}(Y|\boldsymbol{X}=\boldsymbol{x})=\sum_{j=1}^{n}[\boldsymbol{H}(\boldsymbol{x})]_j\,y_j$$
More generally, a predictor $\widehat{m}$ is said to be linear if, for all $\boldsymbol{x}$, there is $\boldsymbol{S}(\cdot):\mathbb{R}^n\to\mathbb{R}^n$ such that
$$\widehat{m}(\boldsymbol{x})=\sum_{j=1}^{n}S(\boldsymbol{x})_j\,y_j$$
Conversely, given $\widehat{y}_1,\cdots,\widehat{y}_n$, there is an $n\times n$ matrix $\boldsymbol{S}$ such that $\widehat{\boldsymbol{y}}=\boldsymbol{S}\boldsymbol{y}$.
For the linear model, $\boldsymbol{S}=\boldsymbol{H}$, and $\text{trace}(\boldsymbol{H})=\dim(\boldsymbol{\beta})$: the degrees of freedom.
$\dfrac{H_{i,i}}{1-H_{i,i}}$ is related to Cook's distance, from Cook (1977) Detection of Influential Observations in Linear Regression.
For a kernel regression model, with kernel $k$ and bandwidth $h$,
$$S^{(k,h)}_{i,j}=\frac{k_h(x_i-x_j)}{\sum_{\ell=1}^{n}k_h(x_i-x_\ell)}$$
where $k_h(\cdot)=k(\cdot/h)$, while $S^{(k,h)}(x)_j=\dfrac{k_h(x-x_j)}{\sum_{\ell=1}^{n}k_h(x-x_\ell)}$.
For a $k$-nearest-neighbour predictor, $S^{(k)}_{i,j}=\frac{1}{k}\mathbf{1}(j\in\mathcal{I}_{x_i})$ where $\mathcal{I}_{x_i}$ denotes the $k$ nearest observations to $x_i$, while $S^{(k)}(x)_j=\frac{1}{k}\mathbf{1}(j\in\mathcal{I}_x)$.
Observe that $\text{trace}(\boldsymbol{S})$ is usually seen as a degree of smoothness.
Do we have to smooth? Isn't the linear model sufficient?
Define
$$T=\frac{\|\boldsymbol{S}\boldsymbol{y}-\boldsymbol{H}\boldsymbol{y}\|^2}{\text{trace}([\boldsymbol{S}-\boldsymbol{H}]^{\mathsf{T}}[\boldsymbol{S}-\boldsymbol{H}])}$$
If the model is linear, then $T$ has a Fisher distribution.
Remark: in the case of a linear predictor, with smoothing matrix $\boldsymbol{S}_h$,
$$\widehat{R}(h)=\frac{1}{n}\sum_{i=1}^{n}\big(y_i-\widehat{m}_h^{(-i)}(x_i)\big)^2=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i-\widehat{m}_h(x_i)}{1-[\boldsymbol{S}_h]_{i,i}}\right)^2$$
so we do not need to estimate $n$ models.
Confidence Intervals
If $\widehat{y}=\widehat{m}_h(x)=\boldsymbol{S}_h(x)\boldsymbol{y}$, let $\widehat{\sigma}^2=\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\big(y_i-\widehat{m}_h(x_i)\big)^2$; a confidence interval at $x$ is then
$$\widehat{m}_h(x)\pm t_{1-\alpha/2}\,\widehat{\sigma}\sqrt{\boldsymbol{S}_h(x)\boldsymbol{S}_h(x)^{\mathsf{T}}}$$
[Figure: pointwise confidence intervals around the smoothed fit, braking distance against vehicle speed]
Confidence Bands
[Figure: two perspective plots of the confidence bands, dist against speed]
Also called variability bands for functions in Härdle (1990) Applied Nonparametric Regression.
From Collomb (1979) Conditions nécessaires et suffisantes de convergence uniforme d'un estimateur de la régression, with kernel regression (Nadaraya-Watson),
$$\sup_x|\widehat{m}_h(x)-m(x)|\sim C_1h^2+C_2\sqrt{\frac{\log n}{nh}}$$
and, in higher dimension,
$$\sup_x|\widehat{m}_h(x)-m(x)|\sim C_1h^2+C_2\sqrt{\frac{\log n}{nh^{\dim(x)}}}$$
So far, we have mainly discussed pointwise convergence, with
$$\sqrt{nh}\,\big(\widehat{m}_h(x)-m(x)\big)\xrightarrow{\mathcal{L}}\mathcal{N}(\mu_x,\sigma^2_x).$$
This asymptotic normality can be used to derive (pointwise) confidence intervals,
$$\mathbb{P}\big(\widehat{IC}^-(x)\le m(x)\le\widehat{IC}^+(x)\big)=1-\alpha,\quad\forall x\in\mathcal{X}.$$
But we can also seek uniform convergence properties. We want to derive functions $\widehat{IC}^{\pm}$ such that
$$\mathbb{P}\big(\widehat{IC}^-(x)\le m(x)\le\widehat{IC}^+(x),\ \forall x\in\mathcal{X}\big)=1-\alpha.$$
• Bonferroni's correction
Use a standard Gaussian (pointwise) confidence interval,
$$\widehat{IC}^{\pm}(x)=\widehat{m}(x)\pm t_{1-\alpha/2}\frac{\widehat{\sigma}}{\sqrt{nh}},$$
and take also into account the regularity of $m$: set
$$V(\eta)=\frac{1}{2}\left[\frac{2\eta+1}{n}+\frac{1}{n}\right]\|m\|_{\infty,x},\quad\text{for some }0<\eta<1,$$
where $\|\cdot\|_{\infty,x}$ is the sup norm on a neighborhood of $x$. Then consider
$$\widetilde{IC}^{\pm}(x)=\widehat{IC}^{\pm}(x)\pm V(\eta).$$
• Use of Gaussian processes
Observe that $\sqrt{nh}\,\big(\widehat{m}_h(x)-m(x)\big)\xrightarrow{\mathcal{D}}G_x$ for some Gaussian process $(G_x)$.
Confidence bands are derived from quantiles of $\sup\{G_x,\ x\in\mathcal{X}\}$.
If we use kernel $k$ for smoothing, Johnston (1982) Probabilities of Maximal Deviations for Nonparametric Regression Function Estimates proved that
$$G_x=\int k(x-t)\,dW_t,\quad\text{for some standard Wiener process }(W_t),$$
is then a Gaussian process with variance $\int k(x)k(t-x)\,dt$. And
$$\widehat{IC}^{\pm}(x)=\widehat{\varphi}(x)\pm\left(\frac{q_\alpha}{\sqrt{2\log(1/h)}}+d_n\right)\sqrt{\frac{\widehat{\sigma}^2}{nh}}$$
with $d_n=\sqrt{2\log h^{-1}}+\dfrac{1}{\sqrt{2\log h^{-1}}}\log\left(\dfrac{3}{4\pi^2}\right)$, where $\exp(-2\exp(-q_\alpha))=1-\alpha$.
• Bootstrap (see #2)
Finally, McDonald (1986) Smoothing with Split Linear Fits suggested a bootstrap algorithm to approximate the distribution of $Z_n=\sup\{|\widehat{\varphi}(x)-\varphi(x)|,\ x\in\mathcal{X}\}$.
Depending on the smoothing parameter $h$, we get different corrections.
[Figures: the corrected confidence bands, for several values of $h$]
Boosting to Capture NonLinear Effects
We want to solve
$$\widehat{m}=\underset{m}{\text{argmin}}\left\{\mathbb{E}\big[(Y-m(\boldsymbol{X}))^2\big]\right\}$$
The heuristics is simple: we consider an iterative process where we keep modeling the errors.
Fit a model for $\boldsymbol{y}$, $h_1(\cdot)$, from $\boldsymbol{y}$ and $\boldsymbol{X}$, and compute the error, $\boldsymbol{\varepsilon}_1=\boldsymbol{y}-h_1(\boldsymbol{X})$.
Fit a model for $\boldsymbol{\varepsilon}_1$, $h_2(\cdot)$, from $\boldsymbol{\varepsilon}_1$ and $\boldsymbol{X}$, and compute the error, $\boldsymbol{\varepsilon}_2=\boldsymbol{\varepsilon}_1-h_2(\boldsymbol{X})$, etc. Then set
$$m_k(\cdot)=\underbrace{h_1(\cdot)}_{\sim\boldsymbol{y}}+\underbrace{h_2(\cdot)}_{\sim\boldsymbol{\varepsilon}_1}+\underbrace{h_3(\cdot)}_{\sim\boldsymbol{\varepsilon}_2}+\cdots+\underbrace{h_k(\cdot)}_{\sim\boldsymbol{\varepsilon}_{k-1}}$$
Hence, we consider an iterative procedure, $m_k(\cdot)=m_{k-1}(\cdot)+h_k(\cdot)$.
Boosting
$h(\boldsymbol{x})=\boldsymbol{y}-m_k(\boldsymbol{x})$, which can be interpreted as a residual. Note that this residual is the gradient of $\frac{1}{2}[\boldsymbol{y}-m_k(\boldsymbol{x})]^2$.
A gradient descent is based on the Taylor expansion
$$f(\boldsymbol{x}_k)\sim f(\boldsymbol{x}_{k-1})+\underbrace{(\boldsymbol{x}_k-\boldsymbol{x}_{k-1})}_{\alpha}\,\nabla f(\boldsymbol{x}_{k-1})$$
But here, it is different. We claim we can write
$$f_k(\boldsymbol{x})\sim f_{k-1}(\boldsymbol{x})+\underbrace{(f_k-f_{k-1})}_{\beta}\,?$$
where the ? is interpreted as a 'gradient'.
Here, $f_k$ is an $\mathbb{R}^d\to\mathbb{R}$ function, so the gradient should live in such a (big) functional space → we want to approximate that function,
$$m_k(\boldsymbol{x})=m_{k-1}(\boldsymbol{x})+\underset{f\in\mathcal{F}}{\text{argmin}}\left\{\sum_{i=1}^{n}\big(y_i-[m_{k-1}(\boldsymbol{x}_i)+f(\boldsymbol{x}_i)]\big)^2\right\}$$
where $f\in\mathcal{F}$ means that we seek in a class of weak learner functions.
If learners are too strong, the first loop leads to some fixed point and there is no learning procedure; see the linear regression $\boldsymbol{y}=\boldsymbol{x}^{\mathsf{T}}\boldsymbol{\beta}+\boldsymbol{\varepsilon}$: since $\widehat{\boldsymbol{\varepsilon}}\perp\boldsymbol{x}$, we cannot learn from the residuals.
In order to make sure that we learn weakly, we can use some shrinkage parameter $\nu$ (or a collection of parameters $\nu_j$).
Boosting with Piecewise Linear Spline & Stump Functions
Instead of $\boldsymbol{\varepsilon}_k=\boldsymbol{\varepsilon}_{k-1}-h_k(\boldsymbol{x})$, set $\boldsymbol{\varepsilon}_k=\boldsymbol{\varepsilon}_{k-1}-\nu\cdot h_k(\boldsymbol{x})$.
Remark: stumps are related to regression trees (see the 2015 course).
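A sketch of L2 boosting with stumps as weak learners (one-split rpart trees; the shrinkage ν and the number of iterations K are illustrative values):

library(rpart)
nu <- 0.1; K <- 100
eps  <- cars$dist           # epsilon_0 = y
pred <- rep(0, nrow(cars))  # m_0 = 0
for (k in 1:K) {
  stump <- rpart(eps ~ speed, data = cars, maxdepth = 1)  # weak learner h_k
  hk   <- predict(stump)
  pred <- pred + nu * hk    # m_k = m_{k-1} + nu * h_k
  eps  <- eps - nu * hk     # epsilon_k = epsilon_{k-1} - nu * h_k
}
plot(cars)
lines(sort(cars$speed), pred[order(cars$speed)], col = "red")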
[Figure: boosting iterations on simulated data over [0, 6], with spline and stump weak learners]
Ruptures
One can use the Chow test to test for a rupture. Note that it is simply a Fisher test, with two parts,
$$\boldsymbol{\beta}=\begin{cases}\boldsymbol{\beta}_1 & \text{for }i=1,\cdots,i_0\\ \boldsymbol{\beta}_2 & \text{for }i=i_0+1,\cdots,n\end{cases}$$
and test
$$\begin{cases}H_0:\boldsymbol{\beta}_1=\boldsymbol{\beta}_2\\ H_1:\boldsymbol{\beta}_1\neq\boldsymbol{\beta}_2\end{cases}$$
$i_0$ is a point between $k$ and $n-k$ (we need enough observations). Chow (1960) Tests of Equality Between Sets of Coefficients in Two Linear Regressions suggested
$$F_{i_0}=\frac{\widehat{\boldsymbol{\eta}}^{\mathsf{T}}\widehat{\boldsymbol{\eta}}-\widehat{\boldsymbol{\varepsilon}}^{\mathsf{T}}\widehat{\boldsymbol{\varepsilon}}}{\widehat{\boldsymbol{\varepsilon}}^{\mathsf{T}}\widehat{\boldsymbol{\varepsilon}}/(n-2k)}$$
where $\widehat{\varepsilon}_i=y_i-\boldsymbol{x}_i^{\mathsf{T}}\widehat{\boldsymbol{\beta}}$, and $\widehat{\eta}_i=\begin{cases}y_i-\boldsymbol{x}_i^{\mathsf{T}}\widehat{\boldsymbol{\beta}}_1 & \text{for }i=k,\cdots,i_0\\ y_i-\boldsymbol{x}_i^{\mathsf{T}}\widehat{\boldsymbol{\beta}}_2 & \text{for }i=i_0+1,\cdots,n-k\end{cases}$
> library(strucchange)
> Fstats(dist ~ speed, data=cars, from=7/50)
[Figure: the cars data, and the sequence of F statistics (from = 7/50)]
Testing for a rupture: the Chow test
> Fstats(dist ~ speed, data=cars, from=2/50)
[Figure: the same F statistics with a wider search window (from = 2/50)]
Ruptures
If $i_0$ is unknown, use CUSUM-type tests, see Ploberger & Krämer (1992) The Cusum Test with OLS Residuals. For all $t\in[0,1]$, set
$$W_t=\frac{1}{\widehat{\sigma}\sqrt{n}}\sum_{i=1}^{\lfloor nt\rfloor}\widehat{\varepsilon}_i.$$
If $\alpha$ is the confidence level, the bounds used are generally $\pm\alpha$, even if the theoretical bounds should be $\pm\alpha\sqrt{t(1-t)}$.
> cusum <- efp(dist ~ speed, type = "OLS-CUSUM", data=cars)
> plot(cusum, ylim=c(-2,2))
> plot(cusum, alpha = 0.05, alt.boundary = TRUE, ylim=c(-2,2))
[Figure: OLS-based CUSUM test, with standard and with alternative boundaries]
Ruptures and Nonlinear Models
See Imbens & Lemieux (2008) Regression Discontinuity Designs.
Generalized Additive Models
Linear regression model: $\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]=\beta_0+\boldsymbol{x}^{\mathsf{T}}\boldsymbol{\beta}=\beta_0+\displaystyle\sum_{j=1}^{p}\beta_jx_j$
Additive model: $\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]=\beta_0+\displaystyle\sum_{j=1}^{p}h_j(x_j)$, where the $h_j(\cdot)$ can be any nonlinear functions.
> library(mgcv)
> gam(dist ~ s(speed), data=cars)
[Figure: fitted surfaces of the linear and the additive model]
Econometrics, PhD Course, #1 Nonlinearities

  • 5. Overview
◦ Linear Regression Model: $y_i = \beta_0 + x_i^T\beta + \varepsilon_i = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \varepsilon_i$
• Nonlinear Transformations: smoothing techniques, $h(y_i) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \varepsilon_i$ or $y_i = \beta_0 + \beta_1 x_{1,i} + h(x_{2,i}) + \varepsilon_i$
• Asymptotics vs. Finite Distance: bootstrap techniques
• Penalization: Parsimony, Complexity and Overfit
• From least squares to other regressions: quantiles, expectiles, distributional regression
  • 6. References
Motivation: Kopczuk, W. Tax bases, tax rates and the elasticity of reported income. JPE.
References:
Eubank, R.L. (1999) Nonparametric Regression and Spline Smoothing, CRC Press.
Fan, J. & Gijbels, I. (1996) Local Polynomial Modelling and Its Applications, CRC Press.
Hastie, T.J. & Tibshirani, R.J. (1990) Generalized Additive Models, CRC Press.
Wand, M.P. & Jones, M.C. (1994) Kernel Smoothing, CRC Press.
  • 7. Deterministic or Parametric Transformations
Consider child mortality rate ($y$) as a function of GDP per capita ($x$).
[Figure: scatter plot of the infant mortality rate against GDP per capita (axes: "PIB par tête" / "Taux de mortalité infantile", i.e. GDP per capita / infant mortality rate), with country labels]
  • 8. Deterministic or Parametric Transformations
Logarithmic transformation: $\log(y)$ as a function of $\log(x)$.
[Figure: scatter plot of the mortality rate against GDP per capita, both on logarithmic scales, with country labels]
  • 9. Deterministic or Parametric Transformations
Reverse transformation (the fit on the log-log scale, plotted back on the original scale).
[Figure: scatter plot of the mortality rate against GDP per capita on the original scale, with the back-transformed fit and country labels]
  • 10. Box-Cox transformation
See Box & Cox (1964) An Analysis of Transformations,
$$h(y,\lambda)=\begin{cases}\dfrac{y^{\lambda}-1}{\lambda} & \text{if }\lambda\neq 0\\[4pt] \log(y) & \text{if }\lambda=0\end{cases}
\qquad\text{or}\qquad
h(y,\lambda,\mu)=\begin{cases}\dfrac{[y+\mu]^{\lambda}-1}{\lambda} & \text{if }\lambda\neq 0\\[4pt] \log([y+\mu]) & \text{if }\lambda=0\end{cases}$$
[Figure: Box-Cox transforms $h(y,\lambda)$ for $\lambda \in \{-1, -0.5, 0, 0.5, 1, 1.5, 2\}$]
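As a quick illustration, the transform above can be coded directly. This is a minimal sketch (the name boxcox_h is ours, and the numerical tolerance around $\lambda = 0$ is an implementation choice):

## A minimal sketch of the Box-Cox transform h(y, lambda) defined above
boxcox_h <- function(y, lambda) {
  ## near lambda = 0 the limit of (y^lambda - 1)/lambda is log(y)
  if (abs(lambda) < 1e-8) log(y) else (y^lambda - 1) / lambda
}
curve(boxcox_h(x, 0.5), from = 0.01, to = 4)  # e.g. lambda = 0.5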
  • 11. Profile Likelihood
In a statistical context, suppose that the unknown parameter can be partitioned $\theta = (\lambda, \beta)$, where $\lambda$ is the parameter of interest and $\beta$ is a nuisance parameter.
Consider $\{y_1,\cdots,y_n\}$, a sample from distribution $F_\theta$, so that the log-likelihood is
$$\log L(\theta) = \sum_{i=1}^n \log f_\theta(y_i)$$
The MLE is defined as $\hat\theta^{\text{MLE}} = \operatorname{argmax}\{\log L(\theta)\}$.
Rewrite the log-likelihood as $\log L(\theta) = \log L_\lambda(\beta)$. Define
$$\hat\beta^{\text{pMLE}}_\lambda = \operatorname{argmax}_\beta\{\log L_\lambda(\beta)\}
\quad\text{and then}\quad
\hat\lambda^{\text{pMLE}} = \operatorname{argmax}_\lambda\left\{\log L_\lambda\big(\hat\beta^{\text{pMLE}}_\lambda\big)\right\}.$$
Observe that $\sqrt{n}\big(\hat\lambda^{\text{pMLE}} - \lambda\big) \xrightarrow{L} N\big(0, [I_{\lambda,\lambda} - I_{\lambda,\beta}I_{\beta,\beta}^{-1}I_{\beta,\lambda}]^{-1}\big)$.
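To make the construction concrete, here is a sketch (not from the slides) of the profiled log-likelihood of the Box-Cox parameter $\lambda$ in a Gaussian linear model, where the regression coefficients and the variance are the nuisance parameters maximized out for each $\lambda$; the function name profile_loglik and the grid are ours, and the $(\lambda-1)\sum\log y_i$ term is the Jacobian of the transformation:

## Profiled log-likelihood of the Box-Cox parameter lambda
profile_loglik <- function(lambda, y, X) {
  z <- if (abs(lambda) < 1e-8) log(y) else (y^lambda - 1) / lambda
  fit <- lm.fit(X, z)                  # nuisance parameters profiled out
  n <- length(y)
  sigma2 <- sum(fit$residuals^2) / n
  -n/2 * log(sigma2) + (lambda - 1) * sum(log(y))  # + Jacobian term
}
X <- cbind(1, cars$speed)
lams <- seq(-0.5, 2, by = 0.05)
pl <- sapply(lams, profile_loglik, y = cars$dist, X = X)
lams[which.max(pl)]  # profile MLE of lambda (compare with MASS::boxcox)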
  • 12. Profile Likelihood and Likelihood Ratio Test
The (profile) likelihood ratio test is based on
$$2\left(\max L(\lambda,\beta) - \max_\beta L(\lambda_0,\beta)\right)$$
If $(\lambda_0,\beta_0)$ are the true values, this difference can be written
$$2\left(\max L(\lambda,\beta) - L(\lambda_0,\beta_0)\right) - 2\left(\max_\beta L(\lambda_0,\beta) - L(\lambda_0,\beta_0)\right)$$
Using a Taylor expansion,
$$\left.\frac{\partial L(\lambda,\beta)}{\partial\lambda}\right|_{(\lambda_0,\hat\beta_{\lambda_0})} \sim \left.\frac{\partial L(\lambda,\beta)}{\partial\lambda}\right|_{(\lambda_0,\beta_0)} - I_{\beta_0\lambda_0} I_{\beta_0\beta_0}^{-1}\left.\frac{\partial L(\lambda_0,\beta)}{\partial\beta}\right|_{(\lambda_0,\beta_0)}$$
Thus,
$$\frac{1}{\sqrt{n}}\left.\frac{\partial L(\lambda,\beta)}{\partial\lambda}\right|_{(\lambda_0,\hat\beta_{\lambda_0})} \xrightarrow{L} N\!\left(0,\; I_{\lambda_0\lambda_0} - I_{\lambda_0\beta_0}I_{\beta_0\beta_0}^{-1}I_{\beta_0\lambda_0}\right)$$
and $2\big(L(\hat\lambda,\hat\beta) - L(\lambda_0,\hat\beta_{\lambda_0})\big) \xrightarrow{L} \chi^2(\dim(\lambda))$.
  • 13. Box-Cox
> library(MASS)
> boxcox(lm(dist ~ speed, data = cars))
Here $\lambda^\star \approx 0.5$.
[Figure: profile log-likelihood of $\lambda$ with a 95% confidence interval; scatter plot of dist against speed]
  • 14. Uncertainty: Parameters vs. Prediction
Uncertainty on the regression parameters $(\beta_0,\beta_1)$: from the output of the regression we can derive confidence intervals for $\beta_0$ and $\beta_1$, usually
$$\beta_k \in \left[\hat\beta_k \pm u_{1-\alpha/2}\,\text{se}[\hat\beta_k]\right]$$
[Figure: regression lines of braking distance against vehicle speed, visualizing uncertainty on the intercept and on the slope]
  • 15. Uncertainty: Parameters vs. Prediction
Uncertainty on a prediction, $y = m(x)$. Usually
$$m(x) \in \left[\hat m(x) \pm u_{1-\alpha/2}\,\text{se}[\hat m(x)]\right]$$
hence, for a linear model,
$$x^T\hat\beta \pm u_{1-\alpha/2}\,\hat\sigma\sqrt{x^T[X^TX]^{-1}x}$$
i.e. (with one covariate)
$$\text{se}^2[\hat m(x)] = \text{Var}[\hat\beta_0 + \hat\beta_1 x] = \text{se}^2[\hat\beta_0] + 2\,\text{cov}[\hat\beta_0,\hat\beta_1]\,x + \text{se}^2[\hat\beta_1]\,x^2$$
[Figure: pointwise confidence interval around the fitted regression line]
> predict(lm(dist ~ speed, data = cars), newdata = data.frame(speed = x), interval = "confidence")
  • 16. Least Squares and Expected Value (Orthogonal Projection Theorem)
Let $y \in \mathbb{R}^d$,
$$\bar y = \underset{m\in\mathbb{R}}{\operatorname{argmin}}\left\{\sum_{i=1}^n \frac{1}{n}\big(\underbrace{y_i - m}_{\varepsilon_i}\big)^2\right\}.$$
It is the empirical version of
$$E[Y] = \underset{m\in\mathbb{R}}{\operatorname{argmin}}\left\{\int (y-m)^2\,dF(y)\right\} = \underset{m\in\mathbb{R}}{\operatorname{argmin}}\left\{E\big[(Y-m)^2\big]\right\}$$
where $Y$ is an $L^2$ random variable. Thus,
$$\underset{m(\cdot):\mathbb{R}^k\to\mathbb{R}}{\operatorname{argmin}}\left\{\sum_{i=1}^n \frac{1}{n}\big(y_i - m(x_i)\big)^2\right\}$$
is the empirical version of $E[Y|X=x]$.
  • 17. The Histogram and the Regressogram
Connections between the estimation of $f(y)$ and of $E[Y|X=x]$. Assume that $y_i \in [a_1, a_{k+1})$, divided in $k$ classes $[a_j, a_{j+1})$. The histogram is
$$\hat f_a(y) = \sum_{j=1}^k \frac{\mathbf{1}(y\in[a_j,a_{j+1}))}{a_{j+1}-a_j}\cdot\frac{1}{n}\sum_{i=1}^n \mathbf{1}(y_i\in[a_j,a_{j+1}))$$
Assume that $a_{j+1}-a_j = h_n$ and $h_n\to 0$ as $n\to\infty$ with $nh_n\to\infty$; then
$$E\big[(\hat f_a(y)-f(y))^2\big] \sim O(n^{-2/3})$$
(for an optimal choice of $h_n$).
> hist(height)
[Figure: histogram of heights]
  • 18. The Histogram and the Regressogram
Then a moving histogram was considered,
$$\hat f(y) = \frac{1}{2nh_n}\sum_{i=1}^n \mathbf{1}(y_i\in[y\pm h_n)) = \frac{1}{nh_n}\sum_{i=1}^n k\left(\frac{y_i-y}{h_n}\right)$$
with $k(x) = \frac{1}{2}\mathbf{1}(x\in[-1,1))$, which is a (flat) kernel estimator.
> density(height, kernel = "rectangular")
[Figure: rectangular-kernel density estimates of heights, for two bandwidths]
  • 19. The Histogram and the Regressogram
From Tukey (1961) Curves as parameters, and touch estimation, the regressogram is defined as
$$\hat m_a(x) = \frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))\,y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}$$
and the moving regressogram is
$$\hat m(x) = \frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h_n])\,y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h_n])}$$
[Figure: regressogram and moving regressogram of dist against speed, on the cars data]
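A minimal sketch of the moving regressogram on the cars data (the window half-width h = 2 is an arbitrary choice for illustration):

## Moving regressogram: average the y_i's with x_i in a window around x
m_hat <- function(x, h = 2) {
  inside <- abs(cars$speed - x) <= h
  mean(cars$dist[inside])
}
xs <- seq(5, 25, by = 0.5)
plot(cars)
lines(xs, sapply(xs, m_hat))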
  • 20. Nadaraya-Watson and Kernels
Background: Kernel Density Estimator. Consider a sample $\{y_1,\cdots,y_n\}$, with $F_n$ the empirical cumulative distribution function,
$$F_n(y) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}(y_i\le y)$$
The empirical measure $\mathbb{P}_n$ consists in weights $1/n$ on each observation. Idea: add (little) continuous noise to smooth $F_n$. Let $Y_n$ denote a random variable with distribution $F_n$, and define $\tilde Y = Y_n + hU$, where $U\perp\!\!\!\perp Y_n$, with cdf $K$. The cumulative distribution function of $\tilde Y$ is $\tilde F$,
$$\tilde F(y) = P[\tilde Y\le y] = E\big[\mathbf{1}(\tilde Y\le y)\big] = E\Big[E\big[\mathbf{1}(\tilde Y\le y)\,\big|\,Y_n\big]\Big] = E\left[K\left(\frac{y-Y_n}{h}\right)\right] = \sum_{i=1}^n \frac{1}{n}\,K\left(\frac{y-y_i}{h}\right)$$
  • 21. Nadaraya-Watson and Kernels
If we differentiate,
$$\tilde f(y) = \frac{1}{nh}\sum_{i=1}^n k\left(\frac{y-y_i}{h}\right) = \frac{1}{n}\sum_{i=1}^n k_h(y-y_i)\quad\text{with } k_h(u) = \frac{1}{h}\,k\left(\frac{u}{h}\right)$$
$\tilde f$ is the kernel density estimator of $f$, with kernel $k$ and bandwidth $h$.
Rectangular: $k(u) = \frac{1}{2}\mathbf{1}(|u|\le 1)$
Epanechnikov: $k(u) = \frac{3}{4}(1-u^2)\,\mathbf{1}(|u|\le 1)$
Gaussian: $k(u) = \frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}$
> density(height, kernel = "epanechnikov")
[Figure: the three kernel shapes on $[-2,2]$; kernel density estimate of heights]
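The estimator $\tilde f$ can be written in a couple of lines; a sketch with a Gaussian kernel (the bandwidth is an arbitrary choice, compare with density(cars$speed, bw = 2)):

## Kernel density estimator by hand: average of k((y - y_i)/h), divided by h
kde <- function(y, data, h) {
  sapply(y, function(u) mean(dnorm((u - data) / h)) / h)
}
ys <- seq(0, 30, by = 0.1)
plot(ys, kde(ys, cars$speed, h = 2), type = "l")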
  • 22. Kernels and Statistical Properties
Consider here an i.i.d. sample $\{Y_1,\cdots,Y_n\}$ with density $f$. Given $y$, observe that
$$E[\tilde f(y)] = \int \frac{1}{h}\,k\left(\frac{y-t}{h}\right)f(t)\,dt = \int k(u)\,f(y-hu)\,du.$$
Use a Taylor expansion around $h=0$, $f(y-hu) \sim f(y) - f'(y)hu + \frac{1}{2}f''(y)h^2u^2$:
$$E[\tilde f(y)] = f(y)\int k(u)\,du - f'(y)h\int u\,k(u)\,du + \frac{h^2}{2}\int f''(y+hu)\,u^2k(u)\,du
= f(y) + 0 + \frac{h^2 f''(y)}{2}\int k(u)u^2\,du + o(h^2)$$
Thus, if $f$ is twice continuously differentiable with bounded second derivative, $\int k(u)\,du = 1$, $\int u\,k(u)\,du = 0$ and $\int u^2k(u)\,du < \infty$, then
$$E[\tilde f(y)] = f(y) + \frac{h^2 f''(y)}{2}\int k(u)u^2\,du + o(h^2)$$
  • 23. Kernels and Statistical Properties
For the heuristics on that bias, consider a flat kernel, and set
$$f_h(y) = \frac{F(y+h)-F(y-h)}{2h}$$
then the natural estimate is
$$\hat f_h(y) = \frac{F_n(y+h)-F_n(y-h)}{2h} = \frac{1}{2nh}\sum_{i=1}^n \underbrace{\mathbf{1}(y_i\in[y\pm h])}_{Z_i}$$
where the $Z_i$'s are i.i.d. Bernoulli $\mathcal{B}(p_y)$ variables with $p_y = P[Y_i\in[y\pm h]] = 2h\cdot f_h(y)$.
Thus $E[\hat f_h(y)] = f_h(y)$, while $f_h(y) \sim f(y) + \frac{h^2}{6}f''(y)$ as $h\sim 0$.
  • 24. Kernels and Statistical Properties
Similarly, as $h\to 0$ and $nh\to\infty$,
$$\text{Var}[\tilde f(y)] = \frac{1}{n}\left(E\big[k_h(y-Y)^2\big] - \big(E[k_h(y-Y)]\big)^2\right) = \frac{f(y)}{nh}\int k(u)^2\,du + o\left(\frac{1}{nh}\right)$$
Hence
• if $h\to 0$ the bias goes to 0
• if $nh\to\infty$ the variance goes to 0
  • 25. Kernels and Statistical Properties
Extension in higher dimension:
$$\tilde f(\boldsymbol{y}) = \frac{1}{n|H|^{1/2}}\sum_{i=1}^n k\big(H^{-1/2}(\boldsymbol{y}-\boldsymbol{y}_i)\big)
\qquad\text{or}\qquad
\tilde f(\boldsymbol{y}) = \frac{1}{nh^d|\Sigma|^{1/2}}\sum_{i=1}^n k\left(\frac{\Sigma^{-1/2}(\boldsymbol{y}-\boldsymbol{y}_i)}{h}\right)$$
[Figure: bivariate kernel density estimate of (height, weight), with contour levels from 2e-04 to 0.0018]
  • 26. Kernels and Convolution
Given $f$ and $g$, set
$$(f\star g)(x) = \int_{\mathbb{R}} f(x-y)\,g(y)\,dy$$
Then $\tilde f_h = (\hat f\star k_h)$, where
$$\hat f(y) = \frac{d\hat F(y)}{dy} = \frac{1}{n}\sum_{i=1}^n \delta_{y_i}(y)$$
Hence, $\tilde f$ is the distribution of $Y+\varepsilon$, where $Y$ is uniform over $\{y_1,\cdots,y_n\}$ and $\varepsilon\sim k_h$ are independent.
[Figure: convolution of the point masses with a kernel]
  • 27. Nadaraya-Watson and Kernels
Here $E[Y|X=x] = m(x)$. Write $m$ as a function of densities,
$$g(x) = \int y\, f(y|x)\,dy = \frac{\int y\, f(y,x)\,dy}{\int f(y,x)\,dy}$$
Consider some bivariate kernel $k$, such that $\int t\,k(t,u)\,dt = 0$, and let $\kappa(u) = \int k(t,u)\,dt$.
The numerator can be estimated using (substituting $y = y_i + ht$)
$$\int y\,\tilde f(y,x)\,dy = \frac{1}{nh^2}\sum_{i=1}^n \int y\,k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy = \frac{1}{nh}\sum_{i=1}^n \int (y_i+ht)\,k\left(t,\frac{x-x_i}{h}\right)dt = \frac{1}{nh}\sum_{i=1}^n y_i\,\kappa\left(\frac{x-x_i}{h}\right)$$
  • 28. Nadaraya-Watson and Kernels
and for the denominator,
$$\int \tilde f(y,x)\,dy = \frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy = \frac{1}{nh}\sum_{i=1}^n \kappa\left(\frac{x-x_i}{h}\right)$$
Therefore, plugging into the expression for $g(x)$ yields
$$\tilde m(x) = \frac{\sum_{i=1}^n y_i\,\kappa_h(x-x_i)}{\sum_{i=1}^n \kappa_h(x-x_i)}$$
Observe that this regression estimator is a weighted average (see the linear predictor section),
$$\tilde m(x) = \sum_{i=1}^n \omega_i(x)\,y_i\quad\text{with}\quad \omega_i(x) = \frac{\kappa_h(x-x_i)}{\sum_{j=1}^n \kappa_h(x-x_j)}$$
[Figure: Nadaraya-Watson fit on the cars data, with the local kernel weights]
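The weighted-average form translates directly into code; a sketch with a Gaussian kernel (the bandwidth is chosen by eye, just for illustration):

## Nadaraya-Watson estimator as a weighted average of the y_i's
nw <- function(x, h = 2) {
  w <- dnorm((x - cars$speed) / h)   # kernel weights kappa_h(x - x_i)
  sum(w * cars$dist) / sum(w)
}
xs <- seq(5, 25, by = 0.25)
plot(cars)
lines(xs, sapply(xs, nw))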
  • 29. Nadaraya-Watson and Kernels
One can prove that the kernel regression bias is given by
$$E[\tilde m(x)] = m(x) + C_1 h^2\left(\frac{1}{2}\,m''(x) + m'(x)\,\frac{f'(x)}{f(x)}\right)$$
In the univariate case, one can get the kernel estimator of derivatives,
$$\frac{d\tilde m(x)}{dx} = \frac{1}{nh^2}\sum_{i=1}^n k'\left(\frac{x-x_i}{h}\right)y_i$$
Actually, $\tilde m$ is a function of the bandwidth $h$. Note: this can be extended to multivariate $x$.
[Figure: Nadaraya-Watson fits for several bandwidths]
  • 30. From kernels to k-nearest neighbours
An alternative is to consider
$$\tilde m_k(x) = \frac{1}{n}\sum_{i=1}^n \omega_{i,k}(x)\,y_i\quad\text{where}\quad \omega_{i,k}(x) = \frac{n}{k}\text{ if } i\in\mathcal{I}_x^k,\ 0\text{ otherwise},$$
with $\mathcal{I}_x^k = \{i: x_i \text{ is one of the } k \text{ nearest observations to } x\}$.
From Lai (1977) Large sample properties of K-nearest neighbor procedures: if $k\to\infty$ and $k/n\to 0$ as $n\to\infty$, then
$$E[\tilde m_k(x)] \sim m(x) + \frac{1}{24\, f(x)^3}\big(m''f + 2m'f'\big)(x)\left(\frac{k}{n}\right)^2
\quad\text{while}\quad
\text{Var}[\tilde m_k(x)] \sim \frac{\sigma^2(x)}{k}$$
[Figure: k-nearest-neighbour fit on the cars data]
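A sketch of the corresponding estimator (k = 7 is an arbitrary choice):

## k-nearest-neighbour regression: average the y_i's of the k closest x_i's
knn_reg <- function(x, k = 7) {
  idx <- order(abs(cars$speed - x))[1:k]
  mean(cars$dist[idx])
}
xs <- seq(5, 25, by = 0.25)
plot(cars)
lines(xs, sapply(xs, knn_reg))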
  • 31. From kernels to k-nearest neighbours
Remark: Brent & John (1985) Finding the median requires 2n comparisons considered a median smoothing algorithm, where one takes the median over the k nearest neighbours (see section #4).
  • 32. k-Nearest Neighbors and Curse of Dimensionality
The higher the dimension, the larger the distance to the closest neighbor, $\min_{i\in\{1,\cdots,n\}}\{d(a,x_i)\}$, $x_i\in\mathbb{R}^d$.
[Figure: boxplots of the distance to the nearest neighbour in dimensions 1 to 5, for n = 10 and n = 100]
  • 33. Bandwidth selection: MISE for Density
$$\text{MSE}[\tilde f(y)] = \text{bias}[\tilde f(y)]^2 + \text{Var}[\tilde f(y)] = \frac{f(y)}{nh}\int k(u)^2\,du + \left(\frac{h^2 f''(y)}{2}\int k(u)u^2\,du\right)^2 + o\left(h^4 + \frac{1}{nh}\right)$$
The bandwidth choice is based on the minimization of the asymptotic integrated MSE (over $y$),
$$\text{MISE}(\tilde f) = \int \text{MSE}[\tilde f(y)]\,dy \sim \frac{1}{nh}\int k(u)^2\,du + h^4\int\left(\frac{f''(y)}{2}\right)^2 dy\,\left(\int k(u)u^2\,du\right)^2$$
  • 34. Bandwidth selection: MISE for Density
Thus, the first-order condition yields
$$-\frac{C_1}{nh^2} + h^3\int f''(y)^2\,dy\; C_2 = 0$$
with $C_1 = \int k^2(u)\,du$ and $C_2 = \left(\int k(u)u^2\,du\right)^2$, and
$$h^\star = n^{-\frac{1}{5}}\left(\frac{C_1}{C_2\int f''(y)^2\,dy}\right)^{\frac{1}{5}}
\qquad\text{e.g.}\qquad
h^\star = 1.06\, n^{-\frac{1}{5}}\sqrt{\text{Var}[Y]}$$
from Silverman (1986) Density Estimation.
> bw.nrd0(cars$speed)
[1] 2.150016
> bw.nrd(cars$speed)
[1] 2.532241
with the Scott correction, see Scott (1992) Multivariate Density Estimation.
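The rule of thumb can be checked by hand; note that bw.nrd also guards against heavy tails by using min(sd, IQR/1.34) in place of the standard deviation, so the two numbers are close but not identical:

## Silverman's rule of thumb, computed directly
1.06 * sd(cars$speed) * length(cars$speed)^(-1/5)
## compare with bw.nrd(cars$speed)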
  • 35. Bandwidth selection: MISE for Regression Model
One can prove that
$$\text{MISE}[\hat m_h] \sim \underbrace{\frac{h^4}{4}\left(\int x^2k(x)\,dx\right)^2\int\left(m''(x) + 2m'(x)\frac{f'(x)}{f(x)}\right)^2 dx}_{\text{bias}^2} + \underbrace{\frac{\sigma^2}{nh}\int k^2(x)\,dx\cdot\int\frac{dx}{f(x)}}_{\text{variance}}$$
as $h\to 0$ and $nh\to\infty$. The bias is sensitive to the position of the $x_i$'s.
$$h^\star = n^{-\frac{1}{5}}\left(\frac{C_1\int\frac{dx}{f(x)}}{C_2\int\left(m''(x)+2m'(x)\frac{f'(x)}{f(x)}\right)^2 dx}\right)^{\frac{1}{5}}$$
Problem: this depends on the unknown $f(x)$ and $m(x)$.
  • 36. Bandwidth Selection: Cross Validation
Let $R(h) = E\big[(Y-\hat m_h(X))^2\big]$. A natural idea is
$$\hat R(h) = \frac{1}{n}\sum_{i=1}^n \big(y_i - \hat m_h(x_i)\big)^2$$
Instead, use leave-one-out cross validation,
$$\hat R(h) = \frac{1}{n}\sum_{i=1}^n \big(y_i - \hat m_h^{(i)}(x_i)\big)^2$$
where $\hat m_h^{(i)}$ is the estimator obtained by omitting the $i$th pair $(y_i,x_i)$, or $k$-fold cross validation,
$$\hat R(h) = \frac{1}{n}\sum_{j=1}^k\sum_{i\in I_j} \big(y_i - \hat m_h^{(j)}(x_i)\big)^2$$
where $\hat m_h^{(j)}$ is the estimator obtained by omitting the pairs $(y_i,x_i)$ with $i\in I_j$.
[Figure: kernel fits with different bandwidths on the cars data]
  • 37. Bandwidth Selection: Cross Validation
Then find (numerically) $h^\star = \operatorname{argmin}\{\hat R(h)\}$; a leave-one-out sketch is given below. In the context of density estimation, see Chiu (1991) Bandwidth Selection for Kernel Density Estimation.
Usual bias-variance tradeoff, or Goldilocks principle: $h$ should be neither too small, nor too large
• undersmoothed: variance too large, bias too small
• oversmoothed: bias too large, variance too small
[Figure: cross-validated risk as a function of the bandwidth]
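A sketch of leave-one-out cross validation for the Nadaraya-Watson bandwidth (Gaussian kernel; the grid of candidate bandwidths is arbitrary):

## Leave-one-out risk of the NW estimator, as a function of h
loo_risk <- function(h) {
  mean(sapply(1:nrow(cars), function(i) {
    w <- dnorm((cars$speed[i] - cars$speed[-i]) / h)
    m_i <- sum(w * cars$dist[-i]) / sum(w)   # fit without observation i
    (cars$dist[i] - m_i)^2
  }))
}
hs <- seq(0.5, 10, by = 0.25)
hs[which.min(sapply(hs, loo_risk))]  # cross-validated bandwidth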
  • 38. Local Linear Regression
Consider $\hat m(x)$ defined as $\hat m(x) = \hat\beta_0$, where $(\hat\beta_0,\hat\beta)$ is the solution of
$$\min_{(\beta_0,\beta)}\left\{\sum_{i=1}^n \omega_i^{(x)}\Big(y_i - \big[\beta_0 + (x-x_i)^T\beta\big]\Big)^2\right\}$$
where $\omega_i^{(x)} = k_h(x-x_i)$, i.e. we seek the constant term in a weighted least squares regression of the $y_i$'s on the $(x-x_i)$'s.
If $X_x$ is the matrix $[\mathbf{1}\ \ (x-X)^T]$ and $W_x$ is the matrix $\text{diag}[k_h(x-x_1),\cdots,k_h(x-x_n)]$, then
$$\hat m(x) = \mathbf{1}^T\big(X_x^TW_xX_x\big)^{-1}X_x^TW_x\,y$$
This estimator is also a linear predictor:
$$\hat m(x) = \sum_{i=1}^n a_i(x)\,y_i$$
  • 39. Local Linear Regression (continued)
where
$$a_i(x) = \frac{1}{n}\,k_h(x-x_i)\left[1 - s_1(x)^T s_2(x)^{-1}\frac{x-x_i}{h}\right]$$
with
$$s_1(x) = \frac{1}{n}\sum_{i=1}^n k_h(x-x_i)\,\frac{x-x_i}{h}
\quad\text{and}\quad
s_2(x) = \frac{1}{n}\sum_{i=1}^n k_h(x-x_i)\left(\frac{x-x_i}{h}\right)\left(\frac{x-x_i}{h}\right)^T$$
Note that the Nadaraya-Watson estimator was simply the solution of
$$\min_{\beta_0}\left\{\sum_{i=1}^n \omega_i^{(x)}\,(y_i-\beta_0)^2\right\}\quad\text{where }\omega_i^{(x)} = k_h(x-x_i)$$
$$E[\hat m(x)] \sim m(x) + \frac{h^2}{2}\,m''(x)\,\mu_2\quad\text{where }\mu_2 = \int k(u)u^2\,du,
\qquad
\text{Var}[\hat m(x)] \sim \frac{1}{nh}\,\frac{\nu\,\sigma_x^2}{f(x)}$$
  • 40. Local Linear Regression (continued)
where $\nu = \int k(u)^2\,du$. Thus, the kernel regression MSE is
$$\frac{h^4}{4}\left(g''(x) + 2g'(x)\,\frac{f'(x)}{f(x)}\right)^2\mu_2^2 + \frac{1}{nh}\,\frac{\nu\,\sigma_x^2}{f(x)}$$
[Figure: local fits of braking distance against vehicle speed, on the cars data]
  • 41.
> REG <- loess(dist ~ speed, cars, span = 0.75, degree = 1)
> predict(REG, data.frame(speed = seq(5, 25, 0.25)), se = TRUE)
[Figure: loess fit with pointwise standard errors, on the cars data]
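For comparison with loess, the local linear estimator of slides 38-39 can be written, at each point x, as a weighted least squares fit in which only the intercept is kept (a sketch; the Gaussian kernel and the bandwidth are arbitrary choices):

## Local linear regression: weighted least squares of y on (speed - x);
## the intercept is the fitted value at x
loclin <- function(x, h = 2) {
  w <- dnorm((cars$speed - x) / h)
  coef(lm(dist ~ I(speed - x), data = cars, weights = w))[1]
}
xs <- seq(5, 25, by = 0.25)
plot(cars)
lines(xs, sapply(xs, loclin))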
  • 42. Local polynomials
One might assume that, locally, $m(u) \sim \mu_x(u)$ as $u\to x$, with
$$\mu_x(u) = \beta_0^{(x)} + \beta_1^{(x)}[u-x] + \beta_2^{(x)}\frac{[u-x]^2}{2} + \beta_3^{(x)}\frac{[u-x]^3}{3!} + \cdots$$
and we estimate $\beta^{(x)}$ by minimizing $\sum_{i=1}^n \omega_i^{(x)}\big(y_i - \mu_x(x_i)\big)^2$.
If $X_x$ is the design matrix $\left[1\ \ x_i-x\ \ \frac{[x_i-x]^2}{2}\ \ \frac{[x_i-x]^3}{3!}\ \cdots\right]$, then
$$\hat\beta^{(x)} = \big(X_x^TW_xX_x\big)^{-1}X_x^TW_x\,y$$
(weighted least squares estimators).
> library(locfit)
> locfit(dist ~ speed, data = cars)
  • 43. Series Regression
Recall that $E[Y|X=x] = m(x)$. Why not approximate $m$ by a linear combination of approximating functions $h_1(x),\cdots,h_k(x)$? Set $\boldsymbol{h}(x) = (h_1(x),\cdots,h_k(x))$, and consider the regression of the $y_i$'s on the $\boldsymbol{h}(x_i)$'s,
$$y_i = \boldsymbol{h}(x_i)^T\beta + \varepsilon_i$$
Then $\hat\beta = (H^TH)^{-1}H^Ty$.
[Figure: series regression fits on the cars data]
  • 44. Series Regression: polynomials
Even if $m(x) = E(Y|X=x)$ is not a polynomial function, a polynomial can still be a good approximation. From the Stone-Weierstrass theorem, if $m(\cdot)$ is continuous on some interval, then there is a uniform approximation of $m(\cdot)$ by polynomial functions.
> reg <- lm(y ~ x, data = db)
[Figure: simulated data on $[0,10]$, with a linear fit]
  • 45. Series Regression: polynomials
Assume that $m(x) = E(Y|X=x) = \sum_{i=0}^k \alpha_i x^i$, where the parameters $\alpha_0,\cdots,\alpha_k$ will be estimated (but not $k$).
> reg <- lm(y ~ poly(x, 5), data = db)
> reg <- lm(y ~ poly(x, 25), data = db)
[Figure: polynomial fits of degree 5 and degree 25 on the simulated data]
  • 46. Series Regression: (Linear) Splines
Consider $m+1$ knots on $X$, $\min\{x_i\}\le t_0\le t_1\le\cdots\le t_m\le\max\{x_i\}$, then define the linear (degree 1) spline positive-part functions,
$$b_{j,1}(x) = (x-t_j)_+ = \begin{cases}x-t_j & \text{if }x>t_j\\ 0 & \text{otherwise}\end{cases}$$
For linear splines, consider
$$Y_i = \beta_0 + \beta_1X_i + \beta_2(X_i-s)_+ + \varepsilon_i$$
> positive_part <- function(x) ifelse(x > 0, x, 0)
> reg <- lm(Y ~ X + positive_part(X - s), data = db)
[Figure: linear spline fit with one knot, on the simulated data]
  • 47. Series Regression: (Linear) Splines
For linear splines with two knots, consider
$$Y_i = \beta_0 + \beta_1X_i + \beta_2(X_i-s_1)_+ + \beta_3(X_i-s_2)_+ + \varepsilon_i$$
> reg <- lm(Y ~ X + positive_part(X - s1) + positive_part(X - s2), data = db)
> library(splines)
A spline is a function defined by piecewise polynomials. b-splines are defined recursively.
[Figure: linear spline fit with two knots]
  • 48. b-Splines (in Practice)
> reg1 <- lm(dist ~ speed + positive_part(speed - 15), data = cars)
> reg2 <- lm(dist ~ bs(speed, df = 2, degree = 1), data = cars)
Consider $m+1$ knots on $[0,1]$, $0\le t_0\le t_1\le\cdots\le t_m\le 1$, then define recursively the b-splines as
$$b_{j,0}(t) = \begin{cases}1 & \text{if }t_j\le t < t_{j+1}\\ 0 & \text{otherwise}\end{cases}$$
and
$$b_{j,n}(t) = \frac{t-t_j}{t_{j+n}-t_j}\,b_{j,n-1}(t) + \frac{t_{j+n+1}-t}{t_{j+n+1}-t_{j+1}}\,b_{j+1,n-1}(t)$$
[Figure: the two fits on the cars data]
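The recursion can be coded directly (a sketch; the function name, knot vector and indices are ours, and zero denominators from repeated knots are handled with the usual 0/0 := 0 convention):

## Cox-de Boor recursion for b-splines: bspline(t, j, n, knots)
bspline <- function(t, j, n, knots) {
  if (n == 0) return(as.numeric(knots[j] <= t & t < knots[j + 1]))
  d1 <- knots[j + n] - knots[j]
  d2 <- knots[j + n + 1] - knots[j + 1]
  a <- if (d1 > 0) (t - knots[j]) / d1 * bspline(t, j, n - 1, knots) else 0
  b <- if (d2 > 0) (knots[j + n + 1] - t) / d2 * bspline(t, j + 1, n - 1, knots) else 0
  a + b
}
ts <- seq(0, 1, by = 0.01)
plot(ts, bspline(ts, 2, 2, seq(0, 1, by = 0.2)), type = "l")  # one quadratic b-spline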
  • 49. b-Splines (in Practice)
> summary(reg1)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   -7.6519    10.6254  -0.720    0.475
speed          3.0186     0.8627   3.499    0.001 **
(speed-15)     1.7562     1.4551   1.207    0.233

> summary(reg2)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)     4.423      7.343   0.602   0.5493
bs(speed)1     33.205      9.489   3.499   0.0012 **
bs(speed)2     80.954      8.788   9.211  4.2e-12 ***
[Figure: the b-spline basis functions and the fitted curves on the cars data]
  • 50. b- and p-Splines
O'Sullivan (1986) A statistical perspective on ill-posed inverse problems suggested a penalty on the second derivative of the fitted curve (see #3),
$$\hat m(x) = \operatorname{argmin}\left\{\sum_{i=1}^n \big(y_i - \boldsymbol{b}(x_i)^T\beta\big)^2 + \lambda\int_{\mathbb{R}}\big[\boldsymbol{b}''(x)^T\beta\big]^2\,dx\right\}$$
[Figure: penalized spline fits on the cars data]
  • 51. Adding Constraints: Convex Regression
Assume that $y_i = m(x_i)+\varepsilon_i$, where $m:\mathbb{R}^d\to\mathbb{R}$ is some convex function. $m$ is convex if and only if, $\forall x_1,x_2\in\mathbb{R}^d$, $\forall t\in[0,1]$,
$$m(tx_1 + [1-t]x_2) \le t\,m(x_1) + [1-t]\,m(x_2)$$
Proposition (Hildreth (1954) Point Estimates of Ordinates of Concave Functions): let
$$\hat m = \underset{m\text{ convex}}{\operatorname{argmin}}\left\{\sum_{i=1}^n \big(y_i - m(x_i)\big)^2\right\}$$
Then $\hat\theta = (\hat m(x_1),\cdots,\hat m(x_n))$ is unique.
Let $y = \theta + \varepsilon$; then
$$\hat\theta = \underset{\theta\in\mathcal{K}}{\operatorname{argmin}}\left\{\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\}$$
where $\mathcal{K} = \{\theta\in\mathbb{R}^n: \exists m\text{ convex},\ m(x_i)=\theta_i\}$, i.e. $\hat\theta$ is the projection of $y$ onto the (closed) convex cone $\mathcal{K}$. The projection theorem gives existence and uniqueness.
  • 52. Adding Constraints: Convex Regression
In dimension 1: $y_i = m(x_i)+\varepsilon_i$. Assume that the observations are ordered, $x_1<x_2<\cdots<x_n$. Here
$$\mathcal{K} = \left\{\theta\in\mathbb{R}^n: \frac{\theta_2-\theta_1}{x_2-x_1}\le\frac{\theta_3-\theta_2}{x_3-x_2}\le\cdots\le\frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\}$$
Hence, this is a quadratic program with $n-2$ linear constraints. $\hat m$ is a piecewise linear function (the interpolation of consecutive pairs $(x_i,\hat\theta_i)$).
If $m$ is differentiable, $m$ is convex if $m(x) + \nabla m(x)\cdot[y-x]\le m(y)$.
[Figure: convex regression fit on the cars data]
  • 53. Adding Constraints: Convex Regression
More generally: if $m$ is convex, then there exists $\xi_x\in\mathbb{R}^n$ such that
$$m(x) + \xi_x\cdot[y-x]\le m(y)$$
$\xi_x$ is a subgradient of $m$ at $x$, and the subdifferential is
$$\partial m(x) = \big\{\xi:\ m(x) + \xi\cdot[y-x]\le m(y),\ \forall y\in\mathbb{R}^n\big\}$$
Hence, $\hat\theta$ is the solution of
$$\operatorname{argmin}\ \|y-\theta\|^2\quad\text{subject to}\quad \theta_i + \xi_i[x_j-x_i]\le\theta_j,\ \forall i,j,$$
for some $\xi_1,\cdots,\xi_n\in\mathbb{R}^n$.
[Figure: convex regression fit on the cars data]
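The one-dimensional problem of the previous slide can be solved with a standard QP solver; a sketch with quadprog (speeds are first aggregated to unique values so the slopes are well defined; all names are ours):

## 1-d convex regression as a quadratic program: min ||y - theta||^2
## subject to increasing slopes (second divided differences >= 0)
library(quadprog)
agg <- aggregate(dist ~ speed, data = cars, FUN = mean)
x <- agg$speed; y <- agg$dist; n <- length(y)
A <- matrix(0, n - 2, n)
for (i in 1:(n - 2)) {
  A[i, i]     <-  1 / (x[i + 1] - x[i])
  A[i, i + 1] <- -1 / (x[i + 1] - x[i]) - 1 / (x[i + 2] - x[i + 1])
  A[i, i + 2] <-  1 / (x[i + 2] - x[i + 1])
}
theta <- solve.QP(Dmat = diag(n), dvec = y, Amat = t(A),
                  bvec = rep(0, n - 2))$solution
plot(x, y); lines(x, theta)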
  • 54. Testing (Non-)Linearities
In the linear model,
$$\hat y = X\hat\beta = \underbrace{X[X^TX]^{-1}X^T}_{H}\,y$$
$H_{i,i}$ is the leverage of the $i$th element of this hat matrix. Write
$$\hat y_i = \sum_{j=1}^n \big[x_i^T[X^TX]^{-1}X^T\big]_j\,y_j = \sum_{j=1}^n [H(x_i)]_j\,y_j$$
where $H(x) = x^T[X^TX]^{-1}X^T$. The prediction is
$$\hat m(x) = \hat E(Y|X=x) = \sum_{j=1}^n [H(x)]_j\,y_j$$
  • 55. Testing (Non-)Linearities
More generally, a predictor $\hat m$ is said to be linear if, for all $x$, there is $S(\cdot):\mathbb{R}^n\to\mathbb{R}^n$ such that
$$\hat m(x) = \sum_{j=1}^n S(x)_j\,y_j$$
Conversely, given $y_1,\cdots,y_n$, there is an $n\times n$ matrix $S$ such that $\hat y = Sy$. For the linear model, $S = H$, and $\text{trace}(H) = \dim(\beta)$: the degrees of freedom.
$\dfrac{H_{i,i}}{1-H_{i,i}}$ is related to Cook's distance, from Cook (1977) Detection of Influential Observations in Linear Regression.
  • 56. Testing (Non-)Linearities
For a kernel regression model, with kernel $k$ and bandwidth $h$,
$$S_{i,j}^{(k,h)} = \frac{k_h(x_i-x_j)}{\sum_{l=1}^n k_h(x_i-x_l)}
\quad\text{while}\quad
S^{(k,h)}(x)_j = \frac{k_h(x-x_j)}{\sum_{l=1}^n k_h(x-x_l)}$$
where $k_h(\cdot) = k(\cdot/h)$. For $k$-nearest neighbours,
$$S_{i,j}^{(k)} = \frac{1}{k}\,\mathbf{1}(j\in\mathcal{I}_{x_i})$$
where $\mathcal{I}_{x_i}$ are the $k$ nearest observations to $x_i$, while $S^{(k)}(x)_j = \frac{1}{k}\,\mathbf{1}(j\in\mathcal{I}_x)$.
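The smoother matrix of the Nadaraya-Watson estimator, and its trace (the effective degrees of freedom), can be computed explicitly; a sketch (Gaussian kernel, h = 2 arbitrary):

## Smoother matrix S of the NW estimator on the cars data
h <- 2
S <- outer(cars$speed, cars$speed, function(u, v) dnorm((u - v) / h))
S <- S / rowSums(S)   # each row sums to one: NW weights
sum(diag(S))          # trace(S): effective degrees of freedom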
• 57. Testing (Non-)Linearities
Observe that $\text{trace}(S)$ is usually seen as a degree of smoothness. Do we have to smooth? Isn't the linear model sufficient? Define
$$T = \frac{\|Sy - Hy\|^2}{\text{trace}\left([S-H]^{\mathsf{T}}[S-H]\right)}$$
If the model is linear, then $T$ has a Fisher distribution.
Remark: in the case of a linear predictor with smoothing matrix $S_h$, the leave-one-out criterion satisfies
$$R(h) = \frac{1}{n}\sum_{i=1}^n \left(y_i - \widehat{m}_h^{(-i)}(x_i)\right)^2 = \frac{1}{n}\sum_{i=1}^n \left(\frac{y_i - \widehat{m}_h(x_i)}{1 - [S_h]_{i,i}}\right)^2$$
so we do not need to estimate $n$ models.
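The shortcut formula makes cross-validated bandwidth choice cheap; a sketch on the cars data, reusing the Gaussian-kernel smoother above (the grid of bandwidths is arbitrary):
> loocv <- function(h, x = cars$speed, y = cars$dist) {
+   K <- outer(x, x, function(u, v) dnorm((u - v) / h))
+   S <- K / rowSums(K)
+   mean(((y - S %*% y) / (1 - diag(S)))^2)   # R(h) without refitting n models
+ }
> hs <- seq(0.5, 5, by = 0.1)
> hs[which.min(sapply(hs, loocv))]            # bandwidth minimising R(h)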
• 58. Confidence Intervals
If $\widehat{y} = \widehat{m}_h(x) = S_h(x)y$, let
$$\widehat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \left(y_i - \widehat{m}_h(x_i)\right)^2$$
and a confidence interval is, at $x$,
$$\widehat{m}_h(x) \pm t_{1-\alpha/2}\,\widehat{\sigma}\sqrt{S_h(x)S_h(x)^{\mathsf{T}}}$$
[Figure: pointwise confidence interval around a kernel fit of braking distance against vehicle speed, cars data]
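A sketch of these pointwise intervals at the design points, reusing the smoother S and fitted values yhat from above (a normal quantile is used here as an approximation to $t_{1-\alpha/2}$):
> sigma2 <- mean((y - yhat)^2)                 # residual variance estimate
> se <- sqrt(sigma2 * rowSums(S^2))            # sigma * sqrt(S_h(x) S_h(x)^T)
> up <- yhat + qnorm(0.975) * se; lo <- yhat - qnorm(0.975) * se
> plot(x, y); lines(x[order(x)], yhat[order(x)])
> lines(x[order(x)], up[order(x)], lty = 2); lines(x[order(x)], lo[order(x)], lty = 2)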
• 59. Confidence Bands
[Figures: confidence bands around two smooth fits of dist against speed, cars data]
• 60. Confidence Bands
Also called variability bands for functions in Härdle (1990) Applied Nonparametric Regression. From Collomb (1979) Conditions nécessaires et suffisantes de convergence uniforme d'un estimateur de la régression, with kernel regression (Nadaraya-Watson),
$$\sup_x |m(x)-\widehat{m}_h(x)| \sim C_1 h^2 + C_2\sqrt{\frac{\log n}{nh}}$$
in dimension 1, and more generally
$$\sup_x |m(x)-\widehat{m}_h(x)| \sim C_1 h^2 + C_2\sqrt{\frac{\log n}{nh^{\dim(x)}}}$$
• 61. Confidence Bands
So far, we have mainly discussed pointwise convergence, with
$$\sqrt{nh}\left(\widehat{m}_h(x) - m(x)\right)\overset{\mathcal{L}}{\to} N(\mu_x,\sigma^2_x)$$
This asymptotic normality can be used to derive (pointwise) confidence intervals,
$$P\left(IC^-(x)\leq m(x)\leq IC^+(x)\right) = 1-\alpha,\quad \forall x\in\mathcal{X}$$
But we can also seek uniform convergence properties: we want to derive functions $IC^{\pm}$ such that
$$P\left(IC^-(x)\leq m(x)\leq IC^+(x),\ \forall x\in\mathcal{X}\right) = 1-\alpha$$
• 62. Confidence Bands
• Bonferroni's correction. Use a standard Gaussian (pointwise) confidence interval,
$$IC^{\pm}(x) = \widehat{m}(x) \pm \sqrt{nh}\,\widehat{\sigma}\,t_{1-\alpha/2}$$
and also take into account the regularity of $m$: set
$$V(\eta) = \frac{1}{2}\left(\frac{2\eta+1}{n}+\frac{1}{n}\right)\|m\|_{\infty,x},\quad\text{for some } 0<\eta<1$$
where $\|\cdot\|_{\infty,x}$ is the sup-norm on a neighborhood of $x$. Then consider
$$\widetilde{IC}^{\pm}(x) = IC^{\pm}(x) \pm V(\eta)$$
• 63. Confidence Bands
• Use of Gaussian processes. Observe that
$$\sqrt{nh}\left(\widehat{m}_h(x)-m(x)\right)\overset{\mathcal{D}}{\to} G_x$$
for some Gaussian process $(G_x)$. Confidence bands are derived from quantiles of $\sup\{G_x,\ x\in\mathcal{X}\}$. If we use kernel $k$ for smoothing, Johnston (1982) Probabilities of Maximal Deviations for Nonparametric Regression Function Estimates proved that
$$G_x = \int k(x-t)\,dW_t,\quad\text{for some standard Wiener process } (W_t)$$
is then a Gaussian process with variance $\int k(t)k(t-x)\,dt$. And
$$IC^{\pm}(x) = \widehat{\varphi}(x) \pm \left(\frac{q_\alpha}{\sqrt{2\log(1/h)}} + d_n\right)\sqrt{\frac{\widehat{\sigma}^2}{nh}}\quad\text{with } d_n = \sqrt{2\log h^{-1}} + \frac{1}{\sqrt{2\log h^{-1}}}\log\sqrt{\frac{3}{4\pi^2}}$$
where $\exp(-2\exp(-q_\alpha)) = 1-\alpha$.
• 64. Confidence Bands
• Bootstrap (see #2). Finally, McDonald (1986) Smoothing with Split Linear Fits suggested a bootstrap algorithm to approximate the distribution of
$$Z_n = \sup\{|\widehat{\varphi}(x)-\varphi(x)|,\ x\in\mathcal{X}\}$$
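A rough sketch of the bootstrap idea, resampling residuals around a kernel fit and taking quantiles of the sup deviation; this assumes homoskedastic errors and is an illustration, not McDonald's split-linear procedure:
> h <- 2; x <- cars$speed; y <- cars$dist
> K <- outer(x, x, function(u, v) dnorm((u - v) / h))
> S <- K / rowSums(K); yhat <- as.vector(S %*% y); e <- y - yhat
> Zn <- replicate(500, {
+   ystar <- yhat + sample(e, replace = TRUE)     # resample residuals
+   max(abs(as.vector(S %*% ystar) - yhat))       # sup deviation of the refit
+ })
> quantile(Zn, 0.95)     # half-width of an approximate uniform band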
• 65. Confidence Bands
Depending on the smoothing parameter h, we get different corrections.
• 66. Confidence Bands
Depending on the smoothing parameter h, we get different corrections.
• 67. Boosting to Capture NonLinear Effects
We want to solve
$$m^{\star} = \underset{m}{\text{argmin}}\ E\left[(Y-m(X))^2\right]$$
The heuristics is simple: we consider an iterative process where we keep modeling the errors. Fit a model for $y$, $h_1(\cdot)$, from $y$ and $X$, and compute the error, $\varepsilon_1 = y - h_1(X)$. Fit a model for $\varepsilon_1$, $h_2(\cdot)$, from $\varepsilon_1$ and $X$, and compute the error, $\varepsilon_2 = \varepsilon_1 - h_2(X)$, etc. Then set
$$\widehat{m}_k(\cdot) = \underbrace{h_1(\cdot)}_{\sim y} + \underbrace{h_2(\cdot)}_{\sim \varepsilon_1} + \underbrace{h_3(\cdot)}_{\sim \varepsilon_2} + \cdots + \underbrace{h_k(\cdot)}_{\sim \varepsilon_{k-1}}$$
Hence, we consider an iterative procedure, $\widehat{m}_k(\cdot) = \widehat{m}_{k-1}(\cdot) + h_k(\cdot)$.
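A minimal sketch of this loop on the cars data, with a low-df smoothing spline as base learner (the learner and the number of iterations are arbitrary choices, not the course's settings):
> x <- cars$speed; y <- cars$dist
> m <- rep(mean(y), length(y))            # start from a constant model
> for (k in 1:10) {
+   eps <- y - m                          # current residuals
+   hk <- smooth.spline(x, eps, df = 3)   # weak learner fitted to residuals
+   m <- m + predict(hk, x)$y             # update m_k = m_{k-1} + h_k
+ }
> plot(x, y); lines(sort(x), m[order(x)], lwd = 2)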
• 68. Boosting
$h(x) = y - \widehat{m}_k(x)$ can be interpreted as a residual. Note that this residual is the gradient of $\frac{1}{2}\left[y-\widehat{m}_k(x)\right]^2$.
A gradient descent is based on the Taylor expansion
$$f(x_k) \sim f(x_{k-1}) + \underbrace{(x_k-x_{k-1})}_{\alpha}\,\nabla f(x_{k-1})$$
But here, it is different. We claim we can write
$$f_k(x) \sim f_{k-1}(x) + \underbrace{(f_k-f_{k-1})}_{\beta}\,?$$
where $?$ is interpreted as a 'gradient'.
• 69. Boosting
Here, $f_k$ is an $\mathbb{R}^d\to\mathbb{R}$ function, so the gradient should live in such a (big) functional space → we want to approximate that function:
$$\widehat{m}_k(x) = \widehat{m}_{k-1}(x) + \underset{f\in\mathcal{F}}{\text{argmin}}\sum_{i=1}^n \left(y_i - \left[\widehat{m}_{k-1}(x_i)+f(x_i)\right]\right)^2$$
where $f\in\mathcal{F}$ means that we seek in a class of weak learner functions. If learners are too strong, the first loop leads to some fixed point, and there is no learning procedure; see linear regression $y = x^{\mathsf{T}}\beta+\varepsilon$: since $\widehat{\varepsilon}\perp x$, we cannot learn from the residuals. In order to make sure that we learn weakly, we can use some shrinkage parameter $\nu$ (or a collection of parameters $\nu_j$).
• 70. Boosting with Piecewise Linear Spline & Stump Functions
Instead of $\varepsilon_k = \varepsilon_{k-1} - h_k(x)$, set $\varepsilon_k = \varepsilon_{k-1} - \nu\cdot h_k(x)$ (a sketch with stumps is given below).
Remark: stumps are related to regression trees (see the 2015 course).
[Figures: boosted fits with piecewise linear spline and stump base learners, simulated data]
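A sketch of the shrunk update with stumps, using one-split rpart trees as base learners; ν = 0.1 and 200 iterations are arbitrary choices, not the course's settings:
> library(rpart)
> x <- cars$speed; y <- cars$dist; nu <- 0.1
> m <- rep(mean(y), length(y))
> for (k in 1:200) {
+   eps <- y - m
+   stump <- rpart(eps ~ x, data = data.frame(x, eps),     # one-split tree
+                  control = rpart.control(maxdepth = 1, cp = 0, minsplit = 2))
+   m <- m + nu * predict(stump)          # eps_k = eps_{k-1} - nu * h_k(x)
+ }
> plot(x, y); lines(sort(x), m[order(x)], type = "s")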
• 71. Ruptures
One can use the Chow test to test for a rupture. Note that it is simply a Fisher test with two parts,
$$\beta = \begin{cases}\beta_1 & \text{for } i=1,\cdots,i_0\\ \beta_2 & \text{for } i=i_0+1,\cdots,n\end{cases}$$
and we test
$$\begin{cases}H_0: \beta_1=\beta_2\\ H_1: \beta_1\neq\beta_2\end{cases}$$
where $i_0$ is a point between $k$ and $n-k$ (we need enough observations on each side). Chow (1960) Tests of Equality Between Sets of Coefficients in Two Linear Regressions suggested
$$F_{i_0} = \frac{\widehat{\varepsilon}^{\mathsf{T}}\widehat{\varepsilon} - \widehat{\eta}^{\mathsf{T}}\widehat{\eta}}{\widehat{\eta}^{\mathsf{T}}\widehat{\eta}/(n-2k)}$$
where $\widehat{\varepsilon}_i = y_i - x_i^{\mathsf{T}}\widehat{\beta}$ (the fit without a break), and
$$\widehat{\eta}_i = \begin{cases}y_i - x_i^{\mathsf{T}}\widehat{\beta}_1 & \text{for } i=1,\cdots,i_0\\ y_i - x_i^{\mathsf{T}}\widehat{\beta}_2 & \text{for } i=i_0+1,\cdots,n\end{cases}$$
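A sketch of the statistic at one candidate break date, computed from the pooled and split residual sums of squares, with $k = 2$ coefficients per regime on the cars data (an illustration, not the slide's code):
> chow <- function(i0, data = cars) {
+   e   <- residuals(lm(dist ~ speed, data = data))          # pooled fit
+   eta <- c(residuals(lm(dist ~ speed, data = data[1:i0, ])),
+            residuals(lm(dist ~ speed, data = data[-(1:i0), ])))
+   (sum(e^2) - sum(eta^2)) / (sum(eta^2) / (nrow(data) - 4)) # n - 2k with k = 2
+ }
> chow(25)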
• 72. Ruptures
> library(strucchange)
> Fstats(dist ~ speed, data=cars, from=7/50)
[Figures: cars data with the two fitted regimes, and the sequence of F statistics against the candidate break index]
• 73. Testing for a Rupture: the Chow Test
> Fstats(dist ~ speed, data=cars, from=2/50)
[Figures: cars data with the two fitted regimes, and the sequence of F statistics against the candidate break index]
• 74. Ruptures
If $i_0$ is unknown, use CUSUM types of tests, see Ploberger & Krämer (1992) The Cusum Test with OLS Residuals. For all $t\in[0,1]$, set
$$W_t = \frac{1}{\widehat{\sigma}\sqrt{n}}\sum_{i=1}^{\lfloor nt\rfloor}\widehat{\varepsilon}_i$$
If $\alpha$ is the confidence level, bounds are generally $\pm\alpha$, even if theoretical bounds should be $\pm\alpha\sqrt{t(1-t)}$.
> cusum <- efp(dist ~ speed, type = "OLS-CUSUM", data=cars)
> plot(cusum, ylim=c(-2,2))
> plot(cusum, alpha = 0.05, alt.boundary = TRUE, ylim=c(-2,2))
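The empirical fluctuation process can also be recomputed by hand to see what efp() plots (a sketch; the scaling of $\widehat{\sigma}$ may differ slightly from strucchange's internal convention):
> e <- residuals(lm(dist ~ speed, data = cars))
> n <- length(e)
> W <- cumsum(e) / (sd(e) * sqrt(n))      # W_t evaluated at t = i/n
> plot(seq_len(n) / n, W, type = "l", ylim = c(-2, 2))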
• 75. Ruptures
[Figures: OLS-based CUSUM test, empirical fluctuation process against time, with standard and with alternative boundaries]
• 76. Ruptures and Nonlinear Models
See Imbens & Lemieux (2008) Regression Discontinuity Designs.
• 77. Generalized Additive Models
Linear regression model:
$$E[Y|X=x] = \beta_0 + x^{\mathsf{T}}\beta = \beta_0 + \sum_{j=1}^p \beta_j x_j$$
Additive model:
$$E[Y|X=x] = \beta_0 + \sum_{j=1}^p h_j(x_j)$$
where the $h_j(\cdot)$ can be any nonlinear functions.
> library(mgcv)
> gam(dist~s(speed), data=cars)
[Figures: estimated smooth components]
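A short usage sketch of the fitted additive model: the plot shows the estimated smooth $h(\text{speed})$, and predict() evaluates $\widehat{E}[Y|X=x]$ at new points (an illustration, not the slide's code):
> fit <- gam(dist ~ s(speed), data = cars)
> summary(fit)                              # effective df of the smooth term, etc.
> plot(fit, residuals = TRUE)               # estimated h(speed) with partial residuals
> predict(fit, newdata = data.frame(speed = 21))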