Arthur CHARPENTIER - Sales forecasting
Sales forecasting # 1
Arthur Charpentier
arthur.charpentier@univ-rennes1.fr
Agenda
Qualitative and quantitative methods, a very general introduction
• Series decomposition
• Short versus long term forecasting
• Regression techniques
Regression and econometric methods
• Box & Jenkins ARIMA time series method
• Forecasting with ARIMA series
Practical issues: forecasting with MSExcel
Some references
The major reference for this short course is Pindyck, R.S. & Rubinfeld, D.L. (1997). Econometric Models and Economic Forecasts. McGraw-Hill.
“A forecast is a quantitative estimate about the likelihood of future events which
is developed on the basis of past and current information”.
Forecasting challenges?
“With over 50 foreign cars already on sale here, the Japanese auto industry isn’t
likely to carve out a big slice of the U.S. market”. - Business Week, 1958
“I think there is a world market for maybe five computers”. - Thomas J. Watson,
1943, Chairman of the Board of IBM
“640K ought to be enough for anybody”. - Bill Gates, 1981
“Stocks have reached what looks like a permanently high plateau”. - Irving Fisher,
Professor of Economics, Yale University, October 16, 1929.
Challenge: use MSExcel (only) to build a forecast model
MSExcel is not a statistical software.
Dedicated statistical software can be used instead, e.g. SAS, Gauss, RATS, EViews, S-Plus, or, more recently, R (which is free).
Macro versus micro?
Macroeconomic forecasting is related to the prediction of aggregate economic behavior, e.g. GDP, unemployment, interest rates, exports, imports, government spending, etc.
It is a very difficult exercise, which appears frequently in the media.
Figure 1: Economic growth forecasts (American Express, University of North Carolina, Goldman Sachs, PNC Financial, Kudlow & co), from Wall Street Journal, Sept. 12, 2002, for Q4 2002, Q1 2003 and Q2 2003.
Macro versus micro?
Microeconomic forecasting is related to the prediction of firm sales, industry sales, product sales, prices, costs...
It is usually more accurate, and more directly applicable for business managers...
The problem is that human behavior is not always rational: there is always unpredictable uncertainty.
Short versus long term?
Figure 2: Forecasting a time series, with different models.
Figure 3: Forecasting a time series, with different models.
Series decomposition
Decomposition assumes that the data consist of
data = pattern + error,
where the pattern is made of trend, cycle, and seasonality. The general representation is
Xt = f(St, Dt, Ct, εt)
where
• Xt denotes the time series value at time t,
• St denotes the seasonal component at time t, i.e. the seasonal effect,
• Dt denotes the trend component at time t, i.e. the secular trend,
• Ct denotes the cycle component at time t, i.e. the cyclical variation,
• εt denotes the error component at time t, i.e. random fluctuations.
The secular trends are long-run trends that cause changes in an economic data series. Three different patterns can be distinguished,
• linear trend, Yt = α + βt,
• constant rate of growth trend, Yt = Y0(1 + γ)ᵗ,
• declining rate of growth trend, Yt = exp(α − β/t).
For the linear trend, adjustments can be made, for instance by introducing breaks.
For the constant rate of growth trend, note that in that case log Yt = log Y0 + log(1 + γ) · t, which is a linear model on the logarithm of the series.
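The log-linearization above can be checked numerically. Below is a minimal sketch in Python/NumPy (not part of the original slides, which use MSExcel), fitting a constant rate of growth trend on hypothetical data by regressing log Yt on t:

```python
import numpy as np

# Hypothetical series growing at roughly 3% per period, with small noise
rng = np.random.default_rng(0)
t = np.arange(40)
y = 100 * (1.03 ** t) * np.exp(rng.normal(0, 0.01, size=40))

# Fit log Y_t = log Y_0 + log(1 + gamma) * t by ordinary least squares
slope, intercept = np.polyfit(t, np.log(y), 1)
y0_hat = np.exp(intercept)      # estimate of Y_0 (close to 100)
gamma_hat = np.exp(slope) - 1   # estimate of the growth rate gamma (close to 0.03)
```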
For those two models, standard regression techniques can be used. For the declining rate of growth trend, log Yt = α − β/t, which is sometimes called a semilog regression model.
The cyclical variations are major expansions and contractions in an economic series that are usually greater than a year in duration.
The seasonal effects cause variations during a year that tend to be more or less consistent from year to year.
From an econometric point of view, a seasonal effect is obtained using dummy variables. E.g. for quarterly data,
Yt = α + βt + γ1∆1,t + γ2∆2,t + γ3∆3,t
where ∆i,t is an indicator series, equal to 1 when t is in the ith quarter and 0 if not (one quarter, here the fourth, is taken as the baseline to avoid perfect collinearity with the intercept).
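As a sketch (in Python/NumPy rather than MSExcel, and on hypothetical quarterly data), the trend-plus-dummies regression can be estimated as follows, with the fourth quarter as baseline:

```python
import numpy as np

# Hypothetical quarterly series: linear trend plus a seasonal pattern
T = 48
t = np.arange(T)
quarter = t % 4                                   # 0..3 <-> Q1..Q4
seasonal = np.array([5.0, -2.0, 3.0, 0.0])[quarter]
rng = np.random.default_rng(1)
y = 10 + 0.5 * t + seasonal + rng.normal(0, 0.3, T)

# Design matrix: intercept, trend t, and dummies for Q1..Q3 (Q4 is the baseline)
X = np.column_stack([np.ones(T), t,
                     (quarter == 0).astype(float),
                     (quarter == 1).astype(float),
                     (quarter == 2).astype(float)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta = [alpha, trend slope, gamma1, gamma2, gamma3]
```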
Exogenous versus endogenous variables
The model Xt = f(St, Dt, Ct, εt, Zt) can also contain exogenous variables Z, so that
• St, the seasonal component at time t, can be predicted, i.e. ST+1, ST+2, · · · , ST+h,
• Dt, the trend component at time t, can be predicted, i.e. DT+1, DT+2, · · · , DT+h,
• Ct, the cycle component at time t, can be predicted, i.e. CT+1, CT+2, · · · , CT+h,
• Zt, the exogenous variables at time t, can be predicted, i.e. ZT+1, ZT+2, · · · , ZT+h,
• but εt, the error component, cannot be predicted.
Like in classical regression models: try to find a model Yi = Xiβ + εi with the highest predictive power.
A classical idea in econometrics is to compare Yi and Ŷi, which should be as close as possible. E.g. minimize the sum of squared errors, Σ (Yi − Ŷi)², which can be related to the R², the MSE, or the RMSE.
When dealing with time series, it is possible to add an endogenous component. Endogenous variables are those that the model seeks to explain via the solution of the system of equations.
The general model is then
Xt = f(St, Dt, Ct, εt, Zt, Xt−1, Xt−2, ..., Zt−1, ..., εt−1, ...)
Comparing forecast models
In order to evaluate the accuracy - or reliability - of forecasting models, the R² has been seen as a good measure in regression analysis, but the standard is the root mean square error (RMSE), i.e.
RMSE = √[ (1/n) Σ (Yi − Ŷi)² ],
which is a good measure of the goodness of fit.
The smaller the value of the RMSE, the greater the accuracy of the forecasting model.
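As a one-line check of the definition, the RMSE can be computed as follows (a Python/NumPy sketch; the quantities Yi and Ŷi would be two columns in a spreadsheet):

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error between observed and fitted values."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.mean((y - y_hat) ** 2))
```

For instance, rmse([1, 2, 3, 4], [1.5, 2, 2.5, 4]) is √0.125 ≈ 0.354.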
Figure 11: Estimation period, ex-ante and ex-post forecasting periods.
Regression model
Consider the following regression model, Yi = Xiβ + εi.
Call:
lm(formula = weight ~ groupCtl+ groupTrt - 1)
Residuals:
Min 1Q Median 3Q Max
-1.0710 -0.4938 0.0685 0.2462 1.3690
Coefficients:
Estimate Std. Error t value Pr(>|t|)
groupCtl 5.0320 0.2202 22.85 9.55e-15 ***
groupTrt 4.6610 0.2202 21.16 3.62e-14 ***
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
Residual standard error: 0.6964 on 18 degrees of freedom
Multiple R-Squared: 0.9818, Adjusted R-squared: 0.9798
F-statistic: 485.1 on 2 and 18 DF, p-value: < 2.2e-16
Least squares estimation
Parameters are estimated using ordinary least squares techniques, i.e. β̂ = (X′X)⁻¹X′Y, with E(β̂) = β.
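The closed form β̂ = (X′X)⁻¹X′Y can be reproduced directly. The sketch below (Python/NumPy, simulated data, not part of the original slides) solves the normal equations rather than inverting X′X explicitly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(0, 0.5, n)

# beta_hat = (X'X)^{-1} X'y, computed by solving the normal equations
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
# beta_hat is unbiased, so it should be close to (1.0, 2.0)
```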
Figure 12: Least squares regression, Y = a + bX (stopping distance against car speed).
Figure 13: Least squares regression, X = c + dY (car speed against stopping distance).
Assuming ε ∼ N(0, σ²), then V(β̂) = (X′X)⁻¹σ².
The variance of the residuals, σ², can be estimated by σ̂² = ε̂′ε̂/(n − k − 1).
It is possible to test H0 : βi = 0: then β̂i / (σ̂ √[(X′X)⁻¹]i,i) has a Student t distribution under H0, with n − k − 1 degrees of freedom.
The p-value is the probability, under H0, of observing a test statistic at least as extreme as the one computed from the sample; H0 is rejected when the p-value is small.
The confidence interval for βi can be obtained easily as
[ β̂i − tn−k(1 − α/2) σ̂ √[(X′X)⁻¹]i,i ; β̂i + tn−k(1 − α/2) σ̂ √[(X′X)⁻¹]i,i ]
where tn−k(1 − α/2) stands for the (1 − α/2) quantile of the t distribution with n − k degrees of freedom.
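These t-statistics and confidence intervals can be sketched as follows (Python/NumPy, simulated data; the 97.5% t quantile is hard-coded from a table rather than computed):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 102
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)

k = X.shape[1]
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k)            # estimated error variance
XtX_inv = np.linalg.inv(X.T @ X)
se = np.sqrt(sigma2_hat * np.diag(XtX_inv))     # standard errors of beta_hat

t_stats = beta_hat / se                          # test statistics for H0: beta_i = 0
t_crit = 1.984                                   # ~97.5% t quantile for 100 df (from a table)
ci = np.column_stack([beta_hat - t_crit * se, beta_hat + t_crit * se])
```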
[Scatter plots with fitted regression lines: Area against Endemics, and Area against Elevation.]
The R² is the squared correlation coefficient between the series {Y1, · · · , Yn} and {Ŷ1, · · · , Ŷn}, where Ŷi = Xiβ̂. It can be interpreted as the ratio of the variance explained by the regression to the total variance.
The adjusted R², denoted R̄², is defined as
R̄² = [(n − 1)R² − k] / (n − k − 1) = 1 − [(n − 1)/(n − k − 1)] (1 − R²).
Assume that the residuals are N(0, σ²); then Y ∼ N(Xβ, σ²I), and thus it is possible to use maximum likelihood techniques,
log L(β, σ|X, Y) = −(n/2) log(2π) − (n/2) log(σ²) − (Y − Xβ)′(Y − Xβ) / (2σ²).
The Akaike criterion (AIC) and the Schwarz criterion (SBC) can be used to choose a model,
AIC = −2 log L + 2k and SBC = −2 log L + k log n.
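Both criteria can be computed from an OLS fit, since the maximized Gaussian log-likelihood has the closed form above with σ̂² = RSS/n. A Python/NumPy sketch on hypothetical data (k counts the columns of X):

```python
import numpy as np

def gaussian_aic_sbc(y, X):
    """AIC = -2 log L + 2k and SBC = -2 log L + k log n for a Gaussian OLS fit."""
    n, k = X.shape
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta_hat
    sigma2_ml = resid @ resid / n        # ML estimate of the error variance
    loglik = -n / 2 * np.log(2 * np.pi) - n / 2 * np.log(sigma2_ml) - n / 2
    return -2 * loglik + 2 * k, -2 * loglik + k * np.log(n)

# Compare two candidate models on the same data: lower AIC / SBC is preferred
rng = np.random.default_rng(8)
t = np.arange(80.0)
y = 1.0 + 0.2 * t + rng.normal(0, 1.0, 80)
aic1, sbc1 = gaussian_aic_sbc(y, np.column_stack([np.ones(80), t]))
aic2, sbc2 = gaussian_aic_sbc(y, np.column_stack([np.ones(80), t, t ** 2]))
```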
Fisher's statistic can be used to test the global significance of the regression, i.e. H0 : β = 0, and is defined as
F = [(n − k)/(k − 1)] · R² / (1 − R²).
Additional tests can be run, e.g. to test the normality of the residuals, such as the Jarque-Bera statistic, defined as
JB = (n/6) sk² + (n/24) [κ − 3]²,
where sk denotes the empirical skewness and κ the empirical kurtosis. Under the assumption H0 of normality, JB ∼ χ²(2).
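The statistic is easy to compute from the residuals. A Python/NumPy sketch (illustrative, with simulated residuals):

```python
import numpy as np

def jarque_bera(resid):
    """JB = (n/6) sk^2 + (n/24) (kappa - 3)^2; ~ chi^2(2) under normality."""
    resid = np.asarray(resid, float)
    n = len(resid)
    z = (resid - resid.mean()) / resid.std()
    sk = np.mean(z ** 3)        # empirical skewness
    kappa = np.mean(z ** 4)     # empirical kurtosis
    return n / 6 * sk ** 2 + n / 24 * (kappa - 3) ** 2
```

For Gaussian residuals JB typically stays small (the 95% critical value of the χ²(2) distribution is about 5.99), while strongly skewed residuals give a very large JB.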
Prediction in the linear model
Given a new observation x0, the predicted response is x0β̂. Note that the associated variance is V(x0β̂) = x0(X′X)⁻¹x0′ σ².
Since the future observation should be x0β + ε (where ε is unknown, but yields additional uncertainty), the confidence interval for this predicted value can be computed as
[ x0β̂ − tn−k(1 − α/2) σ̂ √(1 + x0(X′X)⁻¹x0′) ; x0β̂ + tn−k(1 − α/2) σ̂ √(1 + x0(X′X)⁻¹x0′) ]
where again tn−k(1 − α/2) stands for the (1 − α/2) quantile of the t distribution with n − k degrees of freedom.
Remark Recall that this is rather different from the confidence interval for the mean response, given x0, which is
[ x0β̂ − tn−k(1 − α/2) σ̂ √(x0(X′X)⁻¹x0′) ; x0β̂ + tn−k(1 − α/2) σ̂ √(x0(X′X)⁻¹x0′) ]
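The difference between the two intervals can be sketched numerically (Python/NumPy, simulated data; the t quantile is again hard-coded from a table):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])
y = X @ np.array([2.0, 1.5]) + rng.normal(0, 1.0, n)

k = X.shape[1]
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (n - k))
XtX_inv = np.linalg.inv(X.T @ X)

x0 = np.array([1.0, 12.0])       # new observation, outside the sample range
y0_hat = x0 @ beta_hat
t_crit = 2.0                     # ~97.5% t quantile for 58 df (from a table)

se_mean = sigma_hat * np.sqrt(x0 @ XtX_inv @ x0)      # mean response
se_pred = sigma_hat * np.sqrt(1 + x0 @ XtX_inv @ x0)  # new observation
mean_ci = (y0_hat - t_crit * se_mean, y0_hat + t_crit * se_mean)
pred_ci = (y0_hat - t_crit * se_pred, y0_hat + t_crit * se_pred)
# pred_ci is always wider than mean_ci, because of the extra "1 +" term
```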
Regression, basics on statistical regression techniques
Remark statistical uncertainty and parameter uncertainty. Consider i.i.d. observations X1, · · · , Xn from a N(µ, σ²) distribution, where µ is unknown and should be estimated.
Step 1: in case σ is known. The natural estimate of the unknown µ is µ̂ = (1/n) Σ Xi, and the 95% confidence interval is
[ µ̂ + u2.5% σ/√n ; µ̂ + u97.5% σ/√n ]
where u2.5% = −1.96 and u97.5% = 1.96, the quantiles of the N(0, 1) distribution.
Step 2: in case σ is unknown. The natural estimate of the unknown µ is still µ̂ = (1/n) Σ Xi, and the 95% confidence interval is
[ µ̂ + t2.5% σ̂/√n ; µ̂ + t97.5% σ̂/√n ]
The following table gives values of t2.5% and t97.5% for different values of n.
n t2.5% t97.5% n t2.5% t97.5%
5 -2.570582 2.570582 30 -2.042272 2.042272
10 -2.228139 2.228139 40 -2.021075 2.021075
15 -2.131450 2.131450 50 -2.008559 2.008559
20 -2.085963 2.085963 100 -1.983972 1.983972
25 -2.059539 2.059539 200 -1.971896 1.971896
Table 1: Quantiles of the t distribution for different values of n.
This information is embodied in the form of a model - a single-equation structural model, a multi-equation model, or a time series model.
By extrapolating the models beyond the period over which they are estimated, we get forecasts about future events.
Regression model for time series
Consider the following regression model,
Yt = α + βXt + εt where εt ∼ N(0, σ²).
Step 1: in case α and β are known.
Given a known value XT+1, and if α and β are known, then
ŶT+1 = E(YT+1) = α + βXT+1.
This yields a forecast error, ε̂T+1 = YT+1 − ŶT+1. This error has two properties:
• the forecast is unbiased, E(ε̂T+1) = 0,
• the forecast error variance is constant, V(ε̂T+1) = E(ε̂²T+1) = σ².
Step 2: in case α and β are unknown.
The best forecast for YT+1 is then determined from a simple two-stage procedure,
• estimate the parameters of the linear equation using ordinary least squares,
• set ŶT+1 = α̂ + β̂XT+1.
Thus, the forecast error is then
ε̂T+1 = YT+1 − ŶT+1 = (α − α̂) + (β − β̂)XT+1 + εT+1.
Thus, there are two sources of error:
• the additive error term εT+1,
• the random nature of statistical estimation.
Figure 14: Forecasting techniques, problem of uncertainty related to parameter
estimation.
Consider again the simple regression model. The goal of ordinary least squares is to minimize Σ (Yi − Ŷi)², where Ŷ = α̂ + β̂X. Then
β̂ = [n Σ XiYi − Σ Xi Σ Yi] / [n Σ Xi² − (Σ Xi)²]
and
α̂ = (Σ Yi)/n − β̂ · (Σ Xi)/n = Ȳ − β̂X̄.
The least squares slope can also be written
β̂ = Σ (Xi − X̄)(Yi − Ȳ) / Σ (Xi − X̄)².
The forecast error variance is then
V(ε̂T+1) = V(α̂) + 2XT+1 cov(α̂, β̂) + X²T+1 V(β̂) + σ²
These formulas hold under the assumptions of the linear model:
• there exists a linear relationship between X and Y, Y = α + βX,
• the Xi's are nonrandom variables,
• the errors have zero expected value, E(ε) = 0,
• the errors have constant variance, V(ε) = σ²,
• the errors are independent,
• the errors are normally distributed.
Regression model and Gauss-Markov theorem
Under the first five assumptions, the estimators α̂ and β̂ are the best (most efficient) linear unbiased estimators of α and β, in the sense that they have minimum variance among all linear unbiased estimators (i.e. BLUE, best linear unbiased estimators).
The two estimators are further asymptotically normal,
√n (β̂ − β) → N( 0, nσ² / Σ(Xi − X̄)² ) and √n (α̂ − α) → N( 0, σ² Σ Xi² / Σ(Xi − X̄)² ).
The asymptotic variances of α̂ and β̂ can be estimated as
V̂(β̂) = σ̂² / Σ(Xi − X̄)² and V̂(α̂) = σ̂² Σ Xi² / [n Σ(Xi − X̄)²],
while the covariance is
côv(α̂, β̂) = −X̄ σ̂² / Σ(Xi − X̄)².
Thus, if σ̂ denotes the estimated standard deviation of the errors, the variance s² of the forecast error ε̂T+1 can be estimated as
s² = σ̂² [ 1 + 1/T + (XT+1 − X̄)² / Σ(Xi − X̄)² ] > σ̂².
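The formula shows that the forecast variance grows with the distance of XT+1 from the sample mean X̄. A small deterministic sketch (Python/NumPy, with hypothetical values):

```python
import numpy as np

def forecast_variance(x, sigma2_hat, x_new):
    """s^2 = sigma2_hat * (1 + 1/T + (x_new - x_bar)^2 / sum (x_i - x_bar)^2)."""
    x = np.asarray(x, float)
    T = len(x)
    return sigma2_hat * (1 + 1 / T + (x_new - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))

x = np.arange(20.0)                            # sample mean is 9.5
s2_inside = forecast_variance(x, 1.0, 10.0)    # near the sample mean
s2_outside = forecast_variance(x, 1.0, 40.0)   # far outside the sample
# s2_outside > s2_inside > sigma2_hat: extrapolation is more uncertain
```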
RMSE (root mean square error) and Theil's inequality
Recall the root mean square error (RMSE), i.e.
RMSE = √[ (1/n) Σ (Yi − Ŷi)² ].
Another useful statistic is Theil's inequality coefficient, defined as
U = √[ (1/T) Σ (Yi − Ŷi)² ] / ( √[ (1/T) Σ Ŷi² ] + √[ (1/T) Σ Yi² ] ).
From this normalization, U always falls between 0 and 1. U = 0 corresponds to a perfect fit, while U = 1 means that the predictive performance is as bad as it could possibly be.
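The two bounds are easy to check numerically. A Python/NumPy sketch:

```python
import numpy as np

def theil_u(y, y_hat):
    """Theil's inequality coefficient: 0 for a perfect fit, 1 in the worst case."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    num = np.sqrt(np.mean((y - y_hat) ** 2))
    den = np.sqrt(np.mean(y_hat ** 2)) + np.sqrt(np.mean(y ** 2))
    return num / den

y = np.array([1.0, 2.0, 3.0, 4.0])
u_best = theil_u(y, y)      # 0.0: perfect forecast
u_worst = theil_u(y, -y)    # 1.0: forecast proportional to -Y
```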
Step 3: assume that α, β and XT+1 are unknown, but that X̂T+1 = XT+1 + uT+1, where uT+1 ∼ N(0, σu²), and the two errors are uncorrelated.
Here, the forecast error is
ε̂T+1 = YT+1 − ŶT+1 = (α − α̂) + (β − β̂)XT+1 − β̂uT+1 + εT+1.
It can be proved (easily) that E(ε̂T+1) = 0, but its variance is slightly more complicated to derive,
V(ε̂T+1) = V(α̂) + 2XT+1 cov(α̂, β̂) + (X²T+1 + σu²) V(β̂) + σ² + β² σu².
And therefore, the forecast error variance is then
s² = σ̂² [ 1 + 1/T + ((XT+1 − X̄)² + σu²) / Σ(Xi − X̄)² ] + β̂² σu² > σ̂²,
which, again, increases the forecast error.
To go further, multiple regression model
In the multiple regression model, Y = Xβ + ε, in which
Y = (Y1, Y2, . . . , Yn)′, β = (β1, β2, . . . , βk)′, ε = (ε1, ε2, . . . , εn)′,
and X is the n × k matrix whose ith row is (X1,i, X2,i, . . . , Xk,i), the assumptions become:
• there exists a linear relationship between X1, · · · , Xk and Y, Y = α + β1X1 + · · · + βkXk,
• the Xi's are nonrandom variables, and moreover, there is no exact linear relationship between two or more independent variables,
• the errors have zero expected value, E(ε) = 0,
• the errors have constant variance, V(ε) = σ²,
• the errors are independent,
• the errors are normally distributed.
The new assumption here is that "there is no exact linear relationship between two or more independent variables". If such a relationship exists, the variables are perfectly collinear, i.e. there is perfect collinearity.
From a statistical point of view, multicollinearity occurs when two variables are closely related. This might occur e.g. between the two series {X2, X3, · · · , XT} and {X1, X2, · · · , XT−1} when there is strong autocorrelation.
To go further, forecasting with serially correlated errors
In the previous model, the errors were homoscedastic. A more general model is obtained when the errors are heteroscedastic, i.e. have non-constant variance. The Goldfeld-Quandt test can be performed.
An alternative is to assume serial correlation. The Cochrane-Orcutt or Hildreth-Lu procedures can be performed.
Consider the following regression model,
Yt = α + βXt + εt where εt = ρεt−1 + ηt
with −1 ≤ ρ ≤ +1 and ηt ∼ N(0, σ²).
Step 1: assume that α, β and ρ are known. Then
ŶT+1 = α + βXT+1 + ε̂T+1 = α + βXT+1 + ρεT,
since the best forecast of the error is ε̂T+1 = ρεT. Recursively,
ε̂T+2 = ρε̂T+1 = ρ²εT,
ε̂T+3 = ρε̂T+2 = ρ³εT,
...
ε̂T+h = ρε̂T+h−1 = ρʰεT.
Since |ρ| < 1, ρʰ approaches 0 as h gets arbitrarily large. Hence, the information provided by the serial correlation becomes less and less useful.
Since εT = YT − α − βXT, the forecast can also be written
ŶT+1 = α(1 − ρ) + βXT+1 + ρ(YT − βXT).
Thus, the one-step forecast error is then
ε̂T+1 = YT+1 − ŶT+1 = εT+1 − ρεT = ηT+1.
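The geometric decay of the correction term can be sketched as follows (Python, with hypothetical known parameters, as in Step 1 above):

```python
import numpy as np

# h-step forecasts when the error follows eps_t = rho * eps_{t-1} + eta_t;
# alpha, beta, rho are assumed known here, and eps_T is the last observed residual
alpha, beta, rho = 1.0, 2.0, 0.8
eps_T = 1.5
x_future = np.array([1.0, 1.1, 1.2, 1.3])      # assumed known future X values

forecasts = [alpha + beta * x + rho ** h * eps_T
             for h, x in enumerate(x_future, start=1)]
# the serial-correlation correction rho^h * eps_T fades geometrically with h
```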
To go further, using lag models
We have mentioned earlier that, when dealing with time series, it is possible not only to consider the linear regression of Yt on Xt, but also to consider lagged variates,
• either Xt−1, Xt−2, Xt−3, ...etc,
• or Yt−1, Yt−2, Yt−3, ...etc.
First, we will focus on adding lagged explanatory exogenous variables, i.e. models such as
Yt = α + β0Xt + β1Xt−1 + β2Xt−2 + · · · + βhXt−h + · · · + εt.
Remark In a very general setting, Xt can be a random vector in Rᵏ.
To go further, a geometric lag model
Assume that the weights of the lagged explanatory variables are all positive and decline geometrically with time,
Yt = α + β(Xt + ωXt−1 + ω²Xt−2 + ω³Xt−3 + · · · + ωʰXt−h + · · ·) + εt,
with 0 < ω < 1.
Note that
Yt−1 = α + β(Xt−1 + ωXt−2 + ω²Xt−3 + ω³Xt−4 + · · · + ωʰXt−h−1 + · · ·) + εt−1,
so that
Yt − ωYt−1 = α(1 − ω) + βXt + ηt
where ηt = εt − ωεt−1.
Rewriting, Yt = α(1 − ω) + ωYt−1 + βXt + ηt.
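The transformation can be verified numerically: generating Yt from the (finite-history) geometric-lag form and from the autoregressive form gives the same series when the error terms are set aside. A Python/NumPy sketch:

```python
import numpy as np

alpha, beta, omega = 1.0, 0.5, 0.6
rng = np.random.default_rng(6)
x = rng.normal(size=200)

# Geometric-lag form, with history starting at t = 0 (errors omitted)
y_direct = np.array([alpha + beta * sum(omega ** j * x[t - j] for j in range(t + 1))
                     for t in range(len(x))])

# Transformed autoregressive form: Y_t = alpha (1 - omega) + omega Y_{t-1} + beta X_t
y_rec = np.empty(len(x))
y_rec[0] = y_direct[0]
for t in range(1, len(x)):
    y_rec[t] = alpha * (1 - omega) + omega * y_rec[t - 1] + beta * x[t]
# y_direct and y_rec coincide
```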
This is called a single-equation autoregressive model, with a single lagged dependent variable.
The presence of a lagged dependent variable in the model causes the ordinary least squares parameter estimates to be biased, although they remain consistent.
Estimation of parameters
In classical linear econometrics, Y = Xβ + ε, with ε ∼ N(0, σ²). Then
β̂ = (X′X)⁻¹X′Y
• is the ordinary least squares (OLS) estimator,
• is the maximum likelihood (ML) estimator.
The maximum likelihood estimator is consistent and asymptotically efficient, and its (asymptotic) variance can be determined. It can be obtained using optimization techniques.
Remark it is also possible to use the generalized method of moments, GMM.
To go further, modeling a qualitative variable
In some cases, the variable of interest is not necessarily a price (a continuous variable on R), but a binary variable.
Consider the following regression model, Yi = α + βXi + εi, with Yi ∈ {0, 1}, where the εi are independent random variables with 0 mean. Then E(Yi) = α + βXi.
Note that Yi then follows a Bernoulli (binomial) distribution.
Classical models are either the probit or the logit model. The idea is that there exists a continuous latent unobservable variable Y*i such that
Yi = 1 if Y*i > ti, and Yi = 0 if Y*i ≤ ti,
with Y*i = α + βXi + εi, which is now a classical regression model.
Equivalently, it means that Yi then follows a Bernoulli (binomial) distribution B(pi)
where
pi = F(α + βXi),
where F is a cumulative distribution function. If F is the cumulative distribution function of the N(0, 1) distribution, i.e.
F(x) = (1/√2π) ∫₋∞ˣ exp(−z²/2) dz,
we obtain the probit model; with the cumulative distribution function of the logistic distribution,
F(x) = 1 / (1 + exp(−x)),
we obtain the logit model.
Those models can be extended to the so-called ordered probit model, where Y can denote e.g. a rating (AAA, BB+, B-, ...etc).
Maximum likelihood techniques can be used.
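The two link functions can be written directly with the Python standard library (the normal cdf via the error function); a sketch with hypothetical coefficients α = −1 and β = 0.5:

```python
import math

def logit_cdf(x):
    """Logistic cdf, F(x) = 1 / (1 + exp(-x)): the logit model."""
    return 1.0 / (1.0 + math.exp(-x))

def probit_cdf(x):
    """Standard normal cdf, via the error function: the probit model."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# p_i = F(alpha + beta * X_i), the probability that the binary response equals 1
alpha, beta = -1.0, 0.5
p_logit = [logit_cdf(alpha + beta * xi) for xi in (0.0, 2.0, 4.0)]
p_probit = [probit_cdf(alpha + beta * xi) for xi in (0.0, 2.0, 4.0)]
```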
Modeling the random component
The unpredictable random component is the key element when forecasting. Most of the uncertainty comes from this random component εt.
The lower its variance, the smaller the uncertainty on the forecasts.
The general theoretical framework related to the randomness of time series is weak stationarity.