Investigation into the effect on and robustness of the Optimal Region
of Investment Strategies for a drawdown pension through the entire
investment period under different models of the economy and a
splitting of this investment period
Andrew Hair
H00009767
August 2015
Actuarial Mathematics and Statistics
School of Mathematical and Computer Sciences
Dissertation submitted as part of the requirements for the award of
the degree of MSc in Quantitative Financial Engineering
Abstract
This dissertation investigates the optimal investment region for a young person en-
tering the workforce today. Once this region was found, using the Wilkie Model as
a model of the economy, a correlated Geometric Brownian Motion was then used as
an alternative model of the economy. This allowed for an investigation into the ro-
bustness of the region and any changes in characteristics of the region. A restricted
region of possible investment strategies has been searched using a basic Genetic Al-
gorithm methodology and a basic Dynamic Programming methodology. A strategy
was determined to be optimal using a risk metric which consisted of a median life
and quantiles of the fund. It was found that, while providing a slightly different op-
timal region and quantiles, changing the underlying model of the economy homed in
on a similar region of the investment period, with the correlated Geometric Brownian
Motion having a more flexible region. The area was then divided into accumulation
and de-cumulation and these areas were searched briefly. There are a number of possi-
ble areas of further research and expansion, including a full unrestricted optimization
using Stochastic Control to allow for an optimization to be found with relaxed con-
straints on parameters. Another area could be full optimization while splitting the
area by creating a risk metric for the midpoint, which would be consistent with the
risk metric at the end. The motivation behind this project was the introduction of
increased flexibility in drawdown pension options for pensioners as of April 2015, as
announced in the 2014 UK budget.
Acknowledgements
There have been a number of people who have contributed towards this dissertation.
Firstly, I would like to thank Professor David Wilkie for providing his model as a basis
for the project, along with his model he also provided help and advice throughout.
Next, I would like to thank Dr Timothy Johnson for his guidance, input and assistance as
Academic supervisor. His input has been invaluable in the completion of this project
along with his ability to clear up a vast number of points, small and large. Lastly, a
special thanks to Simon Butlin of Standard Life for his advice and time as Industry
supervisor. The time he committed to going through processes and debugging code
made it possible for the project to advance in the way it has. I must also thank him
for creating the criteria for this project and giving me access to Standard Life for
the duration of this project. This access allowed for the project to benefit from the
environment Standard Life creates.
Contents
Abstract i
Acknowledgements ii
Introduction 1
The Wilkie Model 4
Introduction and Background . . . . . . . . . . . . . . . . . . . . . . . . . 4
Total Return Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Mean Return, Volatility and Correlation . . . . . . . . . . . . . . . . . . . 7
Correlated Geometric Brownian Motion 10
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Using this Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Calibration and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
The Asset-Liability Model 14
Creating the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Calibration and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Parameters and Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Optimization Methods 19
Risk Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Methods of Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Method One: Genetic Algorithm . . . . . . . . . . . . . . . . . . . . 22
Method Two: Dynamic Programming . . . . . . . . . . . . . . . . . . 24
Changing the Model 28
Results from Searching the Space . . . . . . . . . . . . . . . . . . . . . . . 28
Results under the Wilkie Model . . . . . . . . . . . . . . . . . . . . . 30
Optimal Region under Wilkie Model . . . . . . . . . . . . . . 30
Key observations under Wilkie Model . . . . . . . . . . . . . . 30
Results under the Geometric Brownian Motion . . . . . . . . . . . . . 32
Optimal Region under GBM . . . . . . . . . . . . . . . . . . . 33
Key observations under GBM . . . . . . . . . . . . . . . . . . 33
Discussion of the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Comparing the Quantiles and Processes of the models . . . . . 35
Comparing the Optimal region of Investment . . . . . . . . . . 38
Graphical comparison of the Optimal region . . . . . . . . . . 39
Conclusion of Comparison . . . . . . . . . . . . . . . . . . . . 41
Splitting the Time 42
Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Effect of the Risk Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Further Research 45
Stochastic Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Time Consistent Risk Metric . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Conclusion 48
Appendix 50
Appendix 1 - Total Return Indices . . . . . . . . . . . . . . . . . . . . . . 50
Appendix 2 - Mean Return, Volatility and Correlation . . . . . . . . . . . . 51
Appendix 3 - Geometric Brownian Motion . . . . . . . . . . . . . . . . . . 53
Appendix 4 - Process of Searching . . . . . . . . . . . . . . . . . . . . . . . 54
Appendix 5 - MatLab Code . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Code for the GBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Code for reading in data . . . . . . . . . . . . . . . . . . . . . . . . . 61
Code Script for risk metric . . . . . . . . . . . . . . . . . . . . . . . . 62
Code function for ALM . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Code function for counting ruin of fund . . . . . . . . . . . . . . . . . 66
Code function for quantiles of death and expected life . . . . . . . . . 67
Bibliography and References 68
Introduction
This project aims to investigate the effect of changing the underlying model of the
economy on a pension fund through finding an optimal investment region for each
model. There will be two models being considered, the Wilkie Model and the cor-
related Geometric Brownian Motion (GBM). In order to investigate the effect, key
characteristics of the investment period shall be examined under each model, with the
optimal region of investments compared. For this project, the investment period will
be considered as one period, from the start of saving until death. This is different
from the majority of approaches, which split the period into an accumulation and
de-cumulation stage. In these approaches, the accumulation stage would aim to grow
the pot as large as possible, which is not how this project approaches the optimization.
At the end of this project, a brief examination will be taken of the effect of splitting
the optimal region into these two stages.
The motivation behind this project is the increased drawdown flexibility introduced
in the 2014 UK Budget [HMT, 2014]. From April 2015 onwards, pensioners who have
pensions in a Defined Contribution (DC) scheme would be able to withdraw directly
from the fund, take a lump sum amount from the pot or buy an annuity with the
total savings. This change in the pension landscape gives more options and flexibility
to people approaching retirement [gov, 2015a]. Because of this, it was deemed im-
portant to investigate whether the optimal region of investment strategies is comparable
across different models. The two models being considered here were chosen due to
the fact that the Wilkie Model can be described as a more complex model than the
GBM. This project will aim to investigate the similarity and differences observed in
the investment region under each model. Over the course of this project, drawdown
shall be considered as the only income throughout retirement. This means that any
alternative income, including lump sum withdrawals and purchasing an annuity, is
not included in any optimization procedure.
This project approaches the optimization of a pension scheme from a different angle.
Instead of considering the two stages, with the aim of growing the pot as large as
possible in the accumulation stage, it takes the region as a whole with the
aim that, at the time of death of the person, the pension fund would be near de-
pletion. The contribution rate, replacement rate and salary are fixed throughout for
consistency. Due to these fixed values, the only variables considered
would be the investment strategies and the underlying model of the economy which
would be used to grow the fund. This constricted process allowed for a more detailed
analysis of the effect on the investment space and optimal region of investment under
the different models.
The investment space that this project considers is the combination of three assets
across the total investment period. The asset classes are made to represent high,
medium and low volatility investments. The investment period is considered from the
time that the young person, age 20, begins saving for their pension up until an upper
age limit of 120. This means that it is a 100 year period. In order to restrict this
investment space to a feasible size to search, it was determined that investment strate-
gies would remain constant for decades at a time. It was also determined that there
would be restrictions and conditions on the investment of assets. It was determined
that short-selling was not allowed and 100% of the fund was invested at each time
step. There was also a limit set that at most the fund could have a 50% split between
assets. This meant that for each of the 10 decades, there were 6 possible investment
options. This restriction allowed the investment space to be a feasible size to search
while still allowing for characteristics of the fund and space to be observed under the
different models. The optimal investment region would be the area the optimization
methods converged to with respect to the risk metric being used.
While researching around the subject, it was observed that there has been very little
published work into finding an optimal investment strategy under drawdown for the
whole period. There was a significant amount of work covering the options for workers
after retirement. Work by Gerrard, Haberman and Vigna (2004) covers the optimal
investment choices for members of DC schemes post-retirement [Gerrard et al., 2004].
This looks at a more constricted drawdown than is considered in this project. Work
by Mowbray (2009) considered investment design for DC schemes [Mowbray, 2009].
Other papers that consider post retirement pension options include Sheshinski (2007),
Milevsky (2007), Crouch et al. (2013), Wadsworth et al. (1999), Dutta et al. (2000)
and Blake, Cairns and Dowd (2006). While these papers cover most post retirement
options and optimizations this project aims to look at the entire period.
In order to find the optimal region, two methods of searching the space have been
applied. These optimization methods are Genetic Algorithms and Dynamic Program-
ming. The theory and principles applied here have been taken from papers and books
including Aris (1964) and James (2011).
Using these optimizations on the constricted space, investment strategies were run
through an Asset-Liability Model that was created as part of this project. Quantiles
produced from this model were used as part of a risk metric to determine optimal-
ity. Due to the fixed replacement rate for drawdown, upside risk could
not be addressed. Therefore, only the downside risk of running out too early was
addressed in the risk metric. The criteria for this project meant a single risk metric
was considered for the whole period, not separate metrics for the two stages. This
investigation provided results which showed, in general, the simplistic model exhibits
similar properties to the more complex model. This is observed in the early and late
years of the fund, as well as in extreme outcomes. While slightly different investment
strategies are given, the simplistic model provides results with less finesse than
the complex model. Due to the construction of the simplistic model the
quantile range observed is narrower and the time of ruin is affected. It was deter-
mined that, in general, the characteristics of the investment period, strategies and fund
could be observed equally well under both models. However, for more detailed analysis
the complex model provided more finesse.
Due to the constrictions and fixed parameters used, it is realised that the investment
strategies found in this project are contrived and unrealistic. However, the consistency
between the search spaces under the different models allowed for comparisons of the
effects and characteristics of the fund and investment region. These constrictions
meant that further research could be carried out on this subject. Some areas of
further research are discussed in the final section of this project. These include a
feasible way to search a space with relaxed constraints and to fully examine splitting
the investment period under the criteria for this project.
The Wilkie Model
In this chapter the Wilkie Model shall be introduced and some context will be given
for its use in this project. The total return indices will be given and finally, the mean
return and volatility across decades along with the correlation between indices shall
be examined.
Introduction and Background
The model being used in this project was provided by Professor Wilkie along with
the accompanying papers "Yet More on a Stochastic Economic Model" parts 1 and 2
[Wilkie et al., 2011, Wilkie and Şahin, 2015]. Since the model is not the main area of
this project only some background will be given. These papers are used to update the
model which was first defined in the paper "More on a Stochastic Investment Model
for Actuarial Use" in 1995 [Wilkie, 1995]. The papers update the inputs, bringing
them up-to-date and making slight adjustments to certain parts of the model through
updating and re-basing where necessary. They then go on to examine how the model
has fared in its predictions and look at the parameters of the model. In the main,
the original model is unchanged from 1995 by these papers for use in this project.
The Wilkie Model of the economy is a stochastic model based upon Box-Jenkins time
series models [Sahin et al., 2008]. The model has a cascading structure meaning that
there is a central value defining the state of the market. All other parameters come
from this central parameter [Fraysse, 2015]. This means that the model is stable in
times of crisis while providing acceptable forecasts of the future. The model used
in this project requires 14 initial conditions along with 35 parameters which give 13
outputs for the areas of the economy modelled. Parameters include means, standard
deviations, weighting factors and deviation from the mean [Wilkie et al., 2011]. These
parameters are input for all areas of the model. Typical input areas include Retail
Prices, Wages, Share Dividends, long-term Bond Yields and a Base Rate.
The weighting factors are used with respect to the inflation rate
calculated from the Retail Prices. This is due to the fact that inflation is the central
value in the cascading model. Some of the input parameters are with respect to an
Auto-Regressive Conditional Heteroskedasticity (ARCH) model which is introduced
as an alternative model for inflation [Sahin et al., 2008]. For this project only five of
the outputs are used. In the model, the parameters mostly come from least-squares
estimation, meaning that they are unbiased and linear, with the actual models being
determined by either a stationary or integrated auto-regressive model of order one
(AR(1) or ARIMA(1,1,0)) [Sahin et al., 2008]. The structure of the model, as stated
above, is cascading, centred around inflation. A diagram of this cascading
model can be found below:
Figure 1: Cascading Structure of the Wilkie Model centred around Inflation
This diagram shows the cascade effect of the Wilkie Model for most of the major
components used in this project. While the basic concepts of the Wilkie Model follow
least-squares estimates and auto-regressive models, there are some ARCH character-
istics in the residuals to account for fatter tails where necessary.
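As an illustration of the cascade's root, the force of inflation in the Wilkie Model follows an AR(1) process of the form I(t) = QMU + QA·(I(t−1) − QMU) + QSD·Z(t), using Wilkie's parameter names QMU (mean), QA (autoregressive parameter) and QSD (standard deviation). A minimal Python sketch follows; the numerical defaults are illustrative, not Wilkie's fitted estimates, and the original project used MatLab:

```python
import numpy as np

def simulate_inflation(n_years, qmu=0.043, qa=0.58, qsd=0.04, rng=None):
    """Simulate the force of inflation I(t) as the AR(1) root of the cascade.

    I(t) = QMU + QA * (I(t-1) - QMU) + QSD * Z_t,  Z_t ~ N(0, 1).
    Default parameter values are illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    i = np.empty(n_years + 1)
    i[0] = qmu  # start the process at its long-run mean
    for t in range(1, n_years + 1):
        i[t] = qmu + qa * (i[t - 1] - qmu) + qsd * rng.standard_normal()
    return i
```

All other series in the cascade (wages, dividends, yields) would then take this simulated I(t) as an input.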
There is a fair amount of criticism of the Wilkie Model highlighting issues and areas
where it may fall short. One highlighted shortcoming is that the model is
highly dependent on the initial parameters due to over-parametrisation. There are
claims of biases in the estimations of inflation due to events in the market [Huber,
1995]. One criticism of the model, which will be examined later, comes from "A Stochastic
Asset Model & Calibration for Long-term Financial Planning Purposes" [Hibbert
et al., 2001], which shows that there may be an annual decrease in the standardised
volatility. There are also academic critics who highlight that the model does not hold
the no arbitrage or market efficiency theorem during times of stress [Fraysse, 2015].
There are upsides to using this model for this project. The outputs provide a good
basis for the total return indices used along with the access and simplicity of the
model. Since we are looking at pension fund investments over a long time span, the
Wilkie Model returning annual values was advantageous.
Total Return Indices
This section shall examine the total return indices used for this project. It was de-
cided to constrain the investment strategy to three investments in order to restrict the
search space. Effectively the three investments chosen are Equity, long-term Bonds
and Cash to replicate total returns indices with different risk-return profiles. The risk
considered when creating the index was the level of volatility produced by each of the
asset classes. When speaking about different levels of risk, the volatility produced is
used to distinguish between the high, medium and low risk assets. The inflation index
has also been calculated to allow the salary and drawdown to grow with inflation.
The formula for the inflation index is:
Infl_t = Infl_{t−1} · exp{I(t)}
In this formula the index is given by Infl and I is the inflation output from the Wilkie
Model.
The formula for the total return index for equity is:
TRIS_t = TRIS_{t−1} · (P(t) + D(t)) / P(t−1)
Here the TRIS is the total return index for shares which consists of the price index
of shares, P, and the dividends paid at that time, D. This replicates higher risk of
volatile movements with higher expected return investments.
The formula for the total return index for long-term bonds is:
TRIC_t = TRIC_{t−1} · (1/C(t) + 1) / (1/C(t−1))
The TRIC represents the total return index of Consol yield, Government Stocks,
which are assumed to pay an annual coupon of one. Even though perpetual stocks
are no longer available, this represents a medium risk investment.
Finally, this is the formula to represent a cash total return index:
TRIB_t = TRIB_{t−1} · (1 + B(t))
This is not a true cash total return index, since B represents the short-term bond
yield, which here stands in for cash. This is meant to represent a
safe, low risk investment.
All of the notation used can be found in the paper "Yet More on a Stochastic Eco-
nomic Model: Part 1" [Wilkie et al., 2011]. The derivation of these formulae can be
found in Appendix 1.
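The four recursions above can be sketched together in one function. This is a Python illustration (the project's own code is MatLab, Appendix 5); the Wilkie outputs I, P, D, C and B are assumed to be given as annual series and each index starts at 1:

```python
import numpy as np

def total_return_indices(I, P, D, C, B):
    """Build the four indices recursively from annual Wilkie outputs.

    I: force of inflation, P: share price index, D: dividends paid,
    C: consol (long bond) yield, B: short (base) rate.
    All sequences have length T+1, indexed 0..T.
    """
    T = len(I) - 1
    infl = np.ones(T + 1)
    tris = np.ones(T + 1)
    tric = np.ones(T + 1)
    trib = np.ones(T + 1)
    for t in range(1, T + 1):
        infl[t] = infl[t - 1] * np.exp(I[t])                    # inflation index
        tris[t] = tris[t - 1] * (P[t] + D[t]) / P[t - 1]        # equity, with dividends
        tric[t] = tric[t - 1] * (1.0 / C[t] + 1.0) / (1.0 / C[t - 1])  # consols, coupon 1
        trib[t] = trib[t - 1] * (1.0 + B[t])                    # "cash" via short rate
    return infl, tris, tric, trib
```

Each recursion mirrors the corresponding formula above term by term.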
Tax is not considered here since that goes beyond the scope of this project. Since
the aim of the project is to test robustness in optimal regions of investment strategies
across models it was important not to over complicate the inputs. Since the second
model used, which is defined in the next chapter, is calibrated to the outputs of these
indices it was deemed acceptable to ignore tax.
As discussed above, while these indices do not exactly match their descriptions,
the main idea is to test the robustness and characteristics of a region across
the models. It was decided that as long as the investments are consistent in returns,
volatility and correlation between assets across the models, it would be acceptable in
testing the robustness.
Mean Return, Volatility and Correlation
It was required that a mean return, volatility and correlation between investment
options be found in order to calibrate the second model used in this project. To
achieve this, the average rate over 10 year periods was calculated for each simulation.
For the purpose of this project, it was deemed that 5000 simulations of the Wilkie
Model over 100 years would be used. This allowed for 5000 simulations of the value of
the pension fund across the investment period from the start of working life. It also
meant that it was possible to reduce the bias and error in the model due to the large
sample size. It implied that the sample mean return and volatility of the model could
be determined to a high degree of accuracy. Using this large number of simulations
meant that the bias caused by any outliers in the simulations was reduced. The graphs
of the simulations follow over the page.
Figure 2: Graphs of the 5000 simulations of each index as produced by the Wilkie Model
Each graph represents the values of each index for the 5000 simulations. The dashed
yellow line in each graph is a line of the average value each index takes. As can be
seen in the graphs of total return for cash and bonds there is one outlier which reaches
extreme values. Due to the large number of simulations, these outliers do not affect
the average return or volatility. This was possible due to the simplicity of the Wilkie
Model and the speed at which it produced 5000 simulations.
The return over 10 years has been calculated using R_t = ln(TR_{t+10} / TR_t) for each
index in each simulation. The average of this 10 year return across the 5000 simulations
was then taken using R̄_t = ( Σ_{n=1}^{5000} R_{t,n} ) / n, which gives 10 decades of
average 10 year returns across the 100 years being considered. In order to annualise
these values they have been divided by 10.
A similar approach has been used for the standard deviation and correlation as for
the mean. Both of these values have been calculated from R_t. The sample standard
deviation of the 10 year returns has been calculated using
σ_t = sqrt( Σ_{n=1}^{5000} (R_{t,n} − R̄_t)² / (n − 1) ),
with the result divided by the square root of 10 in order to annualise the volatility.
This was examined carefully due to the criticism of the Wilkie Model in "A Stochastic
Asset Model & Calibration for Long-term Financial Planning Purposes" [Hibbert
et al., 2001]. To validate the standard deviation found, the standard deviation was
calculated for the first
decade by hand. This used the same formula, but individually calculating each part
in Excel to validate the final value [Sto, 2015]. For both mean return and volatility,
R_t has been the 10 year return for the index being calculated. When calculating the
correlation between the indices the formula used is
ρ_{i,j} = Σ_{n=1}^{5000} (R^i_{t,n} − R̄^i_t)(R^j_{t,n} − R̄^j_t) / sqrt( Σ_{n=1}^{5000} (R^i_{t,n} − R̄^i_t)² · Σ_{n=1}^{5000} (R^j_{t,n} − R̄^j_t)² ),
where i, j = inflation, equity, bonds and cash. The values produced by these formulae
can be found in Appendix 2. It was observed from the output that after the first
20 years volatility and mean return remained consistent with correlation remaining
consistent throughout. In order to simplify the second model it was decided that a
single value, an average of the 10 decades, shall be used to reflect the annual return
and volatility. The implication and justification of this decision is discussed later in
the chapter on Optimization.
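The decade statistics described in this section can be sketched as follows. This is a Python illustration under the assumption that the simulated index values sit in a simulations-by-years array; the function name and interface are hypothetical, not the project's MatLab code:

```python
import numpy as np

def decade_statistics(tr, n_per_decade=10):
    """Annualised decade mean returns and volatilities across simulations.

    tr: array of shape (n_sims, T+1) holding one total return index,
    with T a multiple of n_per_decade (T = 100 in the project).
    """
    starts = np.arange(0, tr.shape[1] - n_per_decade, n_per_decade)
    # 10 year log returns R_t = ln(TR_{t+10} / TR_t) at each decade boundary
    R = np.log(tr[:, starts + n_per_decade] / tr[:, starts])
    mean_ann = R.mean(axis=0) / n_per_decade                 # divide by 10
    vol_ann = R.std(axis=0, ddof=1) / np.sqrt(n_per_decade)  # divide by sqrt(10)
    return mean_ann, vol_ann
```

The correlation between two indices would then follow by applying np.corrcoef to their decade returns R in the same way.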
Correlated Geometric Brownian Motion
In this chapter the second model used in this project shall be introduced, the Geomet-
ric Brownian Motion. The creation and justification of choosing this model shall be
discussed followed by the mathematics behind the model and the use of the Cholesky
Decomposition. Finally, the calibration and validation process shall be covered.
Introduction
It was decided that the second model to be used in the project should be a simplistic
model in comparison with the Wilkie Model. Therefore, the Geometric Brownian
Motion (GBM) was selected since it is a simple log-normal model [epi, 2015].
The GBM can be used as a fundamental model of the economy since it produces
a basic exponential growth of assets. A stochastic process, X_t, follows a Geometric
Brownian Motion with the stochastic differential equation dX_t = µX_t dt + σX_t dW_t,
where W_t is a standard Brownian Motion, with W_t ~ N(0, t) [Glasserman, 2003,
p. 93:104]. Using Itô's formula it follows that the process has the form
X_t = X_0 exp{(µ − σ²/2)t + σW_t}, with
the full proof found in Appendix 3. This is the basic mathematics behind the second
model used in this project.
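A quick Monte Carlo check of this exact solution can be made by verifying the lognormal property E[X_t] = X_0·exp(µt). The sketch below is illustrative Python with arbitrary parameter values, not part of the project's code:

```python
import numpy as np

def gbm_exact(x0, mu, sigma, t, z):
    """Exact GBM solution X_t = X_0 exp{(mu - sigma^2/2) t + sigma W_t},
    with W_t = sqrt(t) * z for standard normal draws z."""
    return x0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)

# Sanity check that the sample mean of X_t approaches X_0 * exp(mu * t)
rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)
xt = gbm_exact(100.0, 0.05, 0.2, 1.0, z)
```

With a million draws the sample mean of `xt` should sit close to 100·e^0.05, confirming that the −σ²/2 drift correction is what keeps the expected growth rate at µ.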
Using this Model
The use of the GBM comes from the methodology used previously in calculating the
10 year rate of return. From the form of the process in the previous section, X_t =
X_0 exp{(µ − σ²/2)t + σW_t}, X_t represents the annual total return index, TR_t,
as calculated under the Wilkie Model. Rearranging this process gives
ln(TR_t / TR_{t−1}) = (µ − σ²/2)t + σW_t, which is an annual rate of return. Taking
expectations and variances of this formula gives E[ln(TR_t / TR_{t−1})] = (µ − σ²/2)t,
which is an annualised average rate of return, R̄_t. The variance is the volatility of
the Wilkie Model, σ̄_t, for each decade. This means that the formula used to model
the annual total returns has the form:
TR_t = TR_{t−1} · exp{R̄ + σ̄ε}
In this formula R̄ is the annual rate of return under the Wilkie Model and σ̄ is the
volatility under the Wilkie Model. Both of these values are the average for each decade,
calculated as R̄ = ( Σ_{t=1}^{10} R̄_t ) / 10 and σ̄ = ( Σ_{t=1}^{10} σ̄_t ) / 10. Due to it
being an annual total return, t is 1 and the error term, ε, is distributed as a Multi-
variate Normal, MVN(0, 1), with correlation between indices. This is created using
the Cholesky Decomposition.
The Multi-variate Normal (MVN) is created using the correlation matrix, ρ, which is
taken from the Wilkie Model. Since the errors in the GBM are normally distributed
with mean 0 and variance 1, the MVN is distributed N(0, Σ), where 0 is a vector of
zeros of length d for the mean and Σ is the covariance matrix, which has the form:
Σ = diag(σ_1, ..., σ_d) · ρ · diag(σ_1, ..., σ_d),
where ρ is the d × d correlation matrix with entries ρ_{ij} and diag(σ_1, ..., σ_d) is the
diagonal matrix of the standard deviations.
In this formula for Σ all σ are 1, so Σ reduces to ρ. The errors can then be generated
using the factorisation Σ = AAᵀ, where the Cholesky Decomposition produces a
lower triangular matrix A (with A_{ij} = 0 for j > i), which reduces the cost of the
computation. In the code used to create the error term at the back of this paper, the
Cholesky Decomposition is applied to this positive semi-definite covariance matrix.
Using this MVN
means that the errors of the GBM are correlated between assets which gives the total
return indices created here a similar level of correlation as produced in the Wilkie
Model [Glasserman, 2003, p. 71:72].
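The Σ = diag(σ)·ρ·diag(σ) construction and its Cholesky factorisation can be sketched as follows, in Python rather than the project's MatLab. The σ and ρ values below are made up for illustration, not the calibrated values from Appendix 2:

```python
import numpy as np

# Illustrative annual volatilities and correlation matrix for
# (inflation, equity, bonds, cash); values are not the project's estimates.
sigma = np.array([0.04, 0.20, 0.10, 0.03])
rho = np.array([[1.0, 0.1, 0.2, 0.1],
                [0.1, 1.0, 0.3, 0.2],
                [0.2, 0.3, 1.0, 0.2],
                [0.1, 0.2, 0.2, 1.0]])

# Sigma = diag(sigma) @ rho @ diag(sigma); with unit sigmas, Sigma = rho
cov = np.diag(sigma) @ rho @ np.diag(sigma)

# Lower triangular Cholesky factor A satisfying cov = A @ A.T
A = np.linalg.cholesky(cov)

# Correlated errors: eps = A @ z, where z holds iid N(0, 1) draws
z = np.random.default_rng(0).standard_normal(4)
eps = A @ z
```

Because A is lower triangular, each correlated error component only mixes the independent draws up to its own index, which is what makes the factorisation cheap to apply.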
The algorithm, which is given over the page, provides the structure for constructing
the model of the economy using the GBM.
Algorithm 1 GBM model algorithm
for loop over each simulation, z do
Generate the correlated error for each simulation, err
for loop for each index, j do
Set the starting value for each index, S0(1, j)
for loop over the time, i do
Generate the GBM simulations:
S_{i,j,z} = S_{i−1,j,z} · exp{µ_j + σ_j · err}
The GBM grows the previous value of the index by the exponential of
the mean plus the volatility multiplied by the error term
end for
end for
end for
This algorithm shows the logical progression of the function. The actual code used can
be found in Appendix 5. The code makes use of the "SDETools" toolkit in MatLab
[Horchler, 2011]. The error is generated and correlated using the function mvnrnd,
which applies the Cholesky Decomposition as described above.
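Algorithm 1 above can be sketched in Python, standing in for the MatLab implementation in Appendix 5; numpy's multivariate_normal plays the role of mvnrnd, and the function name and any parameter values passed in are assumptions:

```python
import numpy as np

def simulate_gbm_paths(s0, mu, sigma, rho, n_years, n_sims, seed=0):
    """Mirror of Algorithm 1: each index j grows by exp(mu_j + sigma_j * err_j),
    with err drawn from a correlated multivariate normal (the numpy
    analogue of MatLab's mvnrnd).

    mu, sigma: length-d arrays of annualised means and volatilities;
    rho: d x d correlation matrix for the unit-variance errors.
    """
    rng = np.random.default_rng(seed)
    d = len(mu)
    # one correlated, unit-variance error vector per simulation per year
    err = rng.multivariate_normal(np.zeros(d), rho, size=(n_sims, n_years))
    paths = np.empty((n_sims, n_years + 1, d))
    paths[:, 0, :] = s0  # starting value for every index
    for i in range(1, n_years + 1):
        # grow the previous value by exp(mean + volatility * error)
        paths[:, i, :] = paths[:, i - 1, :] * np.exp(mu + sigma * err[:, i - 1, :])
    return paths
```

The triple loop of Algorithm 1 collapses here into one loop over time, with the simulation and index loops vectorised across array dimensions.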
Calibration and Validation
Since this project is based upon testing the robustness and characteristics of an opti-
mal investment region across different models it was deemed important that there is
a similarity across the models. For this reason it was important to validate that the
calibration of the GBM was consistent across the mean return, volatility and corre-
lation with that produced by the Wilkie Model. In order to validate the output of
the GBM, 4 areas were validated. These areas were the mean return, volatility and
correlation output from the total return indices in the GBM along with the correlation
between errors to confirm the source of the correlation. This was achieved through
similar methods as used when validating the output of the Wilkie Model. The out-
puts of the total return indices from the GBM were used to calculate 10 year rates of
return, which were then used to get the annualised average 10 year rates of return
and volatility across the 5000 simulations. The correlation was also calculated from
this in the same manner as under the Wilkie Model. The correlation between error
terms was also examined over a large sample to determine that the correlation from
the errors was consistent with the correlation between the indices. For completeness,
the GBM was reproduced with independent standard normal distributions and the
correlation tested to confirm that the source of the correlation was the error term.
The graphs of the total return indices follow over the page.
Figure 3: Graphs of the 5000 simulations of each index under the GBM
Figure 3 shows the 4 graphs and can be used to highlight the differences between the
models. One point of difference is that the Wilkie Model has more extreme values and
outliers than the GBM. The Wilkie Model is also less likely to return from these
extreme values than the GBM. This is caused by what drives each process with the
correlated errors of the GBM being independent through time whereas the Wilkie
Model is driven by the auto-regressive process. Most of the paths in the GBM do not
have the smoothness which is present in the Wilkie Model for a similar reason. While
the volatilities of the paths are the same, each graph is decidedly different across the
models. This shall be discussed later in the Optimization chapter.
The Asset-Liability Model
In order for the pension fund to be accumulated and de-cumulated as required, an
Asset-Liability Model (ALM) had to be created. This chapter will
cover the creation and validation of the ALM.
Creating the Model
The objective of the ALM for this project is to allow a pension fund for a young
person to start saving from a given age and accumulate up to the Normal Retirement
Age (NRA). From the NRA a constant drawdown will be taken which will be a fixed
percentage of the final salary at retirement; this is known as the replacement rate.
This model was built for this project in order to control and understand where and
when the cash flows occurred. It was also created to meet the criteria of the project.
It was required that the entire investment area be considered as a whole. This ALM
allows for this by tackling accumulation and de-cumulation in the same model. The
project also only allows for a consistent drawdown to be taken from the pot until it
is depleted. This means that a number of options that are available to workers at
retirement have been omitted. These include the purchase of an annuity at retire-
ment or taking any of the pension pot as a lump sum. There are a number of papers
that discuss optimal pension options for post-retirement products. However, these
assume the aim of the accumulation is to grow the pot as large as possible. In this
project, the aim of the whole process is to optimise a full strategy with the aim of go-
ing into drawdown. This allows for the characteristics of the full space to be examined.
A number of other assumptions and simplifications have also been made so that the testing of the different models could be the main focus. A full list of the parameters
and constraints, along with the reasons behind choosing the values, can be found at
the end of this chapter. Some of the main parameters include constant pension con-
tribution rate, no early or late retirement, constant replacement rate, no promotions
or bonuses and an upper limit of life at 120 years old. The pension contributions
for this project are locked at 8%, which is the minimum rate required by the government by October 2018 [pas, 2015a]. With the aim of investigating the different models of the economy, the starting age is set at 20, along with the NRA at 65 and a salary of
£20,000. Since this is a fairly low income, the replacement rate was fixed at 60% of the final salary, which is normal according to Standard Life. In order
to simplify the optimization it was decided that there would be no salary inflation
through promotion and only RPI inflation would be considered to keep the buying
power of the salary constant.
The basics of the ALM are to accumulate the pension fund throughout the working
lifetime of a person. It then de-cumulates the fund through a drawdown pension
where a constant amount, with respect to inflation, is taken annually. The fund is
annually accumulated with respect to investment returns on an underlying model of
the economy with respect to the given investment strategy. This is run over a given
number of simulations from the model of the economy. The following algorithm gives
the structure of the ALM.
Algorithm 2 Asset-Liability Model algorithm
for the accumulation period do
    Each contribution is inflated from the previous contribution with the annual rate of inflation.
    The fund is then accumulated up until NRA. This is done by accumulating the fund with investment returns and then adding that year's contribution.
end for
for the de-cumulation period do
    The initial drawdown is calculated with respect to the salary inflated up to NRA.
    The drawdown for each time-step is then calculated using the previous drawdown amount and the annual inflation.
    The fund is then de-cumulated through time. This is done by accumulating the fund with respect to investment returns and then taking the annual drawdown amount.
end for
It was decided that the fund would start off empty with the first contribution being
accounted for at the end of the first year. It was also decided that the drawdown was
received instantly at NRA. The code for the ALM written in MatLab can be found
in Appendix 5.
The investment returns in the ALM are accounted for using a weighted average over the investment strategy. The returns are taken to be continuous, so they have the form:
exp{ W_eqi · ln(TRIS_t / TRIS_{t-1}) + W_bon · ln(TRIC_t / TRIC_{t-1}) + W_cas · ln(TRIB_t / TRIB_{t-1}) }
In this equation, TRIS, TRIC and TRIB are the total returns indices for equity,
bond and cash respectively. Wi represents the weights of the returns with respect to
the investment strategy, where i indicates the index to which the weight is relevant.
The final function of the ALM is to record the time at which the fund goes negative in each simulation. From these ruin times a histogram is constructed to display information about the life of the fund. This histogram and the count of when the fund goes negative are used in producing the risk metric for the optimization later.
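A minimal sketch of such an ALM is given below, combining the accumulation and de-cumulation loops of Algorithm 2 with the weighted continuous returns defined above, and recording the ruin time of each simulation. The default parameter values and the exact timing of cash flows are simplifying assumptions for illustration, not a reproduction of the MatLab code in Appendix 5.

```python
import numpy as np

def run_alm(index_paths, weights, inflation=0.025, salary=20_000.0,
            contrib_rate=0.08, replacement=0.60,
            start_age=20, nra=65, max_age=120):
    """Sketch of the ALM: accumulate to NRA, then draw down until ruin.

    index_paths: (n_sims, n_years, 3) total-return indices for
                 equity, bond and cash, starting from a value of 1.
    weights:     (n_years, 3) investment strategy, rows summing to 1.
    Returns the age at which each simulated fund first goes negative,
    or max_age if it survives.  Timing conventions are simplified.
    """
    n_sims, n_years, _ = index_paths.shape
    # portfolio growth factor: exp(sum_i W_i * ln(TRI_t / TRI_{t-1}))
    log_ret = np.diff(np.log(index_paths), axis=1, prepend=0.0)
    growth = np.exp((log_ret * weights).sum(axis=2))

    ruin_age = np.full(n_sims, max_age)
    accum_years = nra - start_age
    for s in range(n_sims):
        fund, contrib = 0.0, contrib_rate * salary
        for t in range(accum_years):            # accumulation loop
            fund = fund * growth[s, t] + contrib
            contrib *= 1 + inflation            # contributions track RPI
        draw = replacement * salary * (1 + inflation) ** accum_years
        for t in range(accum_years, n_years):   # de-cumulation loop
            fund = fund * growth[s, t] - draw
            draw *= 1 + inflation               # constant in real terms
            if fund < 0:
                ruin_age[s] = start_age + t + 1
                break
    return ruin_age

# deterministic check: flat indices and no inflation give a fixed ruin age
flat = np.ones((10, 100, 3))
w = np.full((100, 3), 1 / 3)
print(run_alm(flat, w, inflation=0.0)[0])  # prints 72
```

With flat indices and no inflation, the fund accumulates 45 contributions of £1,600 and then pays out £12,000 a year, giving a deterministic ruin age, which is the kind of check used in the calibration section below.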
A standard histogram produced by the ALM with a sensible investment strategy is shown below. A sensible investment strategy is taken to be one which starts off in equity and moves to cash as the fund reaches the NRA; as the fund starts to deplete, it moves back to equity.
Figure 4: Histogram of the ruin times of a fund with a sensible investment strategy over 5000
simulations, produced from the ALM, with time representing the life of the worker
Figure 4 highlights some common points of the histogram produced by observing the
time a fund dies across a suitably large number of simulations. The main feature is the large amount of data contained in the last bin of the histogram. This
large spike is due to the fact that the fund does not run out within the time period
being considered. It would be observed that the tail extended further than age 120,
at which the life has been capped at in this project. Using Bayes Information Criteria
(BIC) to run a diagnostic of the histogram it is found that the data most closely follows
a Generalized Extreme Value distribution. This was found using ”allfitdist” function
on MatLab [Sheppard, 2012]. It is clear that for a sensible investment strategy the
life of the fund is skewed to the right. This observation was used later in determining
the risk metric for the optimization.
Calibration and Validation
In order to calibrate and validate the ALM, to confirm cash flows are falling at the
correct times, the model was first built using the algorithm in the previous section as
an aid. Once this model was created with flexible inputs it was then calibrated to a
basic model to ensure cash flows were being taken at the correct time.
The basic model of the ALM had three stages, each built on top of the previous, to
check different cash flows of the final model. This started with a simple deterministic
return with no inflation, then introduced inflation to the deterministic model and a
final full model dependent on a model of the economy for investment returns. The
deterministic return was set at 5% to confirm that the contributions and drawdowns
were taken at the correct time with respect to the annual growth of the fund. The
deterministic model with inflation was used to confirm that the salary, contributions
and drawdown were taking account of inflation at the correct time. The final full
model was used to validate that the ALM was providing correct, consistent values with respect to all of its inputs. The ALM was checked with the
contrived models as they progressed with any alterations being made. This process
resulted in a fully calibrated and validated ALM which ran correctly.
Parameters and Constraints
This section will cover all the parameters and constraints that are relevant for search-
ing the space. The logic and reasoning behind choosing these values are also covered.
Since the main aim of this project was to consider the investment period as a whole,
it was deemed prudent to choose sensible values for all the inputs in the ALM. These
values would be kept constant throughout the process so that only the investment
strategy would be varied throughout the search.
The parameters in this project are considered to be the inputs to the ALM. These
include the start age of 20 years, the Normal Retirement Age (NRA) of 65 and the
final age considered as 120. In order to give a large investment period, the start age
was set at 20. This age was decided upon since it could cover young people finishing apprenticeships, graduates from universities and people who had gone straight into work from high school; 20 was decided to be a fair median age between these
groups. The NRA was set to 65 due to the fact it is a median for the age at which the
current state pension is received [gov, 2015b]. Previously, 65 was the NRA in the UK
but this has been phased out with no definite upper limit in most professions. It was
also assumed that the life would be alive at NRA, since the aim of the project was
to examine the drawdown pension. The age 120 as an upper limit is being considered
due to the restriction of the life tables used. It is fairly common that 120 is considered
to be an upper limit for life.
Other parameters include the fixed salary of £20,000, the fixed contribution rate of
8% and the fixed replacement rate of 60%. This salary was decided upon since it represented an income which was considered low, as it is below the national average [ONS, 2013], but would keep an average family out of poverty [jrf, 2008]. It was important
that it wasn’t too low as it was decided that the salary would only grow with inflation
to keep the purchasing power, but would not grow with promotions or bonuses. The
contribution was set at 8% of the salary since it is the minimum rate required by the
government by 2018. The contribution will be made up from 4% from the employee,
3% from the employer and 1% from the government through tax breaks [pas, 2015a].
By keeping it constant at a low rate, it avoided introducing lifestyle considerations into the optimization. In general a contribution rate would vary as the employee matured; needs such as buying a house and supporting a family have been removed from this project. The
replacement rate, the percentage of final salary received through drawdown, was set
at 60%. This is an industry standard where persons on a low income receive 60% of
the final salary as an income.
By fixing these parameters, the only variables being considered by the model were the underlying model of the economy and the investment strategy. This
allowed for a focus on searching the investment period for an optimal strategy and the
robustness of this strategy. The constraints on the inputs are discussed in the next
chapter, concerning how the investment period has been restricted. Restricting this region allowed the search space to be reduced to a feasible size.
Optimization Methods
This chapter will introduce the risk metric used to determine an optimal investment
strategy, what is considered to be optimal under this metric and other metrics that
could be considered. Once this has been introduced the different methods, Genetic
Algorithms and Dynamic Programming, for searching the space of possible investment
strategies shall be discussed.
Risk Metric
In general a risk metric is a quantifiable way to measure risk. The risk being consid-
ered in this project focuses on the risk of running out of income too early through
drawdown. It is also constrained by the fact this project does not want an excessively
large pot left at the death of the worker. If a large amount of money is left in the
pension pot it would imply the worker has not used the drawdown system to its full
advantage. It is assumed that the worker will not be leaving any money to beneficia-
ries which is why the pot must be as close to empty as possible. This project is only
considering altering investment strategy and not the contribution rate or drawdown
income. Therefore, the risk being considered must aim to find an investment strategy which provides a fund lifetime that comes close to depletion at the time of death of a worker. This criterion was decided upon with the aim that the pensioner would use their income to the full. A full optimization, which would allow for variable drawdown and contribution rates, is considered in the chapter on further research. This uses Stochastic Control and would allow for the maximization of the worker's income and monetary needs through work and retirement.
To begin with, a measure for the worker's life had to be decided upon. It was assumed
that the worker would survive up to the NRA (Normal Retirement Age) since only the
drawdown optimization was being considered. Using a life table provided by Standard
Life it was possible to create a density curve of the death of a life, alive at NRA. The
tables used were ”RMC00 with CMI 2013 M [1.5%] advanced improvements”. This
meant that there was a minimum improvement of 1.5% on the mortality rate through
time. These tables provided values of probability for dying at a given year and at
a given age. Using this information, a number of people alive at a given age was
19
calculated starting with a suitably large number of lives at the NRA. This allowed
the number of deaths in a year to be found which was used to create a density of
death by dividing the annual deaths by the total starting life [lon, 2009]. A density
curve was produced and is displayed:
Figure 5: Conditional Density Curve of death for a life alive at NRA
This density was validated using numerical integration to confirm the area under the
graph was approximately equal to one. From the information gathered a number of
other measures of a life alive at NRA were found. The curtate expectation of life
was found using the equation e_x = Σ_{k=1}^{∞} kp_x [Ciecka and Ciecka, 1996]. This came
out as 94.24 years of age. The 5%, 25%, 50% and 75% quantiles were also found
using numerical integration techniques. These were found to be 75, 89, 96 and 102
respectively. These values are consistent with the graph provided. Since the graph is
skewed to the left, it implies that the mean would be slightly less than the median
as is the case [epi, 2015]. These measures provided a basis for what the risk metric would aim to achieve.
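The construction of the death density and the curtate expectation can be sketched as follows. The mortality rates used here are a hypothetical Gompertz-style example, not the "RMC00 with CMI 2013 M [1.5%] advanced improvements" table provided by Standard Life.

```python
import numpy as np

def death_density(qx, start_age=65):
    """Discrete density of age at death for a life alive at start_age.

    qx[k] is the probability that a life aged start_age + k dies within
    the year.  Deaths in year k are (k-year survival) * qx[k]; dividing
    by the total gives a density summing to one.
    """
    kpx = np.cumprod(np.concatenate(([1.0], 1 - qx[:-1])))
    deaths = kpx * qx
    return start_age + np.arange(len(qx)), deaths / deaths.sum()

def curtate_expectation(qx, start_age=65):
    """Expected age at death: start_age plus the sum of the k-year
    survival probabilities, e_x = sum over k >= 1 of kp_x."""
    return start_age + np.cumprod(1 - qx).sum()

# hypothetical Gompertz-style mortality, NOT the project's life table
ages = np.arange(65, 121)
qx = np.clip(0.0005 * np.exp(0.09 * (ages - 20)), 0.0, 1.0)
a, f = death_density(qx)
print(round(f.sum(), 6), round(curtate_expectation(qx), 1))
```

Summing the discrete density plays the role of the numerical-integration check described above, and the quantiles of death can be read off the cumulative sum of `f` in the same way.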
A number of different risk metrics were considered to achieve the objective. Since the
aim of the project was to find an optimal investment region it meant that key points
of each distribution must be considered. One metric that was considered was quantile
matching following ideas used by Sgouropoulos, Yao and Yastremiz in "Matching a Distribution by Matching Quantiles Estimation" [Sgouropoulos et al., 2014]. However, this idea was dismissed after observing that it would never be optimal to control the upside risk of higher returns through the investment options alone, as this would mean taking on more risk in order to reduce expected returns. The only viable way to alter this would be to alter the drawdown amount when excess cash has accumulated; since the drawdown was fixed for this project, this method was rejected. It was also observed that the histogram of
fund life, as in figure 4, was skewed to the right while the density curve of death was
skewed to the left. This made it impractical to match quantiles of upside or downside
risk. Other metrics considered included work by Gerlach et al. (2012) [Gerlach et al., 2012], whose research into time-varying quantiles was examined with respect to the aim of this project. However, since their work focuses on Value-at-Risk, it was deemed not to be applicable here.
Due to these observations, it was decided that the risk metric would be based on matching the median of the fund's life to the median of expected life. This reduced the effect of the skew of the distributions. Optimality would also be indicated through maximising the left-hand quantiles of the fund's life: the 5%, 10% and 25% quantiles. This would
allow a quantitative analysis of a strategy which would come close to being depleted
at a time close to the expected age of death of a pensioner. The effort to maximise
the lower quantiles meant that the downside risk of the pot running out before death
was also being addressed.
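As a sketch, this metric can be computed from the simulated ruin ages produced by the ALM. How the median-matching term and the lower quantiles are combined into a single score below is an illustrative assumption; the text reports the quantiles and median side by side rather than prescribing a weighting.

```python
import numpy as np

def risk_metric(ruin_ages, target_median=96.0):
    """Return the [5%, 10%, 25%, median] quantiles of fund life and a
    score: the lower quantiles (to be maximised) minus the distance of
    the fund's median life from the median age at death.  The single
    combined score is an illustrative choice, not the dissertation's."""
    q = np.percentile(ruin_ages, [5, 10, 25, 50])
    return q, q[:3].sum() - abs(q[3] - target_median)

# toy ruin ages for illustration only
rng = np.random.default_rng(0)
ruin = np.clip(rng.normal(95, 10, size=5000), 65, 120)
quantiles, score = risk_metric(ruin)
```

A strategy whose fund median sits at the median age of death of 96 and whose lower quantiles are pushed upwards scores highest, which is the sense of optimality used in the searches that follow.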
Methods of Optimization
This section covers the different methods of optimization used in the project. The
first method covered is Genetic Algorithms (GA), including the principles behind using them to search a space. The second method discussed is Dynamic Programming (DP), in
particular the Bellman Equations, to search the space. Two methods of optimization
are considered due to the different areas that are being searched and for completeness.
Before the methods of optimization are introduced the space that is being searched
must be defined. As discussed in the chapter on the Wilkie Model, there are 3 asset
classes for investment being considered. These represent high, medium and low volatility investment options for a pension scheme. Since all other variables have
been constrained, the investment strategy is the only variable that changes. This results in a search space which can be represented by the diagram which
follows over the page.
Figure 6: Diagram of Investment Region shown by the three assets classes considered
Figure 6 gives an indication of the space which shall be searched. This is the region
for one time step. The investment strategy shall be varied through time in order to
find an optimal region across the entire investment period. In order to restrict the
space that must be searched, limits were put on possible combinations. It was decided
that at most there could be a 50% split between assets at each time. This meant a
possible 6 combinations. These combinations were 100% in equity, bonds or cash, or a 50% split between two assets: equity and bonds, equity and cash, or bonds and cash. It
was also restricted such that 100% of the fund must be invested at all times and there
was no short-selling allowed, which is in line with current pension regulations. It was
also decided that the search space would be restricted through the number of time
steps considered. Since it is spanning 100 years of investments, it was decided that
investment decisions would be kept constant for decades at a time. These restrictions
on the search space reduced it to a feasible size and were kept constant for both
models considered.
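The size of this restricted space can be verified directly. On a 50% grid there are six allocations per decade block, and with ten decade blocks over the 100-year period the counts below match the figures quoted later for the curse of dimensionality (the helper function names are mine, for illustration):

```python
from math import comb

def n_allocations(k):
    """Allocations of the fund across 3 asset classes on a 1/k grid:
    compositions of k into 3 non-negative parts, i.e. C(k+2, 2)."""
    return comb(k + 2, 2)

def enumerate_allocations(k):
    """All (equity, bond, cash) weight triples on a 1/k grid summing to 1."""
    return [(e / k, b / k, (k - e - b) / k)
            for e in range(k + 1) for b in range(k + 1 - e)]

DECADES = 10  # decisions held constant for decades over the 100 years

assert n_allocations(2) == 6                      # the six 50%-split options
assert n_allocations(2) ** DECADES == 60_466_176  # "over 60 million"
assert n_allocations(3) ** DECADES == 10 ** 10    # 33% splits: 10 billion
```

Relaxing the grid from halves to thirds multiplies the per-block choices from 6 to 10, which is the exponential growth in strategies discussed in the Dynamic Programming section.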
Method One: Genetic Algorithm
This section aims to introduce the methodology of GA (Genetic Algorithm), covering
the principles and process behind it.
GA is a process, based upon evolution and natural selection, that searches a space
until some criterion is met. It requires a real search space, decision variables and
a process to minimise or maximise. This process is known as the objective function.
In this project the decision space is the space of all possible investment strategies
through time and the objective function is the risk metric. GA are considered to be
general in their search of a space using a simplistic algorithm. They continue to search
the space until a suitable solution is found which meets some termination conditions
[Reeves and Rowe, 2004, p. 19:49]. The general form of the algorithm is given next.
Algorithm 3 GA algorithm
Choose an initial starting point to search from.
Evaluate this information with respect to the objective function.
while termination conditions are not satisfied do
    repeat
        if crossover is used then
            Use a combination to test the space (crossover).
        end if
        if mutation is used then
            Use mutation to search the surrounding space for better areas.
        end if
        Reject or accept with respect to the objective function.
    until an optimal solution is found
    Select a new area to search.
end while
From this algorithm the methodology of GA is clear to see. It starts with an initial area to search against the objective function. Once this area is searched, it uses crossover and mutations to move through the space [sea, 2015].
Crossover in GA can be related to this project in testing combinations and deciding
whether or not different combinations provided better or worse results with respect
to the risk metric. Mutations have been considered in this project by changing an
investment option to test how it affects the metric.
This method has been applied in this project by taking the principles of the algorithm
above. In principle GA applies a heuristic algorithm, meaning it allows the user to
find the best solution through the search of a large space [Eiselt and Sandblom, 2012,
p. 175]. Heuristic methods allow for an optimum to be found, whereas exact methods
of search may just find a feasible region which is not optimal. An exact method which
was considered in this project was the Branch and Bound Method, where linear pro-
gramming is solved in a relaxed environment, honing in on a feasible solution given
the risk metric [Eiselt and Sandblom, 2012, p. 158]. This logic has been applied and
adapted in the searching the investment space. Suboptimal regions, with respect to
the risk metric, are rejected as the space is searched. In order to fully test the space
these suboptimal areas are only again considered in mutations in the GA.
The fact that an optimal investment region is being sought and not a strategy is
due to the method used. Since this project has restricted the space in such a way,
any single investment strategy would be illogical and unsound. Therefore the focus
is on the region of investment strategies that the optimization method hones in on.
Since the same method to search the space is applied to both models of the econ-
omy, the robustness of the region honed in on will be examined. It has also been
observed that numerical solutions attained by GA are unlikely to be exact predictions
[Forrest, 1993]. It is more common that GA provides insight into the conditions of the data, which is how the method is being applied in this project when comparing models.
The actual logic for applying this method in this project is as follows. To begin with
the time for investments was split into two blocks of 50 years. The six combinations of
strategies, at each time, were run through the Asset-Liability Model. This produced
the median and quantiles of the fund's life to use as the risk metric. The best area was
honed in on with the rest being rejected as suboptimal. From there the algorithm
then moved to the next blocks of time, 33 year blocks. The process was repeated
using information gathered from the previous blocks. Again, after an initial search
of the space, the crossover was performed by using combinations to hone in further.
Mutations were added to test a wider region and areas that had been rejected previ-
ously. This process continued through time blocks until 10 year blocks were reached.
The process of initial search based upon the results of the previous blocks was used throughout, using crossover and mutations to fully search the space. This process of
searching allowed the use of the principles of GA through time. It provided a com-
prehensive search which honed in on an optimal region of investment given the risk
metric.
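The search just described can be sketched as a basic GA over decade-block strategies, with one-point crossover and mutation reintroducing previously rejected options. The population size, mutation rate and the toy fitness function below are illustrative assumptions; in the project, fitness would be the risk metric produced by running a strategy through the ALM.

```python
import random

# the six allowed allocations: 100% holdings and 50% splits
ALLOCS = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
          (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

def ga_search(fitness, n_blocks=10, pop_size=30, n_gens=40,
              p_mut=0.1, seed=0):
    """Basic GA over strategies of n_blocks decade allocations.

    A strategy is a tuple of indices into ALLOCS; fitness is any
    callable to maximise (the ALM risk metric in the dissertation).
    """
    rng = random.Random(seed)
    pop = [tuple(rng.randrange(len(ALLOCS)) for _ in range(n_blocks))
           for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]         # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_blocks)  # one-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(n_blocks):         # mutation revisits rejected options
                if rng.random() < p_mut:
                    child[i] = rng.randrange(len(ALLOCS))
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

# toy fitness: reward matching a hypothetical "sensible" target strategy
target = (0, 0, 0, 0, 2, 2, 0, 0, 0, 0)   # equity, cash near NRA, equity
best = ga_search(lambda s: sum(x == t for x, t in zip(s, target)))
```

Because the fitter half of the population is carried forward unchanged, the best strategy found never regresses between generations, mirroring how suboptimal regions were rejected while mutations kept them reachable.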
Method Two: Dynamic Programming
This section aims to introduce the concept of DP (Dynamic Programming) along with
the principles and process behind it. It shall also discuss how it has been applied in
this project.
In particular, this project looks at Bellman’s principle of optimality and the equa-
tions connected to this [Bellman and Kalaba, 1965]. DP can be used to find optimal
solutions to sequential decision problems. A decision will be made upon current infor-
mation with respect to an objective. This is relatable to this project since a decision
must be made on an investment strategy for a decade which will affect the results of
the risk metric.
The framework for DP follows a straightforward process if there is a finite time horizon, which there is in this case. Across a time-dependent space, at each time step
or state, a decision must be made. The outcome of the decision made in this state
produces new information which is then carried forward to the next state. Each state
depends on the previous state and the decision that was made. This can be expressed
as a transition probability, p_{ss'} = P(S_{t+1} = s' | S_t = s, a_t = a), where S is the state and a is the decision that is made.
In the book "Discrete Dynamic Programming" [Aris, 1964], the terms used in DP are defined. They are given over the page in the context of this project.
• State: The time steps across the lifetime of the investment. Here there are 10 states.
• Decision: The variable in the model. Here it is the choice of investment strategy at each time step.
• Transformation: The decision at each state produces an output. This is seen in this project when it is run through the ALM to gain the median and quantiles of the fund given the investment strategy.
• Constraints: The constraints that are put on both the search space and the decision variable. These include the 50% split between assets on the decision variable and the decade time steps on the state.
• Objective Function: In this project this is represented by the risk metric which determines optimality.
• Parameters: These are fixed constants. These may include the fixed Normal Retirement Age or fixed salary throughout.
This list gives an indication of the information needed for the process of DP and how
it is relatable to this project.
This project uses a form of DP introduced by Bellman in 1957. As the process moves
through states it is important to consider the final optimization and not focus on
achieving the best outcome at each time step. The Bellman Equations break the
problem into a sum of smaller sub-problems with the aim of optimizing the whole process. It uses Markovian properties, meaning that only the current state is important
irrespective of past states [Judd, 1998, p. 409]. It is achieved through the Principle
of Optimality which states:
”An optimal policy has the property that whatever the initial state and initial
decision are, the remaining decisions must constitute an optimal policy with regard
to the state resulting from the first decision.”
This quote comes from ”Dynamic Programming and Modern Control Theory” [Bell-
man and Kalaba, 1965]. The application of this quote in the Bellman Equations is
that immediate optimality is combined with expected future optimality. The equa-
tions work recursively through time with a decision made at each state linked to the
next decision through these equations. The value function of the Bellman equations
has the form V(S_t) = max_{a_t ∈ F(S_t)} { f(S_t, a_t) + βV(S_{t+1}) }. This equation is the core of DP and the principle behind it is applied in this project. In the equation, f(S_t, a_t) is the function which indicates optimality in a state, with β a weight on the value of the next state. F(S_t) represents the set of all possible decisions in the state. The equation can also include the expectation of V(S_{t+1}) to account for a stochastic decision process [James, 2011].
While considering DP, it is possible that issues known as "the Curse of Dimensionality" can arise. These issues come from the fact that there is a large number of choices
available at one point. This makes it too computationally expensive to explore every
future state. In order to correctly apply the Bellman Equations, all future states
must be considered. Due to the large dimensions this project is working in, this is
not possible. Even with the restricted space that is being searched, there are still
over 60 million combinations. If this search was relaxed to allow for assets to have
investments of 33% instead of 50% there would be over 10 billion combinations. This
exponential growth as the dimensions increase is the curse of dimensionality. [James,
2011]
The solution method to this equation is also recursive, given that the equation is
recursive. In this project there is a finite time horizon which means it is possible to
use "Backward Dynamic Programming", starting from the last state. This requires that the value function is found at each state as backwards induction is completed. To apply this solution method, Policy Iteration techniques have been used. This requires that, from an initial starting point, DP is carried out, reaching a feasible optimum.
At each stage, working backwards through time, a decision is made with respect to
the risk metric. This will search the space from the initial starting point. Then a
different starting point is considered and again DP reaches a feasible optimum and
so on. It can hone in on an optimum by using information gathered from previous
starting points. This would continue until an optimum is found. [James, 2011]
The principles of DP have been applied in this project. In the search of the investment
period while changing models, it is used as a secondary search method for complete-
ness. It is applied again when searching the effect of splitting the space. It uses the
risk metric as the objective function to search through time as the state. The invest-
ment strategy is being used as the decision process. The methodology used to find a
solution is policy iteration. A number of strategies are selected as different starting
points for the search. The optimization process works backwards through time to find
the optimal solution from the given starting point. This was done until convergence, and the optimal strategies were compared to select the best option. This searched the space from various starting strategies, including extreme and sensible strategies
to perform a thorough search. This method of searching is known as Approximate
Dynamic Programming since it allows for sub-optimal, feasible regions to be found
using the concepts of DP. It makes it possible to work around the issues which arise
with the curse of dimensionality.
Changing the Model
This chapter will discuss the results obtained from the optimization under the two
models of the economy as defined previously. The differences and similarities in the
characteristics of the investment region found under each model shall be examined.
The key observations are summarised at the end of the chapter.
Results from Searching the Space
This section will discuss the optimal region of investment under each model separately,
along with the key observations. The exact method will be mentioned; however, a more
detailed account of the process can be found in Appendix 4, along with tables showing
the output of the risk metric.
As defined in the criteria for this project, the investment period as a whole is being
considered. This will allow for an examination of the optimal strategy from the start
of the investment period right through until death. This is used as an alternative to
considering an optimal investment for accumulation which would be separate from the
de-cumulation period. The parameters and constraints are kept constant throughout,
when searching the space, and were defined in the Asset-Liability Model chapter. The
key parameters and constraints for the search are that the young person starting work
is aged 20 with an upper limit of life at 120. This gives the investment period of 100
years. The salary is locked at £20,000 which only accounts for inflation through the
RPI. The drawdown is fixed at a constant income of 60% of final salary. The reasoning
behind these values can also be found in the chapter on the Asset-Liability Model.
For consistency throughout, the Genetic Algorithm (GA) method shall be
used as an initial search method under both models of the economy. The Dynamic
Programming (DP) method shall then be applied for a final search of the space, from
a number of starting points, for completeness. This provides consistent parameters,
constraints and optimization methodology between the models. When discussing the
results the 5%, 10%, 25% and median shall be in the form [5%, 10%, 25%, median]
throughout for consistency.
Before the results and key observations are given, a brief overview of the exact application of each method shall be given.
The main process used to hone in on an optimal region for the full area is the GA
method. This uses the principles discussed in the section on the Genetic Algorithm.
As discussed in that chapter, the investment period was searched by moving through
blocks of time. Starting with two large blocks of 50 years the investment period was
searched, providing general results and observations of the attributes of the space.
Using the crossover process this allowed for unsuitable options to be discarded. These
discarded options would only be considered in later blocks of time as mutations to
search the space. This continued to hone in on a region by reducing the block sizes
until 10 year blocks were reached. Using observations from the previous time blocks
meant that the optimal region was found without a computationally expensive search.
The crossover technique was used to test if different combinations were optimal. By
adding in mutations of previously rejected investment options and combinations, the
search of the space was extended to test areas that had been rejected. Using both
these techniques under the GA method allowed for a thorough search of the investment period. This method was simple to run but honed in on a suitably optimal region.
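The crossover and mutation loop described above can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the asset classes, the ten-decade block structure, and the fitness function passed in are all assumptions.

```python
import random

ASSETS = ["equity", "bonds", "cash"]  # assumed investment options

def random_strategy(n_blocks=10):
    """One candidate: an allocation held constant in each 10-year block."""
    return [random.choice(ASSETS) for _ in range(n_blocks)]

def crossover(a, b):
    """Single-point crossover: splice the early blocks of one parent
    onto the late blocks of the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(strategy, rejected_options, rate=0.1):
    """With small probability, reintroduce previously rejected options
    so that discarded areas of the space are re-tested."""
    return [random.choice(rejected_options) if random.random() < rate else block
            for block in strategy]

def evolve(population, fitness, rejected_options, generations=50):
    """Keep the fitter half each generation, refill by crossover, then mutate."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(len(population) - len(parents))]
        population = parents + [mutate(c, rejected_options) for c in children]
    return max(population, key=fitness)
```

In the dissertation's setting the fitness function would be a full Asset-Liability Model run scored by the risk metric; any cheap surrogate with the same signature can be used to exercise the loop.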
The second method used was the DP method discussed in the section on Dynamic Programming. While the GA method has been observed to provide a feasible optimal region, it may not provide the global optimum. Therefore, for completeness,
it was decided that a second method should be applied to search the space. In order
to apply the DP method, initial starting strategies had to be decided upon. It was
viewed as prudent to start from a number of different extreme and sensible strategies.
The starting strategies decided upon were 100% holdings and 50% splits across the asset classes, which provided six starting strategies, plus another two sensible strategies. The
sensible strategies were the optimal found under the GA method and a basic strategy
which would have 100% in equity when the fund was at its smallest, moving to 100%
in cash when the fund was largest. The DP backwards iteration solution method was
applied to each of these strategies. If the result was deemed acceptable, it would be run again
from the strategy given from the first iteration. This allowed for an extensive search
of the space, on top of what had been achieved under the GA method.
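A minimal sketch of this backwards iteration follows; `evaluate` is a stand-in for a full run of the Asset-Liability Model scored by the risk metric, and the decade-block representation is an assumption carried over from the GA description.

```python
def backward_iteration(strategy, options, evaluate):
    """One DP-style sweep: from the last block back to the first, replace each
    block with whichever option improves the objective, holding the rest fixed."""
    best = list(strategy)
    best_score = evaluate(best)
    for t in reversed(range(len(best))):  # backwards through time blocks
        for opt in options:
            trial = best[:t] + [opt] + best[t + 1:]
            score = evaluate(trial)
            if score > best_score:
                best, best_score = trial, score
    return best, best_score

def search_from_starts(starts, options, evaluate, sweeps=3):
    """Repeat the sweep from several starting strategies, re-running while
    it keeps producing an improvement, as described in the text."""
    results = []
    for s in starts:
        current, score = s, evaluate(s)
        for _ in range(sweeps):
            current, new_score = backward_iteration(current, options, evaluate)
            if new_score <= score:
                break
            score = new_score
        results.append((current, score))
    return max(results, key=lambda r: r[1])
```

With a separable objective a single sweep converges; with the real ALM objective the repeated sweeps from several extreme starting points give the extended search the text describes.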
In order to test the optimality of an investment strategy, the strategy was run through
the Asset-Liability Model. This produced quantiles of the fund’s ruin with respect to
the investment strategy and underlying model of the economy. All other variables of
the model were kept constant as defined above. Optimality was determined by the
risk metric which stated that the median should equal 96 with the lower quantiles of
the fund as high as possible. This process was the same under both models of the
economy. This allowed for consistency when checking the effect changing the model
has on the optimal region, and how robust this region is.
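In code, this risk metric reduces to a quantile computation over the simulated ruin ages; a sketch assuming numpy, with survival to the age limit recorded as a ruin age of 120:

```python
import numpy as np

TARGET_MEDIAN = 96  # the median ruin age required by the risk metric

def risk_metric(ruin_ages):
    """The [5%, 10%, 25%, median] vector reported throughout the results."""
    return [float(np.percentile(ruin_ages, q)) for q in (5, 10, 25, 50)]

def meets_criteria(ruin_ages):
    """A strategy qualifies when the median hits the target; among qualifying
    strategies, the one with the highest lower quantiles is preferred."""
    q5, q10, q25, median = risk_metric(ruin_ages)
    return median == TARGET_MEDIAN, (q5, q10, q25)
```

The tuple of lower quantiles is returned so that qualifying strategies can be ranked, matching the "lower quantiles as high as possible" criterion.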
Results under the Wilkie Model
This section shall provide the key insights, observations and results under the Wilkie
Model. A full analysis of the process to arrive at these results can be found in
Appendix 4.
Optimal Region under Wilkie Model
This subsection will introduce the optimal region found under the Wilkie Model. A
brief description of the region is given. Using the methods of optimizations described,
an optimal region was found under the Wilkie Model. This optimal region has the
form:
• The first four decades contain 100% Equity.
• The fifth decade contains a 50% split between Equity and Cash.
• The sixth decade contains a 50% split between Equity and Cash or Equity and Bonds.
• The seventh decade contains a combination of Cash and Bonds.
• The last three decades contain 100% Equity or Bonds.
The optimal region allows for flexibility in the sixth and seventh decades. Any investment strategy of this form provided a suitably optimal strategy. There was also flexibility in the last three decades; as discussed later in this section, these decades have little effect on the median. The region was found using the GA method. An exact strategy of this form was used when applying the DP method to perform further searches
of the space. The results from using DP to further search the space showed that,
while there were alternative strategies that met the median, they were suboptimal
due to the fact that the quantiles were lower. The quantiles of 5%, 10%, 25% and
50% produced by the above strategy are [75, 78, 84, 96].
Key observations under Wilkie Model
This subsection covers the key observations and characteristics found while searching
the investment period. There are a number of characteristics discussed in this section. The main discussion of these characteristics can be found in the upcoming section, Discussion of the Results, which compares and contrasts results found under each model.
The first characteristic noted was that, under a different criterion and risk metric, investing 100% in equity for the entire investment period could be considered optimal. This provided 5%, 10%, 25% and 50% quantiles of [76, 80, 89, 120] over the 5000 simulations of the fund. This was considered to be an upper limit on what the quantiles could achieve in this model; the reason is given in the discussion section which follows. This median shows that at least 50% of the funds are still active at age 120. This would imply these quantiles are as large as can be obtained. It is clear that, if the aim were to grow the fund as large as possible across the whole investment period, this would be optimal.
It reinforced the view that quantile matching would not be logical or possible when
only considering altering the investment strategy. Upside risk of higher returns would
be reduced through poor investments which is suboptimal. As discussed in the Op-
timization chapter, the lower quantiles of death for a life alive at NRA were found
to be [75, 81, 89, 96]. By comparing the quantiles, it is observed that the quantiles
produced under this extreme strategy failed to meet the quantiles produced by the
death of a life. Since this extreme strategy produces quantiles which are considered
to be an upper bound, it implies that the quantiles of the fund will never meet those
of death. This was important to note during the search, as an assumption was made
that it was highly unlikely to match the quantiles, while matching the median. This
assumption was confirmed throughout the search when the upper limit on quantiles
was never reached in tandem with matching the medians.
The next characteristic observed from the search was the suboptimal effect of not
holding equity for the first 4 decades. Working through the space using crossover and
mutations highlighted this characteristic. Throughout the search of the investment
period, it was quickly noted that it was optimal to hold equity over anything else
in the early stages of the fund. A number of mutations were considered when using
the GA method to search, but the majority were rejected since they caused the median
to fall short of 96. Another result of holding anything other than 100% equity in
the early stages was that the lower quantiles suffered. An example of this would be
holding 100% cash for the first decade while keeping the rest of the strategy as defined
above. This reduced the quantiles being considered to [75, 77, 82, 92]. There were
some investment options that could be held in the early years; however, doing so reduced the flexibility of the optimal regions. This is highlighted by holding a 50%
split between cash and equity in the first decade. In order for the median to be met,
more equity must be held at later times to compensate. By holding 100% equity in the
sixth decade and 50% between cash and equity in the seventh decade to compensate,
the quantile values [75, 77, 83, 96] were produced. This shows that while the median
is met, the lower quantiles are impacted. The region of strategies which meet the median under this criterion is also significantly reduced. This highlights that, while it is
possible to find an optimal strategy without equity in the early stages, it impacts neg-
atively on the optimal region. This characteristic was observed throughout the search.
A final characteristic observed was that the investment strategy in the later time periods has little impact on the median and quantiles being considered. It was observed
that both equity and bonds provided similar results for the median and lower quantiles when considered for the last two decades' investments. This is due to the fact that by
age 100, or time 80, a large proportion of the funds are depleted. As observed un-
der the sensible investment strategy in Figure 4, approximately 70% of the funds are
depleted. This means that any investment options considered in these final time peri-
ods are only affecting a small number of simulations. It is likely that the median and
lower quantiles will be mostly dependent on funds which have run out by this time.
It is also due to the fact that it would be expected that funds are relatively small
by this stage. There would have been 25 years worth of drawdown income taken out
with only the investment returns accounting for accumulation. When the fund is at
its largest, around retirement, it would be expected that suboptimal strategies have more impact at that point than at the end. Therefore, while it is important to only consider equity in the early years, when the fund is smallest and growing, the investment options in the last years carry much less weight with respect to the risks
being considered. Observing this characteristic of the space allowed for these
areas to be locked and only considered through mutations in the GA search method.
While searching the space with DP, after a feasible optimum region had been found
using GA, the characteristics of the early and late years of the fund were further
observed. This reiterated previous observations that it was suboptimal not to hold equity in the early years and that the final couple of decades had little effect on the median. Using
both methods provided a result which was considered to be the region of investments which covers a global optimum.
Results under the Geometric Brownian Motion
This section will discuss the optimal region of investment under the Geometric Brow-
nian Motion (GBM). It shall also examine the characteristics of the search space
observed throughout the search. This section will not look to examine or discuss
the differences and similarities between the models in detail. The majority of that
discussion is held in the next section.
Optimal Region under GBM
This subsection will look at and give a brief discussion of the optimal regions of
investments under the GBM. Under the GBM the optimal region of investment was
found not to be unique. There were two regions which each contained strategies of
similar optimality. One of the regions was very similar to that of the region found
under the Wilkie Model. The first of the optimal regions has the form:
• The first five decades contain 100% Equity.
• The sixth and seventh decades contain a combination of Bonds and Cash.
• The last three decades contain 100% Equity or Bonds.
The second optimal region has the form:
• The first four decades contain 100% Equity.
• The fifth decade contains a 50% split between Equity and Cash.
• The sixth and seventh decades must contain some Equity, which can be in combination with Bonds or Cash.
• The eighth decade is flexible in that it can contain combinations or pure investments across Equity, Bonds and Cash.
• The last two decades contain 100% Equity or Bonds.
It was noticed that, while these regions are different, they exhibit some similar char-
acteristics. Both of these show that it is optimal to hold equity in the early years
of the fund, with differing degrees of flexibility in the middle years. They also both
show that the final years' investments have similar characteristics. The first of the optimal regions given has a narrower region of flexibility, but provides a slightly improved optimum with respect to the quantiles. Both of these regions provide suitably
optimal strategies under the GBM. They were discovered using the GA approach.
The quantiles produced are approximately [72, 74, 81, 96] for both regions.
Key observations under GBM
This subsection will cover the key characteristics of the investment space observed un-
der the GBM. There are a number of characteristics considered, however, they shall
not be compared to those found under the Wilkie Model. This comparison will take
place in the next section, discussion of results.
The first characteristic examined, as before, was the effect of investing 100% in equity.
This was examined to gauge a possible upper limit on the quantiles. As with
the Wilkie Model it was observed that the quantiles for 100% equity investment fell
short of the quantiles of death. Under the GBM the quantiles were [72, 74, 83, 120].
These are not only less than the quantiles of death, but also fall short of the quantiles
observed under the Wilkie Model. This was used to give a possible upper limit of
quantiles which could be obtained when searching the space. By comparing these
quantiles to those produced in the optimal region it is observed that the quantiles are
easier to obtain at lower levels. This is due to the constructional differences between
the Wilkie Model and the GBM. Since the GBM is driven by independent normal
shocks [Glasserman, 2003, p.79], the inter-quantile range is narrower which allows for
the extreme quantiles to be more easily obtained, as is evident here.
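A correlated GBM of the kind described can be simulated by applying a Cholesky factor of the correlation matrix to independent standard normals, as in Glasserman [2003]. A sketch follows; the drifts, volatilities, and correlations used with it should be the calibrated values, which are not reproduced here.

```python
import numpy as np

def simulate_gbm_paths(mu, sigma, corr, years, n_sims, s0=100.0, seed=0):
    """Annual total-return indices for correlated assets under GBM:
    S_{t+1} = S_t * exp(mu - sigma^2/2 + sigma * Z_t), with the shock
    vector Z_t made correlated via the Cholesky factor of corr."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    L = np.linalg.cholesky(np.asarray(corr))
    paths = np.full((n_sims, years + 1, len(mu)), s0)
    for t in range(years):
        z = rng.standard_normal((n_sims, len(mu))) @ L.T  # correlated shocks
        paths[:, t + 1] = paths[:, t] * np.exp(mu - 0.5 * sigma**2 + sigma * z)
    return paths
```

Because every annual shock is drawn independently, the simulated log-returns have no serial dependence, which is the structural point being made in this subsection.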
The next characteristic to be observed under the GBM was the impact on the median
and quantiles with respect to the investment in early years of the fund. This is similar
to the characteristic observed under the Wilkie Model. By considering alternative in-
vestments through the mutations in the GA search method, it was discovered that not
holding equity in the first four decades was suboptimal and had a negative impact on the
quantiles. This observation was reiterated in the DP search, where a number of singly optimal investment strategies were discovered. This meant there was no flexibility
in the strategy to form a region of optimum strategies. It was generally observed
that if a combination involving equity was held at some point in the first three decades, a
strict strategy had to be followed throughout to meet the criteria of the risk metric.
In this context, a strict strategy means that an exact process has to be followed. In this case it was to hold almost 100% equity throughout to account for
this split in the early years. These strategies were noted, but discounted since there
was no improvement over the optimal regions found. They were also discounted due
to the fact that the search space was rather naively constructed, with the restriction of investment options and spread of investments. It seemed illogical to rely on a single,
strict strategy where a region of strategies provided similar, if not better, optimality,
with more flexibility and options to investigate the effects. Observing that it was
optimal to only hold equity for the first section of the investment period allowed for
a more focused search on the middle period, to hone in on the optimal region. The
early time periods were only considered as mutations.
The final characteristic observed under the GBM is the fact that the final time periods
have negligible impact on the median and lower quantiles. This will be due to the
same reason as discussed under the Wilkie Model. It would still be expected that as
the strategy moves through the investment period, it is more likely that the fund is
depleted. This means that the final years will have less effect on the quantiles over
the 5000 simulations.
Discussion of the Results
This section will look to compare and contrast the key characteristics along with
differences and similarities observed under each model. The optimal regions shall also
be compared.
Comparing the Quantiles and Processes of the models
This subsection will look to compare and contrast some of the general differences
between the models. This shall use the extreme investment strategies of 100% equity
that were considered under each model. Using this extreme strategy allows for the
quantiles to be examined along with the reason behind the differences and similarities.
One of the first points to consider is the differences and similarities between the models,
observed under an extreme strategy. It was observed under the Wilkie Model that
with a different risk metric, having 100% invested in equity for the entire period
could be considered optimal. This would provide the highest probability of having
an income at age 120, on average across the 5000 simulations. This was also the
case under the Geometric Brownian Motion (GBM). The graphs that this investment
strategy produced are as follows.
Figure 7: Histograms of the two funds with 100% invested in equity for the entire time period
under each model, as before, time represents the age of the pensioner
It is clear from figure 7 that both models have the same conclusion in this respect.
Both show that across the 5000 simulations there is approximately a 55% chance that
the fund will still have a positive value when the person is age 120. This is clearly
optimal if the aim were to maximise the fund as a whole; however, with respect to the criteria for this project it is not. While both funds have a similar
probability of being positive at age 120, the quantiles differ greatly.
The 5%, 10%, 25% and median under each model highlight a noticeable difference.
Under the Wilkie Model the quantiles are [76, 80, 89, 120] whereas under the GBM
they are [72, 74, 83, 120]. It is clear to see that the lower quantiles under the GBM
are consistently smaller than those in the Wilkie Model. This brings to
light the underlying differences in the models.
Another point observed from the quantiles produced under this extreme investment
strategy concerns the values themselves. These high quantiles were considered to be an upper limit of what was achievable. Due to the skew to the right, and the large amount of data held in the final bin, quantiles under other strategies could not surpass these. This view was reinforced while searching the space, when no strategies provided better quantiles. This meant that an upper limit could
be applied to what was achievable for the quantiles in the search. Since optimality
was being indicated in part by the quantiles, it meant that an indication of optimality
could be determined by how close the quantiles were to this upper limit. As seen from
the quantiles produced by the optimal region under the GBM it is possible to match
these. This would imply that this is the best optimal region under this search criterion.
Another observation of the quantiles was the consistency in range of quantiles between
the strategies under each model. The range between 5% and 25% under the Wilkie
Model was 13 years while the range under the GBM was 11 years under the extreme
equity strategy. Under both optimal regions the range was 9 years. Given that the GBM provides a narrower range under the extreme strategy, achieving the same range under the optimal strategy shows the consistency of the optimal region between the different models.
As discussed in their respective chapters, the different models are driven by different
processes. The difference in processes is also evident in the graphs. Under the GBM
the years immediately after 65 are much more at risk of running out of money than
under the Wilkie Model. This is clearly the cause of the difference in quantiles but to
examine the cause of this, the accumulation period and driving period of each model
will have to be discussed.
As the models move through time the returns are driven by different processes. The
Wilkie Model, being driven by the Auto-Regressive (AR) time series depends on
the previous time step [Sahin et al., 2008] whereas the GBM is driven by independent
normal shocks [Glasserman, 2003, p.79]. This fundamental difference will also account
for some of the higher probability that a fund runs out so soon after retirement under
the GBM. Since they both have similar long term mean returns and volatilities the
AR process provides a smoother movement than the GBM. This means that if the
Wilkie Model is producing an upward trend it is less likely, than the GBM, to suddenly
jump down. The trends at specific points throughout the investment period can be
observed in figures 2 and 3 for Wilkie and GBM respectively. A snapshot of how the
equity total return index is performing is given for when the fund is at its largest.
Figure 8: Snapshot of TRIS between years 35 and 55 of the investment period (age 55 to 75)
The retirement age, before drawdown occurs, is at time 45 (age 65). It is clear to
see in this snapshot that, while the GBM is more volatile in the movements, it has
a general upwards trend. Work by Milevsky [Milevsky and Abaimova, 2006] shows
that the sequence of returns is as important as the returns themselves. If a negative
return is followed by a positive return, while drawdown is being taken, it is much
more damaging to the fund than the other way around. This snapshot of the smaller
period highlights some of the key differences of the models that have been discussed.
It is observed that the GBM is more likely to change direction due to the independent
normal shocks whereas the Wilkie Model has more prolonged movements. By the time
of this snapshot the long term means have been achieved for both models, as indicated
by the dashed yellow line. This is due to the construction and calibration of the GBM.
The work by Milevsky [Milevsky and Abaimova, 2006] looks at the sequencing of
returns and the effect on drawdown pensions. This work is based upon a drawdown
strategy from a fund at its largest time. It looks at how the order of non-deterministic returns affects the ruin rate of the fund. The difference in the early years of retirement between the funds can come down to this sequencing and the difference in structure of
the models. As discussed, the Wilkie Model has smoother movements from the AR
process whereas the GBM is more likely to jump in different directions. The smooth
effect of the AR process impacts less on ruin rate than the jumps. As observed by
Milevsky, a smooth average return of 8% provides a better ruin time than a process
that jumps between larger values, while still obtaining the 8%, if the first jump is
negative. This must be considered in the context of drawdown, when a constant amount
is being removed annually. This information would imply that the underlying driving
process of the model does have an effect on the outcome of an investment region.
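Milevsky's sequencing effect is straightforward to demonstrate: the same annual returns applied in a different order to a fund paying a fixed withdrawal produce very different ruin times. The fund size, withdrawal, and returns below are arbitrary illustrative numbers, not the project's values.

```python
def ruin_year(returns, fund=100.0, withdrawal=12.0):
    """Year in which a fund paying a fixed annual withdrawal is exhausted,
    or None if it survives the whole sequence of returns."""
    for year, r in enumerate(returns, start=1):
        fund = fund * (1 + r) - withdrawal
        if fund <= 0:
            return year
    return None

# The same twenty returns in opposite orders: a loss first, under drawdown,
# ruins the fund far earlier than the same loss last.
loss_first = [-0.20] + [0.10] * 19
loss_last = [0.10] * 19 + [-0.20]
```

With these numbers the loss-first ordering exhausts the fund roughly a decade earlier than the loss-last ordering, even though both sequences contain exactly the same returns.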
Data shown in Appendix 2 also highlights a difference in rates of return and
volatility in the early years of the fund, under the different models. Using the equity
total return index as an example, it can be seen that in the first decade of returns the
Wilkie Model has an average return of 6.75% with a volatility of 13.82%, across the
5000 simulations. In order to produce the simple model of the GBM the average of all
10 decades of return and volatility has been used in calibration of the equity index.
Therefore, for all decades under the GBM the mean return is 7.39% with a volatility
of 15.88%. After the first decade in the Wilkie Model it comes close to a constant
long term average with mean return 7.46% and volatility of 16.11%. This discrepancy
between the models may have some input into the higher rate of funds running out of
money under the GBM. As Milevsky discussed, the sequence of returns when the fund
is at its largest, in the fifth decade, can have devastating impacts on the life of the
fund and its ruin age. This is the reason why it is most common to hold less volatile
assets when a pension pot is reaching maturity. This discrepancy in the first decade
of the models will have a slight impact on the fund at its largest; the sequence of returns will also have an impact on ruin, along with the process which drives the models.
Comparing the Optimal region of Investment
The next area to compare is the optimal regions found under each model. It was seen
that the optimal region of investment under the Wilkie Model was unique whereas
under the GBM it was not. It is important to note that one of the regions found under the GBM was similar to the one found under the Wilkie Model. The only difference
being that under GBM the eighth period allowed for more flexibility. However, it had
been noted in both models that investments towards the end of the period had little
influence on the optimality. As when looking at the quantiles for 100% investment in
equity, the quantiles for the optimal regions were different as well. This can be ex-
plained by the process of the models. From observation, it is clear that the quantiles
under the GBM will never reach as high a value as under the Wilkie Model.
In order to compare the models, both optimal regions under the GBM will need to be
considered. To begin with a basic comparison will be carried out. Then the quantiles
produced from the strategies, given in the results for each model, shall be used.
To quickly compare the similarities of the actual strategies, both had the first four
decades in equity. For the fifth decade the Wilkie Model used a split of equity and
cash whereas the GBM remained purely invested in equity. Again, for the sixth decade the Wilkie Model used a split involving equity whereas the GBM invested 100% in cash. For the
seventh decade the Wilkie Model invested in a split between bonds and cash whereas
the GBM invested in 100% bonds. From the eighth decade onwards both models
invested purely in any asset, commonly equity or bonds. Just from this basic analysis
it is clear that under both models it is optimal to hold pure equity for the first por-
tion of the investment period. When searching the space under both models it was
found that holding anything else impacted negatively on the quantiles. This was more
pronounced under the GBM. It is also clear that towards the end of the investment
period, the investment options have little effect on either model since the impact on
the optimality was negligible. They both hone in on a similar area of the search space.
The quantiles under Wilkie and GBM are [75, 78, 84, 96] and [72, 74, 81, 96], for the
models respectively. This provides a similar range between the respective quantiles
under each model as observed when looking at the values with 100% in equity. It
is observed that the quantiles under the GBM are approximately 3 or 4 years less
than the quantiles under the Wilkie Model. The reason behind this, as discussed
previously, is the driving process of the models. The quantiles of the AR process push
up the values. Under the second optimum area of the GBM the quantiles are similar
with values [71, 74, 80, 96] which are, again, lower by the same margin.
Graphical comparison of the Optimal region
For further comparison of the strategies under the model, this subsection shall consider
the graphs produced. This shall be an extension from the previous section which
focused on the form of the regions and the quantiles. The graphs produced by the
ALM under an optimal strategy from each region are given:
Figure 9: Histograms of the two funds with the optimum investment strategies under each model
Figure 9 shows a more emphasised version of the plots in figure 7. It must be noted
that, even while hitting the desired median, there is still over a 25% chance that the
fund will provide income at age 120. The characteristics of the histograms under the
different models differ slightly. Under the Wilkie Model there is a gradual increase in
funds running out of money between age 70 and 78 where it reaches the most likely
point, excluding age 120, where it levels off. There is a much sharper increase under
the GBM with a jump about age 67 and a rapid climb to about age 72 where it levels
off. Both of the histograms show the tails towards the older ages. After an initial
burst of funds running out of money, there seems to be a decrease in probability
towards the older ages. This would imply that past a certain age there is a high
probability of self-financing. This means the fund will be able to last until age 120 or even sustain itself indefinitely through investment returns. Doing some crude tests with a
deterministic model including the inflation index it was found that only an interest
rate of 5.56% throughout the entire investment period would make the fund last until
age 120. However, this comes back to the work of Milevsky in that the sequence of
returns impacts the fund.
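The crude deterministic test can be reproduced in outline by bisecting on a flat annual return until the fund just survives to age 120. The contribution and drawdown figures below are simplified stand-ins (no inflation index or salary model), so the break-even rate it finds, roughly 4%, differs from the 5.56% quoted above.

```python
def final_fund(rate, contrib=2000.0, drawdown=12000.0, acc_years=45, draw_years=55):
    """Deterministic fund value: flat contributions for 45 years (age 20-65),
    then a fixed drawdown for 55 years (age 65-120), at a constant return.
    Returns a non-positive value as soon as the fund is exhausted."""
    fund = 0.0
    for _ in range(acc_years):
        fund = fund * (1 + rate) + contrib
    for _ in range(draw_years):
        fund = fund * (1 + rate) - drawdown
        if fund <= 0:
            return fund
    return fund

def break_even_rate(lo=0.0, hi=0.15, tol=1e-7):
    """Bisect for the flat rate at which the fund just lasts until age 120."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if final_fund(mid) > 0:
            hi = mid
        else:
            lo = mid
    return hi
```

Bisection is valid here because survival is monotone in the rate: if the fund survives every year at some rate, it is at least as large every year at any higher rate.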
The shapes of the two histograms also provide some observations about the processes.
It is clear that the AR process driving the Wilkie Model provides smoother times
for the funds running out. From observation it seems that the GBM is more evenly
spread, with a less pronounced centre. It also means that the GBM has slightly larger
probabilities in the tails. The large right-hand tails under both models provide further evidence for the earlier point about the distribution of this data. It was
earlier theorised that the data could be modelled using a Generalised Extreme Value
distribution.
Conclusion of Comparison
There are a number of points to take away from the comparison of the different op-
timal regions of investment under the models. Firstly, from searching the space, it is
clear that both models provide similar results and analysis in the early and late years
of the strategies. There is consistency between the models in terms of quantiles. This
means that under the less sophisticated GBM, the quantiles will be consistently lower by
similar margins than those under the Wilkie Model. It is clear there could be some
issues when focusing on the middle of the investment strategy, if detailed analysis
was required. However, given the size of the space, both models will lead to a similar
optimal region. Both models do provide the same general shapes to the distribution,
with the Wilkie Model providing a smoother, more elegant shape. They both point towards a distribution with a large right-hand tail.
The work by Milevsky [Milevsky and Abaimova, 2006] comes to the fore in some of
the observations. It is observed that the driving process of the models can affect
the investment strategy. Even though the models are calibrated to produce the same
mean return, volatility and correlation, once they have been run through the ALM the driving process takes effect. The movement of the indices under the different models affects the ruin times of the funds. This can be explained
by considering the sequence of returns on a drawdown pension fund.
In general, it is possible to find an investment region under the simplistic GBM, which
would be similar to that under the Wilkie Model and satisfy the criteria. This region
shall also produce a similar range in quantiles and have the same characteristics.
However, the Wilkie Model does provide a more elegant model of the fund which would provide more detailed analysis and statistical results. Therefore, in order to
hone in on an investment strategy and observe characteristics of the space, the highly
calibrated simplistic model will provide just as good results as the more complex
model.
Splitting the Time
This Chapter looks at the effect of splitting the period into accumulation and de-
cumulation. Instead of considering the time from the start of the working lifetime
until death, the optimal region shall now be searched by freezing one stage while
searching the other and vice versa. This is done using the simplistic Dynamic Programming (DP) method, as used for the second search when considering the investment period as a whole. A more complete and complex methodology for the
examination of the effect will be discussed.
In order to carry out this search, some assumptions have been made. It was decided
that when one stage was frozen, an optimal strategy from the previous search would
be applied in that stage. In order to fully search the space of the stage being considered,
the Dynamic Programming (DP) method shall be applied, using the same
methodology as when it was used as the second search method over the space as a whole.
This means it shall start from a number of different initial strategies. The backwards
iteration process is then applied to search through time. The initial strategies used are the
same as before, with six of the strategies being the 100% investments in each
asset class and the 50% splits, held throughout the entire time. An optimal strategy will
also be used to explore the surrounding space, along with a basic sensible strategy
holding equity while the fund is at its smallest and moving to cash as the fund reaches
its largest. The search spaces will be from 0 to 45 for the accumulation period and from
45 to 100 for the de-cumulation period. These times are with respect to the investment
period, not the age of the worker. As before, the investments will remain constant for
decades at a time, except for the 5-year periods around 45. Using these restrictions and
the same criteria for the worker as before allowed for a basic investigation into the
effect of splitting the area. The Wilkie Model was used as the underlying model of the economy.
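The freeze-and-search procedure described above can be sketched roughly as follows. The six allocation options per period are those defined earlier (100% in a single asset, or a 50% split between two); the `evaluate` function here is a placeholder stand-in for running the ALM and scoring against the risk metric, not the dissertation's actual implementation, and it is assumed for illustration to favour equity early and cash late.

```python
from itertools import combinations

ASSETS = ["equity", "bonds", "cash"]

# The six allowed allocations per period: 100% in one asset, or a 50% split.
OPTIONS = [{a: 1.0} for a in ASSETS] + \
          [{a: 0.5, b: 0.5} for a, b in combinations(ASSETS, 2)]

def evaluate(strategy):
    """Placeholder score standing in for the ALM + risk metric; assumed
    here to reward equity in early periods and cash in late periods."""
    return sum(alloc.get("equity", 0.0) * (len(strategy) - i) +
               alloc.get("cash", 0.0) * i
               for i, alloc in enumerate(strategy))

def backward_search(initial):
    """Improve one decision period at a time, iterating backwards through
    time, while every other period stays fixed (the frozen stage would
    simply be excluded from the periods searched)."""
    strategy = list(initial)
    for period in reversed(range(len(strategy))):
        best = max(OPTIONS, key=lambda opt: evaluate(
            strategy[:period] + [opt] + strategy[period + 1:]))
        strategy[period] = best
    return strategy

# Accumulation stage: four decades plus the 5-year period before time 45,
# starting from an all-cash initial strategy.
print(backward_search([{"cash": 1.0}] * 5))
# → equity in the early periods, cash in the last two
```

Under the assumed scoring, the sketch reproduces the qualitative shape reported in the results: full equity while the fund is small, moving to safer assets as it grows.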
Results and Discussion
The optimal strategy used was the one defined previously under the Wilkie Model.
For the accumulation period this meant the first four decades all in equity, moving to
a 50% split between equity and cash for the last 5 years. The de-cumulation period
started with a 50% split between equity and cash for the first 15 years, before moving
into cash for one decade and then into bonds for the last three.
To begin the search, the accumulation period was frozen while the de-cumulation
period was searched. Starting from the optimal strategy and applying the DP methodology
altered some of the investments but kept the strategy within the optimal region previously
defined. Starting from most of the other initial strategies converged to sub-optimal
strategies, meaning they met the median but the quantiles were too low. Some did
converge to optimal strategies within the previously defined region.
This search indicated that there would be no impact from freezing the accumulation period
to search the de-cumulation period while the risk metric was kept constant.
The next step was to freeze the de-cumulation period and alter the accumulation period.
The same techniques were used with the same initial strategies. As before, the process
converged to the optimal region. One anomaly was found: holding a 50% split of equity
and cash in the first decade could be considered optimal, since there
was no impact on the quantiles. However, as noted in the discussion of results when
changing the model, the resulting investment strategy was strict, consisting of more
equity. This was needed to produce higher returns to cover for the split in the first
period. Therefore, this strategy was rejected, since it reduced the flexibility of the
region without improving the quantiles.
Since there was no difference in splitting the investment region while still applying
the same risk metric, this search reinforced previous observations about the early and late
years of the period. While the fund was at its smallest, it was considered optimal to
hold equity. This allowed for more flexibility in the investment strategy
when the fund was at its largest. It was observed that if something other than equity
was held in the early years, the flexibility of the investment strategy when the fund was
at its largest was reduced, since more equity would generally have to be held. The previous
observation of flexibility in the final investment periods was also reinforced: when
exploring the de-cumulation period, flexibility in the final three periods was again observed.
Effect of the Risk Metric
This search has been performed using the same risk metric as previously defined:
matching the median life of the fund with the median time of death for a life alive
at normal retirement age. This meant that the fund region as a whole still had to be
considered, even though only one half was being altered.
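A minimal sketch of this risk-metric check, with assumed inputs: the median simulated ruin time of the fund is matched against the median time of death, and fund quantiles are then used to compare strategies that pass. The ruin times, tolerance and median time of death below are illustrative only, not output from the dissertation's ALM.

```python
import statistics

def meets_risk_metric(ruin_times, median_death, tolerance=1):
    """Pass when the median simulated ruin time of the fund matches
    the median time of death (within an assumed tolerance in years)."""
    return abs(statistics.median(ruin_times) - median_death) <= tolerance

# Hypothetical ruin times (years from the start of the investment period)
# across simulations, with an assumed median time of death of 68 years.
ruin_times = [60, 64, 67, 68, 69, 71, 75]
print(meets_risk_metric(ruin_times, median_death=68))   # → True
print(statistics.quantiles(ruin_times, n=4))            # quartiles of ruin time
```

Strategies that meet the median condition would then be ranked on their quantiles, reflecting that only the downside risk of the fund running out too early is addressed.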

  • 1. Investigation into the effect on and robustness of the Optimal Region of Investment Strategies for a drawdown pension through the entire investment period under different models of the economy and a splitting of this investment period Andrew Hair H00009767 August 2015 Actuarial Mathematics and Statistics School of Mathematical and Computer Sciences Dissertation submitted as part of the requirements for the award of the degree of MSc in Quantitative Financial Engineering
  • 2.
  • 3. Abstract This dissertation investigates the optimal investment region for a young person en- tering the workforce today. Once this region was found, using the Wilkie Model as a model of the economy, a correlated Geometric Brownian Motion was then used as an alternative model of the economy. This allowed for an investigation into the ro- bustness of the region and any changes in characteristics of the region. A restricted region of possible investment strategies has been searched using a basic Genetic Al- gorithm methodology and a basic Dynamic Programming methodology. A strategy was determined to be optimal using a risk metrics which consisted of a median life and quantiles of the fund. It was found that, while providing a slightly different op- timal region and quantiles, changing the underlying model of the economy honed in on a similar region of the investment period, with the correlated Geometric Brownian Motion having a more flexible region. The area was then divided into accumulation and de-cumlation and these areas were searched briefly. There are a number of possi- ble areas of further research and expansion including a full unrestricted optimization using Stochastic Control to allow for an optimiaztion to be found with relaxed con- straints on parameters. Another area could be full optimization while splitting the area by creating a risk metric for the midpoint, which would be consistent with the risk metric at the end. The motivation behind this project was the introduction of increased flexibility in drawdown pension options for pensioners as of April 2015, as announced in the 2014 UK budget. i
  • 4. Acknowledgements There have been a number of people who have contributed towards this dissertation. Firstly, I would like to thank Professor David Wilkie for providing his model as a basis for the project, along with his model he also provided help and advice throughout. Next, I would thank Dr Timothy Johnson for his guidance, input and assistance as Academic supervisor. His input has been invaluable in the completion of this project along with his ability to clear up a vast number of points, small and large. Lastly, a special thanks to Simon Butlin of Standard Life for his advice and time as Industry supervisor. The time he committed to going through processes and debugging code made it possible for the project to advance in the way it has. I must also thank him for creating the criteria for this project and giving me access to Standard Life for the duration of this project. This access allowed for the project to benefit from the environment Standard Life creates. ii
  • 5. Contents Abstract i Acknowledgements ii Introduction 1 The Wilkie Model 4 Introduction and Background . . . . . . . . . . . . . . . . . . . . . . . . . 4 Total Return Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Mean Return, Volatility and Correlation . . . . . . . . . . . . . . . . . . . 7 Correlated Geometric Brownian Motion 10 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Using this Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Calibration and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 The Asset-Liability Model 14 Creating the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Calibration and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Parameters and Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Optimization Methods 19 Risk Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Methods of Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Method One: Genetic Algorithm . . . . . . . . . . . . . . . . . . . . 22 Method Two: Dynamic Programming . . . . . . . . . . . . . . . . . . 24 Changing the Model 28 Results from Searching the Space . . . . . . . . . . . . . . . . . . . . . . . 28 Results under the Wilkie Model . . . . . . . . . . . . . . . . . . . . . 30 Optimal Region under Wilkie Model . . . . . . . . . . . . . . 30 Key observations under Wilkie Model . . . . . . . . . . . . . . 30
  • 6. Results under the Geometric Brownian Motion . . . . . . . . . . . . . 32 Optimal Region under GBM . . . . . . . . . . . . . . . . . . . 33 Key observations under GBM . . . . . . . . . . . . . . . . . . 33 Discussion of the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 Comparing the Quantiles and Processes of the models . . . . . 35 Comparing the Optimal region of Investment . . . . . . . . . . 38 Graphical comparison of the Optimal region . . . . . . . . . . 39 Conclusion of Comparison . . . . . . . . . . . . . . . . . . . . 41 Splitting the Time 42 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Effect of the Risk Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Further Research 45 Stochastic Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Time Consistent Risk Metric . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Conclusion 48 Appendix 50 Appendix 1 - Total Return Indices . . . . . . . . . . . . . . . . . . . . . . 50 Appendix 2 - Mean Return, volatility and Correlation . . . . . . . . . . . . 51 Appendix 3 - Geometric Brownian Motion . . . . . . . . . . . . . . . . . . 53 Appendix 4 - Process of Searching . . . . . . . . . . . . . . . . . . . . . . . 54 Appendix 5 - MatLab Code . . . . . . . . . . . . . . . . . . . . . . . . . . 59 Code for the GBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 Code for reading in data . . . . . . . . . . . . . . . . . . . . . . . . . 61 Code Script for risk metric . . . . . . . . . . . . . . . . . . . . . . . . 62 Code function for ALM . . . . . . . . . . . . . . . . . . . . . . . . . . 64 Code function for counting ruin of fund . . . . . . . . . . . . . . . . . 66 Code function for quantiles of death and expected life . . . . . . . . . 67 Bibliography and References 68
  • 7. Introduction This project aims to investigate the effect of changing the underlying model of the economy on a pension fund through finding an optimal investment region for each model. There will be two models being considered, the Wilkie Model and the cor- related Geometric Brownian Motion (GBM). In order to investigate the effect, key characteristics of the investment period shall be examined under each model, with the optimal region of investments compared. For this project, the investment period will be consider to be one period, from the start of saving until death. This is different than the majority of approaches which will split the period into an accumulation and de-cumulation stage. In these approaches, the accumulation stage would aim to grow the pot as large as possible, which is not how this project approaches the optimization. At the end of this project, a brief examination will be taken of the effect of splitting the optimal region into these two stages. The motivation behind this project is the increased drawdown flexibility introduced in the 2014 UK Budget [HMT, 2014]. From April 2015 onwards, pensioners who have pensions in a Defined Contribution (DC) scheme would be able to withdraw directly from the fund, take a lump sum amount from the pot or buy an annuity with the total savings. This change in the pension landscape gives more options and flexibility to people approaching retirement [gov, 2015a]. Because of this, it was deemed im- portant to investigate if the optimal region of investment strategies are comparable across different models. The two models being considered here were chosen due to the fact that the Wilkie Model can be described as a more complex model than the GBM. This project will aim to investigate the similarity and differences observed in the investment region under each model. For the course of this project only drawdown shall be considered as the only income throughout retirement. 
This means that any alternative income, including lump sum withdrawals and purchasing an annuity are not included in any optimization procedure. This project approaches the optimization of a pension scheme from a different angle. Instead of considering the two stages, with the aim of growing the pot as large as possible in the accumulation stage. It instead takes the region as a whole with the 1
  • 8. aim that, at the time of death of the person, the pension fund would be near de- pletion. The contribution rate, replacement rate and salary are fixed throughout for consistency. Due to these fixed values, it meant that the only variables considered would be the investment strategies and the underlying model of the economy which would be used to grow the fund. This constricted process allowed for a more detailed analysis of the effect on the investment space and optimal region of investment under the different models. The investment space that this project considers is the combination of three assets across the total investment period. The asset classes are made to represent high, medium and low volatility investments. The investment period is considered from the time that the young person, age 20, begins saving for their pension up until an upper age limit of 120. This means that it is a 100 year period. In order to restrict this investment space to a feasible size to search, it was determined that investment strate- gies would remain constant for decades at a time. It was also determined that there would be restrictions and conditions on the investment of assets. It was determined that short-selling was not allowed and 100% of the fund was invested at each time step. There was also a limit set that at most the fund could have a 50% split between assets. This meant that for each of the 10 decades, there was 6 possible investment options. This restriction allowed the investment space to be a feasible size to search while still allowing for characteristics of the fund and space to be observed under the different models. The optimal investment region would be the area the optimization methods converged to with respect to the risk metric being used. While researching around the subject, it was observed that there has been very little published work into finding an optimal investment strategy under drawdown for the whole period. 
There was a significant amount of work covering the options for workers after retirement. Work by Gerrard, Haberman and Vigna (2004) covers the optimal investment choices for members DC schemes post retirement [Gerrard et al., 2004]. This looks at a more constricted drawdown than is considered in this project. Work by Mowbray (2009) considered investment design for DC schemes [Mowbray, 2009]. Other papers that consider post retirement pension options include Sheshinski (2007), Milevsky (2007), Crouch et al. (2013), Wadsworth et al. (1999), Dutta et al. (2000) and Blake, Cairns and Dowd (2006). While these papers cover most post retirement options and optimizations this project aims to look at the entire period. In order to find the optimal region, two methods of searching the space have been applied. These optimization methods are Genetic Algorithms and Dynamic Program- ming. The theory and principles applied here have been taken from papers and books 2
  • 9. including Aris (1964) and James (2011). Using these optimizations on the constricted space. Investment strategies were run through an Asset-Liability Model that was created as part of this project. Quantiles produced from this model were used as part of a risk metric to determine optimal- ity. Due to the fixed replication rate for drawdown it meant that upside risk could not be addressed. Therefore, only the downside risk of running out too early was addressed in the risk metric. The criteria for this project meant a single risk metric was considered for the whole period, not separate metrics for the two stages. This investigation provided results which showed, in general, the simplistic model exhibits similar properties to the more complex model. This is observed in the early and late years of the fund, along with extreme theorems. While slightly different investment strategies are given, the simplistic model does provide results with less finesse than the complex model provides. Due to the construction of the simplistic model the quantile range observed is narrower and the time of ruin is impacted. It was deter- mined that, in general, the characteristics of investment period, strategies and fund could be observed equally well on both models. However, for more detailed analysis the complex model provided a model with more finesse. Due to the constrictions and fixed parameters used, it is realised that the investment strategies found in this project are contrived and unrealistic. However, the consistency between the search spaces under the different models allowed for comparisons of the effects and characteristics of the fund and investment region. These constrictions meant that further research could be carried out on this subject. Some areas of further research are discussed in the final section of this project. 
These include a feasible way to search a space with relaxed constraints and to fully examine splitting the investment period under the criteria for this project. 3
  • 10. The Wilkie Model In this chapter the Wilkie Model shall be introduced and some context will be given for its use in this project. The total return indices will be given and finally, the mean return and volatility across decades along with the correlation between indices shall be examined. Introduction and Background The model being used in this project was provided by Professor Wilkie along with the accompanying papers ”Yet More on a Stochastic Economic Model” parts 1 and 2 [Wilkie et al., 2011, Wilkie and S¸ahin, 2015]. Since the model is not the main area of this project only some background will be given. These papers are used to update the model which was first defined in the paper ”More on a Stochastic Investment Model for Actuarial Use” in 1995 [Wilkie, 1995]. The papers update the inputs, bringing them up-to-date and making slight adjustments to certain parts of the model through updating and re-basing where necessary. They then go on to examine how the model has faired in its predictions and look at the parameters of the model. In the main, the original model is unchanged from 1995 by these papers for use in this project. The Wilkie Model of the economy is a stochastic model based upon Box-Jenkins time series models [Sahin et al., 2008]. The model has a cascading structure meaning that there is a central value defining the state of the market. All other parameters come from this central parameter [Fraysse, 2015]. This means that the model is stable in times of crisis while providing acceptable forecasts of the future. The model used in this project requires 14 initial conditions along with 35 parameters which give 13 outputs for the areas of the economy modelled. Parameters include means, standard deviations, weighting factors and deviation from the mean [Wilkie et al., 2011]. These parameters are input for all areas of the model. 
A number of areas include Retail Prices, Wages, Share Dividends, long-term Bond Yields and a Base Rate as an example of typical input areas. The weighting factors are used with respect to the inflation rate calculated from the Retail Prices. This is due to the fact that inflation is the central value in the cascading model. Some of the input parameters are with respect to an Auto-Regressive Conditional Heteroskedasticity (ARCH) model which is introduced 4
  • 11. as an alternative model for inflation [Sahin et al., 2008]. For this project only five of the outputs are used. In the model, the parameters mostly come from least-squares estimation meaning that they are unbias and linear with the actual models being determined by either a stationary or integrated auto-regressive model of order one (AR(1) or ARIMA(1,1,0)) [Sahin et al., 2008]. The structure of the model, as stated above is cascading, which is centred around inflation. A diagram of this cascading model can be found below: Figure 1: Cascading Structure of the Wilkie Model centred around Inflation This diagram shows the cascade effect of the Wilkie Model for most of the major components used in this project. While the basic concepts of the Wilkie Model follow least-squares estimates and auto-regressive models there is some ARCH characteris- tics for the residuals to account for fatter tails where necessary. There is a fair amount of criticism of the Wilkie Model highlighting issues and areas where it may fall short. Short-comings that are highlighted are that the model is highly dependent on the initial parameters due to over parametrisation. There are claims of biases in the estimations from inflation due to events in the market [Huber, 1995]. One criticism of the model which will be examined later is that in ”A Stochastic Asset Model & Calibration for Long-term Financial Planning Purposes” [Hibbert et al., 2001]. This shows that there there maybe an annual decrease in the standardised volatility. There are also academic critics who highlight that the model does not hold the no arbitrage or market efficiency theorem during times of stress [Fraysse, 2015]. There are upsides to using this model for this project. The outputs provide a good basis for the total return indices used along with the access and simplicity of the model. Since we are looking at pension fund investments over a long time span, the Wilkie Model returning annual values was advantageous. 5
  • 12. Total Return Indices This section shall examine the total return indices used for this project. It was de- cided to constrain the investment strategy to three investments in order to restrict the search space. Effectively the three investments chosen are Equity, long-term Bonds and Cash to replicate total returns indices with different risk-return profiles. The risk considered when creating the index was the level of volatility produced by each of the asset classes. When speaking about different levels of risk, the volatility produced is used to distinguish between the high, median and low risk assets. The inflation index has also been calculated to allow the salary and drawdown to grow with inflation. The formula for the inflation index is: Inflt = Inflt−1.exp{I(t)} In this formula the index is given by Infl and I is the inflation output from the Wilkie Model. The formula for the total return index for equity is: TRISt = TRISt−1.(P(t)+D(t)) P(t−1) Here the TRIS is the total return index for shares which consists of the price index of shares, P, and the dividends paid at that time, D. This replicates higher risk of volatile movements with higher expected return investments. The formula for the total return index for long-term bonds is: TRICt = TRICt−1. ( 1 C(t) +1) ( 1 C(t−1) ) The TRIC represents the total return index of Consol yield, Government Stocks, which are assumed to pay an annual coupon of one. Even though perpetual stocks are not available any more this represents a medium risk investment. Finally, this is the formula to represent a cash total return index: TRIBt = TRIBt−1. (1 + B(t)) This is not a true cash total return index since B represents short-term bond yield which here shall represent a Cash total return Index. This is meant to represent a safe, low risk investment. 6
  • 13. All of the notation used can be found in the paper ”Yet More on a Stochastic Eco- nomic Model: Part 1” [Wilkie et al., 2011]. The derivation of these formulae can be found in Appendix 1. Tax is not considered here since that goes beyond the scope of this project. Since the aim of the project is to test robustness in optimal regions of investment strategies across models it was important not to over complicate the inputs. Since the second model used, which is defined in the next chapter, is calibrated to the outputs of these indices it was deemed acceptable to ignore tax. As discussed above, while these do not match the exact description or are totally correct, the main idea is to test the robustness and characteristics of a region across the models. It was decided that as long as the investments are consistent in returns, volatility and correlation between assets across the models, it would be acceptable in testing the robustness. Mean Return, Volatility and Correlation It was required that the a mean return, volatility and correlation between investment options was found in order to calibrate the second model used in this project. To achieve this, the average rate over 10 year periods was calculated for each simulation. For the purpose of this project, it was deemed that 5000 simulations of the Wilkie Model for 100 years will be used. This allowed for 5000 simulations of the value of the pension fund across the investment period from the start of working life. It also meant that it was possible to reduce the bias and error in the model due to the large sample size. It implied that the sample mean return and volatility of the model could be determined to a high degree of accuracy. Using this large number of simulations meant that the bias cause by any outliers in the simulation was reduced. The graphs of the simulations follow over the page. 7
  • 14. Figure 2: Graphs of the 5000 simulations of each index as produced by the Wilkie Model Each graph represents the values of each index for the 5000 simulations. The dashed yellow line in each graph is a line of the average value each index takes. As can be seen in the graphs of total return for cash and bonds there is one outlier which reaches extreme values. Due to the large number of simulations, these outliers do not effect the average return or volatility. This was possible due to the simplicity of the Wilkie Model and the speed at which it produced 5000 simulations. The return over 10 years has been calculated using Rt = ln TRt+10 TRt for each index in each simulation. The average of this 10 year return across the 5000 simulations was then taken using ¯Rt = ( 5000 n=1 Rt,n) n which gives 10 decades of average 10 year returns across the 100 years that is being considered. In order to annualise these values they have been divided by 10. A similar approach has been used for standard deviation and correlation as for the mean. Both of these values have been calculated from Rt. The sample standard de- viation for 10 year returns has been calculated using σt = 5000 n=1 (Rt,n− ¯Rt) 2 (n−1) with the square root of 10 being taken in order to annualise the volatility. This was examined carefully due to the criticism of the Wilkie Model in ”A Stochastic Asset Model & Calibration for long-term financial planning purposes” [Hibbert et al., 2001]. To vali- date the standard deviation found, the standard deviation was calculated for the first 8
decade by hand. This used the same formula, but each part was calculated individually in Excel to validate the final value [Sto, 2015]. For both mean return and volatility, $R_t$ is the 10-year return of the index being calculated. When calculating the correlation between the indices, the formula used is $\rho_{i,j} = \frac{\sum_{n=1}^{5000}(R^i_{t,n} - \bar{R}^i_t)(R^j_{t,n} - \bar{R}^j_t)}{\sqrt{\sum_{n=1}^{5000}(R^i_{t,n} - \bar{R}^i_t)^2}\sqrt{\sum_{n=1}^{5000}(R^j_{t,n} - \bar{R}^j_t)^2}}$, where $i, j \in \{\text{inflation, equity, bonds, cash}\}$. The values produced by these formulae can be found in Appendix 2. It was observed from the output that after the first 20 years the volatility and mean return remained consistent, with the correlation remaining consistent throughout. In order to simplify the second model, it was decided that a single value, an average over the 10 decades, would be used to reflect the annual return and volatility. The implications and justification of this decision are discussed later in the chapter on Optimization.
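As a sketch of these calculations (Python rather than the project's MatLab; the array layout of one simulated path per row is an assumption), the decade statistics can be computed as:

```python
import numpy as np

def decade_stats(tr, years_per_block=10):
    """Annualised mean return and volatility per decade from total-return
    index paths, shape (n_sims, n_years + 1)."""
    starts = np.arange(0, tr.shape[1] - years_per_block, years_per_block)
    # 10-year log returns: R_t = ln(TR_{t+10} / TR_t), one column per decade
    R = np.log(tr[:, starts + years_per_block] / tr[:, starts])
    mean_annual = R.mean(axis=0) / years_per_block              # divide by 10
    vol_annual = R.std(axis=0, ddof=1) / np.sqrt(years_per_block)  # divide by sqrt(10)
    return mean_annual, vol_annual

def decade_corr(tr_i, tr_j, years_per_block=10):
    """Correlation between two indices' decade returns, pooled across sims."""
    starts = np.arange(0, tr_i.shape[1] - years_per_block, years_per_block)
    Ri = np.log(tr_i[:, starts + years_per_block] / tr_i[:, starts]).ravel()
    Rj = np.log(tr_j[:, starts + years_per_block] / tr_j[:, starts]).ravel()
    return np.corrcoef(Ri, Rj)[0, 1]
```

With 100 years of annual index values this yields 10 decades of annualised figures per index, matching the averaging described above.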
Correlated Geometric Brownian Motion

In this chapter the second model used in this project is introduced: the Geometric Brownian Motion. The creation and justification for choosing this model are discussed, followed by the mathematics behind the model and the use of the Cholesky Decomposition. Finally, the calibration and validation process is covered.

Introduction

It was decided that the second model used in the project should be simplistic in comparison with the Wilkie Model. Therefore, the Geometric Brownian Motion (GBM) was selected, since it is a simple log-normal model [epi, 2015]. The GBM can be used as a fundamental model of the economy since it produces basic exponential growth of assets. A stochastic process, $X_t$, follows a Geometric Brownian Motion if it satisfies the stochastic differential equation $dX_t = \mu X_t\,dt + \sigma X_t\,dW_t$, where $W_t$ is a standard Brownian Motion, $W_t \sim N(0, t)$ [Glasserman, 2003, p. 93:104]. Using Itô's formula, it follows that the process has the form $X_t = X_0\exp\{(\mu - \tfrac{1}{2}\sigma^2)t + \sigma W_t\}$, with the full proof found in Appendix 3. This is the basic mathematics behind the second model used in this project.

Using this Model

The use of the GBM follows from the methodology used previously in calculating the 10-year rate of return. From the form of the process in the previous section, $X_t = X_0\exp\{(\mu - \tfrac{1}{2}\sigma^2)t + \sigma W_t\}$, where $X_t$ represents the annual total return index, $TR_t$, as calculated under the Wilkie Model. Rearranging this process over a one-year step gives $\ln\frac{TR_t}{TR_{t-1}} = (\mu - \tfrac{1}{2}\sigma^2) + \sigma(W_t - W_{t-1})$, which is an annual rate of return. Taking the expectation of this formula gives $E[\ln\frac{TR_t}{TR_{t-1}}] = \mu - \tfrac{1}{2}\sigma^2$, the annualised average rate of return, $\bar{R}_t$. The variance is matched to the volatility of the Wilkie Model, $\bar{\sigma}_t$, for each decade. This means that the formula used to model the annual total returns has the form $TR_t = TR_{t-1}\exp\{\bar{R} + \bar{\sigma}\epsilon\}$. In this formula, $\bar{R}$ is the annual rate of return under the Wilkie Model and $\bar{\sigma}$ is the volatility under the Wilkie Model. Both of these values are the average for each decade,
calculated as $\bar{R} = \frac{1}{10}\sum_{t=1}^{10}\bar{R}_t$ and $\bar{\sigma} = \frac{1}{10}\sum_{t=1}^{10}\bar{\sigma}_t$. Since an annual total return is being modelled, $t$ is 1, and the error term, $\epsilon$, is distributed as a multivariate Normal with standard Normal margins and correlation between indices. This is created using the Cholesky Decomposition. The multivariate Normal (MVN) is created using the correlation matrix, $\rho$, which is taken from the Wilkie Model. Since the errors in the GBM are Normally distributed with mean 0 and variance 1, the MVN is distributed $N(\mathbf{0}, \Sigma)$, where $\mathbf{0}$ is a vector of zeros of length $d$ for the mean and $\Sigma$ is the covariance matrix, which has the form $\Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_d)\,\rho\,\operatorname{diag}(\sigma_1, \ldots, \sigma_d)$. In this formula for $\Sigma$ all the $\sigma_i$ are 1, so $\Sigma$ reduces to $\rho$. The errors can therefore be generated using $\Sigma = AA^T$, where the Cholesky Decomposition produces a lower triangular matrix for $A$, which reduces the cost of the computation. In the code used to create the error term at the back of this paper, the Cholesky Decomposition is applied to the positive semi-definite covariance matrix to obtain the lower triangular $A$ with $\Sigma = AA^T$. Using this MVN means that the errors of the GBM are correlated between assets, which gives the total return indices created here a similar level of correlation to that produced by the Wilkie Model [Glasserman, 2003, p. 71:72]. The algorithm, which is given over the page, provides the structure for constructing the model of the economy using the GBM.
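A minimal sketch of this error generation, with a hypothetical correlation matrix standing in for the Wilkie values of Appendix 2:

```python
import numpy as np

# Hypothetical correlation matrix for (inflation, equity, bonds, cash);
# the values actually used come from the Wilkie Model output in Appendix 2.
rho = np.array([[1.0, 0.2, 0.3, 0.4],
                [0.2, 1.0, 0.5, 0.3],
                [0.3, 0.5, 1.0, 0.6],
                [0.4, 0.3, 0.6, 1.0]])

# With unit marginal variances, Sigma = diag(sigma) rho diag(sigma) = rho.
A = np.linalg.cholesky(rho)           # lower triangular, Sigma = A A^T

rng = np.random.default_rng(0)
z = rng.standard_normal((5000, 4))    # independent N(0, 1) draws
eps = z @ A.T                         # correlated errors, one row per simulation
```

The sample correlation of `eps` then sits close to $\rho$; MatLab's `mvnrnd`, as used in the project code, performs an equivalent construction internally.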
Algorithm 1 GBM model algorithm
for each simulation, z, do
    Generate the correlated error for the simulation, err
    for each index, j, do
        Set the starting value for the index, S0(1, j)
        for each time step, i, do
            Generate the GBM step: S(i, j, z) = S(i−1, j, z) · exp{µ(j) + σ(j) · err}
        end for
    end for
end for

The GBM grows the previous value of the index by the exponential of the mean plus the volatility multiplied by the error term. This algorithm shows the logical progression of the function. The actual code used can be found in Appendix 5. The code makes use of the "SDETools" toolbox in MatLab [Horchler, 2011]. The error is generated and correlated using the function mvnrnd, which applies the Cholesky Decomposition as described above.

Calibration and Validation

Since this project is based upon testing the robustness and characteristics of an optimal investment region across different models, it was deemed important that there be similarity across the models. For this reason it was important to validate that the calibration of the GBM was consistent with the mean return, volatility and correlation produced by the Wilkie Model. In order to validate the output of the GBM, 4 areas were checked: the mean return, volatility and correlation of the total return indices produced by the GBM, along with the correlation between errors to confirm the source of the correlation. This was achieved through similar methods to those used when validating the output of the Wilkie Model. The outputs of the total return indices from the GBM were used to calculate 10-year rates of return, which were then used to get the annualised average 10-year rates of return and volatility across the 5000 simulations. The correlation was also calculated from this in the same manner as under the Wilkie Model.
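The validation just described can be sketched as follows (Python rather than MatLab; the parameter values are hypothetical): simulate the indices per Algorithm 1, then check that the annual log-returns recover the input mean, volatility and correlation.

```python
import numpy as np

def simulate_gbm(mu, sigma, rho, s0, n_years, n_sims, seed=0):
    """Correlated GBM total-return indices following Algorithm 1."""
    d = len(mu)
    rng = np.random.default_rng(seed)
    A = np.linalg.cholesky(rho)                        # Sigma = A A^T (unit variances)
    S = np.empty((n_sims, n_years + 1, d))
    S[:, 0, :] = s0
    for t in range(1, n_years + 1):
        err = rng.standard_normal((n_sims, d)) @ A.T   # correlated errors, as mvnrnd
        S[:, t, :] = S[:, t - 1, :] * np.exp(mu + sigma * err)
    return S

# Hypothetical two-index calibration for illustration
mu = np.array([0.04, 0.07])
sigma = np.array([0.05, 0.15])
rho = np.array([[1.0, 0.5], [0.5, 1.0]])
S = simulate_gbm(mu, sigma, rho, np.ones(2), 100, 5000)
logret = np.log(S[:, 1:, :] / S[:, :-1, :])            # annual log-returns
```

Over 5000 paths the sample moments of `logret` should sit very close to the inputs, mirroring the validation performed against the Wilkie Model output.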
The correlation between error terms was also examined over a large sample to confirm that the correlation from the errors was consistent with the correlation between the indices. For completeness, the GBM was reproduced with independent standard Normal distributions and the correlation tested, to confirm that the source of the correlation was the error term. The graphs of the total return indices follow over the page.
Figure 3: Graphs of the 5000 simulations of each index under the GBM

Figure 3 shows the 4 graphs and can be used to highlight the differences between the models. One point of difference is that the Wilkie Model has more extreme values and outliers than the GBM produces, and it is also less likely to return from these extreme values than the GBM. This is caused by what drives each process: the correlated errors of the GBM are independent through time, whereas the Wilkie Model is driven by auto-regressive processes. Most of the paths in the GBM do not have the smoothness present in the Wilkie Model for a similar reason. While the volatilities of the paths are the same, each graph is decidedly different across the models. This shall be discussed later in the Optimization chapter.
The Asset-Liability Model

In order for the pension fund to be accumulated and de-cumulated as required, an Asset-Liability Model (ALM) had to be created. This chapter covers the creation and validation of the ALM.

Creating the Model

The objective of the ALM for this project is to allow a pension fund for a young person to be accumulated from a given starting age up to the Normal Retirement Age (NRA). From the NRA a constant drawdown will be taken, a fixed percentage of the final salary at retirement known as the replacement rate. This model was built for this project in order to control and understand where and when the cash flows occur, and to meet the criteria of the project. It was required that the entire investment period be considered as a whole; this ALM allows for that by tackling accumulation and de-cumulation in the same model. The project also only allows for a consistent drawdown to be taken from the pot until it is depleted. This means that a number of options available to workers at retirement have been omitted, including the purchase of an annuity at retirement or taking any of the pension pot as a lump sum. There are a number of papers that discuss optimal pension options for post-retirement products; however, these assume the aim of the accumulation is to grow the pot as large as possible. In this project, the aim of the whole process is to optimise a full strategy with the aim of going into drawdown. This allows the characteristics of the full space to be examined. A number of other assumptions and simplifications have also been made so that the testing of the different models remains the main focus. A full list of the parameters and constraints, along with the reasons behind choosing the values, can be found at the end of this chapter.
Some of the main parameters include a constant pension contribution rate, no early or late retirement, a constant replacement rate, no promotions or bonuses and an upper limit on life of 120 years. The pension contributions for this project are locked at 8%, which is the minimum required by the government by October 2018 [pas, 2015a]. With the aim of investigating the different models of the economy, the starting age is set at 20, along with an NRA of 65 and a salary of
£20,000. The fact that this is a fairly low income meant that the replacement rate was fixed at 60% of the final salary, which is normal according to Standard Life. In order to simplify the optimization it was decided that there would be no salary inflation through promotion; only RPI inflation would be applied, to keep the buying power of the salary constant. The basics of the ALM are to accumulate the pension fund throughout the working lifetime of a person. It then de-cumulates the fund through a drawdown pension where a constant amount, with respect to inflation, is taken annually. The fund is accumulated annually with investment returns generated by an underlying model of the economy under the given investment strategy. This is run over a given number of simulations from the model of the economy. The following algorithm gives the structure of the ALM.

Algorithm 2 Asset-Liability Model algorithm
for the accumulation period do
    Inflate each contribution from the previous contribution with the annual rate of inflation.
    Accumulate the fund up until NRA, by growing the fund with investment returns and then adding that year's contribution.
end for
for the de-cumulation period do
    Calculate the initial drawdown with respect to the salary inflated up to NRA.
    Calculate the drawdown for each time-step from the previous drawdown amount and the annual inflation.
    De-cumulate the fund through time, by growing the fund with investment returns and then taking the annual drawdown amount.
end for

It was decided that the fund would start off empty, with the first contribution being accounted for at the end of the first year, and that the drawdown would be received instantly at NRA. The code for the ALM, written in MatLab, can be found in Appendix 5. The investment returns in the ALM are accounted for using a weighted average over the investment strategy.
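A sketch of Algorithm 2 for a single simulation (Python rather than the project's MatLab; the exact cash-flow timing around NRA is a simplification, and `log_returns` stands for the weighted portfolio log-return applied each year):

```python
import numpy as np

def alm_ruin_time(log_returns, inflation, salary0=20000.0, contrib=0.08,
                  replace=0.60, start_age=20, nra=65, max_age=120):
    """Life of the fund for one simulation.  log_returns[t] and inflation[t]
    cover the years from start_age to max_age.  Returns the age at which the
    fund is exhausted, or max_age if it survives."""
    fund, salary = 0.0, salary0
    # Accumulation: grow the fund, then add that year's contribution
    for t in range(nra - start_age):
        salary *= 1 + inflation[t]
        fund = fund * np.exp(log_returns[t]) + contrib * salary
    # De-cumulation: grow the fund, then take the inflation-linked drawdown
    drawdown = replace * salary          # replacement rate on salary at NRA
    for t in range(nra - start_age, max_age - start_age):
        fund = fund * np.exp(log_returns[t]) - drawdown
        if fund < 0:
            return start_age + t + 1     # the year the fund goes negative
        drawdown *= 1 + inflation[t]
    return max_age
```

Running this over every simulated path of the economy gives the distribution of ruin times from which the histogram and risk metric below are built.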
The returns are taken to be continuous, so have the form $\exp\{W_{eq}\ln\frac{TRIS_t}{TRIS_{t-1}} + W_{bon}\ln\frac{TRIC_t}{TRIC_{t-1}} + W_{cas}\ln\frac{TRIB_t}{TRIB_{t-1}}\}$. In this equation, $TRIS$, $TRIC$ and $TRIB$ are the total return indices for equity, bonds and cash respectively. $W_i$ represents the weights of the returns with respect to
the investment strategy, where $i$ indicates the index the weight is relevant to. The final function of the ALM is to record the time at which the fund goes negative in a simulation. From these ruin times a histogram is constructed to display information about the life of the fund. This histogram and count of when the fund goes negative are used in producing the risk metric for the optimization later. A standard histogram produced by the ALM with a sensible investment strategy is shown below. A sensible investment strategy is taken to be one which starts off in equity, moves to cash as the fund approaches the NRA and, as the fund starts to deplete, moves back into equity.

Figure 4: Histogram of the ruin times of a fund with a sensible investment strategy over 5000 simulations, produced from the ALM, with time representing the life of the worker

Figure 4 highlights some common features of the histogram produced by observing the time at which a fund is exhausted across a suitably large number of simulations. The main point is the large amount of data contained in the last bin. This large spike is due to funds which do not run out within the time period being considered; the tail would extend beyond age 120, at which life has been capped in this project. Using the Bayesian Information Criterion (BIC) to run a diagnostic on the histogram, it is found that the data most closely follows a Generalized Extreme Value distribution. This was found using the "allfitdist" function in MatLab [Sheppard, 2012]. It is clear that for a sensible investment strategy the life of the fund is skewed to the right. This observation was used later in determining
the risk metric for the optimization.

Calibration and Validation

In order to calibrate and validate the ALM, and to confirm that cash flows fall at the correct times, the model was first built using the algorithm in the previous section as an aid. Once this model was created with flexible inputs, it was calibrated against a basic model to ensure cash flows were being taken at the correct time. The basic model of the ALM had three stages, each built on top of the previous, to check different cash flows of the final model. This started with a simple deterministic return with no inflation, then introduced inflation to the deterministic model, and finished with the full model dependent on a model of the economy for investment returns. The deterministic return was set at 5% to confirm that the contributions and drawdowns were taken at the correct time with respect to the annual growth of the fund. The deterministic model with inflation was used to confirm that the salary, contributions and drawdown accounted for inflation at the correct time. The final full model was used to validate that the ALM was providing correct, consistent values with respect to all inputs being fed into it. The ALM was checked against these contrived models as they progressed, with any necessary alterations being made. This process resulted in a fully calibrated and validated ALM which ran correctly.

Parameters and Constraints

This section covers all the parameters and constraints that are relevant for searching the space, along with the logic and reasoning behind choosing these values. Since the main aim of this project was to consider the investment period as a whole, it was deemed prudent to choose sensible values for all the inputs in the ALM. These values were kept constant throughout the process, so that only the investment strategy would be varied throughout the search. The parameters in this project are considered to be the inputs to the ALM.
These include the start age of 20 years, the Normal Retirement Age (NRA) of 65 and the final age considered of 120. In order to give a large investment period, the start age was set at 20. This age was decided upon since it could cover young people finishing apprenticeships, graduates from universities and people who had gone straight into work from high school; 20 was decided to be a fair median age between these groups. The NRA was set to 65 since it is a median for the age at which the current state pension is received [gov, 2015b]. Previously, 65 was the NRA in the UK,
but this has been phased out, with no definite upper limit in most professions. It was also assumed that the life would be alive at NRA, since the aim of the project was to examine the drawdown pension. The age of 120 is used as an upper limit due to the restriction of the life tables used; it is fairly common for 120 to be considered an upper limit for life. Other parameters include the fixed salary of £20,000, the fixed contribution rate of 8% and the fixed replacement rate of 60%. This salary was decided upon since it represents an income which is considered low, being below the national average [ONS, 2013], but would keep an average family out of poverty [jrf, 2008]. It was important that it wasn't too low, as it was decided that the salary would only grow with inflation, to keep its purchasing power, and not with promotions or bonuses. The contribution was set at 8% of the salary since this is the minimum rate required by the government by 2018, made up of 4% from the employee, 3% from the employer and 1% from the government through tax breaks [pas, 2015a]. Keeping it constant at a low rate avoided introducing lifestyle considerations into the optimization; in general a contribution rate would vary as the employee matured, but needs such as buying a house and supporting a family have been removed from this project. The replacement rate, the percentage of final salary received through drawdown, was set at 60%. This is an industry standard, where persons on a low income receive 60% of the final salary as an income. Fixing these parameters meant that the only variables being considered by the model were the underlying model of the economy and the investment strategy. This allowed for a focus on searching the investment period for an optimal strategy and on the robustness of this strategy. The constraints on the inputs are discussed in the next chapter, concerning how the investment period has been constricted.
By constricting this region, the search space was reduced to a feasible size.
Optimization Methods

This chapter introduces the risk metric used to determine an optimal investment strategy, what is considered optimal under this metric and other metrics that could have been considered. The different methods, Genetic Algorithms and Dynamic Programming, for searching the space of possible investment strategies are then discussed.

Risk Metric

In general, a risk metric is a quantifiable way to measure risk. The risk being considered in this project focuses on the risk of running out of income too early through drawdown. It is also constrained by the requirement that an excessively large pot should not be left at the death of the worker: if a large amount of money is left in the pension pot, it would imply the worker has not used the drawdown system to its full advantage. It is assumed that the worker will not be leaving any money to beneficiaries, which is why the pot must be as close to empty as possible. This project only considers altering the investment strategy, not the contribution rate or drawdown income. Therefore the risk metric must aim to find an investment strategy which provides a fund lifetime that is near depletion at the time of death of the worker. This criterion was decided upon with the aim that the pensioner would use their income to the full. A full optimization, which would allow for variable drawdown and contribution rates, is considered in the chapter on further research. This would use Stochastic Control and would allow for the maximization of the worker's income and monetary needs through work and retirement. To begin with, a measure of the worker's lifetime had to be decided upon. It was assumed that the worker would survive up to the NRA (Normal Retirement Age), since only the drawdown optimization was being considered. Using a life table provided by Standard Life it was possible to create a density curve of the death of a life alive at NRA.
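The construction described here (death counts from one-year mortality rates, normalised to a conditional density, plus the curtate expectation used below) can be sketched with a toy mortality vector in place of the Standard Life table:

```python
import numpy as np

def death_density_and_ex(qx):
    """Conditional death density and curtate expectation of life for a life
    alive at NRA, given one-year death probabilities qx[k] at age NRA + k."""
    qx = np.asarray(qx, dtype=float)
    kpx = np.concatenate(([1.0], np.cumprod(1 - qx)))  # survival probabilities k p_x
    deaths = kpx[:-1] * qx                             # probability of death in year k+1
    density = deaths / deaths.sum()                    # normalise the truncated table
    ex = kpx[1:].sum()                                 # curtate expectation, sum_k (k p_x)
    return density, ex
```

With a flat mortality rate of $q$ per year the curtate expectation is approximately $(1-q)/q$ years beyond NRA; against a real table, the density summing to one mirrors the numerical-integration check described below.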
The tables used were "RMC00 with CMI 2013 M [1.5%] advanced improvements". This meant that there was a minimum improvement of 1.5% on the mortality rate through time. These tables provided the probability of dying in a given year at a given age. Using this information, the number of people alive at each age was
calculated, starting with a suitably large number of lives at the NRA. This allowed the number of deaths in each year to be found, which was used to create a density of death by dividing the annual deaths by the total starting lives [lon, 2009]. The density curve produced is displayed:

Figure 5: Conditional Density Curve of death for a life alive at NRA

This density was validated using numerical integration to confirm that the area under the graph was approximately equal to one. From the information gathered, a number of other measures of a life alive at NRA were found. The curtate expectation of life was found using the equation $e_x = \sum_{k=1}^{\infty} {}_kp_x$ [Ciecka and Ciecka, 1996]. This came out as 94.24 years of age. The 5%, 25%, 50% and 75% quantiles were also found using numerical integration techniques; these were 75, 89, 96 and 102 respectively. These values are consistent with the graph provided: since the graph is skewed to the left, it implies that the mean would be slightly less than the median, as is the case [epi, 2015]. These measures provided a basis for what the risk metric would aim to achieve. A number of different risk metrics were considered to achieve the objective. Since the aim of the project was to find an optimal investment region, key points of each distribution had to be considered. One metric that was considered was quantile matching, following ideas used by Sgouropoulos, Yao and Yastremiz in "Matching a Distribution by Matching Quantiles Estimation" [Sgouropoulos et al., 2014]. However, this idea was dismissed after the observation that it would never be optimal to restrict the upside risk of higher returns through the choice of investment options, taking on more risk merely to reduce expected returns. The only viable way to alter this would be to
alter the drawdown amount if excess cash has accumulated. Since the drawdown was fixed for this project, this method was rejected. It was also observed that the histogram of fund life, as in Figure 4, was skewed to the right while the density curve of death was skewed to the left. This made it impractical to match quantiles of upside or downside risk. Other metrics considered included the work of Gerlach et al. [Gerlach et al., 2012] on time-varying quantiles, which was assessed with respect to the aim of this project; however, since their work focuses on Value-at-Risk it was deemed not to be applicable here. Following these observations, it was decided that the risk metric would match the median of the fund's life to the median of expected life, reducing the effect of the skew of the distributions. Optimality would also be indicated by maximising the left-hand quantiles of the fund's life: 5%, 10% and 25%. This allows a quantitative analysis of a strategy which would come close to being depleted at a time close to the expected age of death of a pensioner. The effort to maximise the lower quantiles meant that the downside risk of the pot running out before death was also being addressed.

Methods of Optimization

This section covers the different methods of optimization used in the project. The first method covered is the principles behind Genetic Algorithms (GA) for searching a space. The second method discussed is Dynamic Programming (DP), in particular the Bellman Equations. Two methods of optimization are considered due to the different areas being searched and for completeness. Before the methods of optimization are introduced, the space that is being searched must be defined. As discussed in the chapter on the Wilkie Model, there are 3 asset classes for investment being considered.
These represent high-, medium- and low-volatility investment options for a pension scheme. Since all other variables have been constrained, the investment strategy is the only variable that changes. This results in the search space, which can be represented by the diagram which follows over the page.
Figure 6: Diagram of the Investment Region shown by the three asset classes considered

Figure 6 gives an indication of the space which shall be searched. This is the region for one time step; the investment strategy shall be varied through time in order to find an optimal region across the entire investment period. In order to restrict the space that must be searched, limits were put on the possible combinations. It was decided that at most there could be a 50% split between assets at each time. This meant 6 possible combinations: 100% in equity, bonds or cash, or a 50% split between two assets, equity and bonds, equity and cash or bonds and cash. It was also required that 100% of the fund be invested at all times, with no short-selling allowed, which is in line with current pension regulations. The search space was further restricted through the number of time steps considered: since it spans 100 years of investments, it was decided that investment decisions would be kept constant for a decade at a time. These restrictions reduced the search space to a feasible size and were kept constant for both models considered.

Method One: Genetic Algorithm

This section introduces the methodology of the GA (Genetic Algorithm), covering the principles and process behind it. A GA is a process, based upon evolution and natural selection, that searches a space until some criterion is met. It requires a real search space, decision variables and a process to minimise or maximise, known as the objective function. In this project the decision space is the space of all possible investment strategies through time and the objective function is the risk metric. GAs are considered to be general in their search of a space, using a simplistic algorithm. They continue to search the space until a suitable solution is found which meets some termination conditions [Reeves and Rowe, 2004, p. 19:49].
The general form of the algorithm is given next.
Algorithm 3 GA algorithm
Choose an initial starting point to search from.
Evaluate this point with respect to the objective function.
while termination conditions are not satisfied do
    repeat
        Use combinations to test the space (crossover)
        Use mutation to search the surrounding space for better areas
        Reject or accept with respect to the objective function
    until an optimal solution is found
    Select a new area to search
end while

From this algorithm the methodology of the GA is clear to see. It starts with an initial area to search against the objective function. Once this area is searched, it uses crossover and mutations to move through the space [sea, 2015]. Crossover in GA relates to this project through testing combinations and deciding whether different combinations provide better or worse results with respect to the risk metric. Mutations have been considered in this project by changing an investment option to test how it affects the metric. This method has been applied in this project by taking the principles of the algorithm above. In principle a GA is a heuristic algorithm, meaning it allows the user to find the best solution through the search of a large space [Eiselt and Sandblom, 2012, p. 175]. Heuristic methods allow for an optimum to be found, whereas exact methods of search may just find a feasible region which is not optimal. An exact method which was considered in this project was the Branch and Bound Method, where linear programming is solved in a relaxed environment, honing in on a feasible solution given the risk metric [Eiselt and Sandblom, 2012, p. 158]. This logic has been applied and adapted in searching the investment space. Suboptimal regions, with respect to the risk metric, are rejected as the space is searched. In order to fully test the space, these suboptimal areas are only considered again in mutations in the GA.
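A toy illustration of these GA principles on the restricted strategy space (10 decades, 6 allowed allocations). The fitness function here is a hypothetical stand-in: in the project, evaluating a strategy meant a full run of the ALM and the risk metric.

```python
import random

ALLOCS = [(1, 0, 0), (0, 1, 0), (0, 0, 1),           # 100% equity / bonds / cash
          (.5, .5, 0), (.5, 0, .5), (0, .5, .5)]     # 50/50 splits
N_DECADES = 10

def fitness(strategy):
    """Hypothetical stand-in objective: rewards equity early and late in the
    investment period, and cash in the middle decades around retirement."""
    score = 0.0
    for d, a in enumerate(strategy):
        eq, _bo, ca = ALLOCS[a]
        score += eq * (1 if d < 3 or d > 5 else -1) + ca * (1 if 3 <= d <= 5 else 0)
    return score

def ga_search(pop_size=30, generations=60, p_mut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(ALLOCS)) for _ in range(N_DECADES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # selection: keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_DECADES)         # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:                  # mutation: re-draw one decade
                child[rng.randrange(N_DECADES)] = rng.randrange(len(ALLOCS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

With this stand-in objective the search improves markedly on random strategies; only the objective function would change when plugging in the real risk metric.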
The fact that an optimal investment region is being sought, and not a single strategy, is due to the method used. Since this project has restricted the space in such a way, any single investment strategy would be illogical and unsound; the focus is therefore on the region of investment strategies that the optimization method hones in on. Since the same method to search the space is applied to both models of the economy, the robustness of the region honed in on can be examined. It has also been
observed that numerical solutions attained by GA are unlikely to be exact predictions [Forrest, 1993]. It is more common that GA provides insight into conditions of the data, and that is how the method is being applied in this project when comparing models. The actual logic for applying this method in this project is as follows. To begin with, the time for investments was split into two blocks of 50 years. The six combinations of strategies, at each time, were run through the Asset-Liability Model. This produced the median and quantiles of the fund's life to use as the risk metric. The best area was honed in on, with the rest being rejected as suboptimal. From there the algorithm moved to the next blocks of time, 33-year blocks, and the process was repeated using information gathered from the previous blocks. Again, after an initial search of the space, crossover was performed by using combinations to hone in further, and mutations were added to test a wider region and areas that had been rejected previously. This process continued through time blocks until 10-year blocks were reached. The process of an initial search based upon the results of the previous blocks was used throughout, using crossover and mutations to fully search the space. This approach allowed the principles of GA to be applied through time, and provided a comprehensive search which honed in on an optimal region of investment given the risk metric.

Method Two: Dynamic Programming

This section introduces the concept of DP (Dynamic Programming), along with the principles and process behind it, and discusses how it has been applied in this project. In particular, this project looks at Bellman's principle of optimality and the equations connected to it [Bellman and Kalaba, 1965]. DP can be used to find optimal solutions to sequential decision problems. A decision is made upon current information with respect to an objective.
This relates to this project since a decision must be made on an investment strategy for each decade, which will affect the results of the risk metric. The framework for DP follows a straightforward process if there is a finite time horizon, as there is in this case. Across a time-dependent space, at each time step, or state, a decision must be made. The outcome of the decision made in this state produces new information which is then carried forward to the next state. Each state depends on the previous state and the decision that was made. This can be expressed as a transition probability, $p_{ss'} = P(S_{t+1} = s' \mid S_t = s, a_t = a)$, where $S$ is the state and $a$ is the decision that is made.
In the book "Discrete Dynamic Programming" [Aris, 1964], the terms used in DP are given. They are listed below in the context of this project.

- State: the time steps across the lifetime of the investment. Here there are 10 states.
- Decision: the variable in the model. Here it is the choice of investment strategy at each time step.
- Transformation: the decision at each state produces an output. This is seen in this project when a strategy is run through the ALM to obtain the median and quantiles of the fund given the investment strategy.
- Constraints: the constraints placed on both the search space and the decision variable. These include the 50% split between assets on the decision variable and the decade time steps on the state.
- Objective Function: in this project this is represented by the risk metric which determines optimality.
- Parameters: fixed constants. These include the fixed Normal Age of Retirement and the fixed salary throughout.

This list gives an indication of the information needed for the process of DP and how it relates to this project.

This project uses a form of DP introduced by Bellman in 1957. As the process moves through states it is important to consider the final optimization and not focus on achieving the best outcome at each time step. The Bellman Equations break the problem into a sum of smaller sub-problems with the aim of optimizing the whole process. The approach uses Markovian properties, meaning that only the current state is important, irrespective of past states [Judd, 1998, p. 409]. This is achieved through the Principle of Optimality, which states:

"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

This quote comes from "Dynamic Programming and Modern Control Theory" [Bellman and Kalaba, 1965].
The application of this quote in the Bellman Equations is that immediate optimality is combined with expected future optimality. The equations work recursively through time, with a decision made at each state linked to the
next decision through these equations. The value function of the Bellman equations has the form

V(St) = max over at ∈ F(St) of [ f(St, at) + βV(St+1) ].

This equation is the core of DP and the principle behind it is applied in this project. In the equation, f(St, at) is the function which indicates optimality in a state, with β a weight on the value of the next state. F(St) represents the set of all possible decisions in the state. The equation can also include the expectation of V(St+1) to account for a stochastic decision process [James, 2011].

While considering DP, an issue known as "the Curse of Dimensionality" can arise. It comes from the fact that there is a large number of choices available at each point, which makes it too computationally expensive to explore every future state. In order to apply the Bellman Equations exactly, all future states must be considered. Due to the large dimensions this project is working in, this is not possible. Even with the restricted space that is being searched, there are still over 60 million combinations. If this search were relaxed to allow assets to have investments of 33% instead of 50%, there would be over 10 billion combinations. This exponential growth as the dimensions increase is the curse of dimensionality [James, 2011].

The solution method for this equation is also recursive, given that the equation is recursive. In this project there is a finite time horizon, which means it is possible to use "Backward Dynamic Programming", starting from the last state. This requires that the value function is found at each state as backwards induction is completed. To apply this solution method, Policy Iteration techniques have been used. This requires that, from an initial starting point, DP is carried out until a feasible optimum is reached. At each stage, working backwards through time, a decision is made with respect to the risk metric. This searches the space from the initial starting point.
Then a different starting point is considered and again DP reaches a feasible optimum, and so on. The method can hone in on an optimum by using information gathered from previous starting points. This continues until an optimum is found [James, 2011].

The principles of DP have been applied in this project. In the search of the investment period while changing models, DP is used as a secondary search method for completeness. It is applied again when searching the effect of splitting the space. It uses the risk metric as the objective function, searching through time as the state, with the investment strategy as the decision process. The solution methodology is policy iteration: a number of strategies are selected as different starting points for the search, and the optimization process works backwards through time to find the optimal solution from each given starting point. This was done until convergence
and the optimal strategies are compared to select the best option. This searched the space from various starting strategies, including extreme and sensible strategies, to perform a thorough search.

This method of searching is known as Approximate Dynamic Programming, since it allows sub-optimal, feasible regions to be found using the concepts of DP. It makes it possible to work around the issues which arise from the curse of dimensionality.
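As a schematic illustration of why a backward recursion is used rather than enumeration, the sketch below runs the Bellman recursion over ten decade states with six decisions per state. The additive placeholder reward is a loud assumption: the dissertation's actual objective (median and quantiles of simulated ruin ages) is not additive like this, so the sketch only shows the mechanics of backwards induction.

```python
# Schematic backward induction over the ten decade states. The reward
# function is an illustrative placeholder, not the dissertation's risk metric.
from functools import lru_cache

N_DECADES = 10
STRATEGIES = range(6)  # six allowed allocations per decade (pure or 50% splits)

# Why full enumeration is infeasible: 6^10 strategy paths (over 60 million).
assert 6 ** N_DECADES == 60_466_176

def reward(t, a):
    """Illustrative immediate payoff for strategy a in decade t (hypothetical)."""
    return -(a - t % 3) ** 2

@lru_cache(maxsize=None)
def value(t):
    """Bellman recursion V(t) = max_a [f(t, a) + V(t+1)], taking beta = 1."""
    if t == N_DECADES:
        return 0.0, ()          # terminal state: nothing left to decide
    tail_value, tail_policy = value(t + 1)
    best_a = max(STRATEGIES, key=lambda a: reward(t, a))
    return reward(t, best_a) + tail_value, (best_a,) + tail_policy

v, policy = value(0)
# The recursion visits 6 * 10 state-action pairs instead of 6^10 full paths.
```

In this toy the continuation value does not depend on the decision, which is what makes exact backward induction cheap; the dissertation's non-additive metric is why only an approximate, policy-iteration style search is used there.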
Changing the Model

This chapter will discuss the results obtained from the optimization under the two models of the economy as defined previously. The differences and similarities in the characteristics of the investment region found under each model shall be examined. The key observations are summarised at the end of the chapter.

Results from Searching the Space

This section will discuss the optimal region of investment under each model separately, along with the key observations. The exact method will be mentioned; a more detailed account of the process can be found in Appendix 4, along with tables showing the output of the risk metric.

As defined in the criteria for this project, the investment period as a whole is being considered. This will allow for an examination of the optimal strategy from the start of the investment period right through until death. This is used as an alternative to considering an optimal investment for accumulation which would be separate from the de-cumulation period.

The parameters and constraints, which are kept constant throughout the search, were defined in the Asset-Liability Model chapter. The key parameters and constraints for the search are that the young person starting work is aged 20 with an upper limit of life at 120. This gives an investment period of 100 years. The salary is locked at £20,000, which only accounts for inflation through the RPI. The drawdown is fixed at a constant income of 60% of final salary. The reasoning behind these values can also be found in the chapter on the Asset-Liability Model.

For consistency throughout, the Genetic Algorithm (GA) method shall be used as an initial search method under both models of the economy. The Dynamic Programming (DP) method shall then be applied for a final search of the space, from a number of starting points, for completeness. This provides consistent parameters, constraints and optimization methodology between the models.
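The risk metric that drives this search, quantiles of the simulated ruin ages with the median required to equal 96, can be sketched as follows. The nearest-rank quantile rule and the ten-value toy sample are illustrative assumptions standing in for the 5000 simulated fund lifetimes; this is not the dissertation's code.

```python
# Hedged sketch: computing the [5%, 10%, 25%, median] risk metric from a
# sample of simulated ruin ages (the age at which each fund path is exhausted).
def quantile(sorted_ages, q):
    """Nearest-rank quantile of a pre-sorted sample (illustrative convention)."""
    idx = min(len(sorted_ages) - 1, int(q * len(sorted_ages)))
    return sorted_ages[idx]

def risk_metric(ruin_ages):
    ages = sorted(ruin_ages)
    return [quantile(ages, q) for q in (0.05, 0.10, 0.25, 0.50)]

# Toy sample in place of the 5000 simulated fund lifetimes:
sample = [70, 72, 75, 78, 81, 96, 97, 98, 99, 120]
metric = risk_metric(sample)
# A strategy is acceptable when metric[-1] (the median) equals 96; among
# acceptable strategies, higher lower quantiles indicate a better strategy.
```

Comparing two strategies then reduces to checking the median constraint first and ranking the remaining candidates by their lower quantiles.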
When discussing the results, the 5%, 10%, 25% and median quantiles shall be given in the form [5%, 10%, 25%, median] throughout for consistency. Before the results and key observations are given, a brief overview of the exact application of each method shall be given.

The main process used to hone in on an optimal region for the full area is the GA method. This uses the principles discussed in the section on the Genetic Algorithm. As discussed in that chapter, the investment period was searched by moving through blocks of time. Starting with two large blocks of 50 years, the investment period was searched, providing general results and observations of the attributes of the space. The crossover process allowed unsuitable options to be discarded. These discarded options would only be considered in later blocks of time as mutations to search the space. The search continued to hone in on a region by reducing the block sizes until 10 year blocks were reached. Using observations from the previous time blocks meant that the optimal region was found without a computationally expensive search. The crossover technique was used to test whether different combinations were optimal. By adding in mutations of previously rejected investment options and combinations, the search of the space was extended to test areas that had been rejected. Using both these techniques under the GA method allowed for a thorough search of the investment period. This method was simple to run but honed in on a suitably optimal region.

The second method used was the DP method discussed in the section on Dynamic Programming. While it has been observed that the GA method provides a feasible optimal region, it may not provide the global optimum. Therefore, for completeness, it was decided that a second method should be applied to search the space. In order to apply the DP method, initial starting strategies had to be decided upon. It was viewed as prudent to start from a number of different extreme and sensible strategies. The starting strategies decided upon were 100% and a 50% split in each asset class, which provided six starting strategies, and then another two sensible strategies.
The sensible strategies were the optimal strategy found under the GA method and a basic strategy which held 100% in equity when the fund was at its smallest, moving to 100% in cash when the fund was largest. The DP backwards iteration solution method was applied to each of these strategies. If it was deemed acceptable, it would be run again from the strategy given by the first iteration. This allowed for an extensive search of the space, on top of what had been achieved under the GA method.

In order to test the optimality of an investment strategy, the strategy was run through the Asset-Liability Model. This produced quantiles of the fund's ruin with respect to the investment strategy and underlying model of the economy. All other variables of the model were kept constant as defined above. Optimality was determined by the risk metric, which stated that the median should equal 96 with the lower quantiles of
the fund as high as possible. This process was the same under both models of the economy. This allowed for consistency when checking the effect changing the model has on the optimal region, and how robust this region is.

Results under the Wilkie Model

This section shall provide the key insights, observations and results under the Wilkie Model. A full analysis of the process used to arrive at these results can be found in Appendix 4.

Optimal Region under Wilkie Model

This subsection will introduce the optimal region found under the Wilkie Model. A brief description of the region is given. Using the methods of optimization described, an optimal region was found under the Wilkie Model. This optimal region has the form:

- The first four decades contain 100% Equity.
- The fifth decade contains a 50% split between Equity and Cash.
- The sixth decade contains a 50% split between Equity and Cash or Equity and Bonds.
- The seventh decade contains a combination of Cash and Bonds.
- The last three decades contain 100% Equity or Bonds.

The optimal region allows for flexibility in the sixth and seventh decades. Any investment strategy of this form provided a suitably optimal strategy. There was also flexibility in the last three decades; it is discussed later in this section that these decades have little effect on the median. The region was found using the GA method. An exact strategy of this form was used when applying the DP method to perform further searches of the space. The results from using DP to further search the space showed that, while there were alternative strategies that met the median, they were suboptimal due to the fact that the quantiles were lower. The quantiles of 5%, 10%, 25% and 50% produced by the above strategy are [75, 78, 84, 96].

Key observations under Wilkie Model

This subsection covers the key observations and characteristics found while searching the investment period. There are a number of characteristics discussed in this section.
The main discussion of these characteristics can be found in the upcoming section,
discussion of results, which compares and contrasts the results found under each model.

The first characteristic noted was that, with a different criterion and risk metric, investing 100% in equity for the entire investment period could be considered optimal. This provided 5%, 10%, 25% and 50% quantiles of [76, 80, 89, 120] over the 5000 simulations of the fund. This was considered to be an upper limit on what the quantiles could achieve in this model. The reason behind this being considered an upper limit is given in the discussion section which follows. This median shows that at least 50% of the funds are active at age 120, which would imply these quantiles are as large as can be obtained. It is clear that, if the aim were to grow the fund as large as possible across the whole investment period, this is optimal. It reinforced the view that quantile matching would not be logical or possible when only considering altering the investment strategy: the upside risk of higher returns would be reduced through poor investments, which is suboptimal. As discussed in the Optimization chapter, the lower quantiles of death for a life alive at NRA were found to be [75, 81, 89, 96]. By comparing the quantiles, it is observed that the quantiles produced under this extreme strategy failed to meet the quantiles produced by the death of a life. Since this extreme strategy produces quantiles which are considered to be an upper bound, it implies that the quantiles of the fund will never meet those of death. This was important to note during the search, as an assumption was made that it was highly unlikely to match the quantiles while matching the median. This assumption was confirmed throughout the search, when the upper limit on quantiles was never reached in tandem with matching the median.

The next characteristic observed from the search was the suboptimal effect of not holding equity for the first four decades.
Working through the space using crossover and mutations highlighted this characteristic. Throughout the search of the investment period, it was quickly noted that it was optimal to hold equity over anything else in the early stages of the fund. A number of mutations were considered when using the GA method to search, but the majority were rejected since they caused the median to fall short of 96. Another result of holding anything other than 100% equity in the early stages was that the lower quantiles suffered. An example of this is holding 100% cash for the first decade while keeping the rest of the strategy as defined above. This reduced the quantiles being considered to [75, 77, 82, 92]. There were some investment options that could be held in the early years; however, doing so reduced the flexibility of the optimal regions. This is highlighted by holding a 50% split between cash and equity in the first decade. In order for the median to be met, more equity must be held at later times to compensate. By holding 100% equity in the sixth decade and a 50% split between cash and equity in the seventh decade to compensate,
the quantile values [75, 77, 83, 96] were produced. This shows that while the median is met, the lower quantiles are impacted. The region of strategies which meet the median under this criterion is also significantly reduced. This highlights that, while it is possible to find an optimal strategy without equity in the early stages, it impacts negatively on the optimal region. This characteristic was observed throughout the search.

A final characteristic observed was that the investment strategy in the later time periods has little impact on the median and quantiles being considered. It was observed that both equity and bonds provided similar results for the median and lower quantiles when considered for the last two decades of investment. This is due to the fact that by age 100, or time 80, a large proportion of the funds are depleted. As observed under the sensible investment strategy in Figure 4, approximately 70% of the funds are depleted. This means that any investment options considered in these final time periods only affect a small number of simulations. It is likely that the median and lower quantiles will mostly depend on funds which have run out by this time. It is also due to the fact that funds would be expected to be relatively small by this stage: there would have been 25 years' worth of drawdown income taken out, with only the investment returns accounting for accumulation. When the fund is at its largest, around retirement, it would be expected that suboptimal strategies have more impact than at the end. Therefore, just as it is important to only consider equity in the early years, when the fund is smallest and growing, the investment options in the last years carry much less weight with respect to the risks being considered. Observing this characteristic of the space allowed these areas to be locked and only considered through mutations in the GA search method.
While searching the space with DP, after a feasible optimal region had been found using the GA, the characteristics of the early and late years of the fund were further observed. This reiterated the previous observations that it was suboptimal not to hold equity in the early years and that the final couple of decades had little effect on the median. Using both methods provided a result which was considered to be the region of investments covering a global optimum.

Results under the Geometric Brownian Motion

This section will discuss the optimal region of investment under the Geometric Brownian Motion (GBM). It shall also examine the characteristics of the search space observed throughout the search. This section will not look to examine or discuss the differences and similarities between the models in detail; the majority of that discussion is held in the next section.
Optimal Region under GBM

This subsection will give a brief discussion of the optimal regions of investment under the GBM. Under the GBM the optimal region of investment was found not to be unique. There were two regions which each contained strategies of similar optimality. One of the regions was very similar to the region found under the Wilkie Model.

The first of the optimal regions has the form:

- The first five decades contain 100% Equity.
- The sixth and seventh decades contain a combination of Bonds and Cash.
- The last three decades contain 100% Equity or Bonds.

The second optimal region has the form:

- The first four decades contain 100% Equity.
- The fifth decade contains a 50% split between Equity and Cash.
- The sixth and seventh decades must contain some Equity, which can be in combination with Bonds or Cash.
- The eighth decade is flexible in that it can contain combinations or pure investments across Equity, Bonds and Cash.
- The last two decades contain 100% Equity or Bonds.

It was noticed that, while these regions are different, they exhibit some similar characteristics. Both show that it is optimal to hold equity in the early years of the fund, with differing degrees of flexibility in the middle years. They also both show that the final years' investments have similar characteristics. The first of the optimal regions has a narrower region of flexibility, but provides a slightly improved optimum with respect to the quantiles. Both of these regions provide suitably optimal strategies under the GBM. They were discovered using the GA approach. The quantiles produced are approximately [72, 74, 81, 96] for both regions.

Key observations under GBM

This subsection will cover the key characteristics of the investment space observed under the GBM. There are a number of characteristics considered; however, they shall not be compared to those found under the Wilkie Model.
This comparison will take place in the next section, discussion of results.
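Several of the characteristics below trace back to the GBM being driven by independent normal shocks. A minimal simulation sketch of such an equity total return index follows; the drift and volatility are the calibration figures quoted later in this chapter (mean return 7.39%, volatility 15.88%), used here purely illustratively, and the function name is hypothetical.

```python
# Minimal sketch of a GBM equity total return index driven by independent
# normal shocks (annual steps; parameters illustrative).
import math
import random

def simulate_gbm_index(years, mu=0.0739, sigma=0.1588, s0=100.0, seed=0):
    rng = random.Random(seed)
    path = [s0]
    for _ in range(years):
        z = rng.gauss(0.0, 1.0)  # a fresh, independent shock each year
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) + sigma * z))
    return path

path = simulate_gbm_index(100)   # one simulated 100-year index path
```

Because each shock is drawn independently, the index can reverse direction at any step, in contrast to the autoregressive structure of the Wilkie Model discussed later.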
The first characteristic examined, as before, was the effect of investing 100% in equity. This was examined to measure the quantiles and gauge a possible upper limit. As with the Wilkie Model, it was observed that the quantiles for 100% equity investment fell short of the quantiles of death. Under the GBM the quantiles were [72, 74, 83, 120]. These are not only less than the quantiles of death, but also fall short of the quantiles observed under the Wilkie Model. This was used to give a possible upper limit on the quantiles which could be obtained when searching the space. By comparing these quantiles to those produced in the optimal region, it is observed that the quantiles are easier to attain at lower levels. This is due to the constructional differences between the Wilkie Model and the GBM. Since the GBM is driven by independent normal shocks [Glasserman, 2003, p.79], the inter-quantile range is narrower, which allows the extreme quantiles to be more easily attained, as is evident here.

The next characteristic observed under the GBM was the impact on the median and quantiles of the investment in the early years of the fund. This is similar to the characteristic observed under the Wilkie Model. By considering alternative investments through the mutations in the GA search method, it was discovered that not holding equity in the first four decades was suboptimal and had a negative impact on the quantiles. This observation was reiterated in the DP search, where a number of singly optimal investment strategies were discovered. This meant there was no flexibility in the strategy to form a region of optimal strategies. It was generally observed that if a combination including equity was held at some point in the first three decades, a strict strategy had to be followed throughout to meet the criteria of the risk metric. In this context, a strict strategy means that a strict process has to be followed.
In this case it was to hold almost 100% equity throughout to account for this split in the early years. These strategies were noted, but discounted since there was no improvement over the optimal regions found. They were also discounted because the search space was rather naively constructed, with the restriction on investment options and the spread of investments. It seemed illogical to rely on a single, strict strategy where a region of strategies provided similar, if not better, optimality, with more flexibility and options to investigate the effects. Observing that it was optimal to hold only equity for the first section of the investment period allowed for a more focused search on the middle period, to hone in on the optimal region. The early time periods were only considered as mutations.

The final characteristic observed under the GBM is that the final time periods have a negligible impact on the median and lower quantiles. This is due to the same reason as discussed under the Wilkie Model. It would still be expected that as the strategy moves through the investment period, it is more likely that the fund is
depleted. This means that the final years will have less effect on the quantiles over the 5000 simulations.

Discussion of the Results

This section will compare and contrast the key characteristics, along with the differences and similarities observed under each model. The optimal regions shall also be compared.

Comparing the Quantiles and Processes of the models

This subsection will compare and contrast some of the general differences between the models. This shall use the extreme investment strategy of 100% equity that was considered under each model. Using this extreme strategy allows the quantiles to be examined along with the reasons behind the differences and similarities.

One of the first points to consider is the differences and similarities between the models observed under an extreme strategy. It was observed under the Wilkie Model that, with a different risk metric, having 100% invested in equity for the entire period could be considered optimal. This would provide the highest probability of having an income at age 120, on average across the 5000 simulations. This was also the case under the Geometric Brownian Motion (GBM). The graphs that this investment strategy produced are as follows.

Figure 7: Histograms of the two funds with 100% invested in equity for the entire time period under each model; as before, time represents the age of the pensioner
It is clear from Figure 7 that both models reach the same conclusion in this respect. Both show that across the 5000 simulations there is approximately a 55% chance that the fund will still have a positive value when the person is aged 120. This is clearly optimal if the fund as a whole were to be maximised; however, with respect to the criteria for this project it is not optimal. While both funds have a similar probability of being positive at age 120, the quantiles differ greatly. The 5%, 10%, 25% and median under each model highlight a noticeable difference. Under the Wilkie Model the quantiles are [76, 80, 89, 120] whereas under the GBM they are [72, 74, 83, 120]. It is clear that the lower quantiles under the GBM are smaller by a consistent amount than those in the Wilkie Model. This brings to light the underlying differences in the models.

Another point observed from the quantiles produced under this extreme investment strategy concerns the values observed. These higher quantiles were considered to be an upper limit of what was achievable. Due to the skew to the right, and the large amount of data held in the final bin, other quantiles could not surpass these. This view was reinforced while searching the space, when no strategies provided better quantiles. This meant that an upper limit could be applied to what was achievable for the quantiles in the search. Since optimality was being indicated in part by the quantiles, an indication of optimality could be determined by how close the quantiles were to this upper limit. As seen from the quantiles produced by the optimal region under the GBM, it is possible to match these. This would imply that it is the best optimal region under this search criterion.

Another observation of the quantiles was the consistency in the range of quantiles between the strategies under each model.
The range between the 5% and 25% quantiles under the Wilkie Model was 13 years, while the range under the GBM was 11 years, under the extreme equity strategy. Under both optimal regions the range was 9 years. Since the GBM provides a narrower range in the extreme strategy, achieving the same range in the optimal region shows the consistency of the optimal region between the different models.

As discussed in their respective chapters, the models are driven by different processes. The difference in processes is also evident in the graphs. Under the GBM, the years immediately after 65 are at much greater risk of running out of money than under the Wilkie Model. This is clearly the cause of the difference in quantiles, but to examine the cause of this, the accumulation period and driving process of each model must be discussed.
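The contrast between an autoregressive driving process and independent shocks can be illustrated numerically: an AR(1) series carries last period's value forward and so moves smoothly, while an independent series can flip direction every step. The persistence parameter phi = 0.9 below is illustrative, not a fitted Wilkie parameter.

```python
# Lag-1 autocorrelation of an AR(1) series versus independent normal shocks.
import random

def lag1_autocorr(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    return sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / var

rng = random.Random(42)
phi = 0.9                        # illustrative persistence
x = 0.0
ar_series, iid_series = [], []
for _ in range(10_000):
    x = phi * x + rng.gauss(0.0, 1.0)       # AR(1): depends on the last value
    ar_series.append(x)
    iid_series.append(rng.gauss(0.0, 1.0))  # independent shock each step

rho_ar = lag1_autocorr(ar_series)    # close to phi: smooth, trending paths
rho_iid = lag1_autocorr(iid_series)  # close to zero: direction can flip anytime
```

The high lag-1 autocorrelation of the AR(1) series is what produces the prolonged movements attributed to the Wilkie Model below, while the near-zero autocorrelation of the independent series mirrors the GBM's readiness to jump in either direction.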
As the models move through time, the returns are driven by different processes. The Wilkie Model, being driven by an Auto-Regressive (AR) time series, depends on the previous time step [Sahin et al., 2008], whereas the GBM is driven by independent normal shocks [Glasserman, 2003, p.79]. This fundamental difference will account for some of the higher probability that a fund runs out so soon after retirement under the GBM. Since they both have similar long term mean returns and volatilities, the AR process provides a smoother movement than the GBM. This means that if the Wilkie Model is producing an upward trend it is less likely than the GBM to suddenly jump down. The trends at specific points throughout the investment period can be observed in Figures 2 and 3 for the Wilkie Model and GBM respectively. A snapshot of how the equity total return index performs is given for when the fund is at its largest.

Figure 8: Snapshot of TRIS between years 35 and 55 of the investment period (age 55 to 75)

The retirement age, before drawdown occurs, is at time 45 (age 65). It is clear from this snapshot that, while the GBM is more volatile in its movements, it has a general upward trend. Work by Milevsky [Milevsky and Abaimova, 2006] shows that the sequence of returns is as important as the returns themselves. If a negative return is followed by a positive return while drawdown is being taken, it is much more damaging to the fund than the other way around. This snapshot of the smaller period highlights some of the key differences between the models that have been discussed. It is observed that the GBM is more likely to change direction, due to the independent normal shocks, whereas the Wilkie Model has more prolonged movements. By the time
of this snapshot the long term means have been achieved for both models, as indicated by the dashed yellow line. This is due to the construction and calibration of the GBM.

The work by Milevsky [Milevsky and Abaimova, 2006] looks at the sequencing of returns and the effect on drawdown pensions. This work is based upon a drawdown strategy from a fund at its largest. It looks at how the order of non-deterministic returns affects the ruin rate of the fund. The difference in the early years of retirement between the funds can come down to this sequencing and the difference in structure of the models. As discussed, the Wilkie Model has smoother movements from the AR process whereas the GBM is more likely to jump in different directions. The smooth effect of the AR process impacts ruin rate less than the jumps. As observed by Milevsky, a smooth average return of 8% provides a better ruin time than a process that jumps between larger values, while still attaining the 8%, if the first jump is negative. This must be considered in the context of drawdown, when a constant amount is being removed annually. This information implies that the underlying driving process of the model does have an effect on the outcome of an investment region.

Data shown in Appendix 2 also highlight a difference in the rate of returns and volatility in the early years of the fund under the different models. Using the equity total return index as an example, it can be seen that in the first decade of returns the Wilkie Model has an average return of 6.75% with a volatility of 13.82%, across the 5000 simulations. In order to produce the simple GBM model, the average of all 10 decades of returns and volatility has been used in the calibration of the equity index. Therefore, for all decades under the GBM, the mean return is 7.39% with a volatility of 15.88%.
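Milevsky's sequencing point can be made concrete with a toy drawdown calculation: two funds earn the same two returns in opposite orders while a fixed withdrawal is taken each year. The fund sizes, returns and withdrawal amount are invented for illustration only.

```python
# Toy arithmetic for the sequence-of-returns effect under constant drawdown.
def run(fund, returns, withdrawal=10.0):
    for r in returns:
        fund = fund * (1 + r) - withdrawal   # grow, then take the fixed income
    return fund

good_first = run(100.0, [0.20, -0.20])   # up year first, then down year
bad_first = run(100.0, [-0.20, 0.20])    # the same returns, order reversed

# good_first ends at 78.0 and bad_first at 74.0: identical average return,
# but the early loss while income is withdrawn leaves the fund behind for good.
```

This is the mechanism by which the GBM's readiness to jump downward just after retirement, when the fund is largest and drawdown has begun, depresses its lower quantiles relative to the smoother Wilkie paths.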
After the first decade the Wilkie Model comes close to a constant long-term average, with mean return 7.46% and volatility of 16.11%. This discrepancy between the models may contribute to the higher rate of funds running out of money under the GBM. As Milevsky discussed, the sequence of returns when the fund is at its largest, in the fifth decade, can have devastating impacts on the life of the fund and its ruin age. This is why it is most common to hold less volatile assets as a pension pot reaches maturity. The discrepancy in the first decade of the models will have a slight impact on the fund at its largest; the sequence of returns will also have an impact on ruin, along with the process which drives the models.
Comparing the Optimal region of Investment
The next area to compare is the optimal regions found under each model. It was seen that the optimal region of investment under the Wilkie Model was unique, whereas under the GBM it was not. It is important to note that one of the regions found under the GBM was similar to the one found under Wilkie, the only difference being that under the GBM the eighth period allowed for more flexibility. However, it had been noted in both models that investments towards the end of the period had little influence on optimality. As when looking at the quantiles for 100% investment in equity, the quantiles for the optimal regions were different as well. This can be explained by the processes driving the models: from observation, it is clear that the quantiles under the GBM will never reach as high a value as under the Wilkie Model. In order to compare the models, both optimal regions under the GBM will need to be considered. To begin with, a basic comparison will be carried out; then the quantiles produced from the strategies, given in the results for each model, shall be used. Comparing the actual strategies briefly: both had the first four decades in equity. For the fifth decade the Wilkie Model used a split of equity and cash, where the GBM remained fully invested in equity. Again, for the sixth decade the Wilkie Model used a split involving equity, where the GBM invested 100% in cash. For the seventh decade the Wilkie Model invested in a split between bonds and cash, whereas the GBM invested 100% in bonds. From the eighth decade onwards both models invested purely in any single asset, commonly equity or bonds. Even from this basic analysis it is clear that under both models it is optimal to hold pure equity for the first portion of the investment period. When searching the space under both models it was found that holding anything else impacted negatively on the quantiles; this was more pronounced under the GBM. It is also clear that towards the end of the investment period the investment options have little effect under either model, since the impact on optimality was negligible. Both models hone in on a similar area of the search space.
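Once the ALM emits one ruin age per simulation, quantile comparisons of this kind reduce to order statistics on that sample. A small sketch with synthetic data (the normal placeholder, the seed, and the probability levels are assumptions for illustration; the dissertation does not state which levels were used):

```python
import random

random.seed(42)
# Placeholder for ALM output: one ruin age per simulation, capped at the
# horizon of age 120 where the fund survives that long.
ruin_ages = sorted(min(random.gauss(82.0, 12.0), 120.0) for _ in range(5000))

def quantile(sorted_data, p):
    """Empirical quantile by nearest rank, for p in (0, 1)."""
    idx = min(int(p * len(sorted_data)), len(sorted_data) - 1)
    return sorted_data[idx]

levels = (0.25, 0.50, 0.75, 0.95)   # illustrative probability levels only
quantiles = [quantile(ruin_ages, p) for p in levels]
print([round(q, 1) for q in quantiles])
```

Vectors such as [75, 78, 84, 96] in the text can be read as the output of exactly this kind of calculation on the 5000 simulated ruin ages.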
The quantiles under Wilkie and GBM are [75, 78, 84, 96] and [72, 74, 81, 96] respectively. This gives a similar range between the respective quantiles under each model, as observed when looking at the values with 100% in equity. The quantiles under the GBM are approximately 3 or 4 years lower than those under the Wilkie Model. The reason behind this, as discussed previously, is the driving process of the models: the AR process pushes the quantile values up. Under the second optimal area of the GBM the quantiles are similar, with values [71, 74, 80, 96], which are again lower by the same margin.
Graphical comparison of the Optimal region
For further comparison of the strategies under the models, this subsection shall consider the graphs produced. This is an extension of the previous section, which focused on the form of the regions and the quantiles. The graphs produced by the
ALM under an optimal strategy from each region are given:
Figure 9: Histograms of the two funds with the optimum investment strategies under each model
Figure 9 shows a more emphasised version of the plots in Figure 7. It must be noted that, even while hitting the desired median, there is still over a 25% chance that the fund will provide income at age 120. The characteristics of the histograms differ slightly between the models. Under the Wilkie Model there is a gradual increase in funds running out of money between ages 70 and 78, where the most likely point (excluding age 120) is reached and the frequency levels off. There is a much sharper increase under the GBM, with a jump around age 67 and a rapid climb to about age 72, where it levels off. Both histograms show tails towards the older ages: after the initial burst of funds running out of money, the probability appears to decrease towards the older ages. This would imply that past a certain age there is a high probability that the fund will last until age 120, or even be self-financing through investment returns. Some crude tests with a deterministic model including the inflation index found that only an interest rate of 5.56% throughout the entire investment period would make the fund last until age 120. However, this comes back to the work of Milevsky, in that the sequence of returns impacts the fund. The shapes of the two histograms also provide some observations about the processes. It is clear that the AR process driving the Wilkie Model provides smoother ruin times for the funds. From observation it seems that the GBM is more evenly
spread, with a less pronounced centre. It also means that the GBM has slightly larger probabilities in the tails. The large right-hand tails under both models provide further evidence for the earlier point about the distribution of this data: it was earlier theorised that the data could be modelled using a Generalised Extreme Value distribution.
Conclusion of Comparison
There are a number of points to take away from the comparison of the different optimal regions of investment under the models. Firstly, from searching the space, it is clear that both models provide similar results and analysis in the early and late years of the strategies. There is consistency between the models in terms of quantiles: under the less sophisticated GBM, the quantiles will be consistently lower than those under the Wilkie Model, by similar margins. There could be some issues when focusing on the middle of the investment strategy, if detailed analysis were required. However, given the size of the space, both models lead to a similar optimal region. Both models produce the same general shapes of distribution, with the Wilkie Model providing a smoother, more elegant shape; both indicate a distribution with a large right-hand tail. The work by Milevsky [Milevsky and Abaimova, 2006] comes to the fore in some of the observations. It is observed that the driving process of the models can affect the investment strategy. Even though the models are calibrated to produce the same mean return, volatility and correlation, once they have been run through the ALM the driving process takes effect. The movement of the indices under the different models affects the ruin times of the funds under these different models. This can be explained by considering the sequence of returns on a drawdown pension fund.
In general, it is possible to find an investment region under the simplistic GBM which is similar to that under the Wilkie Model and satisfies the criteria. This region also produces a similar range of quantiles and has the same characteristics. However, the Wilkie Model does provide a more elegant model of the fund, which would allow for more detailed analysis and statistical results. Therefore, in order to hone in on an investment strategy and observe characteristics of the space, the highly calibrated simplistic model will provide just as good results as the more complex model.
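The crude deterministic check mentioned earlier — finding the flat rate at which a given pot just survives a fixed drawdown to age 120 — amounts to a one-dimensional root search, since survival is monotone in the rate. A sketch with wholly hypothetical figures (the pot, withdrawal and horizon are invented, so the rate found here is not the dissertation's 5.56%, and inflation is ignored):

```python
def years_survived(rate, fund, withdrawal, horizon):
    """Number of full annual withdrawals the fund supports at a flat rate."""
    for year in range(horizon):
        fund = fund * (1.0 + rate) - withdrawal   # grow, then withdraw
        if fund < 0.0:
            return year
    return horizon

def break_even_rate(fund, withdrawal, horizon, lo=0.0, hi=0.20, tol=1e-10):
    """Smallest flat rate at which the fund lasts the whole horizon (bisection)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if years_survived(mid, fund, withdrawal, horizon) >= horizon:
            hi = mid      # mid survives: the break-even rate is at or below mid
        else:
            lo = mid      # mid fails: the break-even rate is above mid
        if hi - lo < tol:
            break
    return hi

# Hypothetical example: a 500k pot drawn at 30k a year for 55 years (65 to 120).
rate = break_even_rate(500_000.0, 30_000.0, 55)
print(round(rate, 4))
```

Bisection is valid here because `years_survived` is non-decreasing in the rate: any rate above the break-even value survives the horizon, any rate below it fails.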
Splitting the Time
This chapter looks at the effect of splitting the period into accumulation and de-cumulation. Instead of considering the time from the start of the working lifetime until death, the optimal region shall now be searched by freezing one stage while searching the other, and vice versa. This is done using the simplistic Dynamic Programming (DP) method used as the second search method when considering the investment period as a whole. A more complete and complex methodology for examining the effect will be discussed. In order to carry out this search, some assumptions have been made. It was decided that when one stage was frozen, an optimal strategy from the previous search would be applied in that stage. In order to fully search the space of the side being considered, the DP method shall be applied, using the same methodology as when it served as the second search method over the space as a whole. This means it shall start from a number of different strategies, with the backwards iteration process applied to search through time. The initial strategies used are the same as before, with 6 of the strategies being 100% and 50% split investments in the asset classes throughout the entire time. An optimal strategy will be used to explore the surrounding space, and then a basic sensible strategy, holding equity when the fund is at its smallest and moving to cash as the fund reaches its largest. The search spaces will be from 0 to 45 for the accumulation period and from 45 to 100 for the de-cumulation period. These times are with respect to the investment period, not the age of the worker. As before, the investments will remain constant for decades, except for the 5-year periods around 45. Using these restrictions, and the same criteria for the worker as before, allowed for a basic investigation into the effect of splitting the area. The Wilkie Model was used as the underlying model of the economy.
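The freeze-one-half search can be sketched as a backward coordinate sweep over the free half while the pinned half's allocations stay fixed. Everything below is illustrative: the toy score function stands in for the ALM risk metric (median ruin age plus quantiles), and the asset labels and decade grid are simplified assumptions, not the dissertation's implementation.

```python
ASSETS = ("equity", "bonds", "cash")

def toy_score(strategy):
    """Placeholder for the ALM risk metric, purely so the sweep is runnable.

    Rewards equity while the fund is small (early decades) and bonds near
    the end of the period; it is not calibrated to anything real.
    """
    n = len(strategy)
    reward = {"equity": lambda t: n - t,
              "bonds": lambda t: t / 2.0,
              "cash": lambda t: 1.0}
    return sum(reward[asset](t) for t, asset in enumerate(strategy))

def sweep(frozen_half, free_half, frozen_first=True):
    """Backward coordinate sweep: improve the free half one decade at a time
    while the other half of the investment period stays pinned."""
    free = list(free_half)
    for t in reversed(range(len(free))):
        def full_strategy(candidate):
            trial = free[:t] + [candidate] + free[t + 1:]
            return frozen_half + trial if frozen_first else trial + frozen_half
        free[t] = max(ASSETS, key=lambda a: toy_score(full_strategy(a)))
    return free

# Freeze an accumulation half of four decades in equity; search de-cumulation.
accumulation = ["equity"] * 4
decumulation = sweep(accumulation, ["cash"] * 6, frozen_first=True)
print(decumulation)
```

Under the toy score the sweep moves the de-cumulation half from all cash to equity early and bonds late, mirroring the qualitative shape of the regions discussed in this chapter; the real search evaluates each candidate by running the ALM rather than a closed-form score.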
Results and Discussion
The optimal strategy used was the one defined previously under the Wilkie Model. For the accumulation period this meant the first four decades all in equity, moving to a 50% split between equity and cash for the last 5 years. The de-cumulation period started with a 50% split between equity and cash for the first 15 years before moving
into cash for one decade and then into bonds for the last three. To begin the search, the accumulation period was frozen while the de-cumulation period was searched. Starting from the optimal strategy and applying the DP methodology altered some of the investments but kept them within the optimal region previously defined. Most of the other initial strategies converged to sub-optimal strategies, meaning they met the median but the quantiles were too low; some did converge to optimal strategies within the previously defined region. This search indicated that there would be no impact from freezing the accumulation period to search the de-cumulation period with the risk metric kept constant. The next step was to freeze the de-cumulation period and alter the accumulation period. The same techniques were used with the same initial strategies. As before, the process converged to the optimal region. One anomaly was found: holding a 50% split of equity and cash in the first decade could be considered optimal, since there was no impact on the quantiles. However, as noted in the discussion of results when changing the model, the resulting investment strategy was strict, consisting of more equity; this was needed to produce higher returns to cover for the split in the first period. Therefore, this was rejected, since it reduced the flexibility of the region without improving the quantiles. Since there was no difference in splitting the investment region while still applying the same risk metric, this search reinforced previous observations about the early and late years of the period. While the fund was at its smallest, it was considered optimal to hold equity. This observation allowed for more flexibility in the investment strategy when the fund was at its largest. It was observed that if something other than equity was held at these times, the flexibility of the investment strategy at its largest was reduced, since more equity would generally have to be held.
The previous observation of flexibility in the final investment periods was also reinforced: when exploring the de-cumulation period, flexibility in the final three periods was observed.
Effect of the Risk Metric
This search has been performed using the same risk metric as previously defined: matching the median of the fund with the median age at death for a life alive at normal retirement age. This meant that the fund over the whole investment period still had to be considered, even though only one half was being altered.