1. Model validation aims to identify and document model assumptions and limitations to help mitigate model risk arising from pricing models.
2. A robust model validation process focuses on assessing model assumptions, limitations, and potential impact given model usage.
3. Effective model validation is firmly embedded within a comprehensive model governance framework involving collaboration between model validation, model owners, risk managers, controls functions, and audit.
1. Model Risk & Validation
Determining The Expectations of a Model
Validation Function – Best Practice
CFP Events
QUANT RISK MANAGEMENT CONGRESS 2014
Presented by
London, October 7-8, 2014
Raphael Albrecht
Independent Validation Unit
Barclays
2. Disclaimer
All of the opinions expressed in this presentation are solely those of the speaker and should under no circumstances be taken to represent those of any bank, regulatory agency or other institution, financial or otherwise.
In particular, any views regarding “best practice of model validation” are personal opinions of the speaker and DO NOT necessarily comply with any particular internal or regulatory guidance – please consider them as idealised statements about a “model validation heaven”.
All of the quoted examples should be considered hypothetical.
3. Model Risk – a working Definition
Model Risk is the potential for adverse consequences (financial or other) from decisions based on incorrect or misused model outputs.
This can arise from fundamental model flaws leading to
inaccurate outputs, errors in implementation, or
incorrect/inappropriate use.
Quantitative Model Validation can help mitigate model risk in pricing models by identifying and documenting model assumptions and limitations.
4. Elements of Model Risk Governance
The Three Lines of Defence
1. Model Owner – Business, Developer (QA) and IT perform UAT Testing
2. Approver – Internal Control Functions
Model Validation: Confirms conceptual soundness and documents limitations
Market Risk Manager: Confirms risk representation is
adequate, if necessary defines RNIVs or Add-Ons
Product / Valuation Controls:
Mitigate limitations through reserves (FVAs and PVAs)
set up price testing (input & output testing), monitor input data and calibration performance
3. Audit
validates governance process and double-checks reviews
5. Desirable Aspects of Governance
The three lines should be independent of one another
Model owner (FO) needs to be incentivised to provide support to Model Validation (MV) during review
MV should be part of the trade approval process but not own it!
It makes more sense to approve trades rather than models
Trade Approval process should go far beyond what model review
can achieve
MV should avoid making any model-related recommendations as this would make them into part-owners of the models
Ideally, the model review process should be based on priorities,
objectives and procedures, not on deadlines
6. Scope of a Pricing Model Review?
Sophisticated pricing models that are used:
To value trades with exotic payoffs if they are marked to a model
To produce metrics (sensitivities) used by the internal VaR model or other risk metrics
For adjustments and reserves
In curve builds and vol-surface builds
Less likely candidates for a review:
Vanilla payoffs (vanilla swaps, simple options, CDS etc)
Industry-wide standard pricing models
Example: CDS swaptions priced on credit-adjusted Black
Note: Underlying (forward) credit curve build should still be subject to a review
Approximate booking (if the target model was either subject to a review or is not in scope)
Example: amortising CSO booked to standard CSO
Trader tools used only to estimate some model inputs (unless those tools are part of the pricing algorithm)
Example: cash flow profiles of mortgage-related securities generated using Intex from input
prepayment rates and indices
7. Key Objectives of a Model Review
To identify and document assumptions and limitations of the model used for pricing a given payout
Focus on adequacy of the risk representation rather than pricing accuracy
Prices of exotic derivatives are given by trader marks on key
input parameters
Any price (within a range) can be matched by shifting the inputs
The pricing range is model-dependent; particular values are not
Risk representation is determined by the quality and number of risk factors and their calibration – this is model-dependent
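The point about trader marks can be made concrete with a toy sketch. The example below is hypothetical and uses plain Black-Scholes as a stand-in for an exotic pricer: any target price inside the model's attainable range can be reproduced simply by shifting an input mark (here the volatility), so matching a price says little about the model itself.

```python
# Hypothetical sketch: any price within the attainable range can be matched
# by shifting an input (here: the Black-Scholes vol), illustrating why price
# agreement alone is weak evidence of model quality.
import math
from statistics import NormalDist

N = NormalDist().cdf

def black_scholes_call(spot, strike, rate, vol, expiry):
    """Standard Black-Scholes call price."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * expiry) / (vol * math.sqrt(expiry))
    d2 = d1 - vol * math.sqrt(expiry)
    return spot * N(d1) - strike * math.exp(-rate * expiry) * N(d2)

def implied_vol(target_price, spot, strike, rate, expiry, lo=1e-4, hi=5.0):
    """Bisection for the vol input that reproduces a given trader mark."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(spot, strike, rate, mid, expiry) < target_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

trader_mark = 10.0  # hypothetical consensus price for an ATM call
vol = implied_vol(trader_mark, spot=100, strike=100, rate=0.02, expiry=1.0)
reproduced = black_scholes_call(100, 100, 0.02, vol, 1.0)
print(f"vol input {vol:.4f} reproduces the mark: {reproduced:.6f}")
```

The same exercise works for any model with a monotone dependence on the shifted input, which is exactly why the review focus falls on the risk representation instead.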
8. When is a model ready for review?
1. Has the model been prioritised for review?
2. Is the model properly documented?
Is the theory fully documented with all reasoning (non-standard derivations) and detailed references to accessible sources?
Is the implementation fully documented?
Examples:
PDE solver scheme
Calibration routines
Integration and interpolation schemes
Random number generator (if relevant)
3. Is a testable implementation available?
Are interfaces to intermediate values available?
to facilitate implementation testing
Does the prototype allow easy scenario runs?
Classical horror scenario: command-line pricer with xml configuration-file
9. Theoretical Review
Identify assumptions and limitations of the model used for pricing/risking a given family of payouts
Are model assumptions adequate for the given payout?
Are all key risk factors for the payout being modelled? – discuss with the RM!
Example: 1F HJM IR model may be appropriate for range accruals but missing
de-correlation of different tenors might have a more significant impact for
spread options
Is the calibration set-up reasonable?
I.e. can we expect to get an adequate risk representation?
Investigate conceptual soundness of the model
Check consistency with other models used for a similar purpose
Does the model represent best practice among industry peers?
Counterexample: CDO pricing with Gaussian copula and constant recovery (past 2008)
Apply Occam’s razor: Is this the simplest solution to the given pricing
problem?
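The de-correlation point above can be illustrated with a deliberately crude sketch. Everything here is hypothetical and is not an HJM model: two rate moves are modelled as correlated Gaussians, and a spread option that is worthless under the perfect correlation a one-factor model imposes acquires value as soon as the tenors de-correlate.

```python
# Hypothetical toy example (not an HJM model): a one-factor model forces
# perfect correlation between tenors, which can materially understate the
# value of a spread option. Rate moves are modelled as joint Gaussians.
import math, random

def spread_option_mc(vol1, vol2, rho, strike=0.0, n_paths=100_000, seed=42):
    """MC value of max(move1 - move2 - K, 0) for jointly Gaussian rate moves."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        total += max(vol1 * z1 - vol2 * z2 - strike, 0.0)
    return total / n_paths

one_factor = spread_option_mc(0.01, 0.01, rho=1.0)   # tenors move in lockstep
multi_factor = spread_option_mc(0.01, 0.01, rho=0.8) # de-correlated tenors
print(f"rho=1.0: {one_factor:.6f}  rho=0.8: {multi_factor:.6f}")
```

With equal vols and rho = 1 the spread is identically zero, so the one-factor value vanishes while the de-correlated value does not; this is the kind of gap the Risk Manager discussion should surface.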
10. Testing Design
Philosophy: Establish results and let them speak for themselves!
Amount and detail of testing should be commensurate with the potential
model risk
Test design should follow standardised review templates
Design your standard templates and make them part of governance
Often, it makes sense to consolidate common, repetitive elements into a single doc referred to by other reviews
Model Reviews for workhorse models used for a number of payouts
Examples: N-Factor-HJM, LMM, LSV-engine
Focus on calibration and re-pricing of vanillas
Payout Reviews
Examples: various range-accrual types and spread options on N-Factor-HJM would
have separate reviews
Focus on payout implementation, behavioural tests & convergence
Curve /Vol Build Reviews
Focus on repricing input vanillas, build accuracy and stability
11. Implementation Testing
To verify the model is implemented in agreement with documentation
Standard procedure: comparison to independent implementation
(replica)
applied to all key pricing elements individually
Are different pricer versions used for various sub-tasks?
Example: Pricing on PDE, calibration on analytic approximation
If implementation is analytical, agreement should be to within
numerical accuracy
Otherwise it should be 100% clear that diffs are “small” and “random” (i.e. no systematic bias); in the worst case, argue by showing a regression
Benchmarking: if model is too complex to replicate (ex: general PDE solver) or if natural benchmark models are readily available and were previously reviewed
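A minimal replica test along these lines might look as follows. Both pricers are hypothetical stand-ins: the analytic Black-Scholes formula plays the production implementation and a Monte Carlo pricer the independent replica. Differences are examined across strikes to check they are small with no systematic bias.

```python
# Hypothetical replica test: an analytic pricer is compared to an independent
# Monte Carlo implementation; differences should be small across strikes and
# show no systematic bias.
import math, random
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(spot, strike, rate, vol, expiry):
    """Analytic pricer (stand-in for the production implementation)."""
    d1 = (math.log(spot/strike) + (rate + 0.5*vol**2)*expiry) / (vol*math.sqrt(expiry))
    return spot*N(d1) - strike*math.exp(-rate*expiry)*N(d1 - vol*math.sqrt(expiry))

def mc_call(spot, strike, rate, vol, expiry, n=200_000, seed=1):
    """Independent replica: lognormal terminal-value Monte Carlo."""
    rng = random.Random(seed)
    disc = math.exp(-rate*expiry)
    drift = (rate - 0.5*vol**2)*expiry
    total = 0.0
    for _ in range(n):
        st = spot*math.exp(drift + vol*math.sqrt(expiry)*rng.gauss(0, 1))
        total += max(st - strike, 0.0)
    return disc*total/n

diffs = [mc_call(100, k, 0.02, 0.25, 1.0) - bs_call(100, k, 0.02, 0.25, 1.0)
         for k in (80, 90, 100, 110, 120)]
print("replica minus analytic, by strike:", [f"{d:+.4f}" for d in diffs])
```

In practice each diff would be compared against the Monte Carlo standard error; a run of same-signed diffs across strikes would be the tell-tale of a systematic bias.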
12. Model and Calibration Analysis
To verify the model behaviour is as expected from theory
Impact on model and calibration when varying important input
parameters
Input parameters are market data (e.g., yield curve, vols, etc.) or model parameters (ex: IR-CR correlation in a hybrid model)
Stress tests – to see if and when the model breaks down/fails to calibrate when wide ranges of parameters are used
Note that it is not the primary intention of MV testing to determine precisely when the model will break (this is generally impossible!)
only if we see that it breaks would we like to investigate why this is the case and determine whether, in that particular case, this can be expected or not
ex. when a no-arbitrage bound is violated
depending on the specific model/calibration in question it may not be
possible/necessary to perform all of the above tests
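A parameter-stress harness of this kind can be sketched as follows. The example is hypothetical, again with Black-Scholes standing in for the model under review: the vol input is swept far beyond plausible values and every price is checked against the no-arbitrage bounds, recording any breakdown for later investigation.

```python
# Hypothetical stress sketch: sweep an input (vol) over a deliberately wide
# range and check the pricer stays within no-arbitrage bounds, recording any
# exception or bound violation rather than asserting where it "should" break.
import math
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(spot, strike, rate, vol, expiry):
    d1 = (math.log(spot/strike) + (rate + 0.5*vol**2)*expiry) / (vol*math.sqrt(expiry))
    return spot*N(d1) - strike*math.exp(-rate*expiry)*N(d1 - vol*math.sqrt(expiry))

def stress_vol(spot=100, strike=100, rate=0.02, expiry=1.0):
    """Return a list of (vol, reason) breakdowns over a wide vol sweep."""
    lower = max(spot - strike*math.exp(-rate*expiry), 0.0)  # no-arb lower bound
    failures = []
    for i in range(1, 100):
        vol = 0.05 * i  # 5% .. 495%: far beyond plausible market inputs
        try:
            price = bs_call(spot, strike, rate, vol, expiry)
        except (ValueError, ZeroDivisionError, OverflowError) as exc:
            failures.append((vol, f"exception: {exc}"))
            continue
        if not (lower - 1e-10 <= price <= spot + 1e-10):
            failures.append((vol, f"bound violated: {price}"))
    return failures

print("breakdowns:", stress_vol())
```

Black-Scholes is well-behaved and passes everywhere; for a real exotic model the interesting output is precisely the non-empty failure list, which the reviewer then investigates case by case.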
13. Testing related to Limitations and
Approximations
To investigate the impact of model- and implementation-related assumptions and limitations on pricing and risk for representative trades
Examples:
Impact of counter-intuitive model assumptions (ex. negative hazard rates) on pricing and sensitivities
Convergence of PV and sensitivities with number of MC runs, number of grid points in a lattice
Impact of using a simplified analytic pricer for calibration on pricing accuracy
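A convergence study of the first kind can be sketched as below. The pricer and path counts are hypothetical: the same option is repriced with increasing numbers of Monte Carlo paths, and the reported standard error falls roughly like one over the square root of the path count.

```python
# Hypothetical convergence sketch: reprice one option with increasing MC path
# counts; the standard error should shrink roughly like 1/sqrt(n).
import math, random
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(spot, strike, rate, vol, expiry):
    """Analytic reference value for the convergence comparison."""
    d1 = (math.log(spot/strike) + (rate + 0.5*vol**2)*expiry) / (vol*math.sqrt(expiry))
    return spot*N(d1) - strike*math.exp(-rate*expiry)*N(d1 - vol*math.sqrt(expiry))

def mc_call_with_se(n, spot=100, strike=100, rate=0.02, vol=0.25, expiry=1.0, seed=7):
    """MC price and its standard error for a European call."""
    rng = random.Random(seed)
    disc = math.exp(-rate*expiry)
    drift = (rate - 0.5*vol**2)*expiry
    total = total_sq = 0.0
    for _ in range(n):
        st = spot*math.exp(drift + vol*math.sqrt(expiry)*rng.gauss(0, 1))
        pay = max(st - strike, 0.0)
        total += pay
        total_sq += pay*pay
    mean = total/n
    var = max(total_sq/n - mean*mean, 0.0)
    return disc*mean, disc*math.sqrt(var/n)

analytic = bs_call(100, 100, 0.02, 0.25, 1.0)
for n in (1_000, 4_000, 16_000, 64_000):
    pv, se = mc_call_with_se(n)
    print(f"n={n:6d}  error={pv - analytic:+.4f}  std err={se:.4f}")
```

The same table would be produced for sensitivities and, on a lattice, for the number of grid points, with the review recording the path/grid counts at which results stabilise.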
14. Additional Testing
Fixed slot for any testing not fitting into any of the standard categories above
Possible tests would include:
Thought experiments – E. Mach’s “Gedankenexperiment”
Pricing model example: ad-hoc scenarios to show the effect of missing correlation
Comparison to alternative models (usually ad-hoc)
Consistency among various payoff-variants
Example:
CLN = CDS + recovery Digital + risky Zero
on risky credit curve build (in terms of PVs)
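A PV-consistency check of this kind can be verified numerically in a simplified, hypothetical variant of the identity above: on flat hazard and discount curves, a defaultable zero with recovery R must equal a zero-recovery risky zero plus a digital paying R at default, and the closed forms should agree with a direct integration to numerical accuracy.

```python
# Hypothetical consistency sketch in the spirit of the CLN decomposition:
# on flat curves, defaultable zero (recovery R) = zero-recovery risky zero
# + digital paying R at default. Closed forms vs direct integration.
import math

r, lam, R, T = 0.03, 0.02, 0.4, 5.0  # flat rate, hazard, recovery, maturity

# Components priced separately (closed form for flat curves)
risky_zero = math.exp(-(r + lam) * T)                       # 1 at T if no default
recovery_digital = R * lam / (r + lam) * (1 - math.exp(-(r + lam) * T))

# Defaultable zero with recovery priced directly (midpoint-rule integration
# of the default leg plus the survival-discounted principal)
n = 100_000
dt = T / n
direct = math.exp(-(r + lam) * T)  # survival leg
direct += sum(R * lam * math.exp(-(r + lam) * (i + 0.5) * dt) * dt for i in range(n))

print(f"decomposition {risky_zero + recovery_digital:.8f} vs direct {direct:.8f}")
```

The same pattern extends to the full CLN identity once the coupon leg is added; any residual beyond numerical accuracy points at an inconsistency in the risky curve build.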
15. The Write-Up – Elements of good style
Provide all relevant details in the model description, but try to be more concise than QA and avoid any verbatim repeats; refer to derivations or give them in an appendix
Be matter-of-fact and to the point!
Stick closely to the template for the given review type
workhorse model, payout, curve build, ...
Make sure
Front end – conclusions and executive summary are crystal-clear with extremely user-friendly wording
conclusions flow smoothly from testing observations and are worded in a matter-of-fact way
any limitations are referred to testing demonstrating potential impact where possible
wording and references are consistent with all other existing model reviews
Use internal peer review to sanity-check before circulating
16. After the Review is Completed
Walk your counterparts in Trading, Risk and PC through the conclusions of the review and note their responses/suggested mitigants
Desks might volunteer to run periodic tests (possibly inspired by your testing) for the sake of monitoring the impact of more critical limitations
measures are pre-agreed to be taken if impact estimate exceeds thresholds
Discuss with other control functions (Risk and PC) their suggestions for model-related Fair Value Adjustments and how those reflect your testing results
Again, those might be inspired by your testing
17. Conclusion
Model Validation Function can help mitigate model risk through
Following a robust model validation process focussed on Model Assumptions & Limitations and their potential impact in the context of its usage
MV process being firmly based within a well-established model governance process encompassing all stakeholders
Including required “fringe” measures (price testing, FVAs, RNIVs, Model Risk estimates) provided by other functions (PCG, RMs, other control functions and Audit)