2. CONTENTS
• Model building, verification and validation
• Verification of simulation models
• Calibration and validation of models
3. QUESTION BANK
• What is verification of a simulation model? List the suggestions given for
the verification of models
• Describe the three-step approach to validation by Naylor and Finger
• Explain validating input-output transformations with an example
• With a flow diagram, explain the transitional relationship among model
building, verification and validation
• What is the purpose of model verification? Explain how verification of models is
done
• Distinguish between verification and validation
4. INTRODUCTION
• One of the most important and difficult tasks facing a model developer is the verification
and validation of the simulation model
• The goals of the validation process are:
• To produce a model that represents true system behaviour closely enough for the model
to be used as a substitute for the actual system for the purpose of experimenting with the
system
• To increase to an acceptable level the credibility of the model, so that the model will be
used by managers and other decision makers.
• Validation is an integral part of model development.
5. INTRODUCTION
• The verification and validation process consists of the following components:
• Verification is concerned with building the model correctly
• i.e., concerned with building the model right
• It is the comparison of the conceptual model to the computer representation that
implements that conception
• Validation is concerned with building the correct model
• i.e., concerned with building the right model
• It confirms that the model is an accurate representation of the real system.
7. MODEL BUILDING, VERIFICATION &
VALIDATION
• First step: observe the real system and the interactions among its various
components, and collect data on their behaviour.
• Second step: construct a conceptual model, a collection of assumptions
about the components and the structure of the system, together with hypotheses about the
values of model input parameters.
• Third step: implement an operational model, using simulation software, by
incorporating the assumptions of the conceptual model into the world view and
concepts of the simulation software.
8. VERIFICATION OF SIMULATION
MODELS
• The purpose of model verification is to assure that the conceptual model is reflected
accurately in the operational model
• Verification asks the following question:
• Is the conceptual model accurately represented by the operational model?
• Many common-sense suggestions can be used in the verification process, such as:
9. VERIFICATION OF SIMULATION
MODELS
1. Have the code checked by someone other than the programmer.
2. Make a flow diagram which includes each logically possible action a system
can take when an event occurs, and follow the model logic for each action
for each event type.
3. Closely examine the model output for reasonableness under a variety of
settings of the input parameters. Have the code print out a wide variety of
output statistics.
4. Have the computerized model print the input parameters at the end of the
simulation, to be sure that these parameter values have not been changed
inadvertently (a sketch of suggestions 3 and 4 follows this list).
10. VERIFICATION OF SIMULATION
MODELS
5. Make the computer code as self-documenting as possible. Give a precise
definition of every variable used, and a general description of the purpose
of each major section of code.
6. If the operational model is animated, verify that what is seen in the
animation imitates the actual system.
7. Graphical interfaces are recommended for accomplishing verification and
validation
11. VERIFICATION OF SIMULATION
MODELS
8. The Interactive Run Controller (IRC), or debugger, is an essential component of
successful simulation model building. The IRC supports the following:
1. Errors made by the analyst can be found and corrected
2. The simulation can be monitored as it progresses
3. Attention can be focused on a particular entity, line of code, or procedure
4. Values of selected components can be observed
5. The simulation can be temporarily suspended, or paused
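• As an illustration of suggestions 3 and 4, the sketch below shows a hypothetical single-server queue model that prints several output statistics for a reasonableness check and echoes its input parameters at the end of the run; the parameter names and values are assumptions for illustration, not part of any particular simulation package.

```python
import random

def run_queue_model(arrival_rate, service_rate, num_customers, seed=1):
    """Simulate a simple single-server FIFO queue (illustrative model only)."""
    random.seed(seed)
    clock = 0.0            # arrival time of the current customer
    server_free_at = 0.0   # time at which the server next becomes idle
    total_delay = 0.0
    busy_time = 0.0
    for _ in range(num_customers):
        clock += random.expovariate(arrival_rate)      # next arrival
        start = max(clock, server_free_at)
        total_delay += start - clock                   # waiting time in queue
        service = random.expovariate(service_rate)
        busy_time += service
        server_free_at = start + service
    return {"avg_delay": total_delay / num_customers,
            "utilization": busy_time / server_free_at}

if __name__ == "__main__":
    params = {"arrival_rate": 0.8, "service_rate": 1.0, "num_customers": 10_000}
    stats = run_queue_model(**params)

    # Suggestion 3: print a variety of output statistics for a reasonableness check.
    print("Output statistics:", stats)

    # Suggestion 4: echo the input parameters at the end of the run, so the analyst
    # can confirm they were not changed inadvertently during the simulation.
    print("Input parameters :", params)
```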
12. VERIFICATION OF SIMULATION
MODELS
• Two sets of statistics that can give a quick indication of model reasonableness are
current contents and total count, as sketched below:
• Current contents
• Refers to the number of items in each component of the system at a given time.
• Total count
• Refers to the total number of items that have entered each component of the system by a
given time.
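• A minimal sketch of how these two statistics might be tracked per component; the component names and the event sequence are purely illustrative assumptions.

```python
from collections import defaultdict

# current_contents[c]: number of items in component c right now
# total_count[c]:      number of items that have ever entered component c
current_contents = defaultdict(int)
total_count = defaultdict(int)

def enter(component):
    current_contents[component] += 1
    total_count[component] += 1

def leave(component):
    current_contents[component] -= 1

# Hypothetical event sequence: two customers enter the queue, one moves to the server.
enter("queue"); enter("queue")
leave("queue"); enter("server")

print("Current contents:", dict(current_contents))  # queue: 1, server: 1
print("Total count     :", dict(total_count))       # queue: 2, server: 1
```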
13. VERIFICATION OF SIMULATION
MODELS
• Most simulation software has a built-in capability to conduct a trace without the
programmer having to do any extensive programming
• Some software allows a selective trace
• For example:
• A trace could be set for a specific location in the model, or could be triggered to begin at a
specified simulation time.
• Some software allows tracing a selected entity: any time the designated entity becomes
active, the trace is activated.
• The trace can also be set to turn on when a particular condition occurs, such as a queue
reaching a length of 5, as sketched below.
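• The idea of a condition-triggered selective trace can be sketched in plain code as below; real packages provide this as a built-in facility, so the threshold, state variables, and event loop here are illustrative assumptions only.

```python
import random

TRACE_THRESHOLD = 5      # condition that turns the trace on
trace_on = False

def maybe_trace(clock, queue_len, server_busy):
    """Print the model state once the triggering condition has occurred."""
    global trace_on
    if queue_len >= TRACE_THRESHOLD:
        trace_on = True
    if trace_on:
        print(f"t={clock:7.2f}  queue_len={queue_len}  server_busy={server_busy}")

# Hypothetical event loop: arrivals and departures at random times.
random.seed(42)
clock, queue_len, server_busy = 0.0, 0, False
for _ in range(50):
    clock += random.expovariate(1.0)
    if random.random() < 0.6:            # arrival
        if server_busy:
            queue_len += 1
        else:
            server_busy = True
    elif server_busy:                    # departure
        if queue_len > 0:
            queue_len -= 1
        else:
            server_busy = False
    maybe_trace(clock, queue_len, server_busy)
```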
14. DOCUMENTATION
• An important way to aid the verification process is documentation
• If the model builder writes brief comments in the operational model, plus definitions of
all variables and parameters, plus a description of each major section of the model, it
becomes much simpler for another model builder, or for the same model builder at a
later date, to analyse and verify the model logic.
• Of the three classes of techniques
• The common-sense techniques
• Traces
• Thorough documentation
• it is recommended that a modeler always carry out the first and third.
15. A SOPHISTICATED TECHNIQUE FOR
VERIFICATION: THE USE OF A “TRACE”
• A trace is a detailed computer printout that gives the value of every variable in a
computer program every time one of these variables changes in value
• A trace designed specifically for use in a simulation program gives the value of
selected variables each time the simulation clock is incremented
• A simulation trace is thus a detailed printout of the state of the simulation
model as it changes over time
16. CALIBRATION AND VALIDATION OF
MODELS
• Calibration and validation are distinct activities, but they are usually conducted
simultaneously by the modeler.
• Validation is the overall process of comparing the model and its behaviour to the real
system and its behaviour.
• Calibration is the iterative process of comparing the model to the real system,
making adjustments to the model, comparing the revised model to reality, making
additional adjustments, comparing again, and so on (see the sketch below).
• The following figure shows the relationship of model calibration to the overall validation
process
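• The iterative calibrate-and-compare loop can be sketched as follows; the tuned parameter, the tolerance, the adjustment rule, and the stand-in model function are assumptions for illustration, not a prescribed calibration procedure.

```python
def calibrate(model, observed_response, param, tolerance=0.05, max_iters=20):
    """Iteratively adjust one model parameter until the model response is
    acceptably close to the observed system response."""
    for _ in range(max_iters):
        model_response = model(param)
        error = model_response - observed_response
        if abs(error) <= tolerance * abs(observed_response):
            return param, model_response            # model judged acceptable
        # Crude adjustment rule (assumption): nudge the parameter against the error.
        param *= 1 - 0.5 * error / observed_response
    return param, model(param)

# Hypothetical model: mean delay grows with the service-time parameter.
observed_mean_delay = 4.2                           # measured on the real system
model = lambda service_time: 3.0 * service_time     # stand-in for a simulation run
param, response = calibrate(model, observed_mean_delay, param=1.0)
print(f"calibrated parameter = {param:.3f}, model response = {response:.3f}")
```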
18. VARIETY OF TESTS TO COMPARE THE
MODEL TO REALITY
• Subjective tests:
• involve people who are knowledgeable about one or more aspects of the system
making judgements about the model and its output.
• Objective tests:
• require data on the system’s behaviour, plus the corresponding data produced by the
model.
• Statistical tests:
• are performed to compare some aspect of the system data set with the same aspect of the
model data set (a sketch follows this slide).
• If unacceptable discrepancies between the model and the real system are discovered in
the final validation effort, the modeler must return to the calibration phase and modify
the model until it becomes acceptable.
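• As one example of an objective, statistical test, the sketch below compares average delays observed on the real system with average delays from independent model replications using a two-sample t-test; the data values are illustrative placeholders, not real measurements.

```python
from scipy import stats

# Hypothetical observations: average delay (minutes) measured on the real system
# on several days, and the same response from independent model replications.
system_delays = [4.1, 3.8, 4.5, 4.0, 4.3]
model_delays  = [4.4, 3.9, 4.2, 4.6, 4.1, 4.0]

# Two-sample t-test on the mean delay; a large p-value gives no evidence
# of a discrepancy between model and system on this response.
t_stat, p_value = stats.ttest_ind(system_delays, model_delays, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Discrepancy detected: return to the calibration phase.")
else:
    print("No evidence of model inadequacy on this response.")
```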
19. NAYLOR AND FINGER – THREE STEP
APPROACH
• Step 1: build a model that has high face validity
• Step 2: validate model assumptions.
• Step 3: compare the model’s input-output transformations
to corresponding input-output transformations for the
real system.
• The next five subsections explain these three steps
20. NAYLOR AND FINGER – THREE STEP
APPROACH
• Face validity
• Validation of model assumptions
• Validating input-output transformations
• Input-output validation:
• using historical input data
• using a Turing test
21. FACE VALIDITY
• The goal of the modeler is to construct a model that appears reasonable on its face to
model users and others who are knowledgeable about the real system being simulated.
• Potential users of the model should be involved in model construction from the
conceptualization stage to the implementation stage, so that a high degree of
realism is built in.
• Another advantage of having users involved is the increase in the model’s perceived
validity, or credibility, without which a manager would not be willing to trust
simulation results as a basis for decision making.
22. FACE VALIDITY
• Sensitivity analysis can be used to check a model’s face validity: the model user is
asked whether the model behaves in the expected way when one or more input variables
are changed, as sketched below.
• The model builder must attempt to choose the most critical input variables for
testing if it is too expensive or time consuming to vary all input variables.
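• A minimal sensitivity-analysis sketch: vary one input variable (here an assumed arrival rate) and check whether the response moves in the direction the model user expects; the closed-form M/M/1 delay formula stands in for an actual simulation run.

```python
def simulate_avg_delay(arrival_rate, service_rate=1.0):
    """Stand-in for a simulation run: M/M/1 steady-state mean wait in queue."""
    rho = arrival_rate / service_rate
    return rho / (service_rate * (1 - rho))   # valid only for rho < 1

# Increase the arrival rate and confirm the response behaves in the expected way
# (here: average delay should increase as the system becomes more heavily loaded).
for arrival_rate in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"arrival rate {arrival_rate:.1f} -> "
          f"avg delay {simulate_avg_delay(arrival_rate):.2f}")
```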
23. VALIDATION OF MODEL
ASSUMPTIONS
• There are two categories of model assumptions: structural assumptions and data
assumptions
• Structural assumptions
• involve questions of how the system operates and usually involve simplifications and
abstractions of reality
• E.g.: customer queueing and service facility in a bank:
• Customers can form one line, or there can be an individual line for each teller
• If there are many lines, customers could be served strictly in FIFO order, or some
customers may change lines if one line is moving faster
• The number of tellers could be fixed or variable
24. VALIDATION OF MODEL
ASSUMPTIONS
• Data assumptions
• are based on the collection of reliable data and correct statistical analysis of the data.
• E.g.:
• Interarrival times of customers during several 2-hour periods of peak loading
• Interarrival times during a slack period
• Service times for commercial accounts
• Service times for personal accounts
25. VALIDATION OF MODEL
ASSUMPTIONS
• Whether done manually or with special-purpose software, the analysis consists of three
steps:
1. Identify an appropriate probability distribution
2. Estimate the parameters of the hypothesized distribution
3. Validate the assumed statistical model by a goodness-of-fit test,
such as the chi-square or Kolmogorov-Smirnov (K-S) test, and by graphical methods
(see the sketch below).
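• The three steps can be illustrated with a hypothesized exponential distribution fitted to a sample of interarrival times and checked with a K-S test; the sample below is synthetic, and estimating the parameters from the same data makes the K-S p-value only approximate.

```python
import numpy as np
from scipy import stats

# Data: a synthetic sample standing in for observed interarrival times.
rng = np.random.default_rng(0)
interarrival_times = rng.exponential(scale=2.0, size=200)

# Step 1: identify an appropriate probability distribution (here: exponential).
# Step 2: estimate the parameters of the hypothesized distribution.
loc, scale = stats.expon.fit(interarrival_times, floc=0)   # fix location at 0

# Step 3: validate the assumed model with a goodness-of-fit (K-S) test.
ks_stat, p_value = stats.kstest(interarrival_times, "expon", args=(loc, scale))
print(f"estimated mean interarrival time = {scale:.3f}")
print(f"K-S statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```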
26. VALIDATING INPUT OUTPUT
TRANSFORMATIONS
• Validating input-output transformations is the ultimate test of the model
• The model accepts the values of the input parameters and transforms these inputs into
output measures of performance.
• Instead of validating the model by predicting the future, the modeler could use
historical data that have been reserved for the validation process
• The modeler should use the main responses of interest as the primary criteria for
validating the model.
• If the model is used later for a purpose different from its original purpose, the model
should be revalidated in terms of the new responses of interest under the new input
conditions
27. VALIDATING INPUT OUTPUT
TRANSFORMATIONS
• E.g.:
• In a queueing system, the responses may be server utilization and customer delay, and the
input condition may be the number of servers (a sketch follows this slide)
• In a production system, the response may be throughput, and the input condition may be
machines that run at different speeds
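• A minimal sketch of the input-output transformation for the queueing example: the model takes the input condition (arrival rate, number of servers) and produces the responses of interest (server utilization, average delay); the simple FIFO multi-server model below is an illustrative stand-in, not a validated bank model.

```python
import heapq
import random

def queue_model(arrival_rate, num_servers, mean_service=1.0, n=20_000, seed=7):
    """Input-output transformation: (arrival rate, number of servers) ->
    (server utilization, average delay). Illustrative FIFO multi-server model."""
    random.seed(seed)
    free_at = [0.0] * num_servers      # earliest time each server is next free
    heapq.heapify(free_at)
    clock = total_delay = busy_time = 0.0
    for _ in range(n):
        clock += random.expovariate(arrival_rate)
        earliest = heapq.heappop(free_at)
        start = max(clock, earliest)           # wait if all servers are busy
        total_delay += start - clock
        service = random.expovariate(1.0 / mean_service)
        busy_time += service
        heapq.heappush(free_at, start + service)
    return {"utilization": busy_time / (clock * num_servers),
            "avg_delay": total_delay / n}

# Compare the model responses for a given input condition with the responses
# observed on the real system under the same condition.
print(queue_model(arrival_rate=2.5, num_servers=3))
```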
28. VALIDATING INPUT OUTPUT
TRANSFORMATIONS
• If the proposed system is a modification of an existing system, the modeler hopes
that confidence in the model of the existing system can be transferred to the model
of the new system
• The possible changes range from minor to major:
• Minor changes of single numerical parameters, such as the speed of a machine, the
arrival rate of customers, or the number of servers
• Minor changes of the form of a statistical distribution, such as the distribution of a
service time or of a time to failure of a machine
• Major changes in the logical structure of a subsystem, such as a change in queue
discipline or a change in a scheduling rule
• Major changes involving a different design for the new system, such as a computerized
inventory control system replacing a non-computerized system
29. INPUT-OUTPUT VALIDATION- USING
HISTORICAL INPUT DATA
• To conduct validation based on historical data, the important point is that all the input
data and all the system response data, such as average delay, should be collected
during the same time period
• If they are not taken during the same time period, the comparison of model responses to
system responses could be misleading.
• Implementation of this technique is difficult for a large system, because it requires
collecting data simultaneously on all the input variables and on the response
variables of primary interest (a sketch of the comparison follows this slide).
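• A sketch of the comparison when historical input data are used: the model is driven with the inputs recorded in each historical period, and the model response for each period is compared with the system response from the same period using a paired t-test; the numbers below are illustrative placeholders, not real measurements.

```python
from scipy import stats

# Hypothetical results: for each historical period, the system response that was
# recorded and the model response obtained by driving the model with the input
# data collected during that same period.
system_response = [5.1, 4.7, 6.0, 5.4, 4.9]   # e.g. average delay per period
model_response  = [5.4, 4.5, 6.3, 5.2, 5.1]

# Paired t-test on the per-period differences; because each pair shares the same
# historical inputs, the comparison is made period by period.
t_stat, p_value = stats.ttest_rel(system_response, model_response)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```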
30. INPUT OUTPUT VALIDATION : USING
A TURING TEST
• When no statistical test is readily applicable, persons knowledgeable about system
behaviour can be used to compare model output to system output.
• Five reports of system performance over five different days are prepared, and
simulation output data are used to produce five fake reports
• All 10 reports should be in the same format
• They are randomly shuffled and given to an engineer, who is asked to decide which
reports are fake and which are real.
31. INPUT OUTPUT VALIDATION : USING
A TURING TEST
• If the engineer identifies a substantial number of the fake reports, the model builder
questions the engineer and uses the information gained to improve the model; otherwise,
the modeler concludes that this test provides no evidence of model inadequacy.
• This type of validation test is called a Turing test
• It is a valuable tool in detecting model inadequacies and, eventually, in increasing
model credibility as the model is improved and refined.
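• The mechanics of the Turing test can be sketched as follows; the report contents, the stand-in for the engineer's judgement, and the threshold for a "substantial" number of identified fakes are assumptions for illustration.

```python
import random

real_reports = [f"real report for day {d}" for d in range(1, 6)]
fake_reports = [f"simulated report for day {d}" for d in range(1, 6)]

# Shuffle the ten reports so the engineer cannot tell them apart by position.
reports = [(r, "real") for r in real_reports] + [(r, "fake") for r in fake_reports]
random.shuffle(reports)

# Stand-in for the engineer's judgement: here a random guess; in reality a
# knowledgeable person inspects each report and labels it.
guesses = [random.choice(["real", "fake"]) for _ in reports]

correctly_flagged_fakes = sum(
    1 for ((_, label), guess) in zip(reports, guesses)
    if label == "fake" and guess == "fake"
)
print(f"fake reports correctly identified: {correctly_flagged_fakes} of 5")
if correctly_flagged_fakes >= 4:       # assumed threshold for "substantial"
    print("Question the engineer and use the findings to improve the model.")
else:
    print("No evidence of model inadequacy from this test.")
```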