Does Optimal Mean Best?
WHITE PAPER
J. Marczyk, Ph.D.
Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not
smooth, nor does lightning travel in a straight line.
Benoit Mandelbrot.
The understanding, assessment and management of risk and uncertainty is important not only
in engineering, but in all spheres of social life. Given that the complexity of man-made products,
and of the related manufacturing processes, is quickly increasing, these products are becoming more
and more exposed to risk, since complexity, in combination with uncertainty, inevitably leads
to fragility. Complex systems are characterized by a huge number of possible failure modes, and it
is a practical impossibility to analyze them all. The alternative, therefore, is to design systems that
are robust, i.e. that possess a built-in capacity to absorb both expected and unexpected random
variations of operational conditions without failing or compromising their function. This capacity
for resilience, the main characteristic of robust systems, is reflected in the fact that the system is no
longer optimal, a property that is linked to a single, precisely defined operational condition,
but remains acceptable (fit for its function) over a wide range of conditions. In fact, contrary to
popular belief, robustness and optimality are mutually exclusive. Complex systems are driven by
so many interacting variables, and are designed to operate over such wide ranges of conditions,
that their design must favor robustness, not optimality. In other words, robustness is equivalent
to an acceptable compromise, while optimality is synonymous with specialization. An optimal
system is no longer optimal as soon as a single variable changes, something quite possible in a world
of ubiquitous uncertainty. As the ancient Romans already knew, corruptio optimi pessima: when
something is perfect, it can only get worse. When you’re sitting on a peak, the only way is down;
when you’re optimal, your performance can only degrade. It is for this reason that optimal
systems are fragile. It is for this reason that a state of optimality is not the most probable state
of a system. Recently, I have tried to translate the above intuitions into something a bit more
analytical and technical. The result is the theorem below.
Theorem
Let $y = f(x) = x^2$ be a response surface with minimum at $x_0 = 0$. Let $x$ be a stochastic
variable with a uniform distribution, $p_X(x) = \frac{1}{b-a}$, with $a < x < b$, $a \geq 0$, and with
fixed width $h = b - a$. Let $p_Y(y)$ be the probability distribution function of $y$ and
$$H_y(a, b) = \int p_Y(y) \log p_Y(y)\,dy$$
the corresponding entropy. Then there exist positive values of $h$ such that $H(a, a+h)$ is minimum
for $a = 0$.
Proof
Since $y = x^2$, the PDF of the output is given by
$$p_Y(y) = p_X(x)\left|\frac{dx}{dy}\right| = \frac{1}{2(b-a)\sqrt{y}}$$
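As a quick numerical sanity check (my addition, not part of the original derivation), this transformed density can be compared against a Monte Carlo histogram; a minimal sketch assuming NumPy, with illustrative values $a = 0.5$, $h = 1$:

```python
import numpy as np

# Verify p_Y(y) = 1 / (2 (b - a) sqrt(y)) for y = x^2, x ~ U(a, b).
rng = np.random.default_rng(0)
a, b = 0.5, 1.5
x = rng.uniform(a, b, 1_000_000)
y = x**2

# Empirical density via a histogram over the support [a^2, b^2].
hist, edges = np.histogram(y, bins=50, range=(a**2, b**2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Analytical density from the change-of-variables formula.
p_y = 1.0 / (2.0 * (b - a) * np.sqrt(centers))

# The two agree to within Monte Carlo noise.
print(np.max(np.abs(hist - p_y)))
```

The maximum discrepancy is of the order of the sampling noise (well below 0.02 here), confirming the change-of-variables result.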
Taking into account that $b = a + h$, the entropy of $y$ is
$$H_y(a, b) = \int_{a^2}^{(a+h)^2} p_Y(y) \log p_Y(y)\,dy = \frac{2a^3\left(4 + 3\log\frac{1}{2ah}\right)}{9h} - \frac{2(a+h)^3\left(4 + 3\log\frac{1}{2h(a+h)}\right)}{9h}$$
It is easy to show that
$$\lim_{a \to 0} H(a, a+h) = \frac{8}{9}h^2 - \frac{2}{3}\log 2h^2 = H(0, h) > 0$$
It now remains to show that H(a, a + h) > H(0, h) for a > 0. To this end, let us compute the
Taylor series expansion of H(a, a + h) for a given h. Limiting the expansion to order two yields
$$H(a, a+h) \simeq (1 - 2\log 2h^2)\,a^2 + 2h(1 - \log 2h^2)\,a + \frac{8}{9}h^2 - \frac{2}{3}\log 2h^2 = (1 - 2\log 2h^2)\,a^2 + 2h(1 - \log 2h^2)\,a + H(0, h)$$
It now remains to show that the first two terms of the above expansion are positive for certain
values of $h$. Since $a > 0$, it suffices that both coefficients be positive. Indeed, one can show that
$$1 - 2\log 2h^2 > 0 \quad \text{for} \quad 0 < h < \frac{e^{1/4}}{\sqrt{2}} \simeq 0.907$$
$$2h(1 - \log 2h^2) > 0 \quad \text{for} \quad 0 < h < \sqrt{e/2} \simeq 1.166$$
Therefore, for $0 < h < e^{1/4}/\sqrt{2}$, both terms are positive and $H(0, h)$ is the lowest value of
the entropy for $a \geq 0$. Moreover, $H(0, h)$ is positive for all $h > 0$. This completes the proof. $\Box$
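The qualitative content of the theorem, namely that the entropy of the response is smallest when the input window sits at the minimum $a = 0$, can be checked by direct Monte Carlo estimation. The sketch below is my own addition and adopts the conventional minus sign in the entropy definition, $H = -\int p_Y \log p_Y\,dy$; the window width $h = 0.5$ and the sample size are illustrative choices:

```python
import numpy as np

def entropy_of_y(a, h, n=200_000):
    """Differential entropy H = -E[log p_Y(y)] for y = x^2, x ~ U(a, a+h),
    estimated by Monte Carlo using the change-of-variables density."""
    b = a + h
    rng = np.random.default_rng(1)
    y = rng.uniform(a, b, n) ** 2
    p_y = 1.0 / (2.0 * (b - a) * np.sqrt(y))  # p_Y(y) = 1 / (2 (b-a) sqrt(y))
    return -np.mean(np.log(p_y))              # Monte Carlo estimate of H

h = 0.5
hs = [entropy_of_y(a, h) for a in (0.0, 0.5, 1.0, 2.0)]
print(hs)  # entropy grows as the window moves away from the optimum a = 0
```

Running this shows the entropy increasing monotonically as the input window moves away from $a = 0$, i.e. the optimal position is indeed the minimum-entropy one.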
The implications of this simple theorem are very important. Entropy reflects the level of organization
of a system. However, by virtue of the second principle of thermodynamics, the entropy of
a closed system can only increase, reflecting the incessant urge of things towards lower levels
of organization. What the above theorem proves is that systems of the class in question, i.e.
systems whose behavior can be locally approximated by a second-order response surface (something
quite popular nowadays), are not willing to spend much time being optimal. In practice,
such systems will not privilege states of optimality, given that these correspond to states
of minimum entropy. Since entropy tends to increase, it will tend to remove the system from
its state of grace. Given the chance, a system with minimum entropy will try to increase it.
The important thing, however, is that the inevitable increase in entropy is more likely
when you are close to a minimum (or maximum). This is because in the vicinity of the extremal points
of a function, the entropy gradient is highest. It is also true that, no matter what state a
system is in, it will try to increase its entropy; even a robust system will. But it is for optimal
systems that this increase is more probable and more dramatic. The proof of this statement, which
I intentionally omit here, is based on the fact that the curvature of a function is highest in the
vicinity of a minimum (or maximum), and this translates into a higher skewness of $p_Y(y)$. It so happens
that skewness is a measure of entropy. In short, I believe the theorem explains why being optimal
is risky. Nature doesn’t privilege optimality at all. Self-organization, the main engine behind
the evolution of biospheres, prefers to favor fitness instead. But although omnis ars imitatio est
naturae (all art is imitation of nature), in twenty-first-century CAE it is still popular to pursue
placebo-generating states of numerical optimality with physically poor surrogates, i.e. response
surfaces. Clearly, the theorem can easily be extended to other distributions and to more general
classes of response surfaces, but I leave that to the academics.
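The skewness claim can likewise be illustrated numerically: with the same $y = x^2$ response, the output distribution is strongly skewed when the input window covers the minimum, and nearly symmetric far from it. A small sketch (my own illustration, assuming NumPy; the window positions are arbitrary):

```python
import numpy as np

def output_skewness(a, h, n=500_000):
    """Sample skewness of y = x^2 with x ~ U(a, a+h)."""
    rng = np.random.default_rng(2)
    y = rng.uniform(a, a + h, n) ** 2
    m = y.mean()
    return np.mean((y - m) ** 3) / np.std(y) ** 3  # standardized third moment

h = 0.5
print(output_skewness(0.0, h))  # window at the minimum: strongly right-skewed
print(output_skewness(2.0, h))  # window far from the minimum: nearly symmetric
```

The first value is large because the curvature of $x^2$ at its minimum compresses half of the input range into a small output range; away from the minimum the mapping is nearly linear and the uniform input passes through almost unchanged.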
Optimization is an example of the anthropocentric narcissism that characterizes our wasteful
society. It precludes comprehension, since it forces one’s mind into a very restricted portion of the
entire space. It gives no holistic view, since it is the fruit of reductionism, of the search for details
and of fragmentation. The danger behind the practice of optimization is that it reinforces and
propagates a Panglossian vision of life. Our ancestors were wiser than we are today. William of
Ockham said ”nunquam ponenda est pluralitas sine necessitate”, which in ordinary parlance means
”choose the simplest explanation for the observed facts”. CAE actually does the exact opposite.
Paraphernalia of modern algorithms are bound together in complex numerical cathedrals,
monuments to our black and white mathematics. Along the same lines, Gell-Mann states in [5]:
”Why are elegance and simplicity suitable criteria to apply in seeking to describe nature, especially
at the fundamental level? Science has made notable progress in elucidating the basic laws that
govern the behavior of all matter everywhere in the universe - the laws of the elementary particles
and their interactions, which are responsible for all the forces of nature. And it is well known that a
theory in elementary particle physics is more likely to be successful in describing and predicting
observations if it is simple and elegant. ... Need the description of the fundamental laws of
nature make use of mathematics as we understand the term, or is there some totally different way
of describing the same laws?” I believe that we need to review our math, and to make it more
”natural”. According to Heisenberg: ”What we observe is not nature itself, but nature exposed
to our method of questioning”. The computer is probably the most remarkable piece of machinery
conceived by mankind. Surprisingly, however, it has not contributed to any major scientific
discovery. Something is wrong. Maybe it is our black and white math.
Van Doren points out in [8] that ”Chaos has made us realize, looking back at the history
of science, how often we have oversimplified situations in the attempt to understand them”. In
CAE the parallel is as follows. We first build super-complex models with many elements and spend
hundreds of CPU hours, and then we kill the information thus obtained by throwing a response surface
on top. What is the sense of all this? What is the logic? I presume the logic reflects the general
character of the average human being. Humanity is characterized by an increasingly wasteful
existence: expensive in energy, and producing lots of garbage of all types, numerical garbage included.
The more detail we want, the more intrusive the process of getting information becomes, but we mustn’t
forget that knowledge can never be certain. So, let us summarize the salient characteristics and
disadvantages of optimization, and of the philosophy on which it thrives:
• Optimization is not a natural process. From a philosophical point of view, a method that
is not natural cannot be a good tool for understanding Nature, since it distorts and warps.
This lack of a natural flavor explains why so many techniques, and so many variants of each,
must exist: each problem is best attacked with a specific optimization algorithm. The desire
to optimize reflects a sort of anthropomorphic perversion of mankind.
• Optimization is expensive. The very existence of the curse of dimensionality supports the
claim that the method is artificial. If Nature acted based on optimization, the ”design” of
a system like a human being would require cosmological time-scales to complete. Evidently,
Nature does not ”know” the concepts of design variable or dimension.
• Optimization leads to fragile results. Optimization is indeed possible, and many people
pursue it. See, for example, our economy. Companies want the highest possible profit, in the
shortest possible time, with the smallest possible investment, with the smallest possible risk
and, possibly, with little or no R&D at all. This minimax approach sounds familiar, doesn’t
it? Of course, all this is possible, but the side effect is that the economy becomes fragile,
stock market crashes become more and more frequent, and the entire system becomes very
sensitive to ”butterfly effects”. Extremes are in general not very good.
• Optimization induces excessive optimism. The reason for this unjustified optimism is that
the optimal set is very small with respect to the acceptable set. Therefore, a system that
is optimal easily ”pops out” of the optimal corner of the design space and quickly occupies
states corresponding to lower-than-expected performance. What causes this popping out is
the fact that uncertainties exist. Given that an optimal system is, by definition, impossible
to improve, the only way it can evolve is towards lower performance. As the Romans
suggested, corruptio optimi pessima: what is optimal can only get worse.
• Optimal is the opposite of robust. A system can indeed be made optimal, but only for one
particular condition or function. If a design has to perform in an acceptable manner under
changing conditions, or in different environments, then a compromise is necessary: the
system is no longer optimal under each separate condition, but performs sufficiently well
under all conditions. In Nature this property is known as fitness, in engineering as robustness.
In effect, Nature makes designs that are fit for a function, not optimal. Sometimes, however,
excessively specialized designs do show up. Unfortunately, these are the first to become
extinct, given that their optimality for a certain environment precludes adaptation should
the environment change. As E.O. Wilson said, specialization is a tender trap of evolutionary
opportunism.
• Optimization promotes further fragmentation of CAE. The fact that the number of algorithms
is so high, and quickly increasing, favors further fractalization of CAE and deepens
its state of crisis. As T. Kuhn argued, a lack of new ideas in a discipline reflects a state of
crisis, in which minor variants of a certain paradigm are proposed and elaborated. This
proliferation inhibits innovation, given that the complexity and multitude of optimization
techniques displace interest from the problem to the method. This fact is also responsible
for the difficulty of disseminating and deploying optimization in industry: there are simply
not enough experts to cope with such complex techniques.
• Optimization is built on fragile grounds. Sampling of the design space is most often performed
with DOE, which is independent of the physics of the problem. Surrogate models
built on these physics-less tables of numbers therefore become weak, almost Byzantine
caricatures of reality. Moreover, once a surrogate model has been built, it can only deliver
(unwrap) what has been prepackaged into it. Early modeling carries the great danger of forcing
conclusions at the outset. In effect, smooth and differentiable response surfaces cannot show
anomalies, discontinuities, bifurcations or outliers, and these, unfortunately, account for a
huge chunk of physics. As the history of physics teaches, it is precisely through the study of
anomalies that the greatest advances have been made.
• Optimization is fragile. The results of optimization problems often depend on the method
chosen, the starting point, and the numerical conditioning of the associated numerical problem.
The skill in optimization lies in the ability to select the right combination of method,
starting point, stopping criteria, and the tuning of certain parameters. Most importantly,
however, optimal systems are hypersensitive to changes in parameters that have not been
included in the optimization process as design variables. This is the main shortcoming of
optimization as a philosophy of design: in fact, there will always be variables that are not
taken into account.
• Optimization is Panglossian. In effect, the desire to optimize is very much in line with the
Panglossian paradigm, according to which we live in the best of all possible worlds. Clearly,
due to the quantum nature of matter, such a claim is unfounded. If it were possible to start
evolution again, it would surely not follow the same path. It could also lead to a different
math from the one that we have built in ”this world”. In effect, nobody can guarantee that
our math is the best of all possible maths.
• A holy-grail optimization algorithm does not exist. Those who insist on searching for the
global optimum forget the existence of the famous NFL (No Free Lunch) theorems, which
state that, averaged over all possible problems, no optimization method is more efficient
than simple random search. This result is, in effect, a bit embarrassing, especially for those
who dedicate years of study to refining some esoteric optimization method. It all sounds a
bit like attempting to square the circle in the twenty-first century.
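The starting-point dependence noted in the ”Optimization is fragile” bullet above can be sketched with a toy two-well function: plain gradient descent lands in whichever basin contains its start (an illustrative example, not from the paper; the function and step size are invented):

```python
import numpy as np

# f(x) = x^4 - 2x^2 has two minima, at x = -1 and x = +1.
def grad(x):
    return 4 * x**3 - 4 * x  # f'(x)

def descend(x0, lr=0.01, steps=2000):
    """Plain gradient descent from starting point x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(round(descend(-0.5), 3))  # converges to the minimum at -1.0
print(round(descend(+0.5), 3))  # converges to the minimum at +1.0
```

Two starts a hair apart on either side of the ridge at $x = 0$ yield two different ”optimal” answers, which is exactly the method-and-start-point sensitivity the bullet describes.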
So what lies beyond optimization? Stochastic simulation, in the first place. The credibility, and
therefore the future, of CAE, and of computing in general, hinges on realistic models, which include
uncertainty, not on huge but simplistic and physically deficient surrogates. Once realistic
models are available, engineers should use them to achieve designs with acceptable but robust
performance, and not pursue delicate and expensive-to-find states of optimality. The science
is there. What we need is the right philosophy underneath. There is a general need for more
philosophy in science. We also need to be more aware of the consequences of our math. What
is needed are system-like studies, more holism, and less fragmentation, sophisticated teraflopism
and hair-splitting. The intellectual and commercial failure of CAE is due to a lack of a sense of
direction. CAE does to physics what humanity does to the ecosystem: people don’t understand
how the components interact, yet they manipulate the whole system. There is no sense of unity,
no solid roadmap, just fragmentation and futile refinement. No ethics. ”It is proved that
things cannot be other than they are, for since everything is made for a purpose, it follows that
everything is made for the best purpose”, as was sustained by Dr. Pangloss, Voltaire’s eternal
optimist. Gould and Lewontin, in a famous 1979 paper, argued that to study the natural world
under the assumption that it is optimally designed is the modern equivalent of subscribing to Dr.
Pangloss’ ridiculous world view. The advent of uncertainty in CAE is simply inevitable. There
is no way to stop it. Uncertainty will quickly erode optimization as people realize that the more
realistic models become, the less optimization algorithms work. The response surface method is an
intellectual balloon that is going to burst due to its thin argumentation and empty moral claims,
and under the crunching train of logic which is Monte Carlo simulation. Error communis facit
jus.¹
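The robust-versus-optimal trade-off argued throughout this paper can be made concrete with a toy Monte Carlo experiment: a design sitting on a narrow performance peak beats a design on a broad plateau on paper, but loses as soon as the design variable is perturbed. A hypothetical sketch (the performance function and noise level are invented for illustration):

```python
import numpy as np

# Toy performance landscape: a narrow peak at x = 0 (the "optimal" design)
# and a broad plateau near x = 3 (the "robust" design).
def performance(x):
    return 1.2 * np.exp(-(x / 0.1) ** 2) + 1.0 * np.exp(-((x - 3.0) / 1.0) ** 2)

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 0.3, 100_000)  # ubiquitous uncertainty on x

nominal_optimal = performance(0.0)            # 1.2: best on paper
nominal_robust = performance(3.0)             # 1.0: a compromise
mean_optimal = performance(0.0 + noise).mean()
mean_robust = performance(3.0 + noise).mean()

print(nominal_optimal > nominal_robust)       # True: the peak wins nominally
print(mean_robust > mean_optimal)             # True: the plateau wins under noise
```

The ranking inverts once uncertainty enters: the expected performance of the ”optimal” design collapses, while the robust one barely degrades. This is the essence of stochastic design improvement as opposed to optimization.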
References
[1] Marczyk, J., Principles of Simulation-Based Computer-Aided Engineering, FIM Publications,
Madrid, 1999.
[2] Wilson, E.O., The Diversity of Life, Penguin Books, 1992.
[3] Marczyk, J., editor, Computational Stochastic Mechanics in a Meta-Computing Perspective,
International Center for Numerical Methods in Engineering (CIMNE), Barcelona, December,
1997.
[4] Marczyk, J., Stochastic Design Improvement: Beyond Optimization, AIAA/NASA/USAF
Conference on Multi Disciplinary Optimization, Long Beach, USA, September 2000.
[5] Gell-Mann, M., The Quark And The Jaguar, W.H. Freeman and Company, New York, 1994.
[6] Marczyk, J., et al., Uncertainty Management in Automotive Crash: From Analysis to Simulation,
ASME 2000 Conference, Baltimore, USA, September 2000.
[7] Marczyk, J., Beyond Optimization In Computer-Aided Engineering, International Center for
Numerical Methods in Engineering (CIMNE), Barcelona, September, 2002.
[8] Van Doren, C., A History Of Knowledge, Past, Present And Future, Ballantine Books, New
York, 1991.
¹ Common error becomes law. Digestus

Weitere ähnliche Inhalte

Andere mochten auch

Trabajando con fotos y textos
Trabajando con fotos y textosTrabajando con fotos y textos
Trabajando con fotos y textosSilvana Fiel
 
OntoCare_DataSheet_v2010_OntoMed
OntoCare_DataSheet_v2010_OntoMedOntoCare_DataSheet_v2010_OntoMed
OntoCare_DataSheet_v2010_OntoMedJacek Marczyk
 
NAFEMS_Complexity_CAE
NAFEMS_Complexity_CAENAFEMS_Complexity_CAE
NAFEMS_Complexity_CAEJacek Marczyk
 
Priporočilo Leonardo - Jaka Cebe
Priporočilo Leonardo - Jaka CebePriporočilo Leonardo - Jaka Cebe
Priporočilo Leonardo - Jaka CebeJaka Cebe
 
The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia
The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia
The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia Gaurav Nikalje
 
The Oberoi Amarvilas, Agra
The Oberoi Amarvilas, AgraThe Oberoi Amarvilas, Agra
The Oberoi Amarvilas, AgraGaurav Nikalje
 

Andere mochten auch (12)

nafems_1999
nafems_1999nafems_1999
nafems_1999
 
Trabajando con fotos y textos
Trabajando con fotos y textosTrabajando con fotos y textos
Trabajando con fotos y textos
 
OntoCare_DataSheet_v2010_OntoMed
OntoCare_DataSheet_v2010_OntoMedOntoCare_DataSheet_v2010_OntoMed
OntoCare_DataSheet_v2010_OntoMed
 
Famous Quiz
Famous QuizFamous Quiz
Famous Quiz
 
CAD_Plus
CAD_PlusCAD_Plus
CAD_Plus
 
Teoria socio cultural
Teoria socio culturalTeoria socio cultural
Teoria socio cultural
 
NAFEMS_Complexity_CAE
NAFEMS_Complexity_CAENAFEMS_Complexity_CAE
NAFEMS_Complexity_CAE
 
Priporočilo Leonardo - Jaka Cebe
Priporočilo Leonardo - Jaka CebePriporočilo Leonardo - Jaka Cebe
Priporočilo Leonardo - Jaka Cebe
 
paper_ANEC_2010
paper_ANEC_2010paper_ANEC_2010
paper_ANEC_2010
 
The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia
The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia
The Oberoi, Bali - Luxury Hotels and Beach Resorts in Bali, Indonesia
 
The Oberoi Amarvilas, Agra
The Oberoi Amarvilas, AgraThe Oberoi Amarvilas, Agra
The Oberoi Amarvilas, Agra
 
Hilton Hotels
Hilton HotelsHilton Hotels
Hilton Hotels
 

Ähnlich wie Does Optimal Always Mean Best? Complex Systems Require Robustness Over Specialization

Toward a theory of chaos
Toward a theory of chaosToward a theory of chaos
Toward a theory of chaosSergio Zaina
 
Eliano Pessa
Eliano PessaEliano Pessa
Eliano Pessaagrilinea
 
Agroforestry Systems Complex or worse? by Clas Andersson, Dept. of Energy and...
Agroforestry SystemsComplex or worse? by Clas Andersson, Dept. of Energy and...Agroforestry SystemsComplex or worse? by Clas Andersson, Dept. of Energy and...
Agroforestry Systems Complex or worse? by Clas Andersson, Dept. of Energy and...SIANI
 
Chaos Theory: An Introduction
Chaos Theory: An IntroductionChaos Theory: An Introduction
Chaos Theory: An IntroductionAntha Ceorote
 
What Is Complexity Science? A View from Different Directions.pdf
What Is Complexity Science? A View from Different Directions.pdfWhat Is Complexity Science? A View from Different Directions.pdf
What Is Complexity Science? A View from Different Directions.pdfKizito Lubano
 
Why finding the TOE took so long v9
Why finding the TOE took so long v9Why finding the TOE took so long v9
Why finding the TOE took so long v9Scott S Gordon
 
TJ_Murphy_Epistemology_Final_Paper
TJ_Murphy_Epistemology_Final_PaperTJ_Murphy_Epistemology_Final_Paper
TJ_Murphy_Epistemology_Final_PaperTimothy J. Murphy
 
Nature-Inspired Mateheuristic Algorithms: Success and New Challenges
Nature-Inspired Mateheuristic Algorithms: Success and New Challenges  Nature-Inspired Mateheuristic Algorithms: Success and New Challenges
Nature-Inspired Mateheuristic Algorithms: Success and New Challenges Xin-She Yang
 
The theory of zero point energy of vacuum
The theory of zero point energy of vacuumThe theory of zero point energy of vacuum
The theory of zero point energy of vacuumAlexander Decker
 
Fundamental Characteristics of a Complex System
Fundamental Characteristics of a Complex SystemFundamental Characteristics of a Complex System
Fundamental Characteristics of a Complex Systemijtsrd
 

Ähnlich wie Does Optimal Always Mean Best? Complex Systems Require Robustness Over Specialization (20)

aiaamdo
aiaamdoaiaamdo
aiaamdo
 
Gradu.Final
Gradu.FinalGradu.Final
Gradu.Final
 
Toward a theory of chaos
Toward a theory of chaosToward a theory of chaos
Toward a theory of chaos
 
On theories
On theoriesOn theories
On theories
 
Chaos Theory
Chaos TheoryChaos Theory
Chaos Theory
 
Wolfram 1
Wolfram 1Wolfram 1
Wolfram 1
 
Skepticism
SkepticismSkepticism
Skepticism
 
Eliano Pessa
Eliano PessaEliano Pessa
Eliano Pessa
 
Agroforestry Systems Complex or worse? by Clas Andersson, Dept. of Energy and...
Agroforestry SystemsComplex or worse? by Clas Andersson, Dept. of Energy and...Agroforestry SystemsComplex or worse? by Clas Andersson, Dept. of Energy and...
Agroforestry Systems Complex or worse? by Clas Andersson, Dept. of Energy and...
 
Chaos Theory: An Introduction
Chaos Theory: An IntroductionChaos Theory: An Introduction
Chaos Theory: An Introduction
 
What Is Complexity Science? A View from Different Directions.pdf
What Is Complexity Science? A View from Different Directions.pdfWhat Is Complexity Science? A View from Different Directions.pdf
What Is Complexity Science? A View from Different Directions.pdf
 
Why finding the TOE took so long v9
Why finding the TOE took so long v9Why finding the TOE took so long v9
Why finding the TOE took so long v9
 
Academic Course: 04 Introduction to complex systems and agent based modeling
Academic Course: 04 Introduction to complex systems and agent based modelingAcademic Course: 04 Introduction to complex systems and agent based modeling
Academic Course: 04 Introduction to complex systems and agent based modeling
 
Applied Science - Engineering Systems
Applied Science - Engineering SystemsApplied Science - Engineering Systems
Applied Science - Engineering Systems
 
Waterloo.ppt
Waterloo.pptWaterloo.ppt
Waterloo.ppt
 
TJ_Murphy_Epistemology_Final_Paper
TJ_Murphy_Epistemology_Final_PaperTJ_Murphy_Epistemology_Final_Paper
TJ_Murphy_Epistemology_Final_Paper
 
Nature-Inspired Mateheuristic Algorithms: Success and New Challenges
Nature-Inspired Mateheuristic Algorithms: Success and New Challenges  Nature-Inspired Mateheuristic Algorithms: Success and New Challenges
Nature-Inspired Mateheuristic Algorithms: Success and New Challenges
 
Prolegomena 4 0
Prolegomena 4 0Prolegomena 4 0
Prolegomena 4 0
 
The theory of zero point energy of vacuum
The theory of zero point energy of vacuumThe theory of zero point energy of vacuum
The theory of zero point energy of vacuum
 
Fundamental Characteristics of a Complex System
Fundamental Characteristics of a Complex SystemFundamental Characteristics of a Complex System
Fundamental Characteristics of a Complex System
 

Mehr von Jacek Marczyk

Mehr von Jacek Marczyk (8)

HBRP Complexity
HBRP ComplexityHBRP Complexity
HBRP Complexity
 
Articolo_ABI_v1
Articolo_ABI_v1Articolo_ABI_v1
Articolo_ABI_v1
 
INVESTIRE_Rating
INVESTIRE_RatingINVESTIRE_Rating
INVESTIRE_Rating
 
COSMOS_Data_Sheet
COSMOS_Data_SheetCOSMOS_Data_Sheet
COSMOS_Data_Sheet
 
USA_ISR_Poster
USA_ISR_PosterUSA_ISR_Poster
USA_ISR_Poster
 
Engineering_Mag_Toulouse
Engineering_Mag_ToulouseEngineering_Mag_Toulouse
Engineering_Mag_Toulouse
 
pam_1997
pam_1997pam_1997
pam_1997
 
NW_Complexity_article
NW_Complexity_articleNW_Complexity_article
NW_Complexity_article
 

Does Optimal Always Mean Best? Complex Systems Require Robustness Over Specialization

  • 1. Does Optimal Mean Best? WHITE PAPER J. Marczyk, Ph.D. Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line. Benoit Mandelbrot. The understanding, assessment and management of risk and uncertainty is important not only in engineering, but in all spheres of social life. Given that the complexity of man-made products, and the related manufacturing processes, is quickly increasing, these products are becoming more and more exposed to risk, given that complexity, in combination with uncertainty, inevitably leads to fragility. Complex systems are characterized by a huge number of possible failure modes and it is a practical impossibility to analyze them all. Therefore, the alternative is to design systems that are robust, i.e. that possess built-in capacity to absorb both expected and unexpected random variations of operational conditions, without failing or compromising their function. This capacity of resilience, main characteristic of robust systems, is reflected in the fact that the system is no longer optimal, a property that is linked to a single and precisely defined operational condition, but results acceptable (fit for the function) in a wide range of conditions. In fact, contrary to popular belief, robustness and optimality are mutually exclusive. Complex systems are driven by so many interacting variables, and are designed to operate over such wide ranges of conditions, that their design must favor robustness and not optimality. In other words, robustness is equiva- lent to an acceptable compromise, while optimality is synonymous to specialization. An optimal system is no longer such as soon as a single variable changes - something quite possible in a world of ubiquitous uncertainty. As the ancient Romans already knew, corruptio optimi pessima - when something is perfect, it can only get worse. 
When you’re sitting on a peak, the only way is down - when you’re optimal, your performance can only degrade. It is for this reason, that optimal systems are fragile. It is for this reason that a state of optimality is not the most probable state of a system. Recently, I have tried to translate the above intuitions into something a bit more analytical and technical. The result is the theorem below. Theorem Let y = f(x) = x2 be a response surface with minimum at x0 = 0. Let x be a stochastic vari- able, with a uniform distribution, pX (x) = 1 b−a , with a < x < b, a ≥ 0 and with fixed width h = b − a. Let pY (y) be the probability distribution function of y and Hy(a, b) = R pY (y) log pY (y)dy the corresponding entropy. Then, there exist positive values of h such that H(a, a+h) is minimum for a = 0. Proof Since y = x2 , the PDF of the output is given by fY (y) = fX (y) dx dy = 1 2(b − a) √ y Taking into account that b = a + h, the entropy of y is 1
  • 2. Hy(a, b) = Z (a+h)2 a2 pY (y) log pY (y)dy = = 2a3 ¡ 4 + 3 log( 1 2 a h ) ¢ 9 h − 2 (a + h) 3 ³ 4 + 3 log( 1 2 h (a+h) ) ´ 9 h It is easy to show that lim a−>0 H(a, a + h) = 8 9 h2 − 2 3 log 2h2 = H(0, h) > 0 It now remains to show that H(a, a + h) > H(0, h) for a > 0. To this end, let us compute the Taylor series expansion of H(a, a + h) for a given h. Limiting the expansion to order two yields H(a, a + h) ' (1 − 2 log 2h2 )a2 + 2h(1 − log 2h2 )a + 8 9 h2 − 2 3 log 2h2 = = (1 − 2 log 2h2 )a2 + 2h(1 − log 2h2 )a + H(0, h) It now remains to show that the first two terms of the above equation are positive for certain values of h. Now, since a > 0, each of the two terms must be positive. Indeed, one can show that 1 − 2 log 2h2 > 0 for 0 < h < e1/4 √ 2 ' 0.907 2h(1 − log 2h2 ) > 0 for 0 < h < p e/2 ' 1.166 Therefore, for 0 < h < e1/4 √ 2 , both the terms are positive and H(0, h) is the lowest value of entropy for a ≥ 0. Moreover, H(0, h) is positive for all h > 0. This completes the proof. 2 The implications of this simple theorem are very important. Entropy reflects the level of orga- nization of a system. However, in virtue of the second principle of thermodynamics, the entropy of a closed system tends only to increase, reflecting the incessant urge of things towards lower levels of organization. What the above theorem proves is that for the class of systems in question, i.e. systems whose behavior can be locally approximated by a second-order response surface (some- thing quite popular nowadays) are not willing to spend much time being optimal. In practice, such systems will not privilege states of optimality, given the fact that these correspond to states of minimum entropy. Since entropy tends to increase, this will tend to remove the system from its state of grace. Given the chance, a system with minimum entropy, will try to increase it. 
The important thing, however, is the fact that the inevitable increase in entropy is more likely when you’re close to a minimum (maximum). This is because in the vicinity of extremal points of a function, the entropy gradient is the highest. It is also true that no matter what state a system is in, it will try to increase its entropy - even a robust systems will. But it is for optimal systems that this increase is more probable and dramatic. The proof of this statement, which I intentionally omit here, is based on the fact that the curvature of a function is highest in the vicinity of a minimum (maximum) and this translates to a higher skewness of pY (y). It so happens that skewness is a measure of entropy. In short, I believe the theorem explains why being optimal is risky. Nature doesn’t privilege optimality at all. Self-organization - the main engine behind the evolution of biospheres - prefers to favor fitness instead. But although omnis ars imitatio est naturae - all arts are imitation of nature - in twenty first century CAE it is still popular to pursue placebo-generating states of numerical optimality with physically poor surrogates, i.e. response surfaces. Clearly, the theorem can be easily extended to other distributions and other more general classes of response surfaces, but I leave that to the academics. 2
Optimization is an example of the anthropocentric narcissism that characterizes our wasteful society. It precludes comprehension, since it forces one's mind into a very restricted portion of the entire space. It gives no holistic view, since it is the fruit of reductionism, of the search for detail, and of fragmentation. The danger behind the practice of optimization is that it reinforces and propagates a Panglossian vision of life. Our ancestors were wiser than we are today. William of Occam said "nunquam ponenda est pluralitas sine necessitate", which in ordinary parlance means "choose the simplest explanation for the observed facts". CAE actually does the exact opposite. Paraphernalia of modern algorithms are bound together in complex numerical cathedrals, monuments to our black-and-white mathematics. Along the same lines, Gell-Mann states in [5]: "Why are elegance and simplicity suitable criteria to apply in seeking to describe nature, especially at the fundamental level? Science has made notable progress in elucidating the basic laws that govern the behavior of all matter everywhere in the universe - the laws of the elementary particles and their interactions, which are responsible for all the forces of nature. And it is well known that a theory in elementary particle physics is more likely to be successful in describing and predicting observations if it is simple and elegant. ... Need the description of the fundamental laws of nature make use of mathematics as we understand the term, or is there some totally different way of describing the same laws?" I believe that we need to review our math, and to make it more "natural". According to Heisenberg: "What we observe is not nature itself, but nature exposed to our method of questioning". The computer is probably the most remarkable piece of machinery conceived by mankind. However, surprisingly, it has not contributed to any major scientific discovery. Something is wrong. Maybe it is our black-and-white math.
Van Doren points out in [8] that "Chaos has made us realize, looking back at the history of science, how often we have oversimplified situations in the attempt to understand them". In CAE the parallel is as follows. We first build super-complex models with many elements and spend hundreds of CPU hours, and then we kill the information thus obtained by throwing a response surface on top of it. What is the sense of all this? What is the logic? I presume the logic reflects the general character of the average human being. Humanity is characterized by an increasingly wasteful existence: expensive in energy and producing lots of garbage of all types, numerical garbage included. The more detail we want, the more intrusive the process of getting information becomes, but we mustn't forget that knowledge can never be certain. So, let us summarize the salient characteristics and disadvantages of optimization, and of the philosophy on which it thrives:

• Optimization is not a natural process. From a philosophical point of view, a method that is not natural cannot be used as a good tool to understand Nature, since it distorts and warps. This lack of a natural flavor explains why so many techniques and respective variants must exist: each problem is best attacked with a specific optimization algorithm. The desire to optimize reflects a sort of anthropomorphic perversion of mankind.

• Optimization is expensive. The very existence of the curse of dimensionality sustains the claim of the artificial character of the method. If Nature acted based on optimization, the "design" of a system like a human being would be a task requiring cosmological time-scales to complete. Evidently, Nature does not "know" the concepts of design variable or dimension.

• Optimization leads to fragile results. Optimization is indeed possible, and many people pursue it. See for example our economy.
Companies want the highest possible profit, in the shortest possible time, with the smallest possible investment, with the smallest possible risk and, possibly, with little or no R&D at all. This minimax approach sounds familiar, doesn't it? Of course, all this is possible, but the side effect is that the economy becomes fragile, stock market crashes become more and more frequent, and the entire system becomes very sensitive to "butterfly effects". Extremes are in general not very good.

• Optimization induces excessive optimism. The reason for this unjustified optimism is that the size of the optimal set is very small with respect to that of the acceptable set. Therefore,
a system that is optimal easily "pops out" of the optimal corner of the design space and quickly occupies states corresponding to lower-than-expected performance. What causes this popping out is the fact that uncertainties exist. Given that a system that is optimal is, by definition, impossible to improve, the only way it can evolve is towards lower performance. As the Romans suggested, corruptio optimi pessima: what is optimal can only get worse.

• Optimal is the opposite of robust. A system can indeed be made optimal, but only for one particular condition or function. If a certain design has to perform in an acceptable manner under changing conditions, or in different environments, then a compromise is necessary. The system is no longer optimal under each separate condition, but performs sufficiently well under all conditions. In Nature this property is known as fitness, in engineering as robustness. In effect, Nature makes designs that are fit for a function, not optimal. Sometimes, however, excessively specialized designs do show up. Unfortunately, these are the first to become extinct, given that their optimality for a certain environment precludes adaptation should the environment change. As E.O. Wilson said, specialization is a tender trap of evolutionary opportunism.

• Optimization promotes further fragmentation of CAE. The fact that the number of algorithms is so high, and quickly increasing, favors further fractalization of CAE and deepens its state of crisis. As T. Kuhn argued, a lack of new ideas in a certain discipline reflects a state of crisis, in which minor variants of a certain paradigm are proposed and elaborated. This proliferation inhibits innovation, given that the complexity and multitude of optimization techniques displaces interest from the problem to the method. This fact is also responsible for the difficulty in disseminating and deploying optimization in industry.
There are simply not enough experts in industry to cope with such complex techniques.

• Optimization is built on fragile grounds. Sampling of the design space is most often performed with DOE, which is independent of the physics of the problem. Surrogate models built on these physics-less tables of numbers therefore become weak, almost Byzantine caricatures of reality. Moreover, once a surrogate model has been built, it can only deliver (unwrap) what has been pre-packaged into it. Early modeling carries the great danger of forcing conclusions at the outset. In effect, smooth and differentiable response surfaces cannot show anomalies, discontinuities, bifurcations or outliers, and these, unfortunately, account for a huge chunk of physics. As the history of physics teaches, it is precisely through the study of anomalies that the greatest advances have been made.

• Optimization is fragile. The results of optimization problems often depend on the method chosen, on the starting point, and on the numerical conditioning of the associated numerical problem. The skill in optimization lies in the ability to select the right combination of method, starting point and stopping criteria, and in the tuning of certain parameters. Most importantly, however, optimal systems are hypersensitive to changes in parameters that have not been included in the optimization process as design variables. This is the main shortcoming of optimization as a philosophy of design. In fact, there will always be variables that are not taken into account.

• Optimization is Panglossian. In effect, the desire to optimize is very much in line with the Panglossian paradigm, according to which we live in the best of all possible worlds. Clearly, due to the quantum nature of matter, such a claim is unfounded. If it were possible to start evolution again, it would surely not follow the same path. It could also lead to a different math from the one that we have built in "this world".
In effect, nobody can guarantee that our math is the best of all possible maths.

• A holy-grail optimization algorithm does not exist. Those who insist on searching for the global optimum forget the existence of the famous No-Free-Lunch (NFL) theorems, which state that, averaged over all optimization problems, no method is more efficient than simple random search. This result is, in effect, a bit embarrassing, especially in the face of those
who dedicate years of study to refining some esoteric optimization method. It all sounds a bit like attempting to square the circle in the twenty-first century.

So what lies beyond optimization? Stochastic simulation, in the first place. The credibility, and therefore the future, of CAE, and of computing in general, hinges on realistic models which include uncertainty, not on huge but simplistic and physically deficient surrogates. Once realistic models are available, engineers should use them to achieve designs with acceptable but robust performance, and not pursue delicate and expensive-to-find states of optimality. The science is there. What we need is the right philosophy underneath. There is a general need for more philosophy in science. We also need to be more aware of the consequences of our math. What is needed are system-like studies, more holism, less fragmentation, less sophisticated teraflopism and hair-splitting. The intellectual and commercial failure of CAE is due to a lack of a sense of direction. CAE does to physics what humanity does to the ecosystem. People don't understand how the components interact, yet they manipulate the whole system. There is no sense of unity, no solid roadmap, just mere fragmentation and futile refinement. No ethics. "It is proved that things cannot be other than they are, for since everything is made for a purpose, it follows that everything is made for the best purpose", as was sustained by Dr. Pangloss, Voltaire's eternal optimist. Gould and Lewontin, in a famous 1979 paper, argued that to study the natural world under the assumption that it is optimally designed is the modern equivalent of subscribing to Dr. Pangloss' ridiculous world view. The advent of uncertainty in CAE is simply inevitable. There is no way to stop it. Uncertainty will quickly erode optimization as people realize that the more realistic models become, the less optimization algorithms work.
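The claim that optimal and robust are mutually exclusive is easy to illustrate with a toy experiment. The performance function below is hypothetical (my own construction, not taken from the references): a tall, narrow peak beats a slightly lower, broad plateau nominally, but loses to it as soon as the design variable is subject to random scatter.

```python
import math
import random

def performance(x):
    """Toy performance landscape: a sharp optimum near x = 0
    and a slightly lower but much wider plateau near x = 3."""
    sharp = 1.00 * math.exp(-(x - 0.0) ** 2 / 0.01)   # tall, narrow peak
    flat  = 0.90 * math.exp(-(x - 3.0) ** 2 / 4.0)    # lower, broad region
    return sharp + flat

def mean_perf(x0, sigma=0.2, n=20_000):
    """Average performance when the nominal design x0 is perturbed
    by Gaussian scatter of standard deviation sigma (Monte Carlo)."""
    return sum(performance(random.gauss(x0, sigma)) for _ in range(n)) / n

random.seed(0)
# Nominally, the "optimal" design wins ...
assert performance(0.0) > performance(3.0)
# ... but under uncertainty the robust design wins.
assert mean_perf(3.0) > mean_perf(0.0)
```

The sharp optimum is precisely the "optimal corner" that pops out of the design space once uncertainty is switched on; the plateau is the fit-for-function compromise the paper advocates.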
The response surface method is an intellectual balloon that is going to burst due to its thin argumentation and empty moral claims, and under the crunching train of logic which is Monte Carlo simulation. Error communis facit jus.¹

References

[1] Marczyk, J., Principles of Simulation-Based Computer-Aided Engineering, FIM Publications, Madrid, 1999.
[2] Wilson, E.O., The Diversity of Life, Penguin Books, 1992.
[3] Marczyk, J. (editor), Computational Stochastic Mechanics in a Meta-Computing Perspective, International Center for Numerical Methods in Engineering (CIMNE), Barcelona, December 1997.
[4] Marczyk, J., Stochastic Design Improvement: Beyond Optimization, AIAA/NASA/USAF Conference on Multidisciplinary Optimization, Long Beach, USA, September 2000.
[5] Gell-Mann, M., The Quark and the Jaguar, W.H. Freeman and Company, New York, 1994.
[6] Marczyk, J., et al., Uncertainty Management in Automotive Crash: From Analysis to Simulation, ASME 2000 Conference, Baltimore, USA, September 2000.
[7] Marczyk, J., Beyond Optimization in Computer-Aided Engineering, International Center for Numerical Methods in Engineering (CIMNE), Barcelona, September 2002.
[8] Van Doren, C., A History of Knowledge: Past, Present and Future, Ballantine Books, New York, 1991.

¹ Common error becomes law. Digestus.