# Theory of Risk

#### By Bertrand Munier

In 1947 the second edition of the Theory of Games and Economic Behavior was published by Princeton University Press and won the book much greater fame than its first (1944) edition had obtained. Von Neumann and Morgenstern had exhumed from oblivion Daniel Bernoulli’s paper, published (in the Latin of the day) by the Saint-Petersburg Academy of Sciences in 1738 and known as the Saint-Petersburg Paradox, and had given it a first axiomatic foundation, thus initiating the modern treatment of rational choice as the maximization of expected utility (“EU” in what follows). Maurice Allais was almost alone at the time in opposing this vision of rationality. He refused to let rationality be reduced to compliance with a system of axioms, whatever the system, and defined it in a much more general way, by reference to the use of suitable means to attain the goals one has chosen. Furthermore, he wished to show that the EU system of axioms is not a faithful description of observable behaviour.

Already in 1948 Allais was thinking of experimentation to reveal the error to which EU gave rise. In 1953 he published, in the Journal de la Société de Statistique de Paris, a paper entitled (in translation) “The psychology of the rational man in the face of risk – theory and experience”, containing the set of questionnaires which underpinned his experiments (carried out on more than 250 people). Then, after over a year of unrelenting struggle [1], he published in Econometrica an article destined to become famous. It was entitled “The behaviour of the rational man in the face of risk – critique of the postulates and axioms of the American school”, and its content is often encapsulated in the term “the Allais Paradox”. The fuller publication of the findings and interpretations was to take place only after a quarter of a century, in the volume co-edited by Ole Hagen and Maurice Allais entitled “Expected Utility Hypotheses and the Allais Paradox” (1979), which gave rise to a strong school of thought tending to agree with Maurice Allais’s vision. This movement developed particularly through the series of FUR conferences, organized since 1982, which have subsequently acquired a considerable audience. Maurice Allais entrusted their coordination to me in 1986; I took charge of it until 2005, learning much and contributing much in the process.

What is known as the “Allais Paradox” is in fact a counterexample [2] to EU theory. It involves a sequence of two questions extracted from the questionnaires used in 1948-1953. The first of them asks the participant to choose between a 100% certain windfall of value 100 (choice A1) and a gamble offering an 89% chance of winning 100, a 10% chance of winning 500 and a 1% chance of winning nothing (choice A2). The second question invites the subject to choose between two gambles B1 and B2, such that B1 offers an 89% chance of zero winnings and an 11% chance of winnings of value 100, while B2 offers a 90% chance of zero winnings but a 10% chance of winnings of value 500. These trials have been repeated thousands of times, either under laboratory conditions by researchers from very varied disciplines, or in applied project-engineering settings in the fields of nuclear and general energy, transport, and so on. The results are extremely robust: between ⅔ and ¾ of the subjects questioned choose A1 for the first choice and B2 for the second. But the bare fact that these two choices are both made by the same individual at the same time suffices to invalidate the rule of expected utility.

For it is evidently impossible for us to have at the same time:

$U(100) > 0.01\,U(0) + 0.89\,U(100) + 0.10\,U(500)$

and:

$0.89\,U(0) + 0.11\,U(100) < 0.90\,U(0) + 0.10\,U(500)$

(subtracting $0.89\,U(100)$ from both sides of the first inequality, and $0.89\,U(0)$ from both sides of the second, reduces both to the same comparison of $0.11\,U(100)$ with $0.01\,U(0) + 0.10\,U(500)$, but with opposite strict signs).
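The incompatibility can also be checked mechanically. The sketch below is mine, not the article’s (the lottery encodings and the helper `eu` are illustrative): it samples arbitrary increasing utility functions and verifies that, under EU, preferring A1 to A2 always forces preferring B1 to B2, so the majority pattern A1-with-B2 is unattainable for any utility function.

```python
import random

def eu(lottery, U):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * U[x] for x, p in lottery.items())

# The four options of the two Allais questions (values in the article's units).
A1 = {100: 1.0}
A2 = {0: 0.01, 100: 0.89, 500: 0.10}
B1 = {0: 0.89, 100: 0.11}
B2 = {0: 0.90, 500: 0.10}

# Sample arbitrary increasing utilities U(0) < U(100) < U(500).
for _ in range(10_000):
    u0, u100, u500 = sorted(random.random() for _ in range(3))
    U = {0: u0, 100: u100, 500: u500}
    # Under EU, choosing A1 over A2 entails choosing B1 over B2:
    if eu(A1, U) > eu(A2, U):
        assert eu(B1, U) > eu(B2, U)
```

The assertion never fails, because both preference gaps reduce algebraically to the very same quantity, $0.11\,U(100) - 0.01\,U(0) - 0.10\,U(500)$.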

It follows from this counterexample that there are at least some cases in which the mathematical expression of the preference score of a gamble cannot be “linear in probabilities”, contrary to what is implied by the axioms of von Neumann and Morgenstern. It should be noted that this conclusion is entirely independent of the notion of utility used; neither does it depend to any extent on the fact that the numerical values involved are very high. Discussions of this “Paradox” in the 1950s, however, were so confused that no clear lesson could be drawn, no doubt owing to insufficient attention to the mathematical expression (though it is quite simple!), and the debate finally became bogged down without reaching any clear result. The field was abandoned, so to speak, owing to the exhaustion of the protagonists… The impression lingered here and there that Allais had been wrong… until the experiments of psychologists and engineers on a world-wide scale revived the question at the end of the 70s. The 1979 work mentioned above did the rest, by opening the field to a great many other experimental research projects, and then to the quest for an alternative model to expected utility.

In due course what is called the “independence axiom” of expected utility was clearly challenged. The scores explicitly or implicitly attributed to an asset or to a risky project are not “separable” according to the events liable to influence the results of that asset or project, and are therefore not polynomials in the probabilities. People do not judge a distribution event by event, in isolation from what happens in the neighbouring events or indeed in all the complementary events. In other words, to evaluate risks and award them a score in consequence, the whole of the relevant distribution of these risks must be considered: it cannot be sufficient to tot up the products of the probabilities by the psychological values or utilities. This is the fundamental innovation in risk theory for which Allais must be given credit, an innovation today taken up by Kahneman and Tversky, Machina, Wakker and too many others for me to mention them all here.
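One way to make the score depend on the whole distribution, the route later taken by rank-dependent models (Quiggin, and the cumulative form of Kahneman and Tversky), is to weight cumulative rather than individual probabilities. The sketch below is an illustration of that family, not Allais’s own model; the function names and the weighting function `w` are my assumptions.

```python
def rdu(lottery, u, w):
    """Rank-dependent score of a lottery {outcome: probability}:
    decision weights are increments of w over cumulative probabilities,
    so each outcome's weight depends on the whole distribution, not on
    its own probability alone."""
    value, cum = 0.0, 0.0
    for x in sorted(lottery, reverse=True):  # best outcome first
        prev, cum = cum, cum + lottery[x]
        value += (w(cum) - w(prev)) * u(x)
    return value

# With the identity weighting, the rule collapses back to expected utility;
# any non-linear w breaks the "linearity in probabilities" of EU.
ev = rdu({0: 0.5, 100: 0.5}, u=lambda x: x, w=lambda p: p)  # → 50.0
```

With the identity weighting `w(p) = p` the increments are just the individual probabilities and EU is recovered; a non-linear `w` makes the score depend on the rank of each outcome within the whole distribution, which is exactly the non-separability described above.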

To this overall lesson must be added three considerations set out by Maurice Allais in the 1953 article cited above and reprised in several of his writings from the years 1984-1988 [3] :

• The utility function should be adjusted by positing that it is zero for the value of the individual’s “psychological capital”. It may be noted that this corresponds to the idea of the “anchor point” so highly acclaimed – forty years later – by Kahneman and Tversky in proposing the “Cumulative Prospect Theory”.
• Account should be taken of a discontinuity of the utility function at that reference point, its concavity being more pronounced for gains than for losses. This may be said to correspond to the idea that the utility function is not the same for losses as for gains – an idea taken up forty years later by Kahneman and Tversky in the Cumulative Prospect Theory (1992), although Allais’s is a more general version – less restrictive and doubtless better corroborated by experimentation – than what Kahneman and Tversky were to call, in the 80s, the “reflection effect”.
• The gradient of the utility function is steeper for losses than for gains. It will be noted that this corresponds to the idea later corroborated by the “loss aversion” experiments carried out by Thaler.
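For illustration, all three features can be read off the value function that Kahneman and Tversky later estimated for Cumulative Prospect Theory. The parameter values below are theirs (1992); the sketch itself is mine, not a formula taken from Allais.

```python
# Kahneman-Tversky (1992) value function, with their estimated parameters.
ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    """Psychological value of a gain or loss x measured from the
    reference point ("psychological capital"): zero at that point,
    curved differently for gains and losses, and steeper for losses."""
    if x >= 0:
        return x ** ALPHA              # concave over gains
    return -LAMBDA * (-x) ** ALPHA    # steeper over losses: loss aversion
```

`value(0)` is zero at the anchor point, and a loss of 10 weighs 2.25 times as much as a gain of 10, which is the loss aversion later documented experimentally by Thaler.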

As has been shown in the foregoing, the most up-to-date theory of risk, for which reference is widely made to Kahneman and Tversky’s 1992 article, was almost entirely formulated by Maurice Allais as long ago as 1953, less clearly and explicitly than in his 1984-88 writings, it is true, but in any event prior to 1992. To this extent Maurice Allais not only paved the way for all the other authors in contemporary risk theory, but actually anticipated them. It goes without saying that many other developments are involved in our day which do not owe their inspiration to the reflections of Maurice Allais. But for how many others, past, present and future, has he been or will he be the source? This is why it is advisable to re-read his writings and locate in their pages the nuggets of rich innovations yet to come.

[1] Eventually the article was only published – in French – “on the author’s exclusive responsibility”, as the editor of the journal insisted on explicitly disclaiming any responsibility on his own part – a decision he must have regretted later on!

[2] This is not an isolated case in decision-making theory. In this field the habit has grown up, since the early 18th century, of using the term “paradox” to denote any clear experimental finding which spells difficulties for an established theory – a counterexample. Hence the “Paradox of Saint-Petersburg”, etc.

[3] Worthy of special mention are:

1983: “Fréquence, probabilité et hasard” [Frequency, Probability and Chance], Journal de la Société de Statistique de Paris, vol. 124, no. 2, pp. 70-102, and no. 3, pp. 144-221.

1984: “The Foundations of the Theory of Utility and Risk”, in: O. Hagen and F. Wenstøp, eds., Progress in Utility and Risk Theory, Reidel, Dordrecht, pp. 3-131.

1988: “The General Theory of Random Choices in Relation to the Invariant Cardinal Utility Function and the Specific Probability Function, the  Model: A General Overview”, in: B. Munier, ed., Risk, Decision and Rationality, Reidel, Dordrecht, pp. 231-289.