(1) The precautionary principle


There is…considerable danger in applying the method of exact science to problems…of political economy; the grace and logical accuracy of the mathematical procedure are apt to so fascinate the descriptive scientist that he seeks for…explanations which fit his mathematical reasoning and this without first ascertaining whether the basis of his hypothesis is as broad…as the theory to which the theory is to be applied.

Karl Pearson, 1889, speaking at the Men’s and Women’s Club

Financial risk management is in a state of confusion. It has become obsessively focused on measuring risk, while forgetting that managing risk is about making decisions under uncertainty. It also seems to hold on to two dangerous beliefs: first, that our risk metrics can be estimated to five decimal places; second, that once we have done so, the results will self-evidently guide our risk management choices.

They do not. And even if they did, our risk metrics cannot be anywhere near as precise as they are made out to be. This is not because we must “try harder”, say, by collecting more data or using cleverer statistical techniques. It is because, given the problem at hand, this degree of precision is intrinsically unattainable.

Given what is at stake, this state of confusion is dangerous. To get out of this impasse we must tackle the task from a radically different angle: we must revisit our ideas about probability in financial risk management, and we must put decision making back at center stage.

The sound management of financial risk affects not just bankers, traders, and market professionals but the public at large, and more directly so than is often appreciated. Unfortunately, for all its apparent quantitative sophistication, much of the current approach to the management of financial risk rests on conceptually shaky foundations. Many of the questions posed in the quest for control over financial risk are not simply difficult to answer; I believe they are close to meaningless.

In great part this is because in looking at the control and regulation of financial risk we are not even clear what type of probability is relevant, or when either type could be used more profitably. This matters because members of the species Homo sapiens can be surprisingly good at dealing with certain types of uncertain events, but spectacularly bad at dealing with others.

Unfortunately, this does not seem to be taken into account by much of current risk management, which, if anything, works “against the grain”: it pushes us toward those areas of probability where we make systematically poor decisions, and it neglects the domains where we are, after all, not so bad.

There are more fundamental problems with current financial risk management. These are to be found in its focus on measuring risk and in its scant attention to how we should reach decisions based on this information. Ultimately, managing risk is about making decisions under uncertainty. There are well-established disciplines (e.g., decision theory) devoted to this topic, but these have, by and large, been neglected.

To understand whether this neglect is justified or whether we are missing out on some useful tools we will have to look at what utility and prospect theory have to offer. My conclusion will be that a straightforward (and rather old-fashioned) application of these theoretical tools is of limited use in practical risk management.

This does not mean, however, that the decisional (as opposed to measurement) problems we face are any less important. There is a more satisfactory way to look at these matters, an approach that has been successfully employed in many of the physical and social sciences.

This approach clearly distinguishes between different types of probability and employs them appropriately to create risk management tools that are cognitively resonant. Probabilities-as-degrees-of-belief and probabilities-as-revealed-by-actions will be shown to be the keys to better decision making under financial uncertainty. If these probabilities seem less “sharp” and precise than those that current risk management appears to offer, it is because they are.

They have one great advantage, though: they keep us honest and humble and can save us from the hubris of spurious precision. Not a small achievement if “ignorance is preferable to error and he is less remote from the truth who believes nothing than he who believes what is wrong” (Thomas Jefferson, 1781).

These days, risk management appears to be pervasive in every area of human endeavor. We seem to think, speak, and breathe risk management. In short, we appear to be living in a risk management culture. Presumably, we should be better at managing risk than we have ever been.

It is true that we have made remarkable progress in recognizing and handling risk. Some current developments in risk management thinking, however, make me fear that we may have reached an inflection point, and that our attempts at managing risk may be becoming more complex and cumbersome, but less effective.

Let me give a few examples. One of the distinguishing and, from a historical point of view, unprecedented features of the current approach to risk management is its focus on low-probability but potentially catastrophic events. This novel attitude to risk makes us evaluate risky prospects in a very different way than we used to.

Take the case of polio and smallpox vaccinations. With the medical knowledge and technology available at the time, administering a vaccine that was too virulent, and that could cause a healthy subject to contract the very illness it was supposed to prevent, had a low, but certainly not negligible, probability. Being cautious is obviously commendable, but I sometimes wonder how much longer it would have taken for the polio and smallpox vaccinations to be developed under current safety and risk-avoidance standards, or whether, indeed, they would have been developed at all.

This modern attitude to risk is by no means limited to the medical domain: in many other areas, from nuclear energy to nanotechnology to the genetic modification of livestock and crops, we currently display a similar (and, historically speaking, very new) attitude to risk. Whether “new” necessarily equates to “better,” however, is debatable.

The examples mentioned above share an important common feature: in a cost–benefit analysis, we currently place much greater weight on unlikely but catastrophic events than we used to. This is neither “right” nor “wrong” per se (in the sense that a mathematical statement can be). It indicates, however, a behavioral response to rare events that is difficult to reconcile with everyday experiences of risk taking.

This attitude to risk, called the “precautionary principle,” states in one of its milder forms: [W]hen an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not established scientifically.

A stronger formulation is the following:

[T]he precautionary principle mandates that when there is a risk of significant health or environmental damage…and when there is scientific uncertainty as to the nature of that damage or the likelihood of the risk, then decisions should be made so as to prevent such activities from being conducted unless and until scientific evidence shows that damage will not occur.

This is not the place to launch into a critique of the precautionary principle. For the purpose of this discussion the most relevant aspect of the principle is its focus on events whose probability is either extremely low or so imperfectly known that the most we can say about it is that it is nonzero. Rightly or wrongly, we seem to be increasingly willing to sacrifice tangible and immediate benefits to avoid very remote but seriously negative outcomes.

This response to risk is historically new, and is spreading to more and more areas. Unsurprisingly, as we shall see, a variant of the precautionary principle has appeared in financial risk management. Why have we become so preoccupied with managing the risk of very rare but catastrophic events? I venture two explanations.

The first is that, throughout the history of the human species, we have always been subject to events of devastating power and consequences: earthquakes, floods, volcanic eruptions, epidemics, etc. In all these instances, Homo sapiens have, by and large, been at the receiving end of the slings and arrows of outrageous fortune. On an evolutionary scale, it has only been in the last five or so minutes of their existence that humans have found themselves able to create, by their own actions, catastrophes comparable in magnitude and import to the natural ones.

Nuclear weapons are the most obvious, but not the only, example: think, for instance, of global warming, environmental pollution, antibiotic-resistant bacteria, etc. Indeed, sometimes it seems as though we feel startled by the ability of our actions to have far-reaching consequences, and in some domains we probably attribute to ourselves far more destructive power than we actually have: in Victorian times the “mighty agency…capable of almost unlimited good or evil” was nothing more sinister than good old friendly steam; and Mary Shelley’s Frankenstein preyed on the then-current fears about that other terrible fiend, electricity.

It is plausible to speculate that, given this new-found consciousness of the destructive power of our own actions, we may feel that we have a greater responsibility to control and manage the risks that they have given rise to. Hence, if this view is correct, we can begin to understand our interest in the management of risk in general and of the risk associated with catastrophic events in particular. There is more, though.

There is a second, and probably linked, factor in our current attitude to risk management. As our control over our physical environment, over our biological constraints, over economic events, etc., has increased, we accept less and less that bad things may happen because of “bad luck.”

Our immediate reaction to a plane disaster, to an unexpected financial crisis, to the spilling of a noxious chemical into a river, or to a train derailment is to set up a fact-finding commission to conduct an enquiry into “who was responsible.” In this respect, our attitude to “Fate” has changed beyond recognition in the last one hundred years or so.

To convince ourselves, just consider that when life insurance was first introduced in the nineteenth century, many households were reluctant to enter into these contracts because it was felt that doing so would be tantamount to “tempting Fate.” We are separated from this attitude toward risk of the grandparents of our grandparents by a cognitive gulf that is difficult for us even to comprehend.

Fate has all but disappeared from our conceptualization of risk and, indeed, almost from everyday language. In general, we appear to be much more willing and inclined to pursue the logical chain of causal links from a bad outcome to its identifiable causes than we used to. I understand that a similar trend toward the “responsibilization” of outcomes has occurred in the legal area as well.

For instance, more cases of negligence are currently brought to court than ever before. And only twenty or thirty years ago the idea that someone could scald herself with a hot drink purchased at a fast-food outlet and sue the company would have struck one as ludicrous. In the legal area, this attitude may be the consequence of a much wider change: the growth of the tort of negligence that brought together previously disjointed categories (“pockets”) of liability related to negligent conduct inherited from the nineteenth-century body of English law.

The judges active in the 1930s began to organize all these different pockets of liability as instances of one overarching idea, i.e., that we owe a duty of care to our “neighbor.” Much of the development of the concept of the tort of negligence has then been the refinement and, by and large, the extension of the concept of what constitutes our neighbor.

But as we begin to think in terms of “neighbors” to whom a duty of care may be owed, the idea of impersonal “victims of Fate” recedes into the background: we, not the wanton gods, become responsible. In sum: there has been a general shift toward ascribing the responsibility for, or the cause of, negative outcomes to ourselves rather than to Fate. We have also come to recognize that, for the first time in our evolutionary history, we can create our own “man-made catastrophes.”

Taken together, these two factors go a long way toward explaining why we are more concerned than we have ever been before about risk management in general, and about the management of the risk connected with remote but catastrophic events in particular.

I intend to look mainly at a narrower and more specific aspect of risk management, namely, at the management of financial risk, as practiced by financial institutions and as suggested (or, more often, imposed) by regulators.

This admittedly narrower topic is still very wide-ranging and affects us in ways more direct than we imagine: if we are too lax or ineffective in mandating the minimum standards of financial risk management, the whole globalized economy may be at risk; if these standards are too strict, they may end up stifling innovation and development in one of the most dynamic sectors of the world economy.

The financial regulators obviously have a great role to play in this, but, as I will argue, the financial industry and the academic risk-management community share the burden at the very least to a comparable extent. It is important to understand that regulating and reducing financial risk (as with most instances of regulation and risk reduction) has obvious, transparent benefits but also more opaque, yet potentially very great, costs.

Excessive regulation or voluntary risk avoidance will, in fact, not only reduce the profitability of financial institutions (the public at large is more likely to display Schadenfreude about this than to shed overabundant tears), it will also curb financial innovation. This matters deeply, because the benefits of financial innovation are more widespread and more important for the general public than is generally appreciated.

To give one example among many, Alan Greenspan marveled in 2004 at the resilience of the banking sector during the economic downturn that followed the events of September 11, 2001. This resilience, many have argued, was made possible by the ability afforded to banks by new financial instruments (credit derivatives) to distribute credit risk to a wider section of the economy, thereby reducing the concentration of risk that had proved so damaging during previous recessions.

As a result, banks were able to minimize the restrictions of credit to the economy that typically occur during economic downturns. This has been quoted as one of the reasons why the 2001 recession was so mild. And indeed, despite the many high-profile bankruptcies of those years (Enron, WorldCom, Parmalat, etc.), “not one bank got into trouble,” as Greenspan famously said. Interestingly enough, therefore, financial innovation (in this particular case in the form of credit derivatives) has indeed created new types of risk, but has also contributed to the reduction, or at least the diffusion, of other types.

Clearly, it is not just complex derivatives that have brought benefits to the financial sector and to its ultimate users, i.e., the general public: as the International Monetary Fund (IMF) and the Bank for International Settlements recently pointed out, the efficiencies brought about by “deregulation, new financial products and structural changes to the sector” have far outweighed the dangers posed by these developments. What are these broadly felt benefits then?

First of all, financial innovation has been useful to redistribute risk, and hence to diffuse its impact. This is, of course, extremely important, but still somewhat “intangible.” Financial innovation, however, has also brought about benefits directly felt by the man in the street, as new liquid financial instruments have offered new opportunities to the public at large. New types of mortgages in the United States, for instance, have allowed homeowners to access finance more efficiently and with smaller frictions and transaction costs than ever before in human history.

More generally, “Innovation and deregulation have vastly expanded credit availability to virtually all income classes” (Greenspan, April 2005). This has clearly been advantageous for borrowers and, probably, for the economy as a whole. It also carries risks, however, to the extent that such easy finance may cause, say, a real-estate bubble or, more generally, an overheating of the economy.

The changes run deeper. At a very fundamental level, banks had traditionally always been the main “accumulators” of credit risk. If one had to describe their role on the back of a stamp, one would probably write: “Lend the depositors’ money to people who may not pay it back.” The newfound ability afforded by financial innovation to relocate an important part of this credit risk outside the banking sector has fundamentally changed the nature of risk transfer in a modern economy.

JRS. 03.01.2014
