NOU 2012: 16

Cost-Benefit Analysis

8 Disasters and irreversible effects

8.1 Introduction

From the terms of reference of the Committee:

The Expert Committee shall consider how cost-benefit analysis is to deal with catastrophic effects with a small, but not negligible, probability, as well as the matter of irreversible effects.

Disasters are events with a low probability of occurrence, but with very serious consequences. These may be natural disasters or man-made disasters, but also a combination thereof. Earthquakes, hurricanes, tsunamis and major floods are examples of natural disasters. Terrorist acts, major financial crises and industrial accidents are examples of man-made disasters. The degree of destruction will often depend on society’s preparedness. In the long run, climate change caused by anthropogenic carbon emissions may, if sufficiently large, result in global disaster.

By irreversible effects we mean effects the reversal of which is either associated with high costs or, in the worst-case scenario, impossible. These may vary in nature. One type of irreversible effect is caused by exceeding threshold values in nature. In an economic context, it is often difficult to reverse investment decisions, thus implying that the investment costs are not recoverable. An example of this may be infrastructure investments in the transport sector.

Although irreversibility and disaster may occur in many areas of society, we will in the present Chapter place a special focus on environmentally-related issues. These represent a challenge for traditional cost-benefit analysis for a number of reasons. Relationships in nature are often non-linear, complex and characterised by uncertainty. The assessments and conclusions of the Committee will nevertheless apply more generally.

Catastrophic effects will typically be irreversible. However, irreversible effects need not be catastrophic. The two issues can in many respects be discussed separately. In Chapters 8.2 and 8.3, we examine the concepts of irreversibility and disaster, with an emphasis on their relevance to cost-benefit analysis. In Chapter 8.4, we discuss the precautionary principle and “safe minimum standards”, which are key policy rule proposals in contexts of potential irreversibility and/or disaster. In Chapter 8.5, we outline the main features of the debate on potentially catastrophic climate change, which is presumed to have motivated this part of the terms of reference of the Committee. Chapter 8.6 sets out the assessment of the Committee, and Chapter 8.7 presents its recommendations.

8.2 Irreversible effects

Although the reversible/irreversible distinction is useful, it is not a straightforward matter to divide effects into two separate classes. We are faced with a continuum of potential situations, from completely or almost completely reversible effects, via increasing rigidity or inertia, to effects that it would be physically impossible or economically unviable to reverse. It is therefore important to be mindful of the many potential degrees of irreversibility in an analysis situation.

Global examples of irreversible environmental effects may be melting of the Greenland ice sheet due to global warming, or the large-scale release of methane locked into the Siberian tundra. A domestic example may be the damming of a watercourse. Investments in housing, transportation links and other infrastructure will have a lifespan that makes them irreversible in practice, with environmental effects not necessarily being the predominant causes of irreversibility.

It is a characteristic of irreversible decisions that they curtail the future freedom of action, and thereby also decision-making flexibility. This may be of importance if inadequate information about the future consequences of a decision is available at the time of making such decision. In principle, this means that there is an economic value associated with postponing the decision, or delaying its implementation. This value is called an option or quasi-option value, or alternatively a real option value, cf. Box 8.1. This is equal to the expected value of the possibility of reversing the decision if thus merited by updated information. In order for it to be profitable not to wait, the net present value of the net benefits from immediate implementation must exceed the option value of waiting.
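This trade-off can be made concrete with a stylised two-period calculation. In the sketch below, all figures (cost, benefits, probability and discount rate) are hypothetical; the point is only to show a project with a positive expected net present value where waiting is nevertheless more profitable, the difference being the (quasi-)option value.

```python
# A minimal two-period sketch of the (quasi-)option value. All figures are
# hypothetical: an irreversible project costs 100 now; the present value of
# its benefits turns out to be either 200 or 60, each with probability 0.5,
# and waiting one period reveals which state occurs before deciding.

cost = 100.0
benefit_high, benefit_low = 200.0, 60.0
p_high = 0.5
r = 0.04  # assumed annual discount rate

# Invest now: commit before the uncertainty is resolved.
npv_now = p_high * benefit_high + (1 - p_high) * benefit_low - cost  # = 30.0

# Wait one period: invest only if the favourable state materialises.
npv_wait = p_high * (benefit_high - cost) / (1 + r)  # ~ 48.1

option_value = npv_wait - npv_now
print(f"NPV now: {npv_now:.1f}, NPV of waiting: {npv_wait:.1f}, "
      f"(quasi-)option value of waiting: {option_value:.1f}")
```

Although immediate implementation has a positive expected net present value (30), it falls short of the value of waiting (about 48), so under these assumptions the project should be postponed.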

Since it will often be difficult to calculate the (quasi-)option value, the NOU 1997: 27 Green Paper; Cost-Benefit Analysis, concludes as follows:

“It must in each relevant case be evaluated how much can be gained by performing calculations of the expected benefit from postponement. It is under any circumstance important to be aware that a positive risk-adjusted net present value does not necessarily suggest that projects should be implemented immediately if these are irreversible and can be postponed.” (page 81).

Another aspect of irreversible decisions is that their profitability depends on whether it is evaluated prior to, or subsequent to, the implementation of such decisions. This is because costs that cannot be recouped after the decision (“sunk costs”) are of relevance to decision making prior, but not subsequent, to the implementation of such decision.

There are, as noted above, many types of irreversible effects. The guide “Dealing with Uncertainty in Cost-Benefit Analysis” (Norwegian Government Agency for Financial Management (“DFØ”), 2006) emphasises the importance of being conscious of the scope for real options, and of looking for ways of preserving freedom of choice. Real options are held to be especially profitable in relation to effects that are irreversible in the strict sense of the word, when there is very considerable uncertainty about future developments in critical factors – which may be the case with long-term effects. Moreover, there should be a likelihood that the uncertainty will be reduced later on, and that the freedom of choice will actually be utilised. The calculation of option values is often resource intensive and complex. Consequently, the guide emphasises that the mode of thought underlying the concept of real options, and the fact that decision makers take the option aspect into consideration, will often be more important than the precise valuation of such option.

Textbox 8.1 Option and quasi-option value

In an environmental economics context, the value of “wait and see” prior to making a decision is termed quasi-option value. The background is an article by Arrow and Fisher (1974), in which they discuss the industrial development of a nature area as an example of an irreversible decision. If the development takes place, the recreational value of the area will be irretrievably lost, and the decision makers have inadequate information about how such loss will be valued in future. This gives rise to a quasi-option value associated with postponing the intervention, and such value must be included in an adjusted net present value criterion. Other examples of irreversibility in an environmental context may be pollution by heavy metals or other substances that take a very long time to decompose in the environment, such as CO2 in the atmosphere.

This economic decision-making problem has the same structure as financial contracts termed call options. Purchasing a call option means obtaining a right, but not an obligation, to purchase the underlying object, which may be a physical good or a financial instrument. This decision-making flexibility has a value, an option value, and the holder pays a premium for such value. If development in the price of the underlying object induces the holder to subsequently exercise the option, such exercise is an irreversible decision, and the option is said to be closed out. Options of this or a similar type applied to real investments are often also termed real options.

Pearce, Atkinson and Mourato (2006) note that it is unfortunate that environmental economics and financial theory use different terms (quasi-option value and option value, respectively) for what is in actual fact the same phenomenon. (Further confusion may arise because environmental economics also uses the term “option value”, but in referring to the value of preserving an environmental good for later use, irrespective of any uncertainty and irreversibility.) Moreover, they emphasise that the quasi-option value is not a separate component of the “total economic value” of environmental goods (see Chapter 4.5.2), but “rather ... a reminder that irreversible decisions under incomplete information should be made rationally.” Which decision is made will determine the degree to which the total economic value of an environmental good is taken into account.

8.3 Disasters

Bergstrom and Randall write, in their environmental and resource economics textbook, that economics is “all but silent when confronted with the need to analyze a decision involving, say, a truly catastrophic outcome with a very low probability at some future time” (Bergstrom and Randall, 2010). However, economists have devoted more attention to this issue in recent years, especially in the context of the risk of extreme effects from global warming. We will here present a brief and general discussion of disasters, before outlining the main perspectives in the climate debate in Chapter 8.5.

A number of the fundamental issues in welfare economics are of relevance to the matter of potentially catastrophic outcomes. One such issue is discounting, which is discussed in Chapter 5. Another is the distinction between risk and uncertainty. If we are able to attribute a probability to a given outcome, we are dealing with a risk. Historical empirical data and/or theoretical knowledge mean that one will in many contexts be able to quantify probabilities, such as the probability of a river breaking its banks, or of a person suffering a car accident. In other cases we have no empirical basis for estimating such probabilities. It is, for example, difficult to estimate the probability of exceeding a threshold value in nature as long as we do not know what that threshold value actually is. This is referred to as uncertainty.

The fact that disasters are, by definition, very rare does in itself suggest that it will be difficult to estimate probabilities. Some disasters are also of such a nature that no one has been able to envisage their realisation prior to the event, cf. Box 8.2. In other words, the uncertainty does not only pertain to the absence of a probability distribution, but also to incomplete knowledge about the sample space.

Textbox 8.2 Taleb’s “The Black Swan”

According to Nassim Taleb’s book “The Black Swan” (2010), it is characteristic of the major disasters that their occurrence is a matter of complete surprise, like the terrorist attack in New York on 11 September 2001 or Hurricane Katrina, which hit New Orleans especially hard in 2005. Professor Taleb refers to this type of disaster as “Black Swans” or “unknown unknowns” because no one (or a very small number of people) had in advance even considered the possibility that such events could occur.

However, one problem with “Black Swans” from the perspective of prevention is that history rarely repeats itself, thus implying that it is difficult to draw direct lessons from such unique historical experiences. Professor Taleb writes that there is a tendency to underestimate the probability of unknown disasters ahead of the event (ex ante), and that one overestimates the probability of the recurrence of a disaster that has already materialised (ex post). Spectacular accidents, in particular, attract the attention of the general public, with the consequence that disproportionate resources are devoted to preventing their future recurrence – to the detriment of risk reduction in other contexts.

Obviously, the definition of disaster depends on the level of analysis. A fatal traffic accident is a disaster for those affected. The probability of such an accident happening to a specific person is also low (although obviously not negligible). However, no single accident can be considered catastrophic for analysis purposes from the perspective of society. Traffic fatalities are a fairly frequent occurrence, and the term disaster would, if at all applicable, have to be reserved for a dramatic growth in the number of traffic fatalities.

A war or an occupation is obviously a disaster at the national level, and the same applies to a terrorist act like the one Norway experienced on 22 July 2011, or a pandemic with a large number of deaths. At the global level, even a terrorist act on Norwegian territory will fall outside the scope of the term disaster. Major wars, like the World Wars, are global disasters. Large-scale famines, or worldwide pandemics with a high mortality, will be characterised as disasters. The immediate background to this issue being mentioned in the terms of reference must be assumed to be the climate problem, which is characterised by effects that are uncertain, but potentially enormous.

The definition of the term disaster will have social and political dimensions – not only, or primarily, economic ones. Delimitation is difficult, but would have to focus on events that are rare and unique. In principle, the discussion invited by the terms of reference may be pursued without setting out a clear definition of what constitutes a catastrophic effect. In practice, one will, as with irreversibility, be faced with a continuum of effects characterised by increasing seriousness, with the most extreme ones being defined as catastrophic.

At the level of the individual, financial safeguards are available. Enterprises and households may take out insurance against events that are catastrophic or serious, in order to reduce the economic consequences (even if the human costs cannot be averted). The authorities may also choose to “socialise” the consequences. The National Fund for Natural Disaster Assistance is an example of society absorbing the natural peril costs of individuals above a defined, normal level.

As far as disasters at the national level are concerned, the relevant issue is to determine the scale and scope of measures to reduce the real risk of such disasters, and/or to bolster society’s ability to withstand them should they nevertheless occur.

Cost-benefit analysis may contribute to uncovering whether the risk of death, disease, injury or material loss is implicitly valued on a par across various sectors. Using cost-benefit analysis for regulatory purposes may, generally speaking, contribute to making the decision maker aware of the extent to which risk reduction resources are allocated efficiently across various sectors, as measured on the basis of the priced effects (Hagen and Godal, 2011).

Regulations proposed in order to enhance safety may, in the same manner as investments, be subjected to a cost-benefit analysis, as a basis for decision making. Any effects with a known probability distribution and consequences may be included in the analysis in the usual manner when calculating the expected value, thus enabling an ordinary cost-benefit analysis of the risk-reducing measure to be carried out.

Farrow and Shapiro (2009) note that the number of safety-motivated investments and regulations in the United States has increased considerably in the wake of the terrorist attack on 11 September 2001. Safety regulations have, by and large, escaped serious economic analysis. Admittedly, cost-benefit analysis of such decisions is difficult, especially because it is difficult to estimate the benefit side of such measures. The benefits will take the form of costs that were expected, but are avoided as the result of the decision. This involves both probabilities and consequences of the events focused on by such regulations.

However, the literature within the area is evolving. The authors emphasise “reverse cost-benefit analysis”, also termed “break-even analysis”, where the idea is to identify critical values for certain unknown probabilities. For example, an analysis of minimum requirements with regard to identity documents may indicate by how much the new requirement would have to reduce the probability of a terrorist act in order for the net benefit from the measure to be zero or positive. Farrow and Shapiro (2009) note that analyses based on available data cannot be expected to deliver clear policy rules, and propose improvements in the form of replacing hidden assumptions by explicit modelling and knowledge. Such models will under any circumstance involve an element of subjective or assumed probabilities, because some potential events will be very rare.
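A break-even calculation of this kind is simple to set up. The sketch below uses purely hypothetical figures for the annual cost of a security measure and the loss from an attack; it solves for the minimum reduction in the annual attack probability at which the measure breaks even.

```python
# Break-even ("reverse cost-benefit") sketch. Both figures are assumed
# purely for illustration.
annual_cost = 50e6       # annual cost of the security measure
loss_if_attack = 10e9    # economic loss if an attack occurs

# The expected annual benefit of the measure is delta_p * loss_if_attack,
# so at break-even: delta_p * loss_if_attack = annual_cost.
break_even_delta_p = annual_cost / loss_if_attack
print(f"Required reduction in annual attack probability: {break_even_delta_p:.4f}")
# -> 0.0050: the measure must reduce the probability by at least half a
#    percentage point per year to have a non-negative net benefit.
```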

8.4 The precautionary principle and safe minimum standards

A traditional cost-benefit analysis may, in its calculation of the expected net present value, attribute relatively minor importance to a possible future disaster with major economic implications. This is because the product of even a very high cost and a low attendant probability could be a small number, which number will also have to be discounted. This is the basis for scepticism about the treatment of such events in economics, and the development of policy rules that focus more explicitly on uncertainty, irreversibility and potential disasters. The two best known of these are the precautionary principle and the principle of safe minimum standards.
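The arithmetic behind this scepticism can be made explicit with an entirely hypothetical example: even a very large disaster cost shrinks to a modest figure once it is multiplied by a small probability and discounted over a long horizon.

```python
# Hypothetical figures chosen only to illustrate the mechanism.
disaster_cost = 1e12   # cost if the disaster occurs
probability = 1e-5     # assumed (known) annual probability
r = 0.04               # assumed discount rate
years = 100            # the risk lies a century ahead

expected_cost = probability * disaster_cost        # 1e7, i.e. 10 million
present_value = expected_cost / (1 + r) ** years   # ~ 0.2 million
print(f"Expected cost: {expected_cost:.3g}, present value: {present_value:.3g}")
```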

The precautionary approach is a key environmental policy principle, often cited both internationally and in Norway. The most commonly used definition is set out in the Rio Declaration on Environment and Development from 1992: “In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” (Report No. 13 (1992-1993) to the Storting). It is often said that this approach “shifts the burden of proof” in cases where irreversible or serious damage may occur. The environment shall be given the benefit of the doubt in case of uncertainty.

This principle focuses explicitly on potentially irreversible effects. One will note that it encompasses the risk of disaster, but also the risk of “serious damage” that will not necessarily be defined as catastrophic. The cost-effectiveness requirement may be interpreted as a reasonable reservation, to the effect that the principle does not justify preventive measures at any cost.

The term Safe Minimum Standard (“SMS”) was introduced by Ciriacy-Wantrup (1952). The term is based on the idea of minimising the maximum loss in connection with a project. A simple interpretation is that one should reject changes that entail a risk of irreversible loss of natural resources. A weakness of the said interpretation is that no form of economic assessment is involved.

Bishop (1978) remedies this by proposing that safe minimum standards may be introduced unless the cost to society is unacceptably high. He introduces a measurable benefit component associated with preservation (B), to be deducted from the net economic benefits from a measure exclusive of environmental effects (A). Hence, Bishop’s modified SMS criterion implies that the investment is economically profitable when A - B > 0, based on the valued effects. However, Bishop notes that the decision-making criterion should also include potential irreversible loss of environmental capital that is not yet considered valuable resources (C). The probability y of losing C as a result of the measure is unknown. Since we are unable to calculate the expected value yC, a cost-benefit analysis of the valued effects associated with a measure will have to be balanced against the non-valued (and unknown) component yC.

Since we do not know the value of yC, Bishop (1978) notes that the authorities may introduce a threshold value X representing what society considers an acceptable cost of preserving those environmental goods that are irreversibly threatened by a project. Consequently, the decision-making rule is to implement the project in question if A - B > X.
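Bishop’s modified rule lends itself to a compact illustration. In the sketch below, all values for A, B and X are hypothetical; the point is only that the valued net benefit A - B is weighed against the politically determined threshold X.

```python
def bishop_sms_decision(A: float, B: float, X: float) -> str:
    """Bishop's modified SMS rule: implement the project only if A - B > X.

    A: net economic benefits of the measure, exclusive of environmental effects
    B: valued benefits of preservation that the measure would forgo
    X: society's acceptable cost of preserving the threatened environmental goods
    """
    return "implement" if A - B > X else "reject (preserve)"

# Hypothetical numbers: the valued net benefit is A - B = 80, and the
# conclusion turns entirely on the threshold X set by the authorities.
print(bishop_sms_decision(A=120.0, B=40.0, X=50.0))   # -> implement
print(bishop_sms_decision(A=120.0, B=40.0, X=100.0))  # -> reject (preserve)
```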

Neither the precautionary principle, nor safe minimum standards, will in itself tell us what is an optimal or appropriate level of ambition, and these approaches therefore necessitate specific trade-offs in each individual case.

8.5 The climate challenge

Global warming is a man-made process with irreversible effects, which may turn out to be catastrophic. The Committee will here outline the current debate amongst economists relating to economic analysis of the climate problem, and particularly the special challenges posed by the inherent uncertainty. The presentation may be considered an example of the economic analysis of uncertain, irreversible and potentially catastrophic effects. It is, at the same time, the climate challenge that has stimulated a vigorous debate on these issues in economic literature. This is the background against which the Committee examines the issue here.

The climate challenge is described in Chapter 9. There are two basic types of uncertainty associated with greenhouse gas emissions. One of these concerns the relationship between the concentration of greenhouse gases in the atmosphere and the corresponding temperature increases. We may term this climate sensitivity. The UN climate panel, the IPCC, has defined climate sensitivity as the average global warming resulting from a doubling of the CO2 concentration in the atmosphere when compared to preindustrial times. The probable climate sensitivity range is 2–4.5 °C, with a best estimate of 3 °C. It is very unlikely that global warming will be less than 1.5 °C, and it cannot be excluded that it will exceed 4.5 °C (IPCC, 2007).

The other uncertainty concerns the impact of such temperature increase on the natural environment, production, consumption and welfare. The uncertain relationship between the CO2 concentration in the atmosphere and the climate response, and between the climate response and material welfare, means that it is very difficult to estimate specific damage functions for the greenhouse gas emissions in economic models. In addition, there is considerable uncertainty associated with, inter alia, future technological developments and costs associated with reductions in greenhouse gas emissions.

8.5.1 Integrated Assessment Models

So-called Integrated Assessment Models (“IAM”) are often used to analyse the climate challenge and climate policy at the global level. Such models combine climatology and economics, and can estimate both the costs associated with global emission-reduction measures and the global welfare effects of the attendant greenhouse gas emission levels. In principle, IAM models may be used to analyse what is an optimal global level of ambition for climate policy, or to analyse climate policy requirements for reaching a specific target (for example a two-degree path, cf. Chapter 9).

Many of the IAM models are based on standard economic growth theory, where society invests in capital, education and technology, thus refraining from consumption at present for the purpose of expanding future consumption opportunities. Society will seek to optimise its consumption over time, based on its current and future consumption needs. The level of the discount rate is of major importance to these trade-offs. Professor William Nordhaus (Yale University) has developed the best known IAM model, the so-called DICE model (Nordhaus, 2007). The model expands the traditional growth theory approach by also including investments in “natural capital”. By investing in natural capital today, future reductions in consumption opportunities as the result of climate-induced deterioration are curtailed.

In the DICE model, Nordhaus (2007) studies the trade-off between traditional production investments and climate investments. His findings show that it is profitable to make relatively large investments in traditional activities in an early phase, which incidentally also include improved technologies and additional intellectual capital, and some investments in designated climate measures. This can be achieved by introducing general, harmonised CO2 taxes at relatively low rates in this early phase. In the longer run, the environmental costs are expected to increase as greenhouse gases accumulate and global warming commences. It will then become profitable to shift investments to much more aggressive, emission-reduction measures. This can be achieved through a steep increase in the tax rates in the intermediate and long run. This gradual approach to the climate challenge has been termed the “climate policy ramp” (see, inter alia, Dietz, 2009).

Many IAM-based calculations lead to such a “go slow” conclusion, with a moderate emission reduction effort in the short run (Ingham and Ulph, 2003, and Ackerman et al., 2009). According to Ackerman et al. (2009), this conclusion depends on debatable choices of assumptions to underpin the analyses, including the choice of discount rate. Since the climate-induced damage materialises in the distant future, and the benefits of introducing clean-up measures at present therefore also materialise in the distant future, a high discount rate will attribute a lower value to climate-induced damage in the net present value calculations than would a low discount rate. This contributes to the conclusion that emission reductions should be modest to begin with. Whilst Nordhaus (2007) applies a market-based discount rate in his calculations, Stern (2007) uses a much lower rate (1.4 percent). This contributes to Stern (2007) arguing in favour of starting early with relatively large, global emission cuts, in sharp contrast to Nordhaus (2007) and many other IAM-based calculations. Reference is also made to Box 5.4 of the present Report for a more detailed outline of the discussion between Nordhaus and Stern concerning the discount rate applied in climate-related calculations.
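The sensitivity to the discount rate is easy to demonstrate. The sketch below compares Stern’s rate of 1.4 percent with an assumed market-based rate of 5 percent (the latter is chosen purely for illustration; the text does not state the rate used by Nordhaus) for a given amount of damage occurring 100 years from now.

```python
# Present value of 100 units of climate damage occurring in 100 years,
# at Stern's 1.4 percent and at an assumed market-based 5 percent.
damage, years = 100.0, 100

for rate in (0.014, 0.05):
    pv = damage / (1 + rate) ** years
    print(f"discount rate {rate:.1%}: present value {pv:.2f}")
# -> 1.4%: ~24.90, which supports early, large emission cuts
# -> 5.0%: ~0.76, which supports the "go slow" / climate policy ramp
```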

Ingham and Ulph (2003) cite additional criticism to the effect that the IAM-based model computations ignore the possibility that there may exist threshold values in nature, beyond which additional impact may trigger dramatic and irreversible processes. It is, for example, possible that global warming beyond a certain level may trigger rapid melting of the Greenland ice sheet, thus resulting in a steep increase in the sea level. Ackerman et al. (2009) emphasise that, since the main consideration is to safeguard against a potential climate-induced disaster with a low probability, it is better to base analysis of the climate issue on an insurance-based approach than on a comparison of costs and benefits, which is the approach adopted by most IAM models.

8.5.2 Weitzman’s “dismal theorem”

In an article from 2009 on the global climate challenge, Professor Martin Weitzman (Harvard University) presented the so-called “dismal theorem”. The theorem represents a criticism of traditional cost-benefit analysis of the climate issue, as conducted by using IAM models. At the core of the criticism is the observation that we do not know the probability of very serious consequences from global warming. The probability of climate-induced disaster, in the form of rapid global warming and large-scale feedback effects on production and consumption, may be non-negligible. Moreover, the shape of the social welfare function will be such as to make the willingness to pay for averting climate-induced disaster approach infinity with increasing temperatures. When this cost is multiplied by a non-negligible probability, the expected value is also infinite. Weitzman (2009) is therefore of the view that we cannot perform a standard cost-benefit analysis of the climate problem and argues, against this background, in favour of an insurance approach that bears a resemblance to the precautionary principle discussed in Chapter 8.4. He believes there is much evidence to suggest that the probability distribution of climate-induced disaster has “fat” tails (see Box 8.3), because there is very considerable uncertainty associated with the future prospects for major temperature change, and because the global cost increases caused by temperature increases may be steeply progressive.

Textbox 8.3 Thin and fat tails

In statistics, one makes a distinction between probability distributions with so-called “thin” and “fat” tails, respectively. The bell-shaped normal distribution does, for example, feature a thin tail. This implies that most outcomes are centred around the expected value of such outcomes, whilst outcomes a long distance from the mean occur only very rarely. A “thin-tailed” probability distribution implies that the probability decreases exponentially or faster when one moves away from the expected value. Hence, there is a very low probability of outcomes that are far removed from such value. An example of a normally distributed variable may be female (or male) height at the national level.

However, many phenomena may have a probability distribution that deviates from this bell-shaped curve. One example may be stock market fluctuations. On 19 October 1987, the US stock market slumped by 23 percent, cf. Nordhaus (2011). If the stock market had adhered to a normal distribution, we would, according to Nordhaus, have observed a 5-percent change in prices only once every 14,000 years. However, historical stock market data show that major fluctuations happen much more frequently than would be indicated by a normal distribution. This is indicative of a probability distribution with “fat tails”, i.e. that the probability declines towards zero more slowly than under the normal distribution. One example of such a distribution is the Pareto distribution, also called the “power law” distribution. This distribution is frequently used in both the natural and the social sciences. Even extreme outcomes will not be entirely improbable under such a distribution. Fat tails are often associated with events like, for example, the magnitude of an earthquake, a steep stock market contraction, or a major correction in housing prices at the national level.
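The difference between the two types of tail can be made concrete by comparing survival probabilities, as in the sketch below. The standard normal and a Pareto distribution with tail index 2 are chosen purely for illustration.

```python
# Tail probabilities P(X > k) under a thin-tailed (standard normal) and a
# fat-tailed (Pareto, shape b = 2) distribution; parameters are illustrative.
from scipy.stats import norm, pareto

for k in (2, 5, 10):
    p_thin = norm.sf(k)           # declines faster than exponentially
    p_fat = pareto.sf(k, b=2)     # declines only polynomially, as k**-2
    print(f"k = {k:>2}: normal {p_thin:.2e}, Pareto {p_fat:.2e}")
# At k = 10 the normal tail probability is about 7.6e-24, whereas the
# Pareto tail probability is still 1.0e-02.
```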

Weitzman (2009) highlights, inter alia, the uncertainty associated with the climate response occasioned by bringing the CO2 concentration in the atmosphere to completely unknown levels. Since a temperature increase of, for example, 4.5 °C is outside the scope of our experience, we must rely on our views with regard to probability distributions, which must by necessity be subjective within the climate arena due to the lack of empirical data. Professor Weitzman also notes that potentially catastrophic feedback effects from increased CO2 concentrations in the atmosphere are currently omitted from most IAM models. He furthermore criticises the damage function underpinning IAM analyses of the climate issue (see footnote 1). Such analyses find a relatively moderate effect on world production from large temperature increases. The special shape of the function, which measures the detrimental effects in terms of their impact on consumption, means that the economy can compensate for the welfare effects of higher temperatures through higher consumption. Another specification, which differs from the IAM models, might describe a situation in which the main effects of climate change affect goods that cannot be compensated for through material wealth, such as biodiversity and health.

The critical issue is, according to Weitzman, how rapidly the probability of disaster declines relative to the welfare effects of disaster. Although the probability of an outcome declines with the expected welfare loss associated with such outcome, it is not necessarily the case that the probability approaches zero rapidly enough for the expected value of the welfare loss to also approach zero. The answer depends partly on how “thin-tailed” the probability distribution is (see Box 8.3), and partly on how fast society’s expected welfare loss increases with the climate effects. Hence, this is a race along the tail end of the probability distribution, between how rapidly the probabilities decline and how steeply the welfare effects from climate-induced deterioration increase. If the probability of catastrophic climate-induced damage is not negligible, this will serve to bring about a high willingness to pay for the prevention of such extreme climate change. It may therefore be difficult to determine an upper limit for this willingness to pay.
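This “race” can be sketched numerically. In the illustration below, the tail of the outcome distribution is assumed to decline as a power law, T^(-alpha), while the welfare loss grows as T^gamma; all parameter values are hypothetical. The partial sums of probability weight times loss settle down when the tail “wins” and keep growing when it does not.

```python
# Partial sums of (tail probability weight) x (welfare loss) over ever more
# extreme outcomes T. With weights ~ T**-alpha and losses ~ T**gamma, the
# sum converges only if alpha - gamma > 1. All values are illustrative.

def partial_expected_loss(alpha: float, gamma: float, t_max: int) -> float:
    return sum(t ** (gamma - alpha) for t in range(1, t_max + 1))

for t_max in (10, 100, 1_000, 10_000):
    thin = partial_expected_loss(alpha=4.0, gamma=2.0, t_max=t_max)  # converges
    fat = partial_expected_loss(alpha=2.5, gamma=2.0, t_max=t_max)   # diverges
    print(f"t_max = {t_max:>6}: thin tail {thin:.3f}, fat tail {fat:.1f}")
# The thin-tailed case levels off near 1.645, while the fat-tailed case
# grows without bound, so the expected welfare loss has no finite limit.
```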

8.5.3 Criticism of Weitzman’s findings

Nordhaus (2011) believes the dismal theorem to be important because it may assist us in determining when extreme outcomes are of relevance to our decisions. However, the theorem is only valid under special conditions: It assumes strong risk aversion in society, a very fat tail for uncertain variables and, moreover, that society is unable to learn and act in a timely manner. According to Nordhaus, the dismal theorem assumes, inter alia, that the marginal utility of consumption grows infinitely large when consumption approaches zero, as in the case of a disaster. This implies that society will have an unlimited willingness to pay to avert such a scenario, even if the probability thereof is very low. If this assumption is not met, the expected value of the welfare loss will not be infinitely large, and the premises underpinning the dismal theorem will be eliminated. That leads us, according to Nordhaus, back to standard cost-benefit analysis, such as the many IAM models.

The key question asked by Nordhaus is whether the international community does in actual fact have an infinite willingness to pay for avoiding a very low, but non-negligible, probability that the basis for human existence will be wiped out. He notes, by way of an example, that the probability of an asteroid hitting the Earth is about 10⁻⁸ per annum. If the dismal theorem were valid we would, according to Nordhaus, be willing to pay an unlimited amount for a tiny reduction in this probability. He notes that society does not, generally speaking, behave as if infinite negative utility is associated with catastrophic outcomes at the limit. Besides, Nordhaus maintains that the nature of climate change (unlike an asteroid disaster) is such as to give us time to learn, and to postpone the large-scale emission reductions until more effective technologies have been developed.

Pindyck (2011) notes, as did Nordhaus, that Weitzman assumes a utility function exhibiting special characteristics, especially with regard to the shape of society’s risk aversion. Pindyck believes that marginal utility may be very high when consumption approaches zero, but not infinitely high. If we introduce an upper cap, thus implying that marginal utility approaches a finite level, the willingness to pay will also be finite, according to Pindyck. However, he agrees with Weitzman that it is reasonable to assume that the relevant probability distributions are fat-tailed. He does not dismiss the possibility of an extreme climate-induced outcome, and notes that sufficiently fat tails justify swift action without any complex analysis, but he believes that steep emission reductions may also be justified even if assuming thin tails. In view of other potential disasters faced by the world, the shape of the tails will not provide much guidance as far as political decision making is concerned. The decisions must be considered in the context of other important social priorities, such as the cost of taking precautions against other potential disasters.
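The disagreement over the shape of marginal utility can also be illustrated numerically. The sketch below uses a CRRA-style marginal utility u'(c) = c^(-eta); the elasticity eta and the consumption floor are assumed values, used only to show how a bound of the kind Pindyck describes keeps willingness to pay finite.

```python
# CRRA-style marginal utility u'(c) = c**(-eta). Without a floor it grows
# without bound as consumption c approaches zero (Weitzman's case); with a
# floor it levels off (Nordhaus's and Pindyck's case). Values are assumed.
eta = 2.0
c_floor = 0.05  # assumed minimum consumption level

for c in (1.0, 0.1, 0.01, 0.001):
    unbounded = c ** (-eta)
    bounded = max(c, c_floor) ** (-eta)
    print(f"c = {c:>6}: unbounded {unbounded:>12.0f}, bounded {bounded:>6.0f}")
# The unbounded version reaches 1,000,000 at c = 0.001, while the bounded
# version stays at 400, so expected losses (and willingness to pay) remain finite.
```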

In a rejoinder to the criticism, Weitzman (2011) writes, inter alia, that the existence of other potential disasters does not eliminate the special cause for concern occasioned by climate change. He could also have chosen different specifications for society’s utility function than those adopted in his original analysis. The key observation is that potentially fat tails should make economists less confident about cost-benefit calculations within this area.

8.6 The assessment of the Committee

As far as policy implications relating to irreversible effects are concerned, the Committee is of the view that no decisive theoretical or empirical developments have unfolded since the NOU 1997: 27 Green Paper. This would also appear to be confirmed by Pearce et al. (2006). When faced with irreversible effects, one should, if the project can be postponed and such postponement enables new information of relevance to decision making to be gathered, take into account the (quasi-) option value associated with a wait-and-see alternative. A positive net present value based on a cost-benefit analysis will not necessarily imply that it is profitable to implement the project immediately. The expected net present value of implementing the project immediately must also exceed the option value associated with the wait-and-see alternative.

It is often difficult or impossible to calculate the (quasi-)option value, but it is important to be aware of it, and to describe and assess the implications of waiting. Although the examples discussed here are principally obtained from the environmental arena, the Committee notes that there are also other types of irreversibility. Examples may be infrastructure investments, and decisions that influence the choice of technology. Whether (quasi-)option values of relevance to the analysis arise will depend on the scale, scope and duration of the effects, and on whether relevant, new knowledge can be gathered if the project is postponed.

If one is unable to conclude that the probability of catastrophic effects is negligible, the standard method of analysis for risky outcomes (where the probability distribution is assumed to be known) may underestimate, potentially to a considerable extent, the expected cost of society being exposed to an unknown degree of disaster risk. If such is the case, traditional cost-benefit analysis will not be a suitable tool for calculating an optimal safety level.

The Committee takes the view that one should in such cases, first of all, attach considerable weight to describing both what one knows about the possibility of catastrophic outcomes and the knowledge deficiencies of which decision makers should be aware. There will be a difference between analyses where potentially catastrophic effects are the main focus, on the one hand, and analyses of measures involving some effects that may influence the probability of disaster, on the other hand. The level of ambition will usually, at least implicitly, be determined as a “safe minimum standard”. Cost-effectiveness analysis may contribute to shedding light, for the benefit of decision makers, on what amount of resources is allocated to risk reduction within different sectors. It may, in such a context, be of interest to highlight whether more resources are devoted, on the margin, to reducing the risk of death or injury in catastrophic and/or dramatic scenarios, when compared to more “mundane” accidents that cause, in aggregate, the same amount of damage, and possibly more. The Committee does, however, acknowledge that certain disasters may involve aspects and dimensions that make such comparisons less relevant to decision makers. The Committee has not considered it to fall within the scope of its terms of reference to define what qualifies as “catastrophic effects” and what does not.

Generally speaking, the cost-benefit analysis of safety measures is at an early stage of theoretical development. Various types of break-even analysis may contribute to indicating how large a reduction in the probability of a terrorist act or similar incident must be entailed by a safety measure in order for such measure to be justified on economic grounds.

The Committee also refers to Report No. 29 (2011-2012) to the Storting, “Civil Security and Safety”, in which the Government outlines various measures to strengthen civil protection efforts. The Report addresses practical matters relating to civil protection efforts within various areas of society, including the organisation of such efforts.

The debate amongst economists concerning the economic analysis of the climate issue reviewed in the present Chapter provides an illustration, at the global level, of the issues raised by disasters and irreversible effects. It may in this context be appropriate to examine whether the relationships under assessment are linear, or whether there may exist threshold values beyond which effects that are not only irreversible, but potentially catastrophic, could arise.

The fundamental lesson from the debate surrounding Weitzman’s dismal theorem is the importance, in the cost-benefit analysis of situations involving potentially catastrophic outcomes, of assessing whether or not the probability of disaster is negligible. However, believing that the probability is low and/or approaches zero is, in this context, not sufficient to conclude that it is negligible.

Firstly, one needs to examine whether costs increase by so much in the event of more extreme outcomes that it will, in full or in part, outweigh the effect of a declining probability of such outcomes; cf. the above discussion. In other words, it is not only the level of probability, but also the costs, that determine whether the possibility of an improbable outcome can be ignored. Secondly, one needs to take into consideration how certain one is that the probability of a disaster is low. If, strictly speaking, one does not know very much about such probability, the actual uncertainty faced by society may potentially be much higher (than it would have been if the risk had been correspondingly low, but fully known). A traditional risk assessment in which the disaster probability is held to be low, but known, does in practice make active use of two types of probability information: firstly, its level, i.e. how probable one believes a disaster to be, and secondly, its precision, i.e. that the probability figure is certain – which implies, inter alia, an assumption to the effect that one knows that one is not mistaken, and therefore has not significantly underestimated the disaster risk.
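The distinction between the level and the precision of a probability can be illustrated with a small calculation, in which all numbers are hypothetical: if the point estimate of the disaster probability may itself be wrong, the effective probability faced by society can be far higher than the estimate.

```python
# A point estimate says the disaster probability is 1e-4. Suppose, however,
# that there is a 10 percent chance the model is wrong and the true
# probability is 1e-2. All numbers are hypothetical.
p_estimate = 1e-4
p_if_wrong = 1e-2
p_model_wrong = 0.10

effective_p = (1 - p_model_wrong) * p_estimate + p_model_wrong * p_if_wrong
print(f"Point estimate: {p_estimate:.1e}, effective probability: {effective_p:.1e}")
# -> effective probability 1.1e-03, an order of magnitude above the estimate.
```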

If one is not certain that the disaster risk is negligible, the fundamental observation of Weitzman (2009) is that we should think along the lines of insurance against disaster, i.e. a precautionary principle. The two-degree target may be interpreted as reflecting such an approach, in the form of a “safe minimum standard”.

8.7 Summary recommendations

  • When faced with irreversible effects, it will at times be possible to get more information about the effects of the measures by postponing execution. In formal terms, this may be expressed as a (quasi-)option value. Such values may be difficult to estimate, but the advantages of postponing implementation should nevertheless be described and assessed.

  • In the cost-benefit analysis of situations with a potentially catastrophic outcome, it is important to examine whether or not the probability of such catastrophic outcome is negligible. In order to safely ignore a disaster probability it is, in principle, necessary to know 1) that the level of the disaster probability is very low, 2) that the level of the disaster probability is well known (and therefore not uncertain in itself), and 3) that the cost increase in the event of more extreme outcomes is not sufficiently steep to (in full or in part) outweigh the fact that more extreme outcomes are less probable.

  • If the probability is not negligible, or if one is unable to conclude that such is the case, the standard method of analysis may underestimate, potentially to a significant extent, the cost associated with society being exposed to an unknown degree of disaster risk. The Committee is of the view that one should in such cases attach considerable weight to describing both what one knows about the possibility of catastrophic outcomes and the knowledge deficiencies of which decision makers need to be aware. The level of ambition will usually, at least implicitly, be determined as a “safe minimum standard”.

  • Cost-benefit analysis should be used to highlight the amount of resources used, implicitly or explicitly, for risk reduction within various sectors, in order to improve the basis for making decisions about sensible resource allocation. The theoretical literature within the area of safety regulation is in development. Various types of break-even analysis may provide information about the minimum probability of a terrorist act, or a similar incident, that may justify a safety regulation.

8.8 Bibliography

Ackerman, F., S. J. DeCanio, R. B. Howarth and K. Sheeran (2009). Limitations of Integrated Assessment Models of Climate Change. Climatic Change, 95 (3-4), pp. 297-315.

Arrow, K.J. and A.C. Fisher (1974). Environmental Preservation, Uncertainty, and Irreversibility. The Quarterly Journal of Economics, 88 (2), pp. 312-319.

Bergstrom, J. C. and A. Randall (2010). Resource Economics: An Economic Approach to Natural Resource and Environmental Policy. Cheltenham, UK and Northampton, MA: Edward Elgar.

Bishop, R.C. (1978). Endangered Species and Uncertainty: The Economics of a Safe Minimum Standard. American Journal of Agricultural Economics, 60 (1), pp. 10-18.

Ciriacy-Wantrup, S.V. (1952). Resource Conservation: Economics and Policies. University of California Press, Berkeley.

Dietz, S. (2009). On the Timing of Greenhouse Gas Emissions Reductions: A Final Rejoinder to the Symposium on “The Economics of Climate Change: The Stern Review and its critics.” Review of Environmental Economics and Policy, 3 (1), pp. 138-140.

Dixit, A. K., and R. S. Pindyck (1994). Investment under Uncertainty. Princeton, Princeton University Press.

Farrow, S. and S. Shapiro (2009). The Benefit-Cost Analysis of Security Focused Regulations. Journal of Homeland Security and Emergency Management, 6 (1).

Hagen, K.P. and O. Godal (2011). On principles for the prioritisation of preventive efforts relating to floods, landslides and avalanches at the national level. (In Norwegian only. Norwegian title: Om prinsipper for prioritering av den forebyggende innsatsen knyttet til flom og skred på nasjonalt nivå.) Institute for Research in Economics and Business Administration (“SNF”). Working Paper No. 27/11.

Ingham, A. and A. Ulph (2003). Uncertainty, Irreversibility, Precaution and the Social Cost of Carbon. Tyndall Centre for Climate Change Research. Working Paper No. 37.

IPCC (2007). Climate Change 2007: The Physical Science Basis. Fourth Assessment Report (AR4).

Nordhaus, W. (2011). The Economics of Tail Events with an Application to Climate Change. Review of Environmental Economics and Policy, 5 (2), pp. 240-257.

Nordhaus, W. (2007). The Challenge of Global Warming: Economic Models and Environmental Policy. Yale University.

Nordhaus, W. (2007). A Review of the Stern Review on the Economics of Climate Change. Journal of Economic Literature, Vol. XLV, pp. 686-702.

Norwegian Government Agency for Financial Management (“DFØ”) (2006). Dealing with Uncertainty in Cost-Benefit Analysis. Guide. (In Norwegian only. Norwegian title: Behandling av usikkerhet i samfunnsøkonomiske analyser. Veileder.)

NOU 1997: 27 Green Paper; Cost-Benefit Analysis. (In Norwegian only. Norwegian title: Nytte-kostnadsanalyser.)

Pearce, D., G. Atkinson and S. Mourato (2006). Cost-Benefit Analysis and the Environment. Recent Developments. OECD 2006.

Pindyck, R. S. (2011). Fat Tails, Thin Tails, and Climate Change Policy. Review of Environmental Economics and Policy, 5 (2), pp. 258-274.

Report No. 13 (1992-1993) to the Storting. On the UN Conference on Environment and Development. (In Norwegian only. Norwegian title: Om FN-konferansen om miljø og utvikling.)

Report No. 29 (2011-2012) to the Storting. Civil Security and Safety (In Norwegian only. Norwegian title: Samfunnssikkerhet.)

Stern, N. (2007). Stern Review on the Economics of Climate Change. HM Treasury, London.

Taleb, N. N. (2010). The Black Swan. London: Penguin Books Ltd.

Weitzman, M. (2009). On Modelling and Interpreting the Economics of Catastrophic Climate Change. The Review of Economics and Statistics, 91 (1), pp. 1-19.

Weitzman, M. (2011). Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change. Review of Environmental Economics and Policy, 5 (2), pp. 275-292.

Footnotes

1.

The standard CBA damage function reduces the welfare equivalent of production in the event of a mean global temperature increase T by a quadratic-polynomial multiplier expressed as M(T) = αT²/(1 + αT²), cf. Weitzman (2011). Instead of being multiplicatively separable, the negative utility from global warming may be additively separable. This will again, according to Weitzman (2011), imply stricter emission restrictions than would be suggested by the standard calculations.
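To see why this functional form yields moderate damage estimates, the multiplier can be evaluated directly. In the sketch below the coefficient α is an assumed value (chosen so that damage is roughly 1.7 percent of output at 2.5 °C); it is not taken from the source.

```python
# Evaluating the damage multiplier M(T) = a*T**2 / (1 + a*T**2) for a few
# temperature increases T. The coefficient a = 0.0028 is assumed for
# illustration only.
a = 0.0028

def damage_share(T: float) -> float:
    return a * T ** 2 / (1 + a * T ** 2)

for T in (2.5, 4.5, 10.0):
    print(f"T = {T:>4} deg C: damage share of output {damage_share(T):.1%}")
# Even at T = 10 deg C, the multiplier implies a loss of only about 22
# percent of output, which illustrates Weitzman's criticism that this
# specification makes extreme warming look economically manageable.
```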
