This survey is not meant to slight contributors to what we know about risk and uncertainty in other fields within economics, about which several volumes have been written. Entire books by excellent scholars cover risk and uncertainty (e.g. Eeckhoudt, Gollier, and Schlesinger 2005), and there is even a collection of papers along the same lines of focus as this chapter, edited by Wesseler et al. (2003); I hope to provide a valuable update to those papers here. Most papers in the risk and uncertainty literature are best classified as purely theoretical, containing little empirical content (e.g. Pratt 1964; Segal 1987, 1990; Segal and Spivak 1988; Epstein 1999), and most contributions fall into the categories of microeconomics, finance/investment, or decision theory. This work has of course spilled over into applied areas of economics, which have in turn fostered empirical work. Empirical contributions involving risk are growing in number and can now be found in many specialty areas of economics, including human health (e.g. Blomquist 1981; Berger et al. 1987; Dickie and Gerking 1996; Kennedy 2002; Viscusi and Gayer 2006; Anderson and Mellor 2008; Cameron and DeShazo 2013; Riddel and Hales 2014), food safety (e.g. Fox et al. 1998, 2002; Dosman et al. 2001; Alfnes and Rickertsen 2003; Lusk and Coble 2005; Shaw et al. 2006; Curtis and Moeltner 2007; Cerroni et al. 2013), and transportation (e.g. Rodgers 1996; Huang et al. 2015) economics. All of the above papers provide many references to other work in those applied fields. Agricultural economists have also considered the role of risk in farm production decisions for many years (e.g. Chavas and Holt 1990; Smith and Mandac 1995). Studies involving risk/uncertainty in the area of resource economics are also on the rise.
Before diving into resource economics, first note that I use both of the words risk and uncertainty, to distinguish between importantly different circumstances. The first term, risk, will pertain to problems where the uncertainty can be at least fairly well specified with a known probability distribution; the second term opens a door that extends all the way to "pure" uncertainty (often deemed "Knightian" uncertainty, Knight 1921), where we simply have no idea of the probabilities involved. Risk is most straightforward when logic or the laws of physics or science dictate the probabilities, or at least the frequencies (e.g. fair coin tosses, dice throws, long-term weather data; when data on the frequencies of outcomes are good, we might also reasonably use them to characterize probabilities). As less and less is known about probabilities, we move into the realm of uncertainty. A second point of clarification is that when economists use the word risk, they may sometimes appear to be interested simply in the probability of something happening, and unfortunately downplay the importance of the consequences. Psychologists and scientists in other fields might use the word "risk" to imply something broader, clearly encompassing both the probability and the severity of an outcome, were it to happen. In their chapter on risk and environment, Hanley, Shogren and White (2007, hereafter HSW --- all economists, by the way)

do emphasize the importance of the consequences of the event when it occurs, and this is appropriate. It makes sense to suppose that the severity of a risky outcome matters in shaping preferences and, ultimately, behavior. After all, we may not care much about something that has a high probability but little consequence, while there is a large literature on the opposite: low probability events with dire consequences (e.g. Kunreuther et al. 2001; Weitzman 2009).[2] A few readers thinking about these definitions carefully might indeed wonder: are there any situations in the area of resource economics where we really have complete certainty about outcomes? Perhaps there are certainties in the environmental/resource arena, but a lack of certainty pertains to problems involving the following:
- links between health and environmental contamination (e.g. the value of statistical life (VSL) literature, where mortality risks arise from exposure to pollutants found in air, water, and other media). Sometimes the distinction between a health issue and an environmental or resource issue is not apparent. For example, naturally occurring radon is a risk many people face, and is inextricably linked to lung cancer (Åkerman et al. 1991);
- issues linking most ecological outcomes and human behaviors; and
- virtually all problems involving decisions that draw on knowledge that could only be obtained were we able to time-travel into the future (i.e. all trade-offs between the present and the future).
Indeed, as some say about the last point above, the very nature of the future is that it involves unknowns. Thus, real-world natural resource management or optimal extraction theory, requiring that trade-offs be made between the present and the future, inherently involves risk or uncertainty. In extraction of non-renewable or renewable resources, future supply, demand and prices (e.g.
Sadorsky 2001), as well as stock sizes and their relation to discovery and technological innovation, and the quality of resources, may all be uncertain or risky. Naturally, this is why futures markets can be quite helpful to those in the business of resource extraction. Environmental and natural resource economists have of course long recognized the important role that risk or uncertainty might play in issues which interest us. Important early contributions involved decisions related to environmental preservation (e.g. Krutilla 1967). A broad definition of resource economics might well include farmland, but I do not focus on agriculture in this chapter, except as it relates to risk/uncertainty in connection with climate change.
[2] A reviewer notes that the emergence of behavioral economics, potentially reconciling a "divorce" of psychology and economics some 100 years ago or more, may well balance both topics when considering risky behavior.

Environmental issues which I will include in this chapter are those related to nuclear wastes and accidents (Itaoka et al. 2006; Riddel and Shaw 2006; Riddel 2009, 2011; Ho et al. 2014); toxic chemicals and other contaminants in drinking water (Casman et al. 2000; Adamowicz et al. 2011; Jakus et al. 2009; Kalisa et al. 2015) and air; hazardous wastes in land/soil (Smith and Desvousges 1988; Gayer et al. 2000; Hammitt and Zhou 2006); fresh surface water (Desvousges and Smith 1987; Jakus and Shaw 2003); groundwater (Edwards 1988; Forsyth 1997); wildfires (Englin et al. 2000; Hand et al. 2015; Wibbenmeyer et al. 2013; Yoder 2004); and oceans (McKelvey et al.). Many of the studies in the topic areas above involve non-market valuation of mortality risk reductions and lead to the value of a statistical life (VSL) literature (Hammitt 2000; Viscusi and Aldy 2003; Adler et al. 2014). Cameron (2014) also provides a lengthy review of morbidity valuation issues, noting that this area is much in need of further study. I will also include a very selective review of the treatment of risk and uncertainty in the rich area of natural hazards such as earthquakes (Brookshire et al. 1985; Bernknopf et al. 1990; Michel-Kerjan et al. 2013), wildfires (Daigneault et al. 2010; Fiore et al. 2009), tornadoes, hurricanes (Hallstrom and Smith 2005; Kelly et al. 2012), floods (Heitz et al. 2009; Bin and Landry 2013; Petrolia et al. 2013), and landslides (Thiene et al. 2015).[3] As with almost all of the literature in the economics of risk and uncertainty, natural resource economics involving risk/uncertainty is largely theoretical: there remains a substantial dearth of empirics (i.e. modeling that produces statistical results by analyzing data). For example, classic optimal extraction papers often integrate stochastic optimal control, but these papers are almost purely theoretical in nature. However, there are some interesting empirical studies as exceptions.
There are some applications involving (i) invasive species (Eiswerth and Van Kooten 2002; Leung et al. 2002; Provencher et al. 2012); (ii) species for which hunting permits are determined by lotteries (Buschena et al. 2001); and (iii) empirical analyses of exhaustible (Pindyck 1980; Conrad and Kotani 2005) and renewable resources (Smith 1980; Bockstael and Opaluch 1983; Pindyck 1984; Clark and Kirkwood 1986; Reed and Clarke 1990; McKelvey et al. 2003).
[3] One has to draw the line somewhere, so I am aware of, but have largely skipped, any detailed review of several important "lottery" based recreation papers, i.e. situations where participation in recreation involves a lottery (e.g. Buschena et al. 2001; Scrogin, Berrens and Bohara 2000; Nguyen et al. 2007; Yoder et al. 2014).

Risk

Risk can be relatively simple to incorporate into some natural resource management contexts, at least for scientists. For example, wildfire risk has some interesting features that help us incorporate risk into managing fires in forests. When conditions are dry and a forest has not burned in a very long while, risks are known to be quite high, whereas very wet conditions lead to lower fire risks, and low fire risk is obviously also the case just after a fire has

occurred (see Prestemon et al. 2002). Similar things can be said about risks related to earthquakes and the timing of the most recent earthquake event (Bernknopf et al. 1990). For example, Istanbul (Turkey) has not been directly hit by an earthquake since 1766, and there is a 30 to 60% chance of at least a magnitude 7 earthquake hitting there some time in the next 25 years (Barnes 2010). However, an interesting problem arises here: it is often the case that just after an event like a wildfire has occurred, the science-based fire risks are actually quite low, but people will generally believe or perceive that risks are enormous. A key point is that there may be quite a disparity between public beliefs about risk and scientists' estimates of the risk. This is one of my favorite issues, and it is explored quite extensively below. Risk is often introduced into natural resource extraction problems simply so that expectations are involved: profits are not maximized, but "expected" profits might be. Similarly, on the consumption side, we do not derive demand, we derive expected demand. However, extensive study suggests that neither producers nor consumers simply engage in the maximization of expected outcomes.

Uncertainty and Ambiguity

In contrast to risk, some natural systems are so complex that characterizing risk with a high degree of accuracy about probabilities is simply impossible. For example, hurricane risk prediction and modeling have greatly improved in the years since the infamous Galveston, Texas disaster of 1900, which killed thousands of people, but it remains quite difficult to precisely pinpoint a specific region of landfall for a hurricane well in advance of its actual arrival,[4] and some scientists are now concerned about difficulty predicting future hurricanes because of greater uncertainty introduced by climate change (e.g. Ranger and Niehörster 2012).
The odds of all the features leading up to the Fukushima nuclear power plant disaster in 2011 also come to mind here (e.g. Ho et al. 2014), as do specific predictions related to climate change outcomes (Cameron 2005a, 2005b; McCarl et al. 2008). Several fields other than economics distinguish "extreme events" from ordinary risks, and in fact suggest that these extremes need to be dealt with differently than the manner in which risk is typically handled. The extreme event is likely to be one that happens very infrequently, but has severe consequences when it does occur. The main problem is that when events are extremely rare, not much is known about them, making it unlikely that there is a convenient, accessible probability distribution one can rely on for information. One suggestion is to move toward fuzzy set theory (Van Kooten et al. 2001) and interval-valued probabilities, Bayesian analysis, or non-linear probability weighting functions that may or may not conform to what we know about probability theory (Wakker 1990).
[4] A reviewer suggests that wildfire risks might actually be just as ambiguous as hurricane risks.
As will be seen below,

several other types of economic decision models have been proposed to handle uncertainty (e.g. Segal 1990; Klibanoff, Marinacci and Mukerji 2005). To summarize so far: in contrast to risk, the definition of uncertainty involves a spectrum, ranging from slight uncertainty about probability estimates all the way to pure uncertainty, where it may be hopeless to obtain any good estimate of probability in a reasonable amount of time to allow making a decision. Society has to decide whether to forge ahead using some sort of decision-making process, or to discount or even ignore the uncertainty completely. An exploding strand of literature in the area of uncertainty uses the word ambiguity (Ellsberg 1961) to describe various situations people face. For example, when two or more persons, either scientists or perhaps laypersons in the public, have differing opinions about probabilities, these lead to multiple estimates of a relevant probability, and this has been deemed a very simple case of ambiguity (see Camerer and Weber 1992, who provide an excellent review of ambiguity up to that point in time). Ambiguity is sometimes characterized simply as a case where probabilities are uncertain because of a deficiency in information. Once uncertainty or ambiguity is in play, an individual's subjective beliefs about probability likely dominate science-based probabilities in decision-making, and it is quite possible that some decision rule is adopted that does not involve probabilities at all. For example, Grijalva et al. (2011) consider decision making under uncertainty based on a strategy of minimizing one's regret over making a wrong choice. Beliefs about risk may lead to the formulation of subjective probabilities. Subjective probabilities might explain behavior much better than science-based averages do (Manski 2004), although their accuracy or precision has certainly been called into question (Bassett and Lumsdaine 2001; Andersen et al. 2012).
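As one concrete illustration of the non-linear probability weighting mentioned above, the sketch below uses the one-parameter form attributed to Prelec; the parameter value is my arbitrary choice, and this is my own illustration, not a model from any of the papers surveyed here. With a curvature parameter below one, the function overweights small probabilities and underweights large ones, a pattern commonly reported in experiments.

```python
import math

def prelec_weight(p: float, alpha: float = 0.65) -> float:
    """One-parameter Prelec probability weighting: w(p) = exp(-(-ln p)^alpha).

    With alpha < 1 the function overweights small probabilities and
    underweights large ones; alpha = 1 returns w(p) = p (no distortion).
    """
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** alpha))

# Small p is inflated, large p deflated, endpoints are preserved.
for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f}  ->  w(p) = {prelec_weight(p):.3f}")
```

Note that such a weighting function replaces the raw probability p in a model like expected utility, which is one way a decision model can "not conform" to standard probability theory while remaining tractable.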
Several scholars have developed models that involve a range or interval of relevant probabilities (Manski 2004; Watanabe and Fujimi 2015), and some of these will be discussed below. Next, I consider preferences for risk.

Risk Preferences, Revealed and Stated Probabilities

Risk preferences are related to, but are not the same as, subjective probabilities. As will be seen in the next section, curvature in the utility function with respect to a risky influence (traditionally, risky income) indicates a preference for risk, risk neutrality, or risk aversion. Subjective probabilities are an individual's own estimates of the probability of an outcome, which may or may not coincide with science-based risks. Identifying both risk preferences and subjective probabilities requires different information and data, and suggests that we need to be aware that the individuals we observe might themselves be working with partial information (Manski 2004). For example, suppose we observe a gambler at the craps table suddenly bet a lot of money on a double-six outcome. Why did she do it? One possibility is known as the gambler's fallacy: if asked, the gambler might declare: "Well, I've been playing for over an hour and it

hadn't come up in a long, long time, so it had to happen!"[5] This indicates that the subjective probability for this individual is quite different from the science-based probability: of course the probability of a double six coming up remains 1/36 regardless of previous rolls. A second possibility is that the individual knows the probability of the double-six outcome is indeed 1 in 36, but she loves risk (the risk preference), so she took the chance. With simple observation of the bet, how do we know which is the case? Do we care? It might be useful to use some additional, and perhaps coin some new, jargon here. Standard non-market valuation approaches used by environmental economists distinguish between stated preference methods and those that rely on revealed preference behaviors. For those unfamiliar with these two, stated preferences are based on responses to hypothetical scenarios (respondent number 12 chooses hypothetical new/planned transportation route A over route B), while revealed preferences are based on observed behaviors (by watching her, we think it apparent that Mary Lou prefers riding her bike to work over driving). These revealed and stated approaches will also be important in the discussion of risk and uncertainty here. As in the case of certainty, we can hope to observe individuals making actual decisions in the context of risk, though the consequences of those decisions are perhaps not immediately observable because outcomes might not yet be resolved or played out. Their actions and behaviors lead to revealed risk preferences, or to recovery of subjective probabilities or risks. Manski (2004) extensively discusses the advantages of revealed versus elicited/stated probabilities and offers an historical perspective. He concludes that there is a great deal of value in, and even necessity for, the use of elicited subjective probabilities. Behaviors undertaken to mitigate or avoid risk are often deemed "averting behaviors."
[5] I actually have asked a craps player this question at a Reno, Nevada casino and got exactly this response. For those who do not know the game of craps, it is a dice game involving bets on rolls of two six-sided dice by the "shooter." The game is complicated by the fact that some outcomes pay fair odds, conditional on the come-out roll by the shooter "passing," i.e. not being an immediate winner (a 7 or 11) or an immediate loser (a 2, 3, or 12).
For example, an individual might be observed to purchase an automobile package that includes an airbag option, at a higher cost than one with no airbags. The purchaser may or may not have access to data on mortality risk reductions in an automobile with airbags, and thus may or may not have a sense of the probabilities, but we might assume that they behave as if they do. Similarly, when faced with lung and bladder cancer risks associated with arsenic in drinking water in the home, a household may choose to mitigate this risk by taking one of various possible measures, such as water treatment (Shaw, Jakus and Riddel 2012) or drinking bottled water (Jakus et al. 2009). This averting behavior might in fact make some specific risks endogenous to the household (Konishi and Adachi 2011; Shogren and Crocker 1999), a point returned to below. In any case, we might not observe a member of the household long enough to know whether they get cancer, but again, we can assume that the mitigation choice helps reveal the

preference for risk, and in some settings, the value of a risk reduction. Averting behaviors that involve preventative expenditures, like cost-of-illness measures in the health arena, are thought to be helpful in providing lower bounds on the value of risk reductions.[6] Another choice that reveals preferences for risk is one's type of job, and the amount of safety associated with that job. Values for safety, or for reduced mortality risk, might be found by comparing wages in low-risk versus high-risk jobs; however, the estimation issues are complicated, and may involve risk perceptions and the potential for endogenous risk. Alternatively, there may be situations where there are no observable behaviors that will shed light on preferences under risk or uncertainty, or on what the individual believes is the relevant probability. In such cases, a stated probability or preference approach must be implemented. Individuals can perhaps state their true risk preferences, attitudes, or beliefs about probabilities. Skeptics about stated preference might believe they cannot, much as skeptics do not believe that individuals can provide true values for goods and services. So, the issues of incentive compatibility and framing effects arise when considering stated preference mechanisms such as survey questionnaires. A design mechanism is incentive compatible when a subject or respondent who is motivated by self-interest alone nevertheless reveals true, private information. Also, as in stated or hypothetical valuation exercises, people may have some incentive to lie about their true risk preferences or beliefs. For example, it may be viewed as sort of macho to understate one's true sense of risks (e.g. "I'm not scared of a little arsenic in my coffee."). Thus, it is clear that the same issue of incentive-compatible design arises when using stated probability approaches as it does in the case of stated preference/valuation.
To summarize, I will borrow from the stated preference valuation literature and use "stated" risks or probabilities vs. "revealed" ones as the jargon for the remainder of this chapter. Some issues similar to the ones that arise in stated preference will then also be discussed below. A similar description related to both of these concepts is "belief elicitation" and/or belief induction, although the latter suggests that a probability belief can be induced (see Hurley and Shogren 2005, who find that the induced belief in an experiment cannot be confirmed: instead, people overestimate low, and underestimate high, probabilities). Whatever the approach, the stated probabilities that we might use in models must somehow be elicited from the laboratory subject or the survey respondent, and discussion of this is continued in section 4, below. Next, the reader is introduced to the workhorse risk model in economics, the expected utility model.[7]
[6] It is well known in economic theory that the cost of, say, water treatment is not equal to the maximum willingness to pay to obtain a risk reduction or water quality improvement, but instead provides a lower bound.
[7] Those familiar with basics about the EU framework may of course wish to skip this entire section.


2. Decision Making Under Risk

Whether one uses stated or revealed probability approaches, there is a basic model of economic decision making under risk, the expected utility model (EUM). The idea is attributed to Daniel Bernoulli in the 18th century, in response to cousin Nicholas's St. Petersburg paradox (see HSW, p. 373, 2007). The analytical framework is attributed to von Neumann and Morgenstern (1947).

2.1 Expected Utility with Exogenous Risks

The basic expected utility framework can be applied to virtually any resource decision maker's problem, though we may often think of the EU model as pertaining more to environmentally related risk decisions than to risk in natural resource optimization problems, because of its tie to utility maximization from the position of the consumer.

The EU can be written in terms of continuous outcomes. However, a great deal of resource economic analysis involves a few distinct, discrete outcomes. In the usual expression for EU with discrete outcomes, the probabilities (p) enter the EU model linearly, and in the simple form those probabilities appear outside the utility function (u) as weights, not inside it. In a simple discrete, two-outcome, state-dependent form, with utility in state j written uj (j = 1, 2) and x as income or wealth:

(Eq. 1)

EU = p u1(x) + (1 - p) u2(x)
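A minimal numerical sketch of (Eq. 1) follows; the state-dependent utility functions and all numbers are hypothetical choices of mine, purely for illustration.

```python
def expected_utility(p, u1, u2, x):
    """Two-state expected utility, as in (Eq. 1): EU = p*u1(x) + (1 - p)*u2(x)."""
    return p * u1(x) + (1.0 - p) * u2(x)

# Hypothetical state-dependent utilities of income x: marginal utility is
# assumed lower in the "bad" state than in the "good" state.
u_good = lambda x: x ** 0.5
u_bad = lambda x: 0.5 * x ** 0.5

eu = expected_utility(p=0.8, u1=u_good, u2=u_bad, x=100.0)
print(eu)  # 0.8 * 10.0 + 0.2 * 5.0 = 9.0
```

The state dependence enters only through the two utility functions; the probabilities remain linear weights outside them, exactly as the text describes.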

While the EU framework is perhaps most often thought of with respect to consumers' decisions, it can also be used to characterize investors' decisions. When thinking of the producer's decision under certainty (e.g. a producer of oil or natural gas), one immediately thinks of profit maximization, so a natural extension to the world of risk is to think of maximization of expected profits. However, applying the EU framework to a producer's decision is different from the simplest producer's expected profit maximization problem. Particularly in agricultural applications (the farmer's decision problem), simple expected profit problems are often laid out in an effort to explain risky production decisions, but these deny the researcher the ability to consider the producer's risk preferences. Are they risk takers? These preferences could potentially be quite important in many problems in environmental and resource economics.

State dependence, as in (Eq. 1), can be quite important, but is not required in the EU. However, letting there be different utility functions pertaining to different states is almost certainly essential in much economic analysis we might consider in resource and environmental economics, as it is in health economics. In Eq. 1 above, the first state outcome might be "life" and the second, "death," and many have hypothesized that the marginal utility of income in each state might be quite different, which leads to many key results that relate to the valuation of mortality risk reductions.


Implementation of the EU framework empirically is discussed below, though there are in fact few applications that I know of, especially in resource economics. To implement (Eq. 1) empirically, one typically needs to do two things: first, assume a particular specification for the utility function (which can be tested), and second, assume a parametric probability distribution so that an estimating equation of some kind can be formulated. I postpone discussion of the latter until the applications section below, and continue on with some well-known theoretical aspects connected to the EU framework.
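To make those two steps concrete, here is one hedged sketch of how an estimating equation is often built from (Eq. 1) in the experimental literature: assume a CRRA utility specification, compute the EU difference between two lotteries, and map that difference into a choice probability with a logistic link. The lottery payoffs and the noise parameter below are invented for illustration; this is a generic sketch, not the method of any particular paper cited here.

```python
import math

def crra(x, r):
    """CRRA utility u(x) = x**(1 - r); linear (risk neutral) when r = 0."""
    return x ** (1.0 - r)

def eu_lottery(lottery, r):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * crra(x, r) for p, x in lottery)

def prob_choose_a(lottery_a, lottery_b, r, noise=1.0):
    """Logistic choice rule: P(choose A) rises with EU(A) - EU(B).

    'noise' scales how sharply EU differences translate into choices;
    both it and r would be estimated by maximum likelihood in practice.
    """
    diff = eu_lottery(lottery_a, r) - eu_lottery(lottery_b, r)
    return 1.0 / (1.0 + math.exp(-diff / noise))

safe = [(1.0, 50.0)]                  # $50 for sure
risky = [(0.5, 100.0), (0.5, 10.0)]   # 50/50 between $100 and $10

# A risk-neutral agent (r = 0) leans toward the risky lottery (higher mean),
# while a risk-averse agent (r = 0.5) leans toward the sure thing.
for r in (0.0, 0.5):
    print(r, prob_choose_a(safe, risky, r))
```

Maximizing the likelihood of observed choices over r (and the noise parameter) is what "estimating risk preferences" typically means in a laboratory setting.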

Certainty Equivalent

Standard presentation of the risk versus rate-of-return trade-off leads to the certainty equivalent (CE), which in turn helps identify basic preferences for risk. This helps greatly in understanding why some people behave quite differently than others when faced with risks. The CE is the amount of money or wealth (x) that an individual would accept with certainty so as to be indifferent between that certain amount and taking the gamble. Suppose the gamble pertains to a random variable x, which has some expected value, E(x). The definition of the CE is generally given by the equation:

E[U(x)] = U(CE)

The CE can be the same as, bigger than, or smaller than E(x). We are interested in the CE amount at which the individual is indifferent between the expected utility of the money gamble and the utility of the certain payment: if the CE is equal to E[x], then we believe the individual to be risk neutral, but when she accepts a smaller CE she is risk averse (she will accept smaller certain compensation to be indifferent). When the utility of the expected value strictly exceeds the expected utility, i.e. U(E[x]) > E[U(x)], we have Jensen's inequality, and it implies that the utility function is concave (the second derivative with respect to wealth is negative). In contrast, risk lovers may want more compensation or payoff to forgo the gamble because they get some utility from the gamble itself. These concepts help us partly understand why there are individuals in society who are more than willing to take on a gamble, and some who prefer a certain payoff ($ > 0) to a fair bet (E($) = 0); i.e. risk-averse individuals might turn down even a fair gamble. Paraphrasing one of the reviewers of this paper: to gamble, they require an expected value sufficiently high to overcome their aversion to risk.
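A small numerical sketch of the certainty equivalent for a concave (risk-averse) utility; the gamble and the square-root functional form are hypothetical choices of mine.

```python
# Certainty equivalent under u(x) = sqrt(x): solve u(CE) = E[u(x)],
# so CE = (E[u(x)])**2 for this particular utility.
p = 0.5
outcomes = (100.0, 0.0)  # a 50/50 gamble between $100 and $0

expected_value = p * outcomes[0] + (1 - p) * outcomes[1]                  # E[x] = 50
exp_utility = p * outcomes[0] ** 0.5 + (1 - p) * outcomes[1] ** 0.5      # E[u(x)] = 5
certainty_equivalent = exp_utility ** 2                                   # CE = 25

print(expected_value, certainty_equivalent)
# CE (25) < E[x] (50): this concave-utility agent is risk averse and will
# accept far less than the expected value to avoid the gamble; the gap of
# 25 is the risk premium.
```

A risk-neutral agent (linear utility) would have CE = E[x] = 50 for the same gamble, which is exactly the diagnostic described in the text.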

The EU framework also leads to a simple definition of financial risk aversion, known as the Pratt (1964)-Arrow (1964) measure (r). Here the ratio of the second derivative (u'') to the first derivative (u') of utility, again with respect to income or wealth (x), plays the key role:

(Eq. 2)

r(x) = -u''(x) / u'(x)

Equation 2 defines a coefficient of absolute risk aversion. Is this constant, or does it change over the domain of wealth? Clearly, it matters a great deal what form is assumed for the utility function, and one sees a large empirical literature in the area of agricultural economics wherein a specific utility form is imposed, which then yields an estimate of r corresponding to (2). One can impose, but also test for, risk preferences of the constant absolute risk aversion (CARA) form, or constant relative risk aversion (CRRA) form, or increasing or decreasing versions of either. It is well known that the only utility function that leads to a constant level of absolute risk aversion, hence the "CARA" form, is the exponential, such as u(x) = -exp(-αx). One can take the derivatives, solve for r in equation 2, and show that r equals the constant α. Examining (2), note that the more curvature in the utility function, the more risk aversion there is, because the numerator of the risk index is larger in absolute value relative to the denominator. To get a coefficient of relative (not absolute) risk aversion, one multiplies the index in equation (2) by x (in the numerator). Absolute risk aversion relates to the actual amount of wealth an individual holds in risky assets for a given level of wealth, while relative risk aversion focuses on the percentage of wealth held in risky assets. The second issue is whether one assumes a functional form that allows for constant absolute (CARA) or constant relative risk aversion (CRRA). When one assumes CARA, the usual form is:

(Eq. 3) U(x) = -exp(-αx)

Here the form is the negative of an exponential. Taking derivatives, u'(x) = α exp(-αx) and u''(x) = -α^2 exp(-αx), so the absolute index in (2) is r(x) = α: a constant, as expected. When one assumes CRRA, the form is very simply:

(Eq. 4) U(x) = x^(1-r)

With Eq. 4, risk neutrality clearly holds when r = 0, because the utility function is then linear in x.
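The constancy claims for (Eq. 3) and (Eq. 4) can be checked with numerical derivatives; the sketch and its parameter values are mine, purely illustrative.

```python
import math

def arrow_pratt_absolute(u, x, h=1e-4):
    """Numerical Arrow-Pratt absolute index r(x) = -u''(x)/u'(x), as in (Eq. 2)."""
    u1 = (u(x + h) - u(x - h)) / (2.0 * h)            # first derivative
    u2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2  # second derivative
    return -u2 / u1

a, r = 0.1, 0.5  # arbitrary illustrative parameters
cara = lambda x: -math.exp(-a * x)  # Eq. 3: absolute index should equal a
crra = lambda x: x ** (1.0 - r)     # Eq. 4: relative index x * r(x) should equal r

# The CARA absolute index and the CRRA relative index stay (numerically)
# constant across wealth levels, while the other index varies.
for x in (10.0, 20.0, 40.0):
    print(x, arrow_pratt_absolute(cara, x), x * arrow_pratt_absolute(crra, x))
```

The same numerical check can be pointed at any candidate utility form to see how its risk aversion behaves over the domain of wealth.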
A person is risk averse when r > 0: with, say, r = 0.5, U(x) = x^0.5 and the second derivative is -0.25x^(-1.5) < 0. Risk loving occurs when r < 0: with, say, r = -1, U(x) = x^2 and the second derivative is the positive constant 2. Again, the Arrow (1964)/Pratt (1964) relative risk index for the utility function in Eq. 4 is just r, which is of course the whole point of a constant relative risk aversion functional form. CARA implies that sensitivity to risk stays constant over the spectrum of wealth. One can think of that sensitivity as being measured by how much a person would invest in a risky asset: under CARA, the amount won't change, even as wealth increases. In contrast, decreasing absolute risk aversion (DARA) would imply that

as wealth increases, that sensitivity declines, and thus a person with DARA preferences might increase investment in a risky asset. The opposite would be true with increasing absolute risk aversion. For some applications we are not comfortable imposing the CRRA or CARA structure, and so we might consider the "Expo Power" (EP) function (Saha 1993). The EP is used by many economists because it is flexible and nests CRRA and CARA within it. The form for the EP function is:

(Eq. 5) U(x) = 1 - exp(-αx^(1-r))

The Arrow-Pratt relative (not absolute) index for this EP form is then:

(Eq. 6) AP index = -x u''(x) / u'(x) = r + α(1-r)x^(1-r)

Note also that the EP function reduces to CRRA if α = 0, and reduces to CARA if r = 0. More correctly, it appears these hold as α approaches 0. Recall that many forms actually yield the same preferences, because expected utility functions are only identified up to an affine transformation anyway. The relative index in Eq. 6 is consistent with what is often estimated using data on choices. We can actually estimate this AP index for each person in a sample, given the right data. This is often done in laboratory experiment settings, with a small set of presented lotteries. If the range on those lotteries is small, the AP index likely won't change much with changes in x. The data and estimation approach will yield estimated values of r for each person, while α is the same for all individuals. I probably make it sound from the above as though empirical implementation of even the simplest modeling of choices under the EU framework is common, but it is certainly not, at least within the resource and environmental economics literature. Some empirical studies I have seen claim to involve risk, but it is not clear that the estimating equations conform to any particular theory like the EUM.
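The Expo Power index in (Eq. 6) can likewise be verified numerically; the parameter values below are arbitrary, and the sketch is my own illustration rather than code from any cited study.

```python
import math

def expo_power(x, alpha, r):
    """Saha's Expo Power utility, as in (Eq. 5): U(x) = 1 - exp(-alpha * x**(1 - r))."""
    return 1.0 - math.exp(-alpha * x ** (1.0 - r))

def relative_risk_aversion(u, x, h=1e-4):
    """Numerical relative Arrow-Pratt index: -x * u''(x) / u'(x)."""
    u1 = (u(x + h) - u(x - h)) / (2.0 * h)
    u2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2
    return -x * u2 / u1

alpha, r = 0.05, 0.3  # arbitrary illustrative parameters
for x in (5.0, 10.0, 20.0):
    numeric = relative_risk_aversion(lambda w: expo_power(w, alpha, r), x)
    closed_form = r + alpha * (1.0 - r) * x ** (1.0 - r)  # Eq. 6
    print(x, round(numeric, 4), round(closed_form, 4))
```

Because the index rises with x here, the EP form exhibits increasing relative risk aversion for positive α, which is exactly why it is more flexible than CRRA.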
There are a few empirical studies that at least attempt to determine whether individuals are risk averse or not, but most of these are better classified as being in the area of farm production (e.g. Binswanger 1980). At best, these studies establish whether, overall for the sample, risk attitudes are consistent with risk aversion, and whether some groups of people (e.g. women versus men) have different risk preferences. In his classic study, Binswanger (1980) looked at risk preferences in rural India and found, using one approach to eliciting preferences, moderate risk aversion,

with no apparent effect of demographics. Risk aversion is a common finding in empirical studies of farmers, no matter what country the farmers are located in.8 Pushing further, the risk aversion coefficient r is sometimes parameterized as a function of demographic variables that allow for heterogeneity across individuals (e.g. Grijalva et al. 2011). Many experimental analyses, as well as other studies, have found that not all individuals in a laboratory setting have the same risk preferences. Indeed, one might ask, why would they? The question inspires some modeling and research: at one extreme, each individual might be allowed to differ; less extremely, we might let r depend on gender, or race, or brackets of income. Risk preferences, and differences in those preferences across people, are not to be confused with allowance for subjective probabilities that also might differ across individuals. I'll postpone further discussion of this to the section below on empirical applications. Before leaving the discussion of the EU framework, some important features of the EU seem to remain unknown to environmental and resource economists who assume it pertains in their studies, so these features are worth a slight digression here. First, and perhaps most important when applying the EU framework, is the presumption that individuals at least behave as if they know the probabilities of the outcomes and really comprehend what these mean. Equation (1) above is quite simple in that the risk being considered pertains to a financial outcome (i.e. utility is a simple function of wealth or income, x), which of course is what was of interest to the early researchers who developed the EU framework.
Second, because of its simple, linear-in-probabilities formulation (which doesn't significantly change in a continuous-outcome version of equation 1), it is well known that preferences under the EU take very restrictive shapes (see the straight lines in the usual Marschak-Machina triangle, prominently featured in the risk chapter of HSW 2007), and thus imply some interesting assumptions and very restrictive behaviors. These are nicely addressed by HSW, so I refer the reader who wants to know more about this to that discussion. Curvature in the utility function with respect to income is of course absent if the utility function is linear in income, and specification of the utility function in this way is tantamount to assuming that individuals are income risk neutral.

Footnote 8: This has always struck me as odd, since farming is inherently risky; i.e., one would not suspect that people who choose to farm would be strongly risk averse. However, I suspect that for many, farming is a very complicated life choice, or that in some countries there is no choice with regard to occupation at all.

In fact, for small income stakes, EU theory implies that

people are approximately risk neutral (Arrow 1971; Rabin 2000).9 Third, hundreds of studies assume risk neutrality on the part of individuals in a theoretical or empirical analysis, either explicitly or implicitly. Being quite familiar with recreation demand modeling, for example, I can safely say that the vast majority of utility-theoretic models in that context assume that utility functions are linear in income, i.e. that the marginal utility of income is constant.10 Fourth, and I imagine this will be controversial to some, if we find a way to uncover risk preferences or probabilities related to an income or financial gamble, many economists would argue our job as economic researchers is done. The explicit or implicit assumption is that risk aversion or another preference in the financial realm (risk with respect to money) carries through to preferences for other types of risk, such as mortality risk from engaging in a particular activity. However, I argue here that there is no reason why I cannot be risk averse when it comes to gambling with my life by climbing a mountain in the Himalayas, but a risk lover when it comes to playing the financial or stock market, or vice versa. Few researchers have empirically explored differences in risk preferences for different types of risk: financial and non-financial. An exception is Riddel (2012), who in fact modifies the EU framework and compares probability weighting functions (pwf's) across individuals for financial outcomes and for outcomes related to environmental risks (oil spills). She finds that her subjects overemphasize low-probability extreme environmental outcomes more than they do the same type of financial gambles, holding constant the odds and equivalent outcomes. This is discussed more below, when pwf's are introduced. Some of these interesting features of the EU relate to relevant valuation or welfare measures that are derived in the context of the framework.
Footnote 9: A reviewer suggests that indeed Arrow and Pratt were considering small-stakes risks over monetary gambles, and not environmental and mortality risks. And, lest I appear to be "bashing" EU here, we should not forget that it was proposed on a normative basis.

Footnote 10: Sometimes the role of income is simply absent in empirical modeling because a survey did not collect adequate data on income to allow it to be used.

2.2 Welfare Concepts and the EU: Option Value, Option Price, Expected Surplus

Many environmental economists are interested in valuation of non-market goods because so much that we care about is not allocated within markets. The standard welfare measure is maximum willingness to pay (WTP). Strict adherence to the world surrounding the EU imposes several restrictions on WTP. As environmental economists, readers may already be familiar with some jargon like Option Value (see Smith 1983) or Option Price (Graham 1981), but note that mainstream and financial economists' first notion of a valuation measure within a risk context is to simply think of a risk premium. The Arrow-Pratt risk premium relates


to the two authors' measure of risk aversion (our "r" above), and is, in general, different from the OV or OP concepts we are used to in resource economics. However, one interpretation of Option Value (OV) is indeed that it is a risk premium, differing from expected consumer's surplus (ES). ES is simply probability-weighted consumer's surplus, and knowing its value implies that one can somehow know the consumer's surplus upon the realization of each of the risky outcomes. OV and related concepts are similar to "options" in financial analysis. In fact, when a firm can invest but faces irreversibility, or faces uncertainty about future rewards from an investment, and has some flexibility in the timing of the investment, then it in effect holds an option quite similar to a financial call option.
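To see the risk premium idea numerically, here is a minimal sketch, assuming square-root (CRRA, r = 0.5) utility and a hypothetical 50/50 gamble of ±20 around a wealth of 100; it compares the exact premium (mean wealth minus the certainty equivalent) to the familiar Arrow-Pratt approximation, half the variance times absolute risk aversion A(x) = r/x:

```python
def crra_u(x, r=0.5):
    # CRRA utility u(x) = x^(1-r)/(1-r); parameter values are illustrative
    return x ** (1 - r) / (1 - r)

wealth, spread, r = 100.0, 20.0, 0.5      # 50/50 gamble: wealth +/- spread
eu = 0.5 * crra_u(wealth - spread, r) + 0.5 * crra_u(wealth + spread, r)
# certainty equivalent: u(CE) = EU  ->  CE = ((1 - r) * EU)^(1/(1-r))
ce = ((1 - r) * eu) ** (1 / (1 - r))
premium = wealth - ce                      # exact risk premium (gamble mean = wealth)
approx = 0.5 * spread ** 2 * r / wealth    # (1/2) * variance * A(x), with A(x) = r/x
print(round(premium, 4), round(approx, 4))
```

For this gamble the exact premium is about 1.01 and the approximation gives exactly 1.0, so the quadratic approximation is already quite accurate at these stakes.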

As an aside, I note that quasi-option value (QOV) is yet another term seen a fair amount in resource economics; it is primarily related to the expected value of information gained by waiting to learn what might happen in the future. Several authors have linked QOV to irreversible decisions, arguing that with irreversibility there may be a higher, or at least much different, value of future information than is revealed at present. Some studies, such as one involving the investigation of potential contamination at groundwater sites, note that investigation or exploration can reduce uncertainty and can thus be linked to QOV.

Using the EU framework as a starting point, one can in fact consider several risk-related valuation measures. Consider the willingness to pay to bring about a reduction in the probability of death. Again using the simple discrete probability form from equation (1) above, with u1 being the "alive" state, write the willingness to pay for an increase in the probability of life from initial probability (p) to the new level, p', or:

(Eq. 7)

p' u1(x - WTP) + [1 - p'] u2(x - WTP) = p u1(x) + [1 - p] u2(x)

Eq. 7 provides a WTP concept that is essentially the same as an Option Price (OP) for a risk reduction (here, an increase in the probability of survival). The OP is the maximum amount that an individual would pay to obtain the different set of probabilities (the higher survival probability above), and is independent of the states, as it is paid in both states above. The willingness to pay "locus" of points that includes the OP also includes the fair-bet point: the point on the WTP locus with the highest expected value. Some have argued that the expected value of the fair-bet point is in fact the appropriate welfare measure when risk pertains. However, this is not true when individual risks are not insurable, because in that case individuals cannot purchase contingent claims contracts allowing them to obtain this fair-bet point.
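A numerical sketch of Eq. 7 may help. All of the ingredients below are assumed for illustration only: square-root utility when alive, a small bequest-motivated utility when dead, and hypothetical survival probabilities. Bisection then recovers the OP as the payment that equates the two sides of Eq. 7:

```python
import math

def u_alive(x):
    return math.sqrt(x)

def u_dead(x):
    # small bequest-motivated utility in the "dead" state (assumed)
    return 0.1 * math.sqrt(x)

def option_price(x, p, p_new, tol=1e-9):
    """Solve Eq. 7 for WTP by bisection:
    p' u1(x - WTP) + (1 - p') u2(x - WTP) = p u1(x) + (1 - p) u2(x)."""
    target = p * u_alive(x) + (1 - p) * u_dead(x)
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        eu_new = p_new * u_alive(x - mid) + (1 - p_new) * u_dead(x - mid)
        if eu_new > target:
            lo = mid   # the individual could afford to pay more
        else:
            hi = mid
    return (lo + hi) / 2.0

# Survival probability rises from 0.99 to 0.999 at wealth 50,000 (all assumed)
print(round(option_price(50000.0, 0.99, 0.999), 2))
```

Because the payment is made in both states, before the risk is resolved, this is a state-independent OP rather than a state-contingent payment.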

One can of course modify (Eq. 7) to express the WTP for a change in an additional variable that enters the utility function, with the risk still present. Or in fact we can write (Eq. 7) differently, in terms of a minimum willingness to accept compensation (WTA) for forgoing an increase in the probability of survival. However, very few papers have considered the WTA in a

risk framework (the Riddel and Shaw 2006 study is a notable exception). This is probably for all of the same reasons that WTA is not generally estimated even when risk is not present. Also note that the "WTP" in equation (Eq. 7) is not the same as the standard WTP derived under certainty, because it is couched in terms of expected utility, not utility. In fact, a key distinction is obviously that the OP will be a function of the probability of the outcome; how could it not be? Thus, in comparing welfare measures that correspond to certainty with ones that correspond to risk, we would expect some differences. Generally, under risk aversion, one would expect the OP to be smaller than a WTP obtained by assuming certainty. For example, suppose the WTP for a certain and definite improvement in water quality is found to be $60 per year. In contrast, suppose an OP is obtained for a water quality improvement that is related to a risky outcome. It would not be surprising to find it to be smaller, on average. In theory, there is a relationship between the OP, ES, and OV, such that OP - ES = OV. Here one can see that interpretation of the OV as a risk "premium" really only makes sense when the option price is larger than expected surplus, so that OV is some positive amount. But in general, OV can be positive or negative, and all models of OV either implicitly or explicitly assume a timeless world, where learning is not possible. The size of the risk reduction (i.e. a 5% versus a 50% reduction) should matter in valuation, such that a roughly proportional relationship between it and the WTP should hold, at least in a traditional EU framework. As an example: if the WTP for a risk reduction of 2 deaths in 100,000 is $20, then it should be about $40 for a risk reduction of 4 deaths in 100,000.
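This rough proportionality is easy to check numerically. The sketch below (utility form, wealth, and risk levels are purely illustrative: square-root utility if alive, zero if dead) computes the EU-consistent WTP for risk reductions of 2 and 4 in 100,000 and confirms the ratio is very close to 2:

```python
import math

def wtp_for_risk_cut(x, base_death_risk, cut):
    """EU-indifference WTP for lowering annual death risk by `cut`
    (sqrt utility if alive, zero if dead; all values illustrative)."""
    p = 1.0 - base_death_risk          # baseline survival probability
    target = p * math.sqrt(x)
    lo, hi = 0.0, x
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2.0
        if (p + cut) * math.sqrt(x - mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

w2 = wtp_for_risk_cut(40000.0, 5e-5, 2e-5)   # 2 fewer deaths per 100,000
w4 = wtp_for_risk_cut(40000.0, 5e-5, 4e-5)   # 4 fewer deaths per 100,000
print(w2, w4, w4 / w2)                        # ratio should be very close to 2
```

For risks this small, curvature in the utility function barely matters, which is why the ratio departs from 2 only in the fourth decimal place.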
Failure of this proportionality to hold suggests a failed scope test, similar to what is sometimes encountered in the usual contingent valuation method (CVM) literature. Note also, however, that at least under risk neutrality [and again I remind the reader that Rabin (2000) suggests that this is often the case with small stakes], the strict EU implies that the WTP for a fixed change in probability, say 10 percentage points, should be the same regardless of whether the change is from 90% to 100% or from 10% to 20%. Intuitively, this might not be an implication that we are happy with. Several studies in fact suggest that there will be a higher value in reaching certainty than not, even when marginal changes in risk are equal. This is known as a "certainty effect." It implies that a reduction in the probability of death from 0.06 to 0.05 is valued less than a reduction from 0.01 to 0.0 (Pratt and Zeckhauser 1996), because many people attach a special importance to facing zero risk, even if it is implausible in reality, i.e., based on science, zero risk cannot be obtained. Several economists have also suggested that the baseline or initial level of risk should influence WTP (see Adamowicz et al. 2013), and this is because of differences in the utility and in the marginal utility of income in the states of being dead (lower) and alive (higher). The usual

justification for state dependence, with one state involving death, raises interesting questions about why there appears to be utility when one is dead, but this is typically explained via a bequest motive. As will be discussed later in this paper, the empirical literature is mixed as to whether baseline risk levels affect WTP (e.g. Smith and Desvousges 1987). Certain theoretical frameworks might predict that WTP should be higher for an equal change in risk when the initial or baseline risk level is higher. For example, if the baseline risk of mortality is 50%, the WTP to reduce it to 40% might be expected to be larger than if the baseline risk of mortality is 25% and we look for the WTP to reduce it to 15%. Both are 10 percentage-point changes in risk, but the baseline risk of dying in the first case is twice that in the second. The importance of baseline risk is not dictated by the EU; it is not a trivial question, and the issues must still be further explored. Finally, most of the discussion of risk-related welfare measures has involved trade-offs between risk and income or wealth, and indeed there are many instances in which the trade-off of interest is between risk (= probability, or "prob") and wealth. The slope of the indifference curve that is most often examined is dx/dprob, telling us the rate at which an individual is willing to trade higher risk (probability) for wealth or income. Naturally, if the indifference "curve" is a straight line, then the trade-off is constant, and the WTP for a given change in probability, say 10 percentage points, will be equal across the spectrum. As will be seen below, several authors (e.g. Hand et al. 2015) now allow not only for curvature in the utility function with respect to wealth, but also curvature with respect to other variables that might involve risk. Hand et al.
(2015) reject the null hypothesis that preferences over these variables conform with risk neutrality for their sample of wildfire risk managers, although they do not estimate WTP using their model. This study will be discussed in more detail below. The hedonic wage literature that leads to the value of a statistical life (VSL) can be seen as falling into this trade-off category (i.e. an individual trades on-the-job safety for higher wages); similarly, consider the literature that examines revealed preferences involving costly safety devices (seatbelts, e.g. Peltzman 1975) and related mortality risks. However, of more interest to some is that there are many situations in which we have no clear trade-off between income and environmental or health risk. As examples, consider sports with a relatively high injury or mortality risk: sky diving, scuba diving, mountaineering, rock climbing, extreme skiing, race-car driving, and base jumping all come to mind. With the exception of professionals who get paid upon winning some competition, what reward do non-financial risk takers get in return for taking on higher risks? This is the sort of question that resource economists might well ask, because we are interested in why and how people use outdoor amenities, and I'll come back to it later in the chapter, in the section on risk elicitation. Because they are related to the answer to this question, subjective probabilities and the subjective expected utility model are considered prior to that section.
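The wage-risk trade-off arithmetic behind the VSL mentioned above can be sketched in a couple of lines; the numbers are hypothetical, not taken from any study cited here:

```python
# Illustrative (hypothetical) hedonic-wage numbers: suppose workers accept an
# extra $700 per year in wages to bear an additional 1-in-10,000 annual
# fatality risk on the job.
wage_premium = 700.0             # dx: extra dollars per year (assumed)
extra_risk = 1.0 / 10000.0       # dprob: added annual death probability (assumed)
vsl = wage_premium / extra_risk  # dx/dprob, the value of a statistical life
print(vsl)                       # about $7 million
```

The ratio dx/dprob is exactly the indifference-curve slope discussed above; a linear "curve" makes the implied VSL the same at every risk level.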

More generally, as noted above, the sought-after valuation measure in the presence of risk is typically the option price (OP) from an equation such as (Eq. 7) above, though Graham (1981) presents situations where it may be more appropriate to use a different measure of value. The OP is not the same welfare measure as those obtained under certainty: ordinary consumer's surplus (OCS), or compensating or equivalent variation (CV or EV), or surplus. However, again, as under conditions of certainty, it may well matter who is assigned property rights. If so, we consider whether the valuation concept under risk best relates to a WTP or a WTA (e.g. see Riddel and Schwer 2006, and Riddel and Shaw 2006, some of the few papers I know of which obtain a risk-related WTA welfare measure).

Empirical Valuation Studies

Almost all empirical studies I know of that involve risk and are related to environmental economics use SP, not RP, data to obtain the VSL and other values (e.g. Edwards 1988). An exception, and an interesting early effort to use RP data, is the study of radon risk mitigation by Åkerman et al. (1991), though it is not completely clear what welfare measure is estimated there. This is because the study initially mentions cancer risk and suggests the use of the EU model, but the probability that is relevant in the model is the probability of a household's mitigating behavior, not the risk of cancer. Radon is found in soils throughout much of the United States, and mitigation can take simple forms, such as fans and ventilation systems that disperse radon gas, or soil removal from areas under and around homes. The WTP that is estimated is based on a basic relationship that makes the household indifferent between mitigating against radon and not. The authors specifically suggest that this corresponds to the median of the distribution, or where the probability of mitigation equals 0.5.
Obtaining the OP measure requires that an individual state or reveal a value before the outcome involving the risk is realized. The phrasing of an OP question in SP analysis is thus rather difficult, and should be carefully done. In the study by Åkerman et al. (1991) it appears that the underlying assumption would be that if the household mitigates, then the post-mitigation cancer risk is zero. The authors state that their WTP is calculated by solving for the value of the cost of mitigation that induces half the households in the sample to mitigate. Some households in the data mitigate and others do not, yielding some variation in the decision. However, the sample WTP that would make half the households mitigate is not the same as the OP that makes an individual household indifferent between mitigating or not. SP approaches can be used to present scenarios to subjects that involve varying levels of risk. These changes in risk might not be experienced under normal conditions. For example, nearby households might face an unchanging level of risk by living near a toxic waste site, but also may be unable to easily move away from the site. In this situation, an SP approach might be undertaken. In any case, an RP approach would need to track movement or some other type of mitigating or averting behavior over time, and this can be expensive. Because of this, SP studies are

much more common in the environmental economics literature than RP studies, so they get a more complete discussion below.

2.3 Subjective Expected Utility

If a person faces a risky option and chooses it, for example one involving what appears to be a foolish financial gamble, or even possible death (as from smoking cigarettes or jumping off a bridge tied to a bungee cord), then we try to find reasons why the individual does this. There are several possible explanations. First, if we believe in the simple framework in (Eq. 1), then we assume an individual knows the "real" risks of the gamble, or of the probability of death from smoking cigarettes. Here, "real" means the risks that are based on the best-available science. With known probabilities like this, we might explain the choice based on risk preferences. We figure that the sky diver is made better off jumping out of the airplane, or she wouldn't do it. The craps player, who knows that the probability of a double six coming up is always going to be equal to 1/36, still thinks at some point that s/he is better off placing a bet on that outcome than not. Alternatively, if there is no science available to provide probabilities, we can assume the individual forms his or her own risk beliefs, or estimates of "subjective" probability. Perhaps then, an individual's own subjective estimate of the probability of death upon jumping off the bridge or tower with the bungee is much lower than others'. If mine is very low, maybe I will jump, and if yours is quite high, you won't. Subjective estimates can differ according to the situation.11 Savage (1954) pioneered the subjective EU (or SEU) framework, allowing individuals to have their own sense of the probabilities of outcomes, hence the "subjective" descriptor. The SEU framework has its own axioms (see the appendix), which, as in the case of "obeying" risk axioms, may or may not be adhered to by individuals in their behavior.
Savage thought that we, as researchers, could perhaps uncover estimates of the subjective risks individuals hold by observing their behavior. One can think of subjective probabilities as the marginal rate of substitution for contingent claims. In some securities markets, such as hurricane futures, the price of the security may in fact reveal the subjective probability. For example, as will be discussed more below, we can describe a market where a security pays $1 if a hurricane strikes a particular location. If a hurricane securities trader believes that the probability of this happening is only 10%, then a trade price might be $0.10, indicating the trader's subjective estimate. However, the ability to clearly distinguish subjective probabilities from preferences by observing behavior (inside the confines of the experimental laboratory or outside it) has been questioned by some economists and psychologists.

Footnote 11: I rock climb, but I once climbed a tower and looked down to consider being tied to a bungee cord and jumping. I considered the risks and rewards and decided against it.
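The hurricane-security pricing logic amounts to the following arithmetic, under the strong assumption of a risk-neutral trader; any divergence of price from belief would reflect preferences as well, which is exactly the identification problem just noted:

```python
def implied_probability(trade_price, payoff=1.0):
    """Subjective probability implied by a binary security's price,
    assuming a risk-neutral trader (a strong assumption, as noted)."""
    return trade_price / payoff

# A security pays $1 if a hurricane strikes the location, $0 otherwise.
belief = 0.10                 # trader's subjective probability (assumed)
max_buy_price = belief * 1.0  # expected payoff of the $1 claim
print(max_buy_price)          # a $0.10 trade price reveals the 10% belief
print(implied_probability(0.10))
```

A risk-averse trader would pay less than $0.10, so the observed price would understate the belief; this is one reason observed behavior alone cannot cleanly separate probabilities from preferences.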

Savage (1954) had the view that if an individual faced two or more estimates of a probability, she would be smart enough to average these, collapsing a distribution to a degenerate point estimate. That point estimate is then used to make decisions in the SEU framework. One key reason that resource economists might want subjective probabilities is for use in SEU-type empirical models. Subjective risks (probabilities) can be elicited using a variety of approaches and used in modeling of behavior (e.g. choices), but doing so empirically is quite rare; quite new studies are embarking on elicitation of subjective risks, hopefully in a convincing manner. As mentioned above, several important axioms for the SEU are included in the appendix to this paper. I offer those without judgement, but it is important to consider that many risk scholars have been critical of these axioms, or at least have found that decision makers appear to violate them. Gilboa, Postlewaite and Schmeidler (2008) note that Savage's axioms are considered by some economists to be the behavioral definition of rationality, but that, in their opinion, a person can be rational yet not satisfy the axioms, and can also be quite "irrational" and yet satisfy them. The example they use for the latter is a person who clings to beliefs in the supernatural while still conforming to Savage's axioms. As will be seen in the "uncertainty" section below, some approaches to handling uncertainty actually allow relaxation of key Savage axioms, again because people have been shown to violate them when they are observed making choices.

2.4 Perceived Risks

Many years after Savage's original work on the SEU, the psychologist Paul Slovic introduced the idea that individuals behave in accordance with their own perceptions of risks, giving rise to his jargon of "risk perceptions" (Slovic 1987). "Perceived risk" is now a term I see very often in the psychology literature.
It is a concept closely related to subjective probabilities. The two might differ only because the latter leads to numerical estimates of probability that might be useful in a formal model of decision making that economists can easily recognize, while the former may not. Perceived risks can include verbal ratings or ordinal ratings of a sense of risk (e.g. Likert scales are often used in psychology). For example, Vásquez (2012) uses verbal risk ratings from "very bad" to "very good" to arrive at subjects' perceptions of the reliability of their water supply. In this study these verbal ratings imply something about the risk of getting one's water supply. The word "bad" cannot be used as a weight in an expected utility model, but an overall perception index is used by the author in a model to explain household expenditures on items to cope with the poor reliability of the water supply (i.e. averting expenditures on things like buckets and barrels to store water in times of low supply). As an example of Likert-type scales, Heitz et al. (2009) adapt a 0-to-10 "pain" scale used in medicine to get at the importance of muddy flood risk to each survey respondent. Data based on these sorts of verbal ratings or numerical scales cannot easily be used as measures of probability

in an EU or SEU framework (Viscusi and Hakes 2003). Still, verbal answers and Likert-scale questions may well be what many think of as subjective risks, or as concepts associated with several other terms in the risk literature, such as "risk beliefs" or "risk ratings." My sense is that the latter are used more in psychology than in economics, but this does not necessarily mean that empirical modeling of decision-making in psychology is any less informative than conventional modeling in economics.12 Economists might often think of perceived risks as simply being "wrong" because there are objective, science-based risks that are "right," well known, and different from the perceived risks. One gets the idea that some economists still believe (or used to, anyway) that individuals whose risk perceptions differ from science-based risks simply lack cognitive skills, or are at least very poorly informed. The attitude underlying this old-school thinking is that we can fix this: it is simply a matter of teaching people the right risk information. While this is yet another view that distinguishes perceived risks from probabilities and other risk concepts, perceived risks also may be consistent with the fact that there are no good estimates of risk that can be based on science. Fortunately, a good deal of the modern empirical literature focuses on the underlying causes of differences between science-based and subjective probability. While economists do not exactly have a theory that explains variation in perceived risks across different people, psychologists suggest that an individual's perception of risk might be a function of a host of personal and other factors, including education, cognitive skill, experience, exposure to information via the media or other sources, proximity to the hazard, and moral perspective.
Many psychologists have suggested that perceived risks are a function of feelings such as dread or fear, and this is especially pronounced in the case of one's own mortality risks. They have shown that the manner of death may lead to higher or lower perceived mortality risks (see references in Thiene et al. 2015). Risk perceptions differ across individuals (Heitz et al. 2009) and also appear to differ across cultures (Gierlach et al. 2010). Perceived risks can also change over time, with a person's exposure to events and additional information, such as hazard warnings or recent events. Early work by economists explored whether hazard warnings about earthquakes affected perceived risks and influenced real estate values near Mammoth Lakes, California; risk perceptions did change after the earthquake hazard warnings were released (Bernknopf et al. 1990). Cutchin et al. (2008) examine risk concerns before and after an explosion at a Texas refinery, and Ho et al. (2014) study perceived nuclear risk for a sample of people in Taiwan in the aftermath of Fukushima. Similarly, a recent study finds that the perceived risk of a hurricane strike changed as the amount of time since Hurricane Katrina increased.

Footnote 12: A reviewer of this paper suggests that it may boil down to how much accuracy we are looking for, and that ordinal ratings are less precise.

Baker et al. (2009) surveyed evacuees of the Katrina

event soon after the hurricane hit New Orleans and then again about one year later, finding some decrease in perceived risk for the sample, on average. Even though scientific information was provided indicating that the chance of a hurricane of similar impact striking New Orleans was about 12%, many subjects clung to the notion that the probability in the next year would be quite close to 100%. Even though results like those Baker et al. (2009) found make us question the thought processes of some subjects, there is in fact no reason that one's own risks, one's perceived risks, and science-based risks should all be the same for everyone in a sample or within a population. The science-based risk estimate is often an average for the population, and people in the population of course differ, and may differ from the average person in the characteristics and exposures that determine average risk. Viscusi (1990) and many other economists and psychologists have examined subjective probabilities and perceived mortality or disease risks associated with cigarette smoking: maybe an individual does not have the same risks of dying from smoking as scientists dictate for the average smoker.13 This could easily happen if a smoker gets pegged as such because she has had one cigarette in the past 30 days (a commonly used metric), while the average smoker perhaps smokes 5-10 cigarettes per day. Naturally, one other way of explaining why people smoke is that they might be grossly underestimating their own mortality risks from doing so. Similarly, the idea that perceived risks can greatly exceed science-based risks goes a long way toward explaining why people are afraid to fly on commercial airlines.14 A problem arises for the operation of markets and for policy making when the public has vastly different estimates of probability than the science community does. For example, after the U.S.
airplane hijackings and disasters of 9-11, the public was generally so afraid to fly that it took a long time before faith was restored in the safety of commercial airline flights. Had economists engaged in research involving the valuation of risk reduction for airline flights in about October of 2001, the implications for policy would have been enormously different from doing the same valuation exercise today, highlighting the importance of extreme events and psychological factors in subjective risks.

Footnote 13: Actually, Viscusi has been criticized by Paul Slovic (2001) and others for asking people their estimate of the average smoker's chance of getting lung cancer, as opposed to asking about their own lung cancer or mortality risk. The thought behind the criticism is that an individual pays attention to his or her own risk, not to someone else's.

Footnote 14: For example, Sam Shepard, the actor and writer, once explained that his unwillingness to fly is an enormous problem for a person who lives in New York, as he does; he nevertheless insists on driving to Los Angeles for roles in movies and other professional engagements.

Similar discrepancies arise in the context of the environment. It has been documented many times that the public has a different sense of environmental risks than experts, who may be scientists, offering different rankings and estimates (see Allen 1987; Riddel and Shaw 2006). Heitz et al. (2009) find that only about half of their sample trusts local authorities' information about muddy flood risk in an area of France (the Alsace). With this in mind, Paul Portney posed the following environmental problem to readers of his little two-page paper (1992) on environmental risks. If the public believes that a drinking water source in the fictitious town of Happyville is contaminated with carcinogens, but world experts believe it is not, then what is the best estimate of the annual benefits from removing the contaminant: that based on the public's beliefs, or that based on the scientists'? Portney also asked what one would do when faced with a legal situation where the contamination was caused by some particular party. Stephen Breyer (1993), a justice of the U.S. Supreme Court, weighed in on the legal issue, and assumed that government policy should be based on facts about risk and not public perceptions. Pollak (1998) later countered, suggesting that the public's fears perhaps should indeed play a role in determining government policies. Quite recently, Salanié and Treich (2009) took up the Happyville scenario again, describing the issue as pitting consumer sovereignty (under a populist regulator) against a paternalistic regulator: in the former case, the regulator should respect consumer preferences and regulate to make its worried citizens feel better about risk, while in the latter, the regulator knows the risk is phantom and that there are opportunity costs, and so should regulate according to science-based risks. 
Developing a model to incorporate risk perceptions, which may be higher or lower than science-based risks, Salanié and Treich (2009) find that a paternalistic approach might in fact justify over-regulation of risks. This seems counter-intuitive, but it relates to the fact that the regulator can either protect its citizens or encourage them to consume. When citizens' risk beliefs are lower than the regulator's, over-regulation may better protect them, as compared to actions taken in accordance with their beliefs. However, if the citizens believe that risks are higher than the regulator does, then stringent regulations in fact encourage consumption because they reduce perceived risks. Finally, Hurley and Shogren (2005) conduct an experiment to determine whether subjects' elicited beliefs (i.e. their subjective probabilities) can mimic induced ones. They conclude that in general this is not the case, and find evidence of the usual pattern of overestimation of low probabilities and underestimation of high ones. They suggest that this can arise for two reasons: (i) there is a failure of induced beliefs to take hold because the subjective probability is indeed quite different (perhaps for good reasons) from the induced/objective one; or (ii) there is a failure of elicited beliefs because the subject (or we, as researchers) makes mistakes in the process of elicitation.

2.5 What can be done with Subjective Probabilities if we know them?

I will return to the issue of the divergence between perceived or subjective risks and science-based ones below, as discussion of that fits in nicely with the topic of how we come to know what an individual's subjective risk estimates are (i.e. via the elicitation process). Nevertheless, in theory, one can use subjective probabilities in modeling behaviors. This has been somewhat slow to take hold in the economics profession. Manski (2004) says that at the time he wrote his paper, "the only published research using probabilistic expectations data in econometric analysis of choice behavior is Nyarko and Schotter (2002)..." I believe he is incorrect: I would have thought Manski (2004) would include the earlier work of Kip Viscusi and others who elicited subjective probabilities of getting lung cancer (or of death) in modelling the choice to smoke cigarettes. Manski (2004) notes that it is often difficult to disentangle risk preferences from other causal factors that explain choices, and that elicitation of subjective probabilities can shed light on these. For example, he describes the possibility in an ultimatum game (where one person can choose how much money to allocate to herself versus another person) that even splits arise because of fairness. But he shows that we cannot be sure whether this is the motive for the split, versus fear of retaliation by the other party. In any case, to proceed, we may begin to use subjective probabilities by replacing p in equation (1) with subjective risk, s. This subjective risk might in fact be related to p, or to many other factors in the vector z, so we may write s as a function of p and z, or s(p,z). The subjective estimate of the probability Shaun White has of injuring himself while doing a double-twisting flip on his snowboard is much different than my own subjective risk should I try it. 
Each estimate of s depends on a host of factors, such as our skill levels. As will be shown below, these explanatory factors might be correlated with a behavior we are trying to explore, creating empirical problems such as endogeneity or measurement error in the s(p,z) that replaces p in Eq. (1). Finally, I remind readers that some scholars believe that there is a very close, or even identical, relationship between subjective expected utility and the general concept of uncertainty (e.g. Andersen et al. 2012). The idea again relates to the SEU formulation by Savage (1954) and the notion that, to proceed to make a decision, the individual acts as if she has a probability in mind, even when none is observable. In a perfect information world, the Bayesian individual eventually gets the subjective probability just "right" and the world of uncertainty collapses to one of risk. However, it can be argued that Bayesian decision making does not really adequately handle true uncertainty. True uncertainty involves probabilities of probabilities and no clean collapse to any single (mean) point estimate. Gilboa, Postlewaite and Schmeidler (2008) also strongly question whether an individual facing uncertainty can sort out their preferences and subjective

probabilities. As will be seen below, we need much more in order to richly handle uncertainty, such as multiple priors models (e.g. Klibanoff, Marinacci, and Mukerji 2005). Last, the simple Bayesian framework does not handle endogenous probability, nor does it necessarily explain choices that can better be explained via probability weighting functions. Next, I discuss endogenous probabilities or risk.

2.6 Endogenous Probability or Risk

Still another reason why subjective risks are important in general, as well as in resource economics, may simply be that risks are endogenous to individuals: actions we take can influence the probabilities or severity of outcomes, or both (Shogren and Crocker 1991). A nice example contrasts two types of risk, or probability, and comes from the natural hazards literature. Assume that a person living in Galveston, Texas cannot influence the (exogenous) probability of a category 2 hurricane striking his house (as Hurricane Ike did in September 2008). However, that same person can influence the probability of his home being destroyed by adopting certain building practices, and can influence the probability of being killed by evacuating, presuming he has ample warning and the means to do so. Whether a risk is exogenous or endogenous may depend on the amount of time an individual has to engage in averting behavior. As economists, we have to decide whether we are interested in modeling behavior in response to the first or to the second risk above. As noted earlier, the HSW risk chapter (in their 2007 2nd edition) does an excellent job of laying out a decision model with endogenous risks, highlighting the importance in this framework of the effectiveness of the action that influences the risk. One can now think of the subjective probability of being killed by the hurricane (s) as a function of the probability of the hurricane landing in Galveston (p) and actions (a) taken to reduce s, or s(a,p). 
While this is an important concept in theory, researchers wanting to really tackle the endogenous risk problem empirically face a very demanding set of data collection tasks, including obtaining information that allows identification of the actions that explain the probabilities for individuals and households. I return to this below, in the discussion of probability elicitation.

2.7 Non-Financial Risk, Background and Multiple Sources of Risk

Non-Financial Risk

Thus far, the discussion has mostly been confined to modeling behavior in response to one risk, and most often that risk is a financial risk. The alternative most often considered in resource economics is mortality risk. Should mortality risk simply be handled in exactly the same way that financial risk is? In other words, if we elicit risk preferences, attitudes, or subjective probabilities related to income or wealth, would these be identical to those over other kinds of risk, such as mortality risk? The usual way of eliciting risk preferences or subjective

probabilities in the laboratory is to let subjects make trade-offs involving different probabilities, i.e., choices over monetary gambles or lotteries. Naturally, subjects are typically paid for choice tasks, or at least for a randomly drawn choice task. We assume that if a subject exhibits risk aversion over a monetary gamble, then she is risk averse with respect to other things such as illness, injury or death. Harrison, List and Towe (2007) state that the vast literature based on laboratory economics studies suggests little difference in risk preferences when the risky commodity featured in the trade-offs varies. However, because we do not play out morbidity or mortality risks in the laboratory setting (i.e. we do not kill laboratory subjects), and environmental risks such as global warming cannot be played out either, I do not believe that we can make such widespread inferences from laboratory analyses. As noted earlier, Riddel (2012) is one of the few researchers I know of to explore differences in context together with differences in risk preferences or subjective probabilities. I further consider her exploration after the discussion of non-EU models (below in section 2.8), because Riddel (2012) uses a non-EU model to conduct her analysis.

Background Risk and Multiple Risk Sources

If eliciting a risk preference for mortality risk, a subject may well be thinking about morbidity risks at the same time, influencing her risk preferences over the mortality risk. Thus, it may be important to consider whether background risks exist when examining risk preferences or eliciting probability estimates. Harrison, List and Towe (2007) find that for naturally occurring outcomes that have background risks, the laboratory experiment setting unfortunately may yield unreliable results. To demonstrate this, the authors introduce coins of uncertain quality into a lottery game. Subjects not only face risks associated with lottery outcomes, they also face the risk of getting a coin of poor quality. 
This other risk is called "background" risk and it may well affect preferences for the primary risk of interest (the lottery itself). Some laboratory studies have suggested that risk aversion is higher in the presence of background risk. Intuitively, if I am thinking about a lottery risk posed for me in the lab setting, but am also thinking about an additional background risk, then my risk preferences may be pushed toward more aversion. Lusk and Coble (2005) do find exactly this in the lab setting, adding risk preferences for genetically modified food to the standard investigation of risk aversion in the financial dimension. The implications for using laboratory work to make inferences about choices relating to environmental outcomes are damaging when such background risks exist. Consider global warming or another environmental outcome that has possible adverse consequences for components of the ecosystem, for example. An individual might think of global warming as a bad thing because she is worried about the risks to her own income or wellbeing, but may also be

thinking in the background about the risks to animals such as gray seals, at risk because inadequate ice formation limits the habitat they like for breeding (an issue reported in the media in January 2011). To summarize everything said about probabilities up to this point: probabilities can be science-based, subjective, exogenous (beyond the individual's control) or endogenous (under the individual's control, to some extent), but none of these categories necessarily constitutes a huge departure from the EU framework. Thus, I next turn to discussion of bigger departures from the strict version of the EU model, which I'll deem non-EU models. These are quite rare in environmental and resource economics, but are growing in the related economics literature (I'd say especially in the field of transportation economics), and are now making their way into resource economics (Cameron 2005b; Riddel and Shaw 2006; Shaw and Woodward 2008).

2.8 Non-Expected Utility (NEU) Models

There has been an explosion of papers that offer departures from the strict EU model, and the easiest way to classify these is simply as "non-EU" models (Starmer 2000). Probably the most famous NEU model is known to nearly every economist: the prospect theory paper by Kahneman and Tversky (1979). This paper is in fact one of the most frequently cited papers in all of the economics literature: on March 14th, 2015, Google Scholar reported 34,563 citations to this one paper!15 Tversky and Kahneman (1992) later revised this theory in their paper on cumulative prospect theory (CPT), to overcome a theoretical problem (relating to violations of dominance) in their 1979 version of prospect theory. Quiggin's (1982) theory of anticipated utility (also known as rank-dependent expected utility, or RDEU) is quite similar to, and many say identical to, CPT, at least under some assumptions (these pertain to the absence or presence of a reference point). 
The essence of CPT, and in fact of all NEU models, is that they introduce decision or probability weights into the basic EU equation. If these weighting functions are non-linear, the weights can explain why individuals behave differently than we might expect in the absence of weights. For example, when the probability is 0.82, which does not imply certainty, the individual may behave as if it does imply certainty, or a probability of 1. Hence, the individual is overweighting the 0.82 probability, allowing it to influence her very heavily in making a decision. Conversely, suppose we expect a person to take an action when a probability is quite small, but she doesn't. For example, I sometimes ask my undergraduate students if they would be willing to play Russian roulette if I could construct an imaginary gun that fires a bullet at them with a chance of 1 in 100 billion. I ask them to consider whether they would do so for a payment of $1 million cash, which would go to the person of their choosing should the gun fire. Of course, this

15 This is of course perhaps because it is also very well known to psychologists and is frequently cited in non-economics journals.

is pure anecdote, but few students ever raise their hands and say they would play my silly game, even though most drive their automobiles to attend my class for no monetary compensation at all, at a much greater risk of death than 1 in 100 billion from being killed in an automobile accident. It is pretty likely they are grossly overestimating the chance of the gun firing, and in fact, they might assume that the worst would indeed happen.

Probability weighting functions

Suppose the weight is w(p), where p is again some underlying "true" probability of interest. The weight allows the possibility that individuals might overestimate or underestimate low or high probabilities. There are several possible forms for w(p), but a standard one has one parameter, $\gamma$, and takes the form:

Eq. 8   $w(p) = \exp[-(-\ln p)^{\gamma}]$

The parameter $\gamma$ determines the curvature of the pwf; when $\gamma = 1$, there is no weighting, and the model collapses to the traditional EU model. However, a common finding is that $0 < \gamma < 1$, and then individuals' weighting functions take on an inverse S shape, as in Figure 1, implying that lower probabilities are over-estimated while higher ones are underestimated. Tversky and Kahneman (1992) develop CPT to allow for differences in losses and gains, and suggest that people will indeed be risk seeking over high probability losses, and risk averse over high probability gains. This relationship often flip-flops for low probability losses and gains: people become risk seeking over low probability gains, and risk averse over low probability losses. Another way of thinking about probability weighting in non-EU models involves underlying optimism or pessimism as a possible reason for distorting actual probabilities. Consider a loss involving some probability. A pessimistic person might attach too much weight to high probabilities of a loss, and too little to low probabilities that might in fact be valid. We'd expect their weighting function to look like an "S" shape. 
But that same person might cling to their pessimism and underweight a high probability of a gain, putting too much weight on the low probability of that gain occurring, resulting in an inverse S shape. There is certainly no reason I know of why, if a person has one set of weights for a particular risk evaluation, these would hold across all of their evaluations of risky outcomes, and this is in fact central to Riddel's (2012) and Riddel and Kolstoe's (2013) explorations of risk involving non-EU modeling. [Huang et al. (2015) find this sort of behavior in an empirical study of managed lane choice in transportation.]
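To make the one-parameter weighting function in Eq. 8 concrete, the short Python sketch below computes it for a few probabilities. The function name and the value $\gamma = 0.6$ are my own illustrative choices, not drawn from any particular study; any $\gamma$ strictly between 0 and 1 produces the inverse S pattern.

```python
import math

def prelec_weight(p: float, gamma: float) -> float:
    """One-parameter probability weighting function of Eq. 8:
    w(p) = exp(-(-ln p)**gamma). gamma = 1 recovers w(p) = p, the EU
    (no-weighting) case; 0 < gamma < 1 yields the inverse S shape."""
    if p == 0.0:
        return 0.0
    return math.exp(-((-math.log(p)) ** gamma))

# With the hypothetical value gamma = 0.6, low probabilities are
# overweighted and high probabilities underweighted:
for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f}  w(p) = {prelec_weight(p, 0.6):.3f}")
```

Running the loop shows, for example, that a 1% probability receives a decision weight well above 0.01, while a 90% probability receives a weight below 0.9.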

[Figure 1: Inverse S Probability Weighting Function, w(p) plotted against p]

The most careful application of NEU models indeed allows for the possibility that every single person has a different weighting function, and these can take various shapes, including of course a linear one (see the dotted line in Figure 1), which then corresponds perfectly to an unweighted probability and collapses to the EU model. The problem for us as economists is how to cope with a potentially different shaped pwf for every person in a sample we are using in estimation (to see what this literally looks like for different subjects, see Wu and Gonzalez 1996, or Gonzalez and Wu 1999, for example).

In this richest form of CPT, as mentioned above, losses ($V^{-}$) are treated differently than gains ($V^{+}$), and the cumulative prospect function is (following Riddel's (2012) notation):

Eq. 9   $CPT(x; p) = V^{+}(x; p) + V^{-}(x; p)$

where:

Eq. 10  $V^{+}(x; p) = w^{+}(p_n)\,v(x_n) + \sum_{k=1}^{n-1} v(x_{n-k})\left[w^{+}\!\left(\sum_{j=0}^{k} p_{n-j}\right) - w^{+}\!\left(\sum_{j=0}^{k-1} p_{n-j}\right)\right]$

Eq. 11  $V^{-}(x; p) = w^{-}(p_{-m})\,v(x_{-m}) + \sum_{k=1}^{m-1} v(x_{-(m-k)})\left[w^{-}\!\left(\sum_{j=0}^{k} p_{-(m-j)}\right) - w^{-}\!\left(\sum_{j=0}^{k-1} p_{-(m-j)}\right)\right]$
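As a concrete illustration of the rank-dependent weighting in Eqs. 9-11, the Python sketch below computes a cumulative prospect value. The function names are mine; the default parameters are the commonly reported Tversky-Kahneman (1992) estimates ($\gamma = 0.61$, $\alpha = 0.88$, $\lambda = 2.25$), used here only for illustration, and for simplicity a single weighting function is applied to both gains and losses.

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) weighting function; gamma = 1 gives w(p) = p."""
    if p <= 0.0:
        return 0.0
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def tk_value(x, alpha=0.88, lam=2.25):
    """Power value function with loss aversion coefficient lam."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def cpt_value(outcomes, probs, w=tk_weight, v=tk_value):
    """Cumulative prospect value in the spirit of Eqs. 9-11: rank the
    outcomes, then apply cumulative (rank-dependent) decision weights
    separately to gains and to losses. With w(p) = p and v(x) = x this
    reduces to the ordinary expected value."""
    pairs = sorted(zip(outcomes, probs))          # ascending in outcome
    gains = [(x, p) for x, p in pairs if x >= 0]
    losses = [(x, p) for x, p in pairs if x < 0]  # worst loss first
    total = 0.0
    cum = 0.0
    for x, p in reversed(gains):                  # best gain first
        total += v(x) * (w(cum + p) - w(cum))
        cum += p
    cum = 0.0
    for x, p in losses:                           # most extreme loss first
        total += v(x) * (w(cum + p) - w(cum))
        cum += p
    return total
```

For instance, `cpt_value([100, -100], [0.5, 0.5])` is negative under these parameters: loss aversion makes a fair 50/50 bet unattractive.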

Riddel (2012) notes something that is overlooked a great deal: there are two distinct risk attitudes that become apparent in the CPT framework. First, there are what she deems "outcome" risk attitudes, associated with the curvature of the utility function (v). These are the traditional risk "preference" notions of risk aversion. Second, there are what she calls "probabilistic" risk attitudes, associated with the weighting function itself. Many have noted that Quiggin's RDEU is essentially the same as CPT (it is a special case with two outcome gains). Indeed, not all situations involve both losses and gains, so the pwf for either Eq. 10 or Eq. 11 alone can be used in such cases. For example, most laboratory experiments in economics do not involve economic losses because subjects will not participate when they might lose their own money. However, transportation economists might find CPT desirable for risky gains and losses in commuting time (see Huang et al. 2015, for example), and this might be an interesting way to consider potential risky gains and losses associated with climate change. Finally, many NEU models, as well as the looser versions of prospect theory, may violate the independence axiom (e.g. CPT does, depending on the pwf) so that choices appear to be inconsistent (Machina 1989); thus, it is important to consider whether this is desirable or not. Though there are not many, I look at some empirical applications in resource economics in section 5.0, below. And, although the SEU model suggests uncertainty, next I explore a richer and broader literature related to uncertainty.

3.0 Uncertainty Frameworks

We all have made decisions in the face of uncertainty, even essentially pure uncertainty. Ambiguity is often thought of as synonymous with uncertainty, but can take on a specific meaning (see Klibanoff et al. 2005). Essentially, in this world we simply have no probabilities to guide us in our decision, yet we manage to make a decision anyway. 
How? There are many frameworks in decision theory wherein one might allow for some degree of uncertainty, and even pure uncertainty. Some of these are quite old. I cannot cover all of these here, but an extensive survey about ambiguity can be found in Etner, Jeleva and Tallon (2009). It has been argued that the most popular approach to handling ambiguity involves the maxmin expected utility framework (e.g. Gilboa and Schmeidler 1989). A simpler way to begin is to note that Bernoulli suggested that when pure uncertainty exists, a reasonable thing for people to do is to assign an equal probability to each possible outcome (see the discussion in Woodward and Bishop 1997). This relates to Laplace's "principle of indifference" and is also known as the "principle of insufficient reason" (also see Gilboa et al. 2008 for an excellent discussion of uncertainty models in economics). An important ambiguity concept relates to compound lotteries. These are often used in the health risk arena: it might sometimes be helpful to first know the probability of exposure (say pe = 1/1 million), and then second, if one is actually exposed, we might wish to know the probability of

illness or death (say, pd = 1/100). The compound lottery reduction axiom (CLRA) suggests that it is a simple task for people to reduce this compound lottery: we assume they can multiply pe times pd to derive the unconditional probability of death (1 in 100 million). These are often deemed first and second order probabilities. As another example, one could have a prior, and then bet on which prior is correct. Ambiguity in this context arises when the CLRA fails to hold; i.e., Segal (1987, 1990) models ambiguity by relaxing the CLRA between first and second order probabilities (see further discussion, and discussion of other ambiguity models, in Klibanoff et al. 2005). I return to this a bit more, below. Many models of uncertainty have their origins in game theory, where a player's opponent in a zero sum game is Mother Nature (hereafter, nature). This has a nice appeal to the resource economist. There are various strategies in such a game against nature, and the standard one discussed is called max-min, where the player maximizes the minimum outcome that nature doles out. In fact, a common strategy in making decisions under uncertainty is to try to make oneself as well off as possible, assuming that the worst outcome is going to happen. First, however, I take a quick look at simple Bayesian analysis, which is related to this.

3.1 Bayesian Analysis

As noted above, many consider the Bayesian decision-making framework to essentially be one that handles uncertainty, but this is debatable. Bayesian decision making (not to be confused with hierarchical Bayesian analysis in statistical searches for model parameters) combines information, perhaps gained in a learning process, with an individual's prior sense of the probability of an outcome. Some researchers model the prior itself as a beta distribution with parameters (α, β). This distribution has mean α/(α+β), which can be thought of as the expected fraction of successes in a Bernoulli process. 
The beta distribution has been more generally, and fairly widely, used to model subjective probabilities (see Riddel and Shaw 2006; Riddel 2009, 2011; Kelly et al. 2010; and Kalisa et al. 2015 --- many of these studies are discussed below). Gilboa et al. (2008) note that the need for a prior estimate of probability can be tied to the fact that there is uncertainty: "Bob" does not know what the probability of the outcome of interest is, and cannot rely on laws of physics or easily observable data to determine one. Hence, Bob forms a prior. If Bob has a single prior, and then goes about learning as much as he can, he might combine the information gained in the search process with the prior to form the posterior distribution. However, in some cases Bob has two, or in fact many, priors, because he cannot sort out which one of these distributions is the best one. This case of multiple priors is often linked to the term ambiguity, introduced above and discussed more in this section.
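Bob's beta-Bernoulli learning process can be sketched in a few lines of Python. The prior values below are assumed purely for illustration; the conjugacy of the beta prior with Bernoulli data is what makes the update this simple.

```python
def beta_update(alpha: float, beta: float, successes: int, failures: int):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior over an
    unknown Bernoulli probability: each observed success raises alpha by
    one, each failure raises beta by one. The prior (and posterior) mean
    is alpha / (alpha + beta)."""
    return alpha + successes, beta + failures

# Bob starts with a weak prior centered at 0.5 (alpha = beta = 2, an
# assumed value), then observes 1 "success" (say, a flood year) in 8 years:
a, b = beta_update(2.0, 2.0, successes=1, failures=7)
posterior_mean = a / (a + b)   # 3/12 = 0.25
```

Note how the posterior mean (0.25) lies between the prior mean (0.5) and the sample frequency (1/8), with the data pulling the estimate down as evidence accumulates.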

When knowledge and information are scant, some recommend the use of maximum entropy distributions. The idea behind these is to use the least informative distribution that matches specified constraints, so that no more is assumed to be true than is actually known. The maximum entropy distribution then might be used as the prior distribution in Bayesian analysis (see the discussion of maximum entropy distributions in Bier et al. 1999).

Ambiguity

Ambiguity has recently been considered carefully in a host of theoretical economics frameworks (e.g. Segal 1987, 1990; Schmeidler 1989; Klibanoff et al. 2005). It was briefly introduced above, and I again start with one of the very simplest cases of a "little" uncertainty. A nice description, and the example I use, can be found in the risk chapter of the book by Hanley, Shogren and White (2007); it involves two experts offering differing opinions of what the risk is. Suppose that one expert says the risk of getting bird flu is 0.20, and another key expert says no, the average risk is half that, or 0.10. This is a case of possible ambiguity. The possibilities for handling this discrepancy between the experts range from simple averaging (i.e. I'll average and assume that the risk is probably around 0.15) to weighting one expert's estimate differently than the other's, which raises the issue of trust in assessing information by source. Klibanoff et al. (2005) introduce ambiguity aversion into the situation for an investor facing several distributions of the investment return this way: "...imagine an ambiguity averse [decision maker] to be thinking as follows: 'My best guess of the chance that the return distribution is … is 20%. However this is based on softer information than knowing that the chance of a particular outcome in an objective lottery is 20%. Hence, I would like to behave with more caution with respect to the former risk.'" [p. 1852] It has been shown that in some contexts (e.g. 
health risk) some sources of information, such as government, might be trusted more than others, such as industry (Viscusi et al. 1999). Trust in any particular source might be context-dependent. A good deal of research, much of it within the field of psychology, has examined the underlying causes of ambiguity and what makes people feel more or less ambiguous about a risk. For example, Curley and Yates (1985) found that the centers and ranges of intervals containing the true, but imprecise, probability can be presented to subjects in ways that make them more ambiguity averse. In their experimental study no evidence of ambiguity seeking was found, but preference for ambiguity, like risk preference, is an empirical, and not a purely theoretically determined, issue. As noted a few times above, many see a close tie between subjective probability and uncertainty. Much of this important strand of the uncertainty literature stems from psychologist Daniel Ellsberg's

(1961) seminal work demonstrating that individuals may prefer bets with known probabilities to bets with unknown probabilities.16 A small degree of uncertainty is also often introduced by research analysts via sensitivity analysis. In these studies it most often is the case that risk (not really uncertainty) is actually assumed to pertain: an initial and frequently made assumption is that a well-known probability distribution can be used to represent outcomes. However, it might be recognized that the exact shape of the distribution is not known with certainty. The mean of the distribution might be different than we think, or the overall shape might differ because of different variances (leading to thinner, or flatter looking, distributions --- see Woodward and Shaw 2008 for an example pertaining to uncertainty about fish stock dynamics). Simple sensitivity analysis might allow results to be generated by assuming that two or more different means are the right ones, or perhaps two or more different variances.

3.2 Pessimism/Optimism Models

Other uncertainty-related models are developed to allow individuals to be either pessimists or optimists in their decision-making. Here, consider an outcome that will lead to worse welfare, i.e. it is a bad. The idea is simply that a pessimist who has no idea what will actually happen in the face of uncertainty makes decisions assuming the worst will happen, while an optimist assumes the best outcome (the least "bad" one) will. The pessimist might make a decision by minimizing the worst harm (choosing a "mini-min" strategy), while the optimist could even assume that bad things won't happen (choosing a strategy to maximize utility from a gain). Others might assume that an outcome might indeed be bad, but that among that set of outcomes, the best (least bad) of them will prevail (choosing a "maxi-min" strategy): a hurricane will hit, but it will hit in a place where damage will be very slight. 
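The pessimist's and optimist's rules just described can be sketched with a toy payoff matrix for a game against nature; the actions, states, and payoff numbers below are entirely hypothetical.

```python
# Payoff (utility) matrix for a game against nature: keys are actions,
# values are payoffs in each state of nature (all numbers hypothetical).
payoffs = {
    "evacuate": [-10, -10],   # [hurricane hits, hurricane misses]
    "stay":     [-100, 0],
}

def maximin(payoffs):
    """Pessimist's rule: choose the action whose worst-case payoff is best."""
    return max(payoffs, key=lambda a: min(payoffs[a]))

def maximax(payoffs):
    """Optimist's rule: choose the action whose best-case payoff is best."""
    return max(payoffs, key=lambda a: max(payoffs[a]))
```

With these numbers the pessimist evacuates (worst case −10 beats worst case −100), while the optimist stays (best case 0 beats best case −10), showing how the two rules can pick opposite actions from the same payoffs.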
Many mathematicians have considered optimization problems with uncertainty [see Schmeidler (1989) and Gilboa (1987)]. A relationship between pure uncertainty and probability weighting can be formalized in this context by assuming that individuals make decisions as if they violate well-known properties that probabilities have. For example, consider the state-space corresponding to three outcomes, A, B, and C, with probabilities p(A), p(B), and p(C). It turns out that decision makers often act (i.e. make choices) as though probabilities do not sum to one. Situations of both "sub-additivity" and "super-additivity" (see below) have been found to arise in experimental settings. Such unusual weighting has led to the framework known as Choquet Expected Utility [Schmeidler 1989; Diecidue et al. 2004]. It has been shown that for decision makers who are averse to ambiguity, choices that use nonlinear weighting, or that in other ways violate the standard model, can be "rational" in a rather rigorous sense of that word.

16 Yes, for those who don't know, this is the same Daniel Ellsberg who became famous for authorship of the Pentagon Papers.

Multiple Prior Models

Many of the most popular uncertainty models are essentially multiple prior models (see Gilboa and Schmeidler 1989). These consider the possibility that an individual cannot come up with an exact probability of an outcome, but instead assigns a range to the possible set. She then calculates expected utility for each of the probability distributions and considers the set. As above, a very cautious decision maker might assume that the minimum expected value across the distributions in the set will pertain, and choose the action with the highest of those minimum values. This is a max-min expected utility (MEU) strategy, not to be confused with a max-min strategy in a two-player game. Klibanoff, Marinacci and Mukerji (2005) expand on the original work by Gilboa and Schmeidler (1989) in several ways. First, their "smooth" model of decision making under ambiguity does not necessarily imply MEU preferences; it is more general than that. Second, their model allows for smooth rather than kinked indifference curves. Third, and perhaps most importantly, their model allows for the separation of beliefs about ambiguity from attitudes about ambiguity, which is not evident in either the original MEU work or in Choquet Expected Utility. This last feature is important in much the same way that it may be important to disentangle beliefs about risk from risk attitudes. In other words, in the world of uncertainty, we can expect that individuals could be ambiguity neutral, averse, or ambiguity seeking, and also have beliefs about the uncertain probabilities. The Klibanoff, Marinacci, and Mukerji (2005) model allows these to be separately considered. Their "smooth ambiguity" model also relaxes Savage's "sure thing principle" (many believe this to be the key axiom of SEU), allowing preference for ambiguity or uncertainty to affect choice behavior. 
What is especially interesting in their framework is that it allows comparative statics: the implications of ambiguity attitudes for choices, holding risk attitudes constant, can be examined. Ambiguity aversion can be thought of in a manner analogous to risk aversion. We know that a precise characterization of risk aversion is avoiding a mean-preserving spread: a risk averse person turns down a fair bet, preferring a certain amount. In Klibanoff, Marinacci and Mukerji (2005), an aggregate distribution is evaluated over the many priors, and an ambiguity averse person is averse to a mean-preserving spread in that aggregated distribution.

Choquet Expected Utility (CEU)

Schmeidler (1989) lays out the Choquet Expected Utility framework, which explicitly allows subjective probabilities to take on unusual properties, such as failure to sum to 1. This allows for behavior that is frequently observed in laboratory economics, where subjects appear to violate the laws of probability (as a simple example, in a two-outcome space, if the probability of one event is p, then the probability of the non-event is 1 - p). Wakker (1990) shows that when stochastic dominance is imposed, CEU and Anticipated (or Rank-Dependent) Expected Utility are the same model. RDEU also has the minimax principle as a limiting case.


Sub-additive and Super-additive Subjective Beliefs

An important property that well-behaved probabilities have is that over the complete set of outcomes, the sum of the probabilities of the outcomes must be unity. However, subjective probabilities offered by an individual may not have this property. Individuals often behave as if, or offer, probabilities that sum to less than one (exhibiting sub-additivity) or more than one (super-additivity). This is allowed for by using a "capacity," which relaxes the usual properties, but has the properties of a standard probability as a special case (see Schmeidler, 1989).
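To make the notion of a capacity concrete, here is a minimal sketch, with entirely hypothetical event weights, of sub-additive beliefs and of the Choquet integral that CEU uses to evaluate a gamble under them:

```python
# A "capacity" relaxing additivity, and the Choquet integral: a sketch.
# The three singleton weights sum to 0.75 < 1 (sub-additive beliefs),
# yet the full set of outcomes still gets weight 1.

capacity = {
    frozenset(): 0.0,
    frozenset("A"): 0.25, frozenset("B"): 0.25, frozenset("C"): 0.25,
    frozenset("AB"): 0.60, frozenset("AC"): 0.60, frozenset("BC"): 0.60,
    frozenset("ABC"): 1.0,
}

payoffs = {"A": 100.0, "B": 50.0, "C": 10.0}

def choquet_value(payoffs, capacity):
    """Order states from best to worst payoff and weight each payoff by
    the increment in the capacity of the 'at least this good' event."""
    ordered = sorted(payoffs, key=payoffs.get, reverse=True)
    value, prev, event = 0.0, 0.0, frozenset()
    for s in ordered:
        event = event | {s}
        value += payoffs[s] * (capacity[event] - prev)
        prev = capacity[event]
    return value

print(sum(capacity[frozenset(s)] for s in "ABC"))  # 0.75: sub-additive
print(choquet_value(payoffs, capacity))
```

When the capacity happens to be an ordinary additive probability, this calculation collapses back to standard expected utility, which is the special case noted in the text.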

3.3 Robust Optimal Control

A particularly relevant extension of conventional optimal control is robust optimal control [see Hansen and Sargent, 2001; Ben-Tal et al., 2009; many other references in Woodward and Tomberlin 2014], often specified in dynamic economic models.17 It has strong ties to, and implications for, the precautionary principle, discussed immediately below. A robust control optimization problem is used when there is a family of possible probability distributions, say P, and the decision maker is completely uncertain about which distribution is the correct one. The robust-control problem is

Eq. (12)

max_z min_{p ∈ P} E_p[u(z)]

That is, the robust optimal policy is chosen assuming the worst possible probability distribution p from the set of possible distributions, P, so that the expected outcome will be at least as favorable as the worst case identified in (12).

When decision makers are averse to ambiguity, robust control is "rational" in that it is consistent with a set of reasonable axioms (Gilboa and Schmeidler 1989). The approach includes a term that allows for varying degrees of aversion to ambiguity, effectively determining the size of the set P in (12). At one extreme, P is a single probability distribution and the model maximizes the standard discounted expected value. At the other extreme, the model chooses as if only the worst-case scenario is relevant. There are a variety of approaches to Robust Optimization, but Woodward and Tomberlin (2014) say that the common theme is that they are designed to achieve "at least a minimal level of performance under a wide range of conditions" [p. 828].
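One way the size of the set P might be parameterized is with an epsilon-contamination set, P = {(1-eps)·p0 + eps·q : q any distribution}; this is a standard construction but is used here only as an illustrative sketch, and the policies, payoffs, and baseline distribution are all made up:

```python
# Robust choice over an epsilon-contamination ambiguity set: a sketch.
# The inner minimization puts the contamination mass eps on the worst
# outcome, so eps = 0 recovers the standard expected value and eps = 1
# the pure worst case.

def robust_value(payoffs, p0, eps):
    base = sum(p * x for p, x in zip(p0, payoffs))
    return (1 - eps) * base + eps * min(payoffs)

def robust_policy(policies, p0, eps):
    return max(policies,
               key=lambda name: robust_value(policies[name], p0, eps))

# Harvest payoffs over states (low stock, high stock); made-up numbers.
policies = {"aggressive": [-50.0, 200.0], "cautious": [40.0, 80.0]}
p0 = [0.2, 0.8]

for eps in (0.0, 0.6, 1.0):
    print(eps, robust_policy(policies, p0, eps))
```

As eps grows, the optimizer abandons the high-expected-value policy for the one with the better floor, which is the precautionary flavor of robust control.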

Increasingly, robust control is used to solve applied problems in engineering, though the computational complexity of this specification can be as burdensome as in the standard EU cases discussed above. It is making its way into analysis of environmental and natural resource issues and problems. For example, Roseta-Palma and Xepapadeas (2004) consider robust control in the area of water management, and more recently, Woodward and Tomberlin (2014) apply Robust

17 I thank my colleague Richard Woodward for providing parts of the material here.

Optimization to the fisheries context: their example is the Skeena River sockeye salmon fishery, located in Canada. In their context, the key source of uncertainty is a parameter that informs us how fast the stock grows (here, they follow Walters 1975). Their specific approach builds on robust dynamic optimization as formulated by Nilim and El Ghaoui (2005), and they choose this because it is numerically tractable, allowing use of data to inform or support estimation of parameters in a model. Rather than use probabilities directly from relative frequencies, their robust optimization problem maximizes the expected present value of net benefits from harvest of salmon, using the worst possible probability distribution over the uncertain stock growth parameter. They demonstrate that the approach helps formulate precautionary harvest policies, leading to the topic of the precautionary principle.

3.4 Uncertainty and Ties to the Precautionary Principle

The Rio 1992 Declaration of the United Nations is one example of many that describes the precautionary principle: society should move forward with extreme caution, or take extreme measures, to avoid irreversible consequences from an action when uncertainty about environmental outcomes exists. This principle, and its ties to risk or uncertainty, is examined by Randall (2011) in his book: he shows that it is used by many countries in various policies throughout the world. Put in the jargon of the United Nations, the principle is actually phrased as a double negative: society should not postpone measures to prevent environmental degradation because there is a lack of certainty about science (see Bargiachhi, 2003). The precautionary principle has important implications for managing or preserving biodiversity (Vardas and Xepapadeas 2010), and these might go beyond conventional risk management (see Randall's survey in this journal, 2009).
Cass Sunstein (as reported by Friedman in the New York Times, 2009) wrote on his blog that "According to the precautionary principle, it is appropriate to respond aggressively to low-probability, high-impact events, ...such as climate change." Friedman (2009) editorializes that it is prudent for society to prepare for the "1 percent," though noting that the probability of some high-impact climate change events is thought by many to be higher than 1 percent. Engineers often over-design or over-build safeguards for bridges and buildings, consistent with the precautionary principle. And several countries in the European Union and elsewhere appear to pursue environmental policies that are consistent with the precautionary principle (see Wiener and Rogers 2002). Pollan (2001) wrote that the precautionary principle was a "rival" idea in Europe, competing with risk analysis in the United States. He cites Germany's principle of "Vorsorge," which was pursued to save German forests from destruction. At the time there was no proof that acid rain was the cause, but this principle found its way into German law. In contrast, "risk analysis," as discussed by Pollan, considers the economic cost of precautions, and tries to ascertain whether the additional cost is worth it. In opposing food-related bans, the World Trade

Organization at the time appeared to be in favor of risk analysis, while much of Europe had adopted the precautionary principle.

Economic analysis of behavior consistent with the precautionary principle is difficult to formulate. Attempts to operationalize it are documented in Barrieu and Sinclair-Desgagné (2006). Uncertainty does not lend itself to Expected Utility theory. Many economists have now tied the precautionary principle to the concept of the minimization of future regret; the idea is to make the costs of making the wrong decision as small as possible (see the many references in Grijalva et al. 2011). Note that many uncertainty/risk frameworks involving learning, or Bayesian learning, might be tossed out the window in the context of irreversible outcomes, because even if learning occurs, it may be too late to do anything about the situation. Oops, society says, we learned too late that we should have avoided extinction. Being wrong in the case of irreversibility may well be exactly the thing we should be thinking about.

In their study, Grijalva et al. (2011) explore whether subjects in an experiment might choose preservation over development when the benefits of preservation are uncertain. The benefits relate to cures for diseases that relate to the existence of species, but the subjects know only that these benefits may or may not happen. The preserved habitat provides the potential cure. With development, if a disease happens then there cannot be any cure. Development returns a monetary reward, actually paid to subjects. Two sources of uncertainty are considered in a treatment in the experiment that Grijalva et al. (2011) develop: uncertainty about whether the disease will happen, and uncertainty about whether a cure can be found.
No probabilities are mentioned or offered to subjects to assist them in making their decisions; thus, the authors had little ability to ascertain specific things about the subjects' sense of the degree of uncertainty. Grijalva et al. (2011) did ask whether some subjects just considered the chance, in the absence of any information, to be 50-50, and indeed some did. Based on results (the proportion who choose to preserve vs. develop), it appears that a good portion of subjects behave as if they are logically weighing the relative costs and benefits of being wrong versus being "right," and indeed minimizing the regret from making a mistake. Being candid, since I am a coauthor of this study, I offer the usual caveat here: we cannot be sure that subjects really think carefully about the disease as if it were a real threat, even though they do forgo or gain real money if they choose development.

To summarize approaches to uncertainty, it is obvious from the above that there are many possible approaches. It may well be that different types of uncertainty require altogether different theories of decision-making under uncertainty, a point of view that can be found in a relatively old paper by Faucheux and Froger (1995). For example, these authors point out that Bayesian analysis has little to say about intertemporal resource problems, and they question whether people make decisions rationally, in the conventional sense. This is a common theme in


behavioral economics today, and I return to that near the end of the paper. Next, I take up the topics connected to the elicitation of preferences or attitudes, and probabilities.

4.0 Risk Attitude, Preference and Probability Elicitation

The main message of this section is that when we are estimating models of behavior, our results may strongly depend on how we "measure" risk attitude or preference, and elicit subjective probabilities. And, even if we don't actually elicit estimates individuals have for the probability of an outcome, how we frame the presentation of science-based risks may well affect parameters in our risk-oriented models (e.g. Hand et al. 2015, which is discussed at length below).

There is now a well-established literature focused on the elicitation of risk attitudes or preferences, which by now the reader knows are related to, but quite different from, risk perceptions. For example, one individual might exhibit strong risk aversion while still echoing back the science-based probability with his beliefs, while another might exhibit strong risk aversion but grossly overestimate the risk of a bad outcome.

One of the most common references with respect to finding risk preferences for an individual is to Holt and Laury's (2002) use of a table of choices offering pairs of lotteries. This concept was in fact used much earlier by Binswanger (1981). An example from Grijalva et al. (2011) is in Table 1. The pairs depicted there are often deemed a "multiple price list" (hereafter MPL), and constructed so that a switch point in the series can help identify the level of risk aversion that an individual has to a financial gamble.

In the MPL setting the subject is typically presented with a full table, and asked to move down the rows, deciding whether to choose option A or B in each row. The table is usually constructed so that one column (A or B) is clearly the riskier column of alternatives. The hope is that any given

Table 1: Example Risk Tradeoff Table (MPL, from Grijalva et al. 2011)

Indicate A or B below for each row.

Row | Option A                                  | Option B
  1 | 1% chance of earning $120, 99% of $90     | 1% chance of earning $210, 99% of $10
  2 | 5% chance of earning $120, 95% of $90     | 5% chance of earning $210, 95% of $10
  3 | 10% chance of earning $120, 90% of $90    | 10% chance of earning $210, 90% of $10
  4 | 20% chance of earning $120, 80% of $90    | 20% chance of earning $210, 80% of $10
  5 | 30% chance of earning $120, 70% of $90    | 30% chance of earning $210, 70% of $10
  6 | 40% chance of earning $120, 60% of $90    | 40% chance of earning $210, 60% of $10
  7 | 50% chance of earning $120, 50% of $90    | 50% chance of earning $210, 50% of $10
  8 | 60% chance of earning $120, 40% of $90    | 60% chance of earning $210, 40% of $10
  9 | 70% chance of earning $120, 30% of $90    | 70% chance of earning $210, 30% of $10
 10 | 80% chance of earning $120, 20% of $90    | 80% chance of earning $210, 20% of $10
 11 | 90% chance of earning $120, 10% of $90    | 90% chance of earning $210, 10% of $10
 12 | 100% chance of earning $120               | 100% chance of earning $210

subject can express their risk attitude or preference via a series of risky or safe alternatives, perhaps, but not necessarily, by making a mental calculation of the expected outcome of each lottery she considers. The "best" approach to the elicitation of risk preferences has been bandied about in the literature, but note that the simple choice approach as depicted in Table 1 results, at best, in the estimation of an established functional form for the utility function under conditions of risk (e.g. CARA or CRRA, as mentioned above). It does not yield estimates of subjective probabilities. The advantage of using the MPL to assess preferences for financial gambles, over many other risk elicitation exercises we might try to implement, is that the implied lottery can actually be played out. The individual subjects, usually in laboratory experiments, can be (and are) told this, and hence actual pay-outs are thought to make the choices incentive compatible.
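To illustrate how an MPL switch point brackets risk aversion, here is a sketch that locates the row of Table 1 at which a CRRA decision maker, with u(x) = x^(1-r)/(1-r), would switch from Option A to Option B; the values of r are purely illustrative.

```python
import math

# Locate the Table 1 switch point for a CRRA decision maker: a sketch.

probs = [0.01, 0.05, 0.10] + [i / 10 for i in range(2, 11)]  # the 12 rows

def crra(x, r):
    """CRRA utility; r = 0 is risk neutral, r = 1 is log utility."""
    return math.log(x) if r == 1 else x ** (1 - r) / (1 - r)

def switch_row(r):
    """First row (1-indexed) at which Option B's expected utility exceeds
    Option A's; None would mean the subject never switches."""
    for row, p in enumerate(probs, start=1):
        eu_a = p * crra(120, r) + (1 - p) * crra(90, r)
        eu_b = p * crra(210, r) + (1 - p) * crra(10, r)
        if eu_b > eu_a:
            return row
    return None

for r in (-0.5, 0.0, 0.5, 1.2):
    print(r, switch_row(r))
```

A risk-neutral subject (r = 0) switches where the expected values cross; larger r pushes the switch later in the table, which is how an observed switch row identifies a range for r.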

In other words, subjects know that real money is at stake when a choice is made. Even so, this certainly does not mean that all subjects will make logical choices when faced with such tasks.

Resource economists might well be more interested in non-monetary gambles or lotteries. As I've said above, I don't believe that risk preferences for money necessarily tell us about risk preferences over, say, two risky activities an individual can engage in (e.g. smoking cigarettes versus smoking a pipe, or not smoking at all; riding a bike versus driving a car to work). Obviously, here again, any lottery-style question cannot be easily played out for non-monetary gambles: it was of course the "playing out" of death, injuries and illness in infamous experiments that led to the formation of human subjects rules, and of university and research institute review boards that have the responsibility to deny or approve studies involving research on human subjects.

Real Versus Hypothetical, and "Naturally Occurring" Gambles

Individuals can certainly be asked in the laboratory to make their choices over lotteries as in Table 1, but not actually be paid on them. From discussing the topic with colleagues in psychology, it is fair to say that the norm in psychology experiments is not to pay subjects for completion of tasks, although subjects might be paid a fixed show-up fee; economic experiments typically involve payouts for tasks, or at least payment for one task out of several. Several "within subject" experimental studies have demonstrated that risk preferences over "real" gambles can be different from the preferences implied over hypothetical ones. For example, Holt and Laury (2002) find that scaling up the payouts results in more of a tendency toward risk aversion when real cash payments are made than when the gambles are purely hypothetical.
I am sympathetic to the notion that risk preferences for non-monetary gambles might be quite different from those for monetary gambles, but again, how do we play out a non-monetary gamble in the lab? And, with or without payments, risk elicitation tasks are sometimes still deemed artificial or hypothetical gambles. The thought is that even if real payments are made, tasks done inside the lab are still conducted in an artificial, rather than a "real world/natural," setting. And so, a third issue is whether risk preferences might differ between the lab and the context of naturally occurring gambles. [A related issue pertains to what the key differences are between lab and field experiments (Harrison and List 2004).] The concern is that the person who makes a choice over a lottery in the lab setting might make a different one when faced with essentially the same lottery in a natural setting. As an example of risk preference elicitation using the MPL for financial gambles, Fiore et al. (2009) assume these preferences also hold in their examination of the risks of property loss due to a forest

fire. The fire doesn't really happen, of course, but the paper features the only experiment involving virtual reality and the environment that I know of. Laboratory subjects are told to imagine they own property (e.g. a cabin) that could be harmed by the forest fire. Using one or more virtual reality devices, subjects can watch the fire from different perspectives and see if "their" property is destroyed. The experiment is an interesting one, but one could certainly argue that the subject does not really own property, doesn't feel the fire, and doesn't see their family pictures destroyed, nor their pet die of smoke inhalation, as they might in a real fire. I leave it for the reader to think about whether it is unlikely or likely that subjective probabilities for forest fire risks, or preferences over the risk of an actual forest fire, would be the same as for the risks posed in the MPL. The assumption that these are the same requires us to believe that the subject's focus is on the trade-off between payments to be made for fire protection (involving income or wealth) and potential property loss, as opposed to simply choosing the optimal financial payout from the MPL.

If one is interested in obtaining a set of individuals' subjective probabilities, then the usual MPL approach (e.g. Table 1) does not suffice. Instead, the researcher has to attempt to uncover estimates of the probabilities that an individual believes pertain to a particular situation, and as covered in the introduction to this chapter, this can be done using revealed or stated approaches. I consider some of these approaches below, starting with the simplest strategy: just ask.

Eliciting Subjective Probabilities: Just Ask

Eliciting probabilities themselves can be extended to non-financial events and accomplished in a variety of ways, beginning with the simplest approach: just ask.
While potentially difficult for subjects, various risk communication devices, such as risk ladders, pie charts, or risk grids, can be used to assist the individual in preparing their estimate (see for example Corso et al. 2001, or the risk ladder in Jakus et al. 2009). Asking individuals to state probabilities or expectations is not a new idea, and it is not uncommon in research involving risk. It is part of the Health and Retirement Study (HRS), where respondents are asked to place answers on a numerical scale, and the responses have been used in a variety of studies of health and other behaviors (see references in Bassett and Lumsdaine 2001). It may surprise some readers, but individuals often state meaningful probabilities of things like their own chance of living to age 75, or of holding a job for a period of time. Several studies have obtained stated probability estimates (SPEs) relating to health (illness) and, usually, mortality risks. As mentioned above, the standard "Viscusi" question was to simply ask an individual (from a sample containing both smokers and non-smokers) how many cigarette smokers out of 100 she thought would get, or die from, lung cancer. As I am not a cigarette smoker, I am quite sure that, prior to studying this issue for some time, I would have had no idea how to answer this question, so it is rather surprising that non-smokers can provide answers that

are reasonably close to statistical population estimates. [Warnings on packs of cigarettes offer no numerical information.] In several studies Viscusi finds, in fact, that both smokers and non-smokers overestimate lung cancer risks. But it may well be that answers to such a simple question at least inform us as to whether an individual believes risks are relatively high or low.

The list of studies involving elicitation of probabilities has grown, and many quite credible survey efforts by academics and government agencies now use some form of probability elicitation. Manski (2004) provides an interesting history and a list of these, including the well-known Health and Retirement Survey, the Michigan Survey of Consumers, and the Bank of Italy's Survey of Household Income and Wealth. Researchers have been able to discern some relationships between factors and reported or stated probabilities, such as the importance of the cognitive ability of the respondent (Hurd et al. 1998). Manski (2004) concludes that the majority of elicitation schemes up to the point he wrote his paper resulted in meaningful estimates of subjective probability, although ascertaining the accuracy of the estimates can be problematic. Thus, this does not suggest that all studies of subjective probabilities have demonstrated high accuracy. But studying these perceptions, and whether they are accurate, is important for economic policy. A common pattern in the literature is overestimation of low probabilities and underestimation of high ones (see for example Hurley and Shogren 2005), and knowledge of this is in itself useful information. As a specific example, if elderly people have too high a perception of the risk of dying as compared to the statistical science, this may affect, and thus explain, unusual rates of saving, and perhaps poverty rates (Bassett and Lumsdaine 2000).
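The over/underestimation pattern is often summarized with an inverse-S probability weighting function. As a purely illustrative sketch (not a claim about any study cited here), Prelec's one-parameter form with a hypothetical curvature value reproduces the pattern:

```python
import math

# Inverse-S probability weighting: small probabilities overweighted,
# large ones underweighted. The curvature alpha = 0.65 is a made-up,
# but plausible, value.

def prelec_weight(p, alpha=0.65):
    return math.exp(-((-math.log(p)) ** alpha))

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f} -> w(p) = {prelec_weight(p):.3f}")
```

The function has a fixed point at p = 1/e, with w(p) > p below it and w(p) < p above it, matching the overestimation of low probabilities and underestimation of high ones noted above.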
Similarly to health and mortality probabilities or risks, researchers can try to elicit environmental risks or probabilities, but again the lotteries and scenarios can most often not actually be played out, especially for large-scale environmental events. We do not burn down forests in the laboratory, or harm ecosystems, or bring about climate change, or expose subjects to arsenic in drinking water to see if they will get sick or die. The only exception, in fact, that I can think of is the experimental study of Boyce et al. (1992), in which small trees were actually destroyed if subjects did not adequately pay to protect them. Because the outcome cannot be played out, elicitation of environmental or health risks may not be incentive compatible. I suspect this is why many experimental researchers who want to obtain risk preferences confine themselves to incentive compatible financial gambles. I know of very few studies that test whether risk preferences based on a financial gamble determine a shape of the utility function that carries over to what is actually of interest, the environmental or health risk. As suggested above, it is probably no surprise to learn that there is concern, as in the stated preference approaches to non-market valuation (e.g. contingent valuation or discrete choice


experiments), that individuals might offer biased responses when simply asked about probability estimates. For example, individuals may bias risk estimates in favor of the outcome they believe is desirable, but bias is likely tied to what the respondent believes she is accomplishing, if anything, by offering a biased answer. This is why, early on, some statisticians and other scientists proposed the probability scoring method.

Probability Scoring

The probability scoring method might be employed to ensure that truthful probability estimates are elicited. Scoring approaches were considered early on in relation to weather forecasting by Brier (1950), and various techniques of probability elicitation have been compared many times (e.g. Chesley 1978). The weather forecasting example is historically interesting. It was suggested that weather forecasters might actually have incentives to report a chance of an outcome, say rain, that they didn't actually believe to be true. That is, a weather professional's reputation might depend on being wrong in a particular direction: the public might prefer no rain occurring when rain was forecast on a given day, to rain occurring when it wasn't forecast. This preference might in turn lead the weather forecaster to bias his or her forecasts, and thus become more popular with the public.

The Quadratic Scoring Rule (QSR)

Scoring rules, if proper, are considered to be efficient mechanisms for eliciting probabilities, most often in a manner that is incentive compatible. They are rewards, and are proper when the evaluator (sometimes an expert who is expected to be able to report the full distribution) obtains the maximum expected score by reporting the true distribution of an event (Lambert et al. 2008). Brier (1950) considered a quadratic score for a forecast (S) as:

Eq. 13

S = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{r} (p_ij - E_ij)^2

where E_ij takes the value of 1 if the event occurs in class j on occasion i, and 0 if not, and n is the number of forecasting occasions. For example, let there be 2 classes (j = 1, 2), with j = 1 as rain and j = 2 as no rain. The forecaster assigns the probability (p) to each class and obtains the score S based on the quadratic rule. The best she can do is obtain a zero, and the worst, a score of 2. Such scores can be used to reward subjects in games when they correctly forecast events that can be played out in the games. The QSR is the most commonly used proper scoring rule today, but an alternative is the logarithmic scoring rule (see references in Offerman et al. 2009).
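The two-class rain example can be checked numerically; here is a sketch of Eq. 13 for a single forecasting occasion:

```python
# Brier's quadratic score for the two-class (rain / no rain) example.
# E_ij = 1 if the event falls in class j on occasion i, else 0. Lower is
# better: a perfect forecast scores 0, the worst possible scores 2.

def brier_score(forecasts, outcomes):
    """Average over occasions of sum_j (p_ij - E_ij)**2."""
    n = len(forecasts)
    return sum(sum((p - e) ** 2 for p, e in zip(p_i, e_i))
               for p_i, e_i in zip(forecasts, outcomes)) / n

rained = [1, 0]  # class 1 (rain) actually occurred

print(round(brier_score([[0.9, 0.1]], [rained]), 6))  # confident, correct
print(round(brier_score([[0.1, 0.9]], [rained]), 6))  # confident, wrong
print(round(brier_score([[0.0, 1.0]], [rained]), 6))  # worst case: 2.0
```

Because the rule is proper, a forecaster minimizes her expected score by reporting the probability she actually believes, which is exactly what removes the reputational incentive to shade forecasts.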

Scoring approaches such as the QSR may also involve actual payouts, and hence can be incentive compatible and have the advantage of obtaining subjective probability estimates (see Nelson and Bessler 1989, or Offerman et al. 2009, for discussions of the scoring method). However, here again the probabilities typically relate to financial gambles, not to other kinds of

risks. Note also that scoring rules are typically valid only under the assumption of expected utility maximization, and in fact many assume expected value maximization (and hence, risk neutrality). Several studies have demonstrated that forecasts of outcomes can differ across different scoring rules (e.g. Palfrey and Wang 2009). A generalization of probability scoring is possible to allow for non-EU models, involving disentangling of risk and uncertainty attitudes and subjective beliefs (see Offerman et al. 2009).

Considering Both Risk Perceptions (or Probabilities) and Preferences

As noted above, perceived risks are potentially different from risk attitudes or preferences, and an interesting idea is that both of these may matter in affecting the decisions that individuals make. The fact that they are related is intuitive: if I am strongly risk averse, then my subjective probability or perceived risk for a low probability (based on science) event with a very nasty outcome might be higher than if I am a risk lover. Lusk and Coble (2005) consider the role that both play in choices over food risks. To obtain risk attitudes they implement the usual MPL over financial gambles, allowing recovery of a coefficient of absolute risk aversion over their sample. To obtain risk perceptions, however, they use a Likert scale for several questions directly related to food risks and consumption, and this limits what can be examined within formal risk models in economics. Parameters that explain the Likert scale responses are estimated using ordered probit models. Lusk and Coble (2005) then use the estimated parameters from these two models in a model of choice over accepting, purchasing, and eating risky (genetically modified) food. The same idea is present in the independent work by Grijalva et al. (2011).
In Lusk and Coble (2005) the risk perception for each subject is apparently constructed by first summing four Likert scale question scores (the scale runs from a minimum of 1 to a maximum of 9). The sum was then standardized to have a zero mean, and this variable was used in the choice models. Finally, Andersen et al. (2014) push the idea of jointly estimating subjective probabilities and risk preferences further, demonstrating that it matters whether the former are estimated without consideration of the latter. Next I describe empirical applications that incorporate risk or uncertainty.

5.0 Empirical Applications

In this section I offer some discussion of popular modern topics and review some recent applications. Risk is presented first, for environmental issues and then for natural resource problems, and then I move to a similar order of presentation for uncertainty. I am omitting a host of important theoretical papers on each, focusing more narrowly here on the much smaller body of literature that offers some empirical results. This isn't to slight the importance of the

theoretical contributions; it is just that the empirical papers are just coming into their own, and so these get most of my attention here. It is a bit tricky to lump some topics under "risk" and others under "uncertainty," and it is admittedly a judgement call to do so. I'll try to offer some reasons as I go forward. As an example, probably the single most researched topic these days is global warming or climate change. It may be debatable where to place that topic, but I'll argue that it goes under environmental uncertainty, not risk, and the reader will see why when I get to it below. First, I consider how the EU model can be made empirically tractable within a very popular model, the discrete choice framework. Discrete choice models are quite appropriate in situations with limited numbers of choices being available, and have become the norm in non-market valuation.

5.1 Empirical Modeling of the EUM

Applying the EU Framework

Of the empirical work in environmental economics that includes risk, far and away most of it presumes that risks are exogenous to the individual, and it uses science-based risks, not subjective or perceived risks. However, there are some notable exceptions. First, I consider how the basic EU model could be implemented empirically, using a discrete choice framework. Without risk, the usual approach is called the random utility model (RUM), because utility is made random using an additive error term. First consider the standard random utility model [X is a vector of influential variables; Y is income] and utility in states of the world 1 and 2:

Eq. 14   V1 = α1 + β1X + γ1Y + ε1

Eq. 15   V2 = α2 + β2X' + γ2Y + ε2

X takes a different value (X') for at least one of the variables within the vector in (15) rather than in (14). The conventional RUM framework basically considers the "utility difference," V2 - V1 (which includes the difference in the error terms, ε2 - ε1), using this difference as the basis for the choice the individual makes.
If X′ denotes an environmental improvement then the difference is positive, and we expect to observe an individual choosing state 2 rather than state 1. The error terms in the conventional RUM are assumed to arise from factors unobservable to us as researchers; the individual is not facing risk in Eqs. 14 or 15 above. Also note, as an aside, that if in fact Y enters the utility function linearly and does not differ between the two states, then there are no income effects and income does not matter in choosing between states 2 and 1, i.e. income drops out of the utility difference because it does not affect the choice. The same would be true if gender, or the same risk-related probability, were added

to both of these equations. A person's gender doesn't change in two states, and if the probability does not change either, then both drop out of the utility difference. Taking the utility difference above and assuming a particular distribution for the error terms leads to the estimating equation, i.e., the probability of choosing an alternative. The workhorse model in applied economics (e.g. transportation, recreation demand modeling, etc.) assumes a particular distribution for the errors (logistic or normal), leading to the logit or probit model, respectively. I won't pick on any particular study here, but in several older studies that claim to involve "risk," the standard logit or probit models are used exactly as the above framework would suggest. The probability associated with the risky event itself is nowhere to be seen, and in some early studies, it isn't even discussed. The absence of such a probability in a model of behavior under risk may in fact arise as a consequence of a lack of variation empirically, or because a probability estimate of some sort was unavailable to the researchers. Naturally, an explanatory variable in empirical modeling only matters if it varies over time or across observations. If all individuals in a sample face the exact same probability of a risky outcome, then one can hardly expect it to matter in the empirical estimation of behavior across the sample of individuals. To formally introduce risk into the framework above, there must be some random variable that poses the risks the individual faces. Following Riddel's (2009) simple exposition, which makes this clear, consider an indicator variable I_T, such that I_T = 1 if an accident happens, and I_T = 0 if it does not. Riddel's (2009) context was highly radioactive nuclear waste, and she defined mortality risk as the probability of death when an accident happens during shipping or storage.
For starters, assume this probability is exogenous and science-based. Now let Eq. 16 be the accident-free conditional utility (I_T = 0), and Eq. 17 be the conditional utility when the accident occurs (I_T = 1):

Eq. 16   V₁ = α₁ + β₁X + γ₁Y + ε₁

Eq. 17   V₂ = α₂ + β₂X + γ₂Y + θI_T + ε₂

To move toward an estimating equation here, we take the expected utility difference, not the simple utility difference. First, note that, with π denoting the probability that the accident occurs,

E_T[V₂] = α₂ + β₂X + γ₂Y + θπ + ε₂

Thus, the expected utility difference, where the expectation operator pertains to the accident random variable, yields:

Eq. 18   E_T[V₂ − V₁] = α + βX + γY + θπ + ε

As usual, the parameters in Eq. 18 have no subscripts, to indicate that only the differences in parameters in the two equations can be identified, i.e. α = α₂ − α₁. If the science-based probability is exogenous, then one can proceed in estimating the probability of the choices using the usual logit or probit model. But what is also clear in Eq. 18 is that the probability of the accident, i.e., the risk of interest in the EU framework, will now enter the estimating equation. There are at least three cases where the risk-related probability will potentially matter, empirically. First, if the probability changes in the two states, which might be true when one considers a program to obtain a risk reduction, then it does not drop out of the utility difference. In the setting above, the accident cannot happen in state 1 (as in Eq. 16), so the probability term appears only in state 2 and survives the utility difference. Second, if at least some portion of individuals in an estimating sample face different probabilities, or at the extreme, each faces his or her own subjective estimate of the probability, then the risk will probably matter in estimation. And third, one might allow for something not typically seen, so that the probability term enters the framework non-linearly.[18] To see this last case, consider the development of a discrete choice model with allowance for risk, by Cameron (2005) or, as followed below, by Riddel and Shaw (2006) [and also by Riddel (2009)]. In both studies there is an attempt to incorporate ambiguity into the discrete choice modeling framework, so, to be fair, these are not, strictly speaking, conventional EU models. In state 1, suppose that the probability of the event is zero, but in state 2 a risk function f(π) is introduced, where the function goes beyond a single linear term in the probability, as in the quadratic.
Riddel (2011) shows that this framework is still an EU model (consistent with Bernoulli risk and quadratic utility), but has an implausible utility function.[19] Income is also introduced in log form by both sets of authors, to allow for risk aversion in the income dimension (and income effects), and in state 2, Riddel and Shaw (2006) introduce the individual's willingness-to-accept compensation (A) to bear the risk increase, so that income is supplemented by the WTA. This can be written as:

V₁ = β₁X + γ lnY + ε₁

V₂ = β₂X + γ ln(Y + A) + f(π) + ε₂

[18] Another possibility is ad hoc: the probability term is constant over the two states, but is interacted with some other variable that does change between the two states.
[19] A reviewer notes that some of Cameron's suggested functional forms, not depicted here, do depart from the EU. An example is when the form includes the ratio of the mean to the variance.
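To make these two equations concrete, the implied logit choice probability can be sketched numerically. Everything below is illustrative: the parameter values and function names are my own inventions, not estimates from Riddel and Shaw (2006) or Cameron (2005).

```python
import numpy as np

# Illustrative, hypothetical parameters (not estimates from any study).
beta_diff = 0.5        # difference in the X coefficients between states
gamma = 1.2            # marginal utility of log income
d1, d2 = -8.0, 20.0    # quadratic risk function: f(p) = d1*p + d2*p**2

def f(p):
    """Quadratic risk function f(p), going beyond a single linear term.
    Note it is non-monotone: past p = 0.2 a HIGHER mortality probability
    raises utility, which is exactly Riddel's (2011) criticism."""
    return d1 * p + d2 * p ** 2

def prob_accept(x_diff, income, wta, p):
    """Logit probability of choosing state 2 (accepting compensation A = wta
    to bear the risk increase), from the utility difference
    V2 - V1 = beta_diff*dX + gamma*[ln(Y + A) - ln(Y)] + f(p)."""
    dv = beta_diff * x_diff + gamma * (np.log(income + wta) - np.log(income)) + f(p)
    return 1.0 / (1.0 + np.exp(-dv))

# A respondent with income 50 (thousand), offered compensation of 10,
# facing a subjective accident probability of 0.05:
print(round(prob_accept(x_diff=0.0, income=50.0, wta=10.0, p=0.05), 3))
```

Estimation would proceed by maximum likelihood over observed accept/reject choices; the point of the sketch is only that the probability p now enters the estimating equation non-linearly through f(p).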

If the probability function is non-linear, then the probability can indeed matter in the estimated probability of choosing between the two alternatives, but this non-linearity definitely constitutes a departure from the traditional EU framework. In both cases above, the authors essentially introduced something akin to the mean and variance of the probability, consistent with the more general mean/variance approach in risk analysis. It is precisely the intention of the two studies to allow for influence beyond the mean of the distribution. The range of uncertain subjective probabilities, or the variance of the subjective probability distribution, might be used to help determine the individual's choice between states 1 and 2, and a connection can be drawn to ambiguity. I will return to this below, but note that Riddel (2011) criticizes the mean-variance approach used earlier by Riddel and Shaw (2006) and Cameron (2005a)[20] because it introduces the possibility that higher probabilities of mortality (a "bad") generate positive utility in a portion of the range of possible values.

5.2 Environmental Risk Problems/Valuation

Risk often plays a huge role in the determination of the benefits of environmental improvement. That risk can be ecological or environmental, including cases where either affects human health. I do not include any lengthy discussion of other human health studies, such as those related to smoking cigarettes. In contrast, exposure to the sun, and its relationship to skin cancer (Dickie and Gerking 1996), is considered an environmental problem because it could be influenced by stratospheric ozone depletion. There is also a strand of literature that models behaviors as a function of risk ratings or levels (high, medium, low), and I don't spend a lot of time on those below. Risk attitudes or preferences, i.e. risk aversion, may indeed lead to differences in support for programs, and in estimates of willingness to pay. For example, Macmillan et al.
(1996) found that individuals in their sample preferred a program that reduces acid rain damages by a significantly smaller amount with certainty to a program offering a full reduction in such damages, but only with a 50% chance. Risk seekers might well take the risky chance, but those who are risk averse are less likely to do so, given a particular reward. I'll first look at a few revealed preference (RP) approaches here (there aren't that many!), and then move on to stated preference studies. As an early example of an RP approach applied in the context of risk, Rodgers (1996) considers how bicyclists respond to risk by wearing safety helmets, similar to the earlier pioneering work on automobile seat belts by Peltzman (1975). Most of the studies I know of implement a version of the SP framework to recover behavioral responses and values, but a few exceptions in natural hazard and health/environment settings consider actual behaviors.

[20] Cameron (2005a) does not in fact use a simple mean-variance model to arrive at her most interesting results.

5.2.1 Revealed Preference and Behaviors

5.2.1.1. Hedonic Studies

As an early example of an empirical revealed preference study that might be tied to risk analysis, Bernknopf et al. (1985) consider housing location decisions in conjunction with earthquake risk or safety zones. They focus on whether the housing market in California reveals decisions that line up with the expected utility framework. They deem the decision to locate farther away from an unsafe area in terms of earthquake risk as "self-insurance," even though many today would deem this a "self-protection" decision, because the safer location reduces the potential financial losses, as would insurance coverage in the event of an earthquake. The authors estimate a one-time (1978) hedonic price model with a dummy variable for presence in one of California's Special Studies Zones (SSZ) that were identified under the Alquist-Priolo SSZ act (amended in 1976, shortly before the year for which they have housing market data). Once the parameters in this empirical model are obtained, they use these, as well as facts obtained outside the model, to test the house price differential inside and outside the SSZs. Comparing the actual differential to what the EU would predict, the authors conclude that the housing market corresponds fairly well to behavior modeled using the EU decision framework. Their simple hedonic price model likely would not stand up to scrutiny by modern hedonic modelers, and they only roughly consider the role that heterogeneous risk preferences might play, but this is an important early, and somewhat rare, RP study of risk and the EU. Hedonic studies are again much in vogue today in resource economics, perhaps because of the mistrust of SP approaches by many economists, as well as other social and physical scientists. Several housing market studies have tried to make inferences about risk preferences or perceptions for floods and hurricanes (e.g. Hallstrom and Smith 2005; Naoi et al.
2009; Bin and Landry 2013). The basic intuition is straightforward here. If homes are at greater risk of damage from flooding, then, in the absence of complete insurance, and holding other factors constant, their sales price should be lower. Put another way, if monthly housing payments include flood insurance, then knowledgeable buyers are willing to pay less for a home at risk, and homes at no risk of flooding should fetch a premium. Hallstrom and Smith (2005) use changes in housing prices in association with hurricanes to try to reveal changes in risk perceptions. If hurricane risk is exogenously determined, it can be presumed to have a negative influence on home prices. However, individuals might have different preferences and risk perceptions. If a buyer has higher perceived risk, then this may soften his or her offer price as compared to a buyer with low perceived risk. Sales prices might also be influenced only slightly following a hurricane, because of mitigation measures. To control for confounding influences, Hallstrom and Smith (2005) use a "near miss" hurricane, so that no rebuilding of homes occurs. They believe that an observed decrease in

housing prices that occurs after a near miss of a hurricane is solely due to changes in risk perceptions. Bin and Landry (2013) also find some risk-related evidence using RP and housing markets. They find that the "no risk" home premiums are higher just after a flood has occurred, suggesting a spike in the perceived risks of flooding. In this way, they at least shed light on the presence of a subjective probability pertaining to market purchase behavior. However, in both of these hurricane and RP housing studies, I see no ability to actually numerically identify a particular household's subjective probability.

5.2.1.2. Recreational Fishing and Fish Consumption Advisories

Another common RP setting arises within recreation demand modeling. Anglers face risk in the form of health risks from eating contaminated fish caught at ocean and freshwater sites. The most common approach to dealing with this has simply been to use a dummy variable to designate the presence of a fish consumption advisory, and to see if this has a statistically significant (presumed negative) influence on demand for fishing at those sites. Jakus and Shaw (2003) tackled the problem differently. They used a standard discrete choice model of fishing choice to try to examine risks or hazards associated with eating contaminated fish. However, they had neither scientific estimates of the health risks of ingesting contaminated fish, nor anglers' subjective risks. They therefore used the probability of keeping a fish caught as an indicator of the anglers' perception of these risks, presuming that an angler would only keep a fish in order to eat it. They found that "keepers" and those who catch and release behave differently from one another, and have substantially different values for a reduction in contamination levels.

5.2.2. Stated Preference Studies

Several economists have used stated preference (SP) approaches to tackle environmental risk issues.
For those unfamiliar, SP approaches include the contingent valuation method (CVM), contingent behavior, and discrete choice experiments (DCE). Again, note that many older SP studies are novel, and the authors who undertook them were important pioneers in connecting risks to environmental analysis, but, at best, they may be misleading. Some claim to recover risk-related values, such as the option price (OP). Done correctly, an SP approach that obtains the OP must ask individuals their willingness to pay for a program under conditions of risk that are explained to them, eliciting the answer before the risky outcome is resolved. Based on my reading, it does not appear that this was done in some cases. Depending on the approach used and the data available, one might estimate expected consumer's surplus (ECS), option value (OV), or the option price (OP). Care should be taken: when reading older empirical papers, one might well consider whether what is really estimated using the data is in fact the OP and not something else, as these three valuation concepts are not the same.
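For readers who want these three concepts pinned down, they can be written compactly; the notation below is mine (a standard textbook formulation, not drawn from any of the papers discussed). Let states s occur with probability π_s, and let q⁰ and q¹ denote baseline and improved environmental quality:

```latex
% Option price (OP): the sure, state-independent payment, committed before
% the uncertainty is resolved, that leaves expected utility unchanged:
\sum_{s} \pi_s \, U_s(Y - \mathrm{OP},\, q^1) \;=\; \sum_{s} \pi_s \, U_s(Y,\, q^0)

% Expected consumer's surplus (ECS): the probability-weighted average of the
% state-dependent, ex post surpluses CS_s:
\mathrm{ECS} = \sum_{s} \pi_s \, \mathrm{CS}_s

% Option value (OV): the difference between the two, nonzero in general:
\mathrm{OV} = \mathrm{OP} - \mathrm{ECS}
```

A survey that in effect elicits an ex post surplus, or a payment conditional on use, therefore recovers something closer to the ECS than to the OP.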

As an example, consider an early and important effort to obtain estimates of option prices for groundwater protection by Edwards (1980). It is difficult to tell from the text of the article whether the value measure Edwards obtained is really an OP. The background description of the risks involved with contamination is certainly on the right track, but the exact wording of the valuation question, which follows the discrete choice (yes/no) format, was not provided in the paper. In addition, the conventional logit model Edwards (1980) used to estimate the probability of a yes answer does not depend on the level of risk. Thus, my guess is that some welfare measure other than the OP is actually estimated; but to be fair, Edwards was certainly ahead of his time in trying to get an empirical estimate.

5.2.2.1. Contingent Valuation

In the environmental risk literature, Desvousges, Smith and Fisher (1987) provide one of the earlier SP empirical analyses of "option price" that I know of, using the CVM. However, in their framing of the ex ante benefits question, there is no mention of the probability of outcomes. Rather, individuals are simply asked what they would be willing to pay now for future use of the Monongahela River at different levels of water quality. Their use of the term option price thus implies that uncertainty enters only because the respondent does not know whether they will actually use the river in the future for recreational purposes. At about the same time, Smith and Desvousges (1987) conducted another environmental risk study to empirically estimate the OP using the CVM, this time in the context of mortality risk from hazardous waste sites. As briefly noted above, their focus was to test whether the WTP for an equal marginal risk change (say 10%) is higher when the baseline risk is higher. Thus, each respondent in their sample is asked to value a risk change (reduction), but respondents are provided with different baseline levels of risk.
Their analysis fails to reject the null hypothesis of equal marginal risk-change values. This falls into the category of a VSL study, and I devote an entire section of this chapter to VSL studies below. More recently, several authors have slightly modified the basic EU framework and used the CVM approach to obtain valuation measures. These include the Cameron (2005a, 2005b) and Riddel and Shaw (2006) studies mentioned above. Also as noted above, the EU framework is modified so that the estimating equations are non-linear in the probability term, leading to what is essentially a mean-variance approach. Riddel (2009) modifies the basic EU framework further than these previous studies, using an induced distribution approach to simultaneously estimate elicited risks and choices related to nuclear waste disposal, while allowing for ambiguity in the stated risks. She finds that allowing stated risks to reflect an individual's ambiguity is important in estimating subjective risk: individuals who say they are certain about their risk estimate have very different estimates of average risk than those who indicate they are not certain about their estimate.

In a very recent paper, Riddel (2014) builds on literature suggesting that gamblers like a longshot bet, i.e. a chance at a low-probability but high-return prospect. She draws an analogy between this and climate change, asking whether people might be willing to pay more than expected climate change damages when they believe there is a small probability of a catastrophic temperature change outcome. Her study may really fit better under the heading of uncertainty, as opposed to risk; in fact, because much of this modeling departs quite extensively from the EU, it is discussed more below. First, before leaving the CVM behind, it is worth mentioning a popular strand of contingent valuation literature that I find to be somewhat strange or confusing.

Contingent Valuation with "Uncertain" Respondents

There have now been many papers that introduce a follow-up question to respondents of dichotomous choice or other types of standard CVM question formats (see Ready et al. 2010 for a good review of these, even though that paper pertains to stated choice and not to contingent valuation). In this follow-up question the respondent is asked how certain he or she really is about making a payment that they have just indicated they would make (or in some cases, not make). Typically, the respondent is asked how confident he or she is on a scale of 1 to 10 or 1 to 100, where 10 or 100 implies complete confidence and the lowest number implies no confidence. The implication here is that the individual is, for whatever reason, uncertain about his or her own preferences. It may not be that there is a specific source of uncertainty or risk that the decision maker can put her finger on; it is just that they cannot decide which things they prefer. Preferences are just not definite, a situation that was considered by Sileo (1995).
To proceed in this setup, there are a variety of ways in which the usual CVM responses are then "calibrated" or adjusted, depending on the confidence in the payment. The simplest is to recode to "no" all the "yes" responses in a dichotomous choice format that fall below some threshold of confidence. This obviously lowers the median or mean bids, thus reducing the value. In another, slightly different, approach, some interpret the 1 to 100 percent confidence as a factor between 0 and 1 that can be used to adjust the probabilities of choices. For example, if the probability of saying yes is 0.25 and the confidence in this is 0.10, then the "actual" probability is interpreted to be only 0.025. I find all of this to be at least somewhat odd, because this "uncertainty" or lack of confidence seems to relate to uncertainty that the respondent has about his or her own preferences. Ready et al. (2010) in fact introduce another error term in the conventional choice model in their setting, stating that this incorporates the respondent's own uncertainty. The uncertainty in this world is rather unusual, as it is not apparently based on risks or uncertainties in the conventional sense I have used throughout this chapter. If it were, I would be looking for an individual's subjective probability related to the risk source. Here, I wonder if "confidence" is

something quite different than risk or uncertainty. I suggest that the usual follow-up confidence question does not really ask the individual to state their subjective probability of making a yes choice. I believe this needs further consideration by the scholars who believe that individuals doubt their own preferences in the CVM setting. To summarize, it may well be that the uncertainty examined in these types of models involves both uncertainty about the risk and uncertainty about one's own preferences, with no way to disentangle the two.[21]

5.2.2.2. Discrete Choice Experiment Models with Risk

The stated discrete choice experiment (DCE) framework is a good deal like the dichotomous choice or discrete choice CVM framework, and it can be used to extract marginal willingness to pay from the choices. Instead of being asked to choose between support of a program (yes) or not (refuse, or say no), the more general binary DCE asks an individual to choose between alternatives A and B. Otherwise, almost all of the modeling proceeds as it does with the RUM framework above. Attributes of the alternatives are used to describe them. They are typically elements of the utility function, and they may enter the utility function linearly, as long as the attribute levels differ between alternatives. Otherwise, of course, they drop out, just as the difference between two identical constants would. Most often, the attributes, along with their levels, are presented in a simple choice matrix that an individual can inspect. For example, if alternative A provides high water quality and alternative B provides low water quality, this can be easily depicted in a choice matrix. However, numerical estimates of probability are not so straightforward, and this raises the question of where to introduce risk into the DCE analysis. In a DCE that assumes certainty, the marginal WTP for an attribute can be easily found by using the ratio of estimated coefficients.
This ties to the marginal rate of substitution (more simply, the ratio of the marginal utility of the attribute to the MU of income or cost). I think this is why so many have appealed to the DCE, as the welfare measures are quite simple to obtain. There have been several efforts to adapt the basic DCE approach, which assumes certainty, to handle some element of risk pertaining to the choice alternatives. Early CVM approaches that tried the same thing simply used words to indicate risk, such as "low, moderate, or high" risk, to describe a program. These can be recoded into dummy or ordered categorical variables. This ordinal risk level approach was tried in a DCE by Baker and Shaw (2010), applied to choice of locations in which to live, with varying levels of hurricane strike risk. Obviously this worded risk level approach is not necessarily consistent with, and does not lend itself to implementation

[21] I owe this explanation to a reviewer, who adds that this type of question framework should probably just be avoided.

within, the EU framework, as it does not involve a numerical estimate or probability, i.e. it does not introduce any probability density function underlying the risk. Other DCE efforts have included a numerical stated level of probability associated with a choice attribute. For example, in one strand of literature that purports to introduce risk, individuals see the numerical chance that the proposed attribute improvement will actually happen. The attribute is the outcome chance, and in most of the literature I have seen, this "risk" attribute is placed inside the utility function. As we have seen with the RUM above, the DCE also involves a utility difference that leads to an estimating equation. Also as above, it is perhaps consistent with the EU to have the risk-related probability show up as a right-hand-side variable in the standard choice probability equation. As an interesting example of a DCE approach with risk, consider Itaoka et al. (2006), who provide a relatively early effort along these lines. Itaoka et al. (2006) developed a DCE to examine choices among programs to provide energy using different types of fuels, including nuclear power. Nuclear power has the obvious stigma of mortality risk associated with a disaster (a leak such as at Chernobyl, or the Fukushima catastrophe).[22] One of the attributes of the choices is therefore the probability of mortality associated with the program (P), and the severity of the risk is given by the expected number of lives lost (L, i.e. the probability of a disaster multiplied by the number of deaths should a disaster occur). The authors also include lives lost in the fossil fuel sector (TL). Each choice involves a blend of energy production between nuclear and conventional fossil fuel power, with varying annual probabilities, lives lost and reductions in lives lost, along with a tax to support the risk reduction program.
In one specification the authors use, an attribute of the utility function is expected loss (P × L), which does incorporate the probability. This is somewhat consistent with the EU framework, but no risk preference parameter (e.g. risk aversion) is allowed in it. Income appears in none of the specifications: all respondents are apparently assumed to be risk neutral with respect to money income. However, note that this is possibly inconsistent with earlier statements the authors make, that the respondents seemed to be averse to risks (i.e. they don't engage in risky behaviors that involve a fair gamble). In one of the other specifications that Itaoka et al. (2006) provide, one of the indirect utility functions is:

V = β_c·Cost + β_P·P + β_L·L + β_TL·TL

[22] Interestingly, I had originally referred to Three Mile Island instead of Fukushima, but a reviewer noted that there were no deaths in the Three Mile Island incident, highlighting the nature of my own perceptions of risk and my failure to remember this correctly!

Note that the probability does not weight the entire utility function, as it would in the EU framework; rather, it is multiplied by a coefficient, as it is for the expected lives lost attribute. Thus, this form is not necessarily consistent with the EU model, although in a special case, where all of the other variables in the utility function were first weighted by the inverse of the probability, the EU form would emerge as above. I cannot think of a simple rationale for this, and the authors do not motivate their equations using theoretical derivations, as do, for example, Riddel (2009, 2011). A second possibility for motivating this is that one is implicitly assuming something akin to separability. An additively separable utility function in the certainty context can be written as U = Ua(x) + Ub(y), where there are two distinct parts (Ua and Ub) added together. Clearly, the marginal rate of substitution between x and y is of a very restrictive form in this case. Using something like this in the context of the expected utility function [EU = pVa + (1 − p)Vb], so that the utility function is partitioned into two parts, a and b, where only the first part involves the probability weight:

EU = pUa + Ub = β_P·P + {β_c·Cost + β_L·L + β_TL·TL}

This is similar to the Itaoka et al. (2006) utility function above, and it means of course that the probability weight in the expected utility has no influence on the entire group of second terms, as if they all involve only certainty; and of course, as in all linear and additive models, there are strong implications for substitution. In essence, the terms in brackets on the right are not influenced by changes in probabilities. That makes little sense here, as the lives lost would seem directly related to the risky probability. A difficulty also arises when one tries to use the ratio of coefficients to obtain the WTP with the specifications above. What does (∂V/∂P)/(∂V/∂Cost) mean in this form?
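To see the restriction another way, compare the partitioned form above with a sketch (my notation, not the authors' model) in which the probability weights the entire conditional utility, as the EU requires:

```latex
% Linear conditional utilities: cost and fossil-fuel deaths are common to
% both states, and the loss L occurs only in the disaster state (probability P):
EU = P\,[\beta_c \mathit{Cost} + \beta_{TL}\mathit{TL} + \beta_L L]
   + (1 - P)\,[\beta_c \mathit{Cost} + \beta_{TL}\mathit{TL}]
   = \beta_c \mathit{Cost} + \beta_{TL}\mathit{TL} + \beta_L (P \cdot L)

% The probability then enters only through the expected-loss term P*L; there is
% no free-standing \beta_P P term, and a marginal value of a risk change follows
% from (\partial EU/\partial P)/(\partial EU/\partial \mathit{Cost}) = \beta_L L / \beta_c.
```

Against this benchmark, the free-standing β_P·P attribute in the estimated specification has no clear EU interpretation, which is one reason the coefficient-ratio WTP is hard to interpret.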
The meaningful welfare measure in the context of risk, as I have discussed above, is the OP. The survey used to collect the data for the Itaoka et al. (2006) study was conducted in Japan in 2003, where there may still be an obvious negative stigma attached to anything having to do with nuclear energy (in relation to the bombs of World War II), so the design considered whether there was a labeling effect (i.e. the authors consider whether fear, when associated with the word nuclear, would inflate WTP over and above the other attributes associated with nuclear power production). The authors find evidence of a "baseline" risk effect: they suggest that the WTP is sensitive to hypothetical versus actual baseline mortality rates. The underlying idea, as in Smith and Desvousges's (1987) seminal empirical analysis, is that the WTP for a risk change should be higher for a higher actual baseline risk. The study also finds evidence that subjects tend to ignore

the probability of the loss and focus instead on the number of people lost, which the authors deem "disaster aversion." The average WTP for reducing deaths from a nuclear disaster is 60 times the WTP for fossil-fuel generation deaths. Both are WTPs for a reduction in public deaths, not private ones. The authors do not control for background risk, nor do they carefully model possible altruistic behavior. Also of interest, they asked subjects whether they believed the baseline probabilities presented to them in the attribute amounts. Seventeen percent did not believe them. This disbelief is a problem in presenting attribute levels that characterize risk when respondents might think that risks should be different, but Itaoka et al. (2006) do not discuss how they dealt with this issue. More recently, Glenk and Colombo (2011, 2013) and Rolfe and Windle (2015) also introduce an element of risk or uncertainty in the DCE context. These studies inform the subjects of the degree of certainty regarding the "delivery" of the attributes. In some cases, particularly those involving promises of future outcomes, the subjects might believe that the guarantee of the outcome is suspect, so these authors include outcome uncertainty as part of the description of the alternative or of a specific attribute level. These studies are billed as testing whether risk matters in the choices. For example, Glenk and Colombo (2011) estimate different versions of models with the chance that a climate change program will deliver benefits included in the utility function, versus this chance being absent. The results are mixed, as is the degree to which these authors' models conform to a strict EU type of model. Similarly, Rolfe and Windle (2015) include the level of certainty as a choice attribute in their programs to improve environmental quality at the Great Barrier Reef [GBR] of Australia (i.e. there is a 50% or 80% chance that improvement will actually happen).
The choice matrix is reproduced below.

As with the approach taken by Itaoka et al. (2006), one of their model specifications includes a standard utility function, but with the expected value of the improvement (the amount of GBR improved multiplied by the chance the improvement will happen) as an argument. The authors deem this to be the appropriate form of the expected utility function (their equation (2)), but do not demonstrate this theoretically.23 The particular modelling specifications range from ad hoc to something that could be generated in the strict EU framework, and the implied values for the risky improvement vary. Rolfe and Windle (2015) explore some of the unexpected findings further by estimating a latent class model of preferences. Oddly, they find that one of their latent classes has negative preferences for more certainty, which contradicts the usual certainty effect found in the literature. They suggest that this could be because this group finds the high levels of certainty depicted in the choices unrealistic. This is quite possible, but the authors should then probably have tested somehow for scenario rejection or attentiveness to particular attributes. There has been a fairly recent explosion of similar efforts to incorporate risks into the DCE framework. Thiene et al. (2015) implement much the same framework as the models above, but they use subjects' stated subjective risks in their choice model, elicited in the same survey that asked respondents to make choices involving landslide risk reduction programs. I expect others in environmental economics will soon begin to elicit subjective risks for such purposes, but I leave further discussion of that innovation until the last section of this chapter. In any case, clearly much more needs to be done to carefully use DCE while incorporating risk.
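The risk-weighted ("expected value") attribute specification discussed above can be sketched in a few lines. This is a minimal conditional-logit illustration, not Rolfe and Windle's (2015) estimated model: the parameters `BETA_EV` and `BETA_COST` and the three alternatives are entirely hypothetical.

```python
import math

# Hypothetical utility parameters (illustrative only, not estimates
# from any of the studies discussed above).
BETA_EV = 0.08      # marginal utility of the expected improvement
BETA_COST = -0.01   # marginal (dis)utility of the payment

def utility(amount_improved, chance, cost):
    """Deterministic utility with the risk-weighted attribute: the
    amount improved is multiplied by the chance the program delivers."""
    return BETA_EV * (chance * amount_improved) + BETA_COST * cost

def choice_probs(alternatives):
    """Standard conditional-logit choice probabilities."""
    exps = [math.exp(utility(*alt)) for alt in alternatives]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical alternatives: (area improved, delivery chance, cost)
alts = [(0, 1.0, 0),     # status quo: no improvement, no cost
        (30, 0.5, 50),   # 30 units improved with a 50% chance, $50
        (30, 0.8, 80)]   # same improvement with an 80% chance, $80
probs = choice_probs(alts)
```

With these (made-up) parameters, the higher-certainty alternative carries the higher expected improvement and so draws the larger choice probability, which is the qualitative pattern such specifications are designed to capture.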
5.2.2.3 Values for Risk Reductions in Non-EU Models

Some non-EU models are attempting to introduce uncertainty or ambiguity, and these are discussed below, but here I mention a few valuation models that involve risk in a non-EU framework. Note that the risk-related value or welfare measures are different under the EU and the non-EU models (see Jindapon and Shaw 2008), though this has hardly ever been discussed (a few theoretical papers have considered the nature of the risk premium in EU versus non-EU models). As mentioned above, both Cameron (2005a) and Riddel and Shaw (2006) develop models that depart from the more familiar-looking models featuring utility functions that are linear in risk, although theirs are not necessarily "non-EU" models. Using models that are quadratic in risk, they each estimate WTP (actually WTA in the case of Riddel and Shaw's paper) using the CVM approach. But neither discusses how their valuation measures might differ had they stuck with the conventional, linear-in-probabilities, utility functions.

23 If one assumes a standard utility function depends on a dummy variable = 1 if the program happens, multiplied by the amount of GBR improved, and also lets the dummy variable have an associated probability, then it would be possible to derive the form they have in an EU framework.
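The argument sketched in footnote 23 can be made explicit. Using purely illustrative notation (the symbols below are mine, not the authors'), let $D_j \in \{0,1\}$ indicate whether alternative $j$'s program is actually delivered, $G_j$ the amount of GBR improved, and $C_j$ the cost:

```latex
U_j = \beta\,(D_j \cdot G_j) - \gamma\,C_j + \varepsilon_j,
\qquad \Pr(D_j = 1) = p_j .
```

Taking the expectation over delivery $D_j$ gives

```latex
\mathbb{E}_{D_j}[U_j] = \beta\, p_j G_j - \gamma\, C_j + \varepsilon_j ,
```

so the risk-weighted attribute $p_j G_j$ emerges as an EU specification, but only because utility is assumed linear in the delivered improvement.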

Roberts et al. (2008) use a non-linear probability weighting function (pwf) model to predict "WTP" estimates in a DCE framework, finding large differences between these and welfare measures under certainty, as one might expect, but also differences from a linear probability weighting model that would be potentially consistent with EU. There is no systematic relationship: in some instances the non-EU welfare measures are smaller than those from the EU model, in others larger. In all cases the non-EU measures are larger than in the certainty model. Roberts et al. (2008) offer no careful interpretation of their valuation results and anomalies. However, inspection of their graph of WTP versus the probability of an algal bloom suggests some interesting features. In theory, the linear pwf model should coincide exactly with EU, so predictions would be the same. In the conventional EU framework, the WTP (ideally the OP measure of welfare change under risk) should increase smoothly and proportionally with the magnitude of the risk reduction, as noted earlier in this chapter. Their graph of the linear pwf relationship is a straight line, decreasing with a higher risk of the algal bloom, consistent with this. The non-EU graph they depict is highly non-linear, but also generally decreasing with the increase in the probability of the algal bloom. This paper is discussed a bit more below, under the heading of "uncertainty" and the environment.

5.2.2.4. Combined SP and RP Studies and the EU framework

Hardly any study uses both SP and RP approaches under conditions of risk and compares them. In an exception, Kennedy (2002) estimates both a revealed preference and a stated preference model of radon risk reduction valuation. Radon affords one of the interesting opportunities for work in environmental and health economics to come together. For the RP analysis, Kennedy uses data on remediation behaviors at several prices.
Several such remediation methods are available that allow a reduction in radon risk in one's home. She mentions using subjective risk estimates in modeling the probability of taking remediation actions, and finds these to have a positive and significant impact on this probability, but offers no details about how they are elicited. The implied value of a statistical life (VSL) based on subjective probabilities is also compared to one based on objective or science-based probabilities: she finds that the WTP per life-year gained is larger for the subjective risk estimates that individuals provide. Petrolia et al. (2013) also combine RP data on flood insurance demand with SP-type data obtained in surveys. Using housing market data and science-based floodplain risks, it is possible to determine whether market transactions appear to overestimate flood risks, but to my knowledge it is not possible to recover subjective probability estimates for each house from this. Flood insurance demand is also a market demand that can suggest discrepancies between science-based probabilities and subjective ones. Petrolia et al. (2013) combine data on flood insurance purchases with at least some field-experiment-style data collected at households to obtain risk preferences and subjective probability estimates. The risk perception questions do not recover numerical estimates of probability; instead the responses are categorical or ordered

assessments of the expected number of future hurricanes. However, the authors also elicit risk preferences (the degree of aversion) by using a real-money lottery trade-off approach. They conclude that subjective risk beliefs matter a good deal in insurance purchases. Next, I devote a separate section to the valuation of mortality risk reductions, which leads to the value of a statistical life (VSL) literature, focusing again on environmental risks. The VSL literature gets its own section because it is so vast, and the topic is so popular among, and important to, resource economists.

5.2.3. Valuation of Mortality Risk Reductions and VSL

To begin here, as mentioned above, a huge amount of the effort by environmental economists who research risk is devoted to finding the VSL. Why? Because the VSL is very often used as a unit of measure in benefit-cost analysis for large-scale programs with the goal of environmental improvement (e.g. air pollution regulation), and in many such programs, human health values dominate the benefits side of the ledger. For example, in analyzing the benefits of the Clean Air Act (and amendments), the U.S. Environmental Protection Agency's VSL of about $6 million accounts for about 80% of the benefits from the regulation (Alberini 2005). WTP studies typically obtain a VSL ranging from $1 to $10 million (see Salanié and Treich 2009), and this is the range often referred to in the literature (see, for example, Michel-Kerjan et al. 2013). Whether it is Viscusi's (1993) average of $1.6 million (in 2000 dollars), or $3 million (see a variety of estimates in Blomquist 2004), $10 million, or even some much higher (or smaller) number, the estimate of the VSL can sway policy-making decisions that involve saving lives, so it is one of the things that environmental economists do that gets the most public attention, and it definitely involves risk, and thus is worth particular mention here.
Value of a Statistical Life

When lives are saved by an environmental or health-related regulation or policy, this can have enormous economic benefit, even though this is quite controversial and offensive to some who believe that a life is precious or "invaluable," and that we economists are assigning values to a specific individual's life. This is perhaps our fault, because we have chosen a poor, or at least misleading, set of words for the concept in the VSL, so that the public misunderstands what we are doing. For example, a prominent scholar in the SP modeling of risk reductions begins her paper with the following: "The VSL is the value at which people are prepared to trade off income for a reduction in their risk of dying" (Alberini, 2005, p. 783). This is phrased pretty carefully, but still perhaps subject to misinterpretation because of the word "their" (emphasis mine, not Alberini's). Modern SP approaches typically try to avoid any

implication that it is known exactly whose risk is being reduced, although of course if the risk reduction affects the entire population, then it also reduces one's own risk of dying. Cameron (2010) has suggested that we be very careful when diving into the VSL world, and in fact she recommends that we adopt different words than VSL, so that members of the public do not believe we are trying to value their grandmother's life in preparation for a decision on whether to use a death squad to get rid of her at some critical point in the aging and health profile. VSLs are of course inferred from estimates of the value of a small reduction in mortality risk that accompanies some program or policy change. This risk reduction often affects an entire population, or at least some larger portion of the population than a single grandmother. Each person in the population faces the slightly reduced risk, resulting in some lives being saved, and naturally, our valuation methods do not identify who is to be saved. If we believe that individuals have an implied WTP of $400, on average, for a reduced risk of 2 in 10,000, then the implied VSL is $2 million. Theoretically, this can flow from a standard equation based on the tradeoffs between safety and consumption or income. Researchers should be cognizant of economic theory under risk, and test whether the resulting empirical models are consistent with this theory. For example, our theory says that the welfare measure should be larger for larger risk reductions. But for some hypotheses the theory does not offer clear guidance. Should the VSL differ by the age of the respondent, for example, and if so, in what direction?24 Should the VSL estimate be sensitive to the assumed probability distribution for the welfare measure? Should people in poorer health be willing to pay less or more for a risk reduction than healthier people? Should the VSL be higher when the cause of death is cancer, i.e. is there a "cancer" premium?
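The back-of-envelope VSL arithmetic described above (mean WTP for a small risk reduction, divided by the size of that reduction) is simple to verify:

```python
def vsl(mean_wtp, risk_reduction):
    """Implied value of a statistical life: mean WTP for a small
    mortality-risk reduction, divided by the size of that reduction."""
    return mean_wtp / risk_reduction

# The worked example from the text: a mean WTP of $400 for a
# 2-in-10,000 risk reduction implies a VSL of $2 million.
implied_vsl = vsl(400.0, 2 / 10_000)
```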
In a fairly recent paper, Hammitt and Treich (2007) also ask whether allowing for heterogeneity in risk matters: they conclude that it does, and that the effect of this heterogeneity depends on whether the information relates to the change in risk or to the baseline risk that a person faces. Several studies attempt to answer the questions above. For example, Alberini et al. (2002) examine the effect of age on WTP and find no significant effect in their sample of North Americans (Canada and the U.S.). Magat, Viscusi and Huber (1996) find no significant difference between the WTP to avoid a cancer-related death and deaths caused by automobile accidents.

24 It has been argued that elderly people have fewer years remaining in their own lives and less to gain via risk reduction programs, and thus their VSLs should be lower. But there is no clear support for this in economic theory, because the value of each remaining year could be higher, i.e. the "life is precious" effect.

A particular value estimate, and thus the VSL, can stem from an SP or RP study. SP studies have been amply covered above, but in reviewing some here, I focus on specific issues related to mortality risk reductions, and also confine the exploration to deaths that can be caused by environmental problems. Several environmental contaminants and problems can cause disease and illness leading to death. Borrowing from the natural resource damage assessment (NRDA) literature for the U.S., hazardous substances include the heavy metals (arsenic, copper, cadmium), organics with carcinogenic properties (e.g. dioxin), particulates and other substances associated with air pollution (e.g. PM10, carbon monoxide), and substances found in drinking water and soils (radon, for example). RP studies that provide VSL estimates are dominated by differential wage/labor studies, i.e., studies that consider wage differences for jobs with different mortality risks. Viscusi and Aldy (2003) provide a thorough review of this labor and hedonic wage literature up to that time, and Cropper, Hammitt and Robinson (2011) update it through more recent studies, so it will not be repeated here. RP studies can also include observable averting behaviors (see Blomquist 2004), or market consumption decisions that reduce risks. For example, one can choose a residential location close to, or farther away from, risks, and Gayer et al. (2000) use housing markets near Superfund sites to estimate the value of saving a life from cancer: they estimate it to be about $4.3 to $5 million in 2000 dollars (see discussion in Blomquist 2004). Consumers also trade income for safety via higher-priced, and presumably safer, products (e.g. automobiles). SP studies that lead to VSL estimates are still growing in number (see Dekker et al. 2011), and include the usual and popular contingent valuation approach (see Alberini, 2005) as well as, more recently, the DCE approach (e.g., Tsuge et al. 2005).
The usual criticisms related to hypothetical scenarios pertain, as in all SP modeling, and there are other issues unique to assessing reductions in mortality risk. For example, to be useful in an EU or non-EU quantitative framework, the numerical change in probability indicating the risk reduction has to be effectively communicated to the survey respondent (Corso et al.). Several studies have shown that respondents absorb risk information much better when natural frequencies are used (i.e. "there are 20 cases out of 1,000,000"), as opposed to fractions or decimal representations of objective probability (see Hoffrage et al. 2000). Ideally, as researchers we would like to know whether the respondent adjusts the risk information via probability weighting of some sort, i.e., whether risk perceptions are used by the respondent (e.g. see Vassanadumrongdee and Matsuoka 2005). After all, in studies where people are told science-based probability estimates of death due to some cause, a common finding is that respondents do not digest and adopt them (e.g. Riddel and Shaw 2006; Riddel 2009; 2011). This problem can be exacerbated in studies where respondents are allowed to provide a subjective baseline risk or probability estimate. As these may differ considerably across individuals, the research design

can be enhanced accordingly. A common approach uses a fixed proportional change in risk geared to the respondent's own baseline; thus, the magnitude of the risk reduction is higher for people with a higher baseline (i.e. if the baseline is 25% versus 10% and the fixed proportional risk reduction is 20%, then the magnitude of the change is 5% and 2%, respectively). This allows examination of a type of scope test (Alberini 2005), or of whether individuals respond to absolute versus relative risk changes. Second, the details of the cause of death might be important,25 and third, the respondent needs to believe that the risk reduction can actually happen. Dekker et al. (2011), echoing a few others' previous work, conclude that a VSL is not a VSL, is not a VSL... i.e., the risk context matters. As a specific example, they find that VSL estimates from the arena of road safety improvements should be multiplied by 1.8 to arrive at an appropriate VSL estimate for valuing lives saved via improvements in air quality. SP studies should be examined for internal validity, i.e., to check whether the theoretical and empirical models are consistent with each other. Despite possible problems, SP studies do have some advantages over RP studies. The first obvious one is the ability to assess values and public support for a risk reduction policy or program that has not yet happened. A second advantage pertains to the fact that many hedonic wage studies are not really examining chronic health issues that lead to death; rather, they are focused on accidental death comparisons between jobs (see Cropper, Hammitt and Robinson 2011). In contrast, the SP approach can be used to describe specific types of risk or causes of death, and respondents can be presented with different timing of the risk reduction. For example, Tsuge et al. (2005) use the type of risk (accident, cancer, heart disease) as a choice attribute in their DCE.
Their respondents (from Japan) showed the highest preference for alternatives that would reduce cancer risk. A Likert-scale question also showed that respondents associated more pain and fear with cancer than with heart disease or an accidental death. Respondents also showed a preference for the risk reduction happening sooner rather than later. A third advantage of SP approaches over hedonic wage approaches to estimating the VSL pertains to the fact that risks might be endogenous and that there is sorting taking place over jobs, which means that econometric analysis leading to a WTP might be flawed when using labor-market-based data (also covered, ibid). Recent meta-analyses of VSL estimates suggest that SP studies result in significantly lower estimates of the VSL (e.g. see Dekker et al. 2011, who use 26 SP studies in a meta-model to examine the role of risk context) than do RP studies (e.g. see Viscusi and Aldy 2003). Further exploration of why this is the case is needed.

25 Vassanadumrongdee and Matsuoka (2005) find that dread associated with cancer from air pollution leads to different risk perceptions than deaths associated with traffic accidents, but that ultimately this difference does not lead to pronounced differences in the VSL estimate.
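The proportional risk-reduction design described earlier (a fixed proportional cut applied to each respondent's own baseline, so that the absolute change differs across respondents) reduces to simple arithmetic:

```python
def absolute_reduction(baseline, proportional_cut):
    """Absolute risk change implied by a fixed proportional reduction
    applied to a respondent's own baseline risk."""
    return baseline * proportional_cut

# The example from the text: a 20% proportional cut applied to
# baselines of 25% and 10% yields absolute changes of 5% and 2%.
high_baseline_change = absolute_reduction(0.25, 0.20)  # 0.05
low_baseline_change = absolute_reduction(0.10, 0.20)   # 0.02
```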

Alberini (2005) also points out that SP approaches can target segments of the population that are of particular interest or who will be most affected by a program, such as the elderly. The usual labor/wage study covers only people who are still in the labor force, and is obviously less likely to include the elderly. Whether SP or RP, well-specified models show that WTP for a risk reduction, and hence estimates of the VSL, depend on a host of factors, including the marginal utility of income and/or wage rates, age, the use and cost of time, and a suite of self-protecting behaviors. Timing of risks may indeed be quite important, but this is difficult or impossible to address in labor/wage models that use only cross-sectional wage differential data. Many economists (see the review in Blomquist 2004) have suggested that life cycle models are richer ways of considering the factors that lead to different VSLs. For example, one of the more controversial recent explorations involves whether WTP for risk reductions, and the VSL, decline with age and latency. The latter pertains to the period of time before the risk declines. Johansson (2002) finds no relationship between age and WTP, and this issue is regularly investigated in ongoing research today. Thinking about the VSL again raises an important point in valuation: whose life is being valued? More generally, a key question is: is the risk borne personally when a person makes a valuation-related decision (SP or RP), or is it borne by someone else in the public? This of course relates to the private versus public goods issue. Some researchers have decided that it is best to focus the VSL issues on one's private, or personal, risk, as including public protection introduces the possibility that altruistic values confound values for risk reduction. However, many studies nevertheless provide risk reduction values for public protection (e.g. Adamowicz et al. 2010).
Jones-Lee (1991) and several others have considered whether altruism plays a role in the theoretical model that leads to the WTP for saving lives. In her study mentioned above, Kennedy (2002) also estimates the VSL, and does something rather similar to the earlier radon study by Åkerman et al. (1991), recovering the "WTP" from the discrete choice model of remediation. Details on estimation of the WTP from the model are scant, but she suggests she is using the mean (not the median), which could be recovered from the CDF for the probability of a yes decision (i.e. the decision to remediate). She converts a WTP relating to radon risk remediation, roughly 700, into a VSL by dividing the WTP by the average subjective risk reduction for the sample (about 0.004), yielding roughly 13,000 per life-year gained by remediation.

Newest Issues in VSL Modeling

Several issues have emerged in VSL modeling in recent years. First is the debate over whether it would be best to focus on the value of lives saved, or the value of additional years of life saved, where the latter leads to the value per statistical life-year (VSLY) (e.g. see Hammitt, 2007). If a

program saves lives for people who have shorter life expectancies (older, or infirm), then the VSL will likely lead to a larger benefits calculation than the VSLY does, and thus some government agencies have recommended smaller VSLs (or, correspondingly, larger VSLYs) for older people when they are the group primarily impacted by a policy (e.g. air pollution control). When using SP approaches to calculate details relating to the VSLY, it becomes necessary to ask different questions than those typically asked in a standard VSL context. For example, it might be of interest to ask WTP questions pertaining to reducing mortality risks for five people in the age group between 50 and 60, versus a similar program that affects two people between the ages of 20 and 30. Another interesting development considers VSL estimates in non-EU models that allow for some degree of uncertainty or ambiguity, rather than risk (e.g. Riddel and Shaw 2006; Treich 2010). Further discussion of a few of these studies is postponed to the section (5.4) on empirical modeling of environmental problems under uncertainty, below. Next, I consider risk within the context of resource extraction.

5.3 Risk in Natural Resource Problems

The primary natural resource problem that comes to mind is optimal extraction of a nonrenewable or renewable resource. The discussion above noted that the optimal control literature can be considered as an EU problem, under risk, leading to the stochastic optimal control and dynamic programming models. There are many, many studies that examine price risk in energy markets (e.g. Sadorsky 2001). A recent example is the study by Henriques and Sadorsky (2010), which finds that when a company becomes more environmentally sustainable, it can lower its exposure to fluctuations in energy prices. The influence of energy price exposure on a company's stock market returns is measured via oil price volatility (for which the authors have daily data).
In an early natural-resource-oriented paper that involves risk (not really uncertainty, as the title of their article implies), Bockstael and Opaluch (1983) derive a discrete choice model of fishing supply alternatives (target species) for commercial fishermen. What is interesting is that the authors are very clear on their derivation of the probability equation that pertains to discrete choices under certainty, and on the quite different derivation of the probability equation when risk is present for the fisherman. Inspecting these two equations reveals some similarities that result from the limited number of choice alternatives in the discrete choice realm, but the introduction of random returns to fishing investments leads to many quite different terms in the two probability equations: the latter has expected returns and variances that flow from a Taylor series expansion of the utility function under conditions of risk. The basic risk-in-natural-resource-extraction equation incorporates risk by making relevant parts of the problem stochastic, via a standardized Wiener process. For example, in a fairly early

"stochastic" paper, Mueller (1989) states that the instantaneous change in reserves for a resource firm is26:

dR(t) = −Q(t)dt + H(t)dt + σ_R dZ_R(t)

where R(t) is proved reserves at time t, Q is extraction at time t, H is the amount added to reserves through development or exploration, Z_R is the standardized Wiener process, and σ_R is a constant. The standardized Wiener process is also known as Brownian motion. Its increments are stationary, with:

E{Z(t)} = 0; E{Z²(t)} = t; cov{dZ(t), dZ(s)} = 0 for t ≠ s; and Z(0) = 0

The math and assumptions here mean that incremental changes due to the stochastic process (dZ(t)) are normally distributed with mean zero and variance dt, and that dZ_R(0) = 0. This, Mueller (1989) notes, means that the increments are consistent with "white noise." Thus, unlike other models with at least some structure inherent in the risky outcomes, in this simple setup there is little causality attributed to the randomness, although Mueller suggests it can be related to unknown future reserves (geological uncertainty). Movements in the spot price (P) and futures price (FP) away from their initial values (indicated by the subscript "o") are given by:

dP(t) = P_o σ_p dZ_p(t)
dFP(t, t*) = FP_o σ_FP dZ_FP(t)

These equations imply that a resource-extracting firm facing exogenous prices knows them at time t, but faces increasing risk about them as time extends into the future. Additional assumptions are typically made about the correlation between the spot and futures prices at time t: Mueller (1989) assumes they are positively correlated, but the covariance itself is not a function of time. Using the above framework, solutions dictate that the paths for the spot and futures prices will be functions of the initial prices, the variances, and the Wiener processes, i.e. they will be inherently stochastic: a random walk. The framework links to a stochastic optimal control problem, which can be solved using a variety of approaches.
One is the stochastic dynamic programming approach, involving the Hamilton-Jacobi-Bellman equation (e.g. Pindyck 1980). An alternative, used by Mueller (1989), is the stochastic maximum principle (e.g. Bismut 1975).

26 Note that Mueller's title is misleading given my definition used throughout this chapter, as his paper is about incorporating risk, not uncertainty.
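The reserve dynamics above can be simulated directly. The following is a minimal Euler-discretization sketch, not Mueller's (1989) model: extraction Q and development additions H are held constant purely for illustration, and all parameter values are hypothetical.

```python
import random

def simulate_reserves(r0, extraction, additions, sigma, dt=1.0,
                      steps=50, seed=1):
    """Euler discretization of dR = (H - Q)dt + sigma dZ, the reserve
    dynamics above; the dZ increments are drawn as N(0, dt)."""
    rng = random.Random(seed)
    path = [r0]
    r = r0
    for _ in range(steps):
        dz = rng.gauss(0.0, dt ** 0.5)  # Wiener increment
        r += (additions - extraction) * dt + sigma * dz
        path.append(r)
    return path

# Hypothetical firm: 1,000 units of proved reserves, extracting 12 and
# adding 10 per period, with geological noise sigma = 5.
path = simulate_reserves(r0=1000.0, extraction=12.0, additions=10.0,
                         sigma=5.0)
```

Setting `sigma=0` recovers the deterministic drift (reserves fall by 2 units per period here), which is a convenient sanity check on the discretization.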


Let utility be a function of profits, let E be effort towards development, and suppress the (t) notation. The stochastic Hamiltonian is then:

𝓗 = U(profits)e^(−rt) + λ_R[H(E) − Q] + (1/2)σ_R² Φ_RR + (1/2)σ_p² Φ_pp + (1/2)σ_FP² Φ_FP,FP + Φ_p,FP cov(dp, dFP)

where λ_R is the co-state variable on reserves and the Φ terms are second derivatives of the value function with respect to the stochastic states. Denote profits below as pr and C as cost, where C_Q is the marginal cost of extraction, and D_E is marginal development cost. Solution yields the necessary conditions:

e^(−rt) U_pr (P − C_Q) − λ_R = 0
e^(−rt) U_pr (−D_E) + λ_R H_E = 0
e^(−rt) U_pr FP(m) = 0

In addition to these necessary conditions, differential equations result that describe movement over time, or motion, and there are the usual initial conditions and transversality conditions (the above has a free endpoint and terminal time, and is an infinite horizon problem). Few economists use a model like the above and still manage to do very much that is empirical. One fairly recent exception is Conrad and Kotani (2005). They consider the issue of the timing of drilling for oil in the Arctic National Wildlife Refuge (ANWR), which they model as primarily a function of risky oil prices. Two stochastic processes are considered: geometric Brownian motion (GBM) and a mean-reverting (MR) process. As is common in natural resource problems, they make a variety of assumptions about some of the unknowns in the modeling to allow them to proceed. For example, they really do not know the in-situ value of ANWR (their "amenity" value), so they assume it has a per-household WTP of $2 to $3 per year, which can be used to generate the annual loss in benefits if ANWR were to be developed. What is perhaps surprising is that the trigger price based on their model and assumptions is in the range of $20 to $30 per barrel of oil. That price has of course been exceeded in recent years, and ANWR remains undeveloped as I write this, but the authors do not offer much in the way of explaining the difference between their modeling result and what we observe in the real world. Groundwater mining, and the investigation of groundwater contamination by sampling or drilling, are quite similar in their economics to oil drilling. Forsyth (1997) explores the specific issue of when it is time to stop investigating potential groundwater contamination, applying the Variable Sample Size Sequential Probability Ratio Test (VPRT) of Cressie and Morgan (1993). The economics of testing relates to what may be unknown probabilities of the hazard.
Naturally, if the subjective estimate of the risk of the hazard is very small, a regulatory agency's inclination to test is probably going to also be quite small. The theoretical model incorporates a regulatory agency's subjective beliefs, updated using Bayesian

analysis that relies on the information obtained in testing. Forsyth (1997) empirically applies the VPRT model, which sequentially determines the optimal amount and location of drilling, to contamination issues at the Savannah River Site, a nuclear waste production facility in South Carolina. Her model allows examination of two restoration options and the expected value of site investigation, all estimated using numerical simulation. A key result is that when the cost and benefit of remedial action are close together, the range of hazard probabilities for which testing is economical becomes narrower. Many studies have examined some element of risk in the ocean fishery. That risk might pertain to any number of random variables affecting the harvest, or the profit from the harvest, of one or more species. McKelvey et al. (2003) consider a competitive harvesting game with incomplete information that arises from instabilities in the ocean. These in turn arise from environmental changes, possibly relating to climate shifts, and they affect the stock-recruitment relationship. In their simple model, there are two players (countries), and the randomness concerns the proportion of the total stock that each will get during the season. As the season progresses, more information is obtained by each country, but decisions must be made at the start of the harvest season, based on imperfect information regarding the share each will have of the stock. As in many stochastic resource settings involving games, McKelvey et al. (2003) obtain interesting results primarily by conducting simulations of game outcomes. Finally, as most readers will know, in forest management the optimal rotation is the optimal period over which to grow and then harvest a timber stand. Several researchers have considered the role that fire risk would play in the optimal rotation of timber.
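The qualitative effect of fire risk on the rotation can be illustrated with a minimal Faustmann-style numerical sketch. Here fire is assumed to arrive as a Poisson process that destroys the stand completely, which (under full-loss assumptions) acts like an addition to the discount rate; the growth function and all parameter values are hypothetical, chosen only to make the comparison visible.

```python
import math

def land_value(T, r, lam,
               growth=lambda t: 100 / (1 + math.exp(-0.3 * (t - 20)))):
    """Faustmann-style bare-land value for rotation length T when fire
    arrives as a Poisson process with rate lam and destroys the stand;
    full loss makes this equivalent to discounting at r + lam.
    growth(t) is a hypothetical S-shaped stand-value function."""
    rho = r + lam
    return growth(T) * math.exp(-rho * T) / (1 - math.exp(-rho * T))

def optimal_rotation(r, lam, horizon=200):
    """Grid search for the rotation length maximizing land value."""
    return max(range(1, horizon), key=lambda T: land_value(T, r, lam))

no_fire = optimal_rotation(r=0.03, lam=0.0)
with_fire = optimal_rotation(r=0.03, lam=0.05)
# with_fire < no_fire: fire risk shortens the optimal rotation
```

The comparison reproduces the qualitative result discussed in the text: a positive fire arrival rate shortens the value-maximizing rotation.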
As an example, Reed (1984) develops a theoretical model and illustrates that, under certain assumptions, the presence of fire risk shortens the optimal rotation. However, few researchers have implemented an empirical model, especially one that considers amenity values in a multiple-use context. Englin, Boxall and Hauer (2000) integrate an amenity value to recreational users of standing forests in the Canadian Shield region. When no recreational users are introduced into the problem, they find the usual relationship between fire risk and optimal rotation: as the probability of fire increases, the optimal rotation period shortens. But interestingly, for any given probability of fire, increases in recreational use actually lengthen the optimal rotation period because of the amenity value to those users.

5.4 Environmental Uncertainty and Non-EU Problems/Valuation

If one counts laboratory economics experiments as empirical work, and I do, then there are countless empirical studies of uncertainty with respect to choice-making, but almost all have to do with drawing balls from urns or some similar stylized choice, as Ellsberg (1961) originally proposed. These were surveyed some time ago by Camerer and Weber (1992). Oechssler and

Roomets (2015) provide a very recent survey focused on Ellsberg "urn experiments," finding that in the majority of them, slightly more than half of the subjects studied appear to be ambiguity averse. Oechssler and Roomets (2015) offer a new "mechanical ambiguity" approach by using a clever device called a Galton Box (or a "bean machine") and also find ambiguity aversion in similar proportion to strategic ambiguity, although their subjects fall into the low end of the studies they survey in the literature, in this regard.27

Several empirical studies consider relaxation of the strict EU framework, while still allowing recovery of predictions of behavior or values. Though parts of these studies might fit best in this section on non-EU approaches, Cameron (2005a) and Riddel and Shaw (2006) have already been discussed above; both are CVM studies that, for some functional forms, depart from the more familiar EU framework. Cameron (2005a) considers the effect of the mean and variance in subjective probability distributions on WTP for mitigating climate change, doing so with rather unusual functional forms. In her study the WTP is increasing in the mean of the distribution and decreasing in the variance: her subjects are risk averse and thus willing to pay less for mitigation when the benefits of doing so are uncertain. However, she does not find this result using a simple mean-variance model; rather, her functional form interacts the mean and the inverse of the variance.

As mentioned above, Riddel (2014) also considers a CVM application in the context of climate change uncertainty. Previous CVM studies involving climate change mostly assume certainty in future impacts, but many scientists have characterized what we know about future climate change impacts as being "uncertain," in the sense that I have used the word throughout this paper (e.g. see Meinshausen et al. 2009).
She elicits a subjective distribution over future global mean temperatures, and uses the CVM approach to estimate WTP for mitigation. The parallel to the gambling literature stems from allowing for skewness in the subjective distribution, similar to the gambler's preference for a long-shot bet: about 85% of her subjects exhibited subjective distributions that were not symmetric. The subjective distributions were elicited by asking subjects simple questions that led to the creation of a histogram over mean temperatures 50 years into the future. Riddel (2014) recognizes that some people might have a preference for a program to mitigate extreme climate change outcomes that arise with a very low probability. Not only the mean and variance, but also the degree of skew in the distribution becomes an element of her random utility model of program (climate mitigation) support, and hence an argument in her functional representation of the WTP. She finds that more right-skewed distributions are associated with higher WTP for mitigation.

[Footnote 27: Interestingly, my colleague Richard Woodward proposed using a Galton Box to test for ambiguity preferences to me several years ago, well before either of us was aware of the Oechssler and Roomets (2015) study. He built a crude version of such a box in about 2013, in his garage.]

Though there are many studies that allow for ambiguity in the laboratory economics setting, Riddel and Shaw (2006) provide one of the first empirical models that use survey data and introduce ambiguity about risks. The ambiguity is tied to people being unable to pinpoint a precise risk estimate using a risk ladder, instead providing a range on the ladder within which their risk belief resides. Naturally, the wider the indicated range, the less certain an individual is about where their risk estimate is positioned on the ladder. To estimate variation in risks for the uncertain people, an interval model is estimated that assumes the stated risks follow the beta distribution.

Watanabe and Fujimi (2015) offer a different approach to introducing ambiguity in a contingent valuation study of reduced mortality risk from bear attacks in Japan. The objective probability of a bear attack is quite small, and small probabilities lend themselves nicely to the ambiguity frameworks because they seem more difficult for people to grasp than larger probabilities (e.g. 50% and higher). Their ambiguity-related probabilities are provided to subjects as a range, rather than using the subjects' own subjective probabilities. The authors adapt the max-min expected utility model (MEU, referred to above, theoretically developed by Gilboa and Schmeidler 1989, with axioms considered by Olszewski 2007). The MEU model essentially takes a weighted average of the best and worst situations, and allows estimation of a weighting parameter that indicates the amount of ambiguity aversion. The magnitude of ambiguity is the range from high to low probability, p_h − p_l, and the risk level is the midpoint of this range. Note that a range of 2 to 8% implies more ambiguity than a range of 4 to 6%, but both have the same midpoint, or risk.
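To fix ideas, here is a minimal sketch of an alpha-maxmin (MEU-style) evaluation of an ambiguous mortality-risk prospect. All functional forms and parameter values below are my own illustrative assumptions, not Watanabe and Fujimi's (2015) estimated model:

```python
# Illustrative alpha-maxmin (MEU-style) evaluation of an ambiguous
# survive/die prospect. Utility form and parameters are assumptions
# for illustration only.

def crra_utility(w, r=0.5):
    """CRRA utility of wealth w with relative risk aversion r (assumed form)."""
    return w ** (1 - r) / (1 - r)

def expected_utility(p_death, wealth, u=crra_utility):
    """EU of a binary survive/die prospect; utility when dead normalized to 0."""
    return (1 - p_death) * u(wealth)

def alpha_maxmin_value(p_low, p_high, wealth, alpha=0.7):
    """Alpha-weighted average of the worst case (p_high) and best case (p_low).
    alpha > 0.5 puts more weight on the pessimistic outcome (ambiguity aversion)."""
    worst = expected_utility(p_high, wealth)
    best = expected_utility(p_low, wealth)
    return alpha * worst + (1 - alpha) * best

# Two ranges with the same midpoint risk (5%) but different ambiguity magnitude:
wide = alpha_maxmin_value(0.02, 0.08, wealth=100.0)    # range 2-8%
narrow = alpha_maxmin_value(0.04, 0.06, wealth=100.0)  # range 4-6%
# With alpha > 0.5, the wider (more ambiguous) range yields lower welfare,
# even though the midpoint risk is identical:
print(wide < narrow)
```

This makes concrete why a program that narrows the probability range, without changing the midpoint risk, can still raise welfare for an ambiguity-averse decision maker.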
Watanabe and Fujimi (2015) find that their survey subjects who are ambiguity averse put more weight on the pessimistic (worse) outcome. Their study is interesting because it suggests a benefit from programs that reduce ambiguity, even though they might not change risk: when subjects are ambiguity averse, reducing the magnitude of ambiguity can still increase welfare. The authors unfortunately offer no information on how their model influences the standard VSL, but clearly it differs from a model with no ambiguity. This can easily be seen by examining the marginal rate of substitution between risk and wealth in their model, and noting that the expression (their equation (5)) contains the ambiguity attitude parameter. [When this parameter equals one, their MEU model becomes a maxmin model.] Because the context for some of the ambiguity studies is mortality risk, there are implications for what the VSL is under conditions of ambiguity. Treich (2010) demonstrates that ambiguity aversion increases the VSL, given the usual assumption that the marginal utility of income is larger when alive than when dead. Ideally, it would be interesting to compare VSL estimates for various degrees of ambiguity aversion. However, Riddel and Shaw (2006) find that on average, for their sample, there is ambiguity seeking, and simply report a combined VSL


estimate of about $5.45 million, which falls within the range of typical VSL estimates in the literature. The estimate combines the preferences of those who said they were certain about risks, using their reported stated risk, with the preferences of those who said they were uncertain, using the mean risk that comes from the interval model.

There are now several studies that use the DCE approach, incorporate risk, and relax some of the assumptions of the conventional EU model. These include advances in the transportation and health literatures (e.g. Hensher et al. 2011; Van Houtven et al. 2011), but those won't be considered here, even though they may well be ahead of the environmental and resource economics arena.

Roberts et al. (2008) develop a stated choice model for lake choice with risky attributes, also briefly mentioned above. This is really more what I'd deem an ambiguity study, and it involves the introduction of a pwf. They in fact let the probability weighting function of a particular attribute level that affects random utility enter the linear utility equation, stating "This approach is entirely consistent with random utility theory, recognizing that the probability with which an outcome results from a choice alternative is simply another attribute of choice." (p. 586). The pwf they utilize follows others who deemed it the log-odds weighting function (e.g. Goldstein and Einhorn 1987), and is:

w(p) = δp^γ / [δp^γ + (1 − p)^γ]

They integrate risk and the stated choice framework, including the percentage "chance" of an outcome as a stated attribute level (at a 0, 10, 50, 90, and 100% chance of an algae bloom; similar percentage chances for variation of the water level from normal) for three given alternatives from which subjects choose (one of these is the status quo, for the uncertainty treatment). There are two "treatments": one with certainty and no status quo, and one with risk (which they call uncertainty). The uncertainty treatment choice matrix is depicted in Figure 2.
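As a concrete illustration, the log-odds pwf can be coded in a few lines. The parameter values below are illustrative assumptions, not estimates from Roberts et al. (2008):

```python
# Sketch of the Goldstein-Einhorn (1987) "log-odds" probability weighting
# function. Parameter values (delta, gamma) are illustrative only.

def log_odds_pwf(p, delta=0.8, gamma=0.6):
    """w(p) = delta * p**gamma / (delta * p**gamma + (1 - p)**gamma).
    gamma < 1 gives the familiar inverse-S shape: small probabilities are
    overweighted and large probabilities underweighted."""
    num = delta * p ** gamma
    return num / (num + (1 - p) ** gamma)

# Evaluate at the chance levels used in the design described above:
for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"p = {p:.1f} -> w(p) = {log_odds_pwf(p):.3f}")
```

Note that the function respects w(0) = 0 and w(1) = 1, so only the interior probabilities are distorted.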

Figure 2: Choice card for survey with uncertainty (Roberts et al. 2008)

Option A: 50% chance of algae bloom; 50% chance of water level 8 feet below normal; 50% chance of normal water level; $2 user fee.
Option B: 10% chance of algae bloom; 100% chance of water level 2 feet below normal; 0% chance of normal water level; $2 user fee.
Option C (Status Quo): 50% chance of algae bloom; 10% chance of water level 5 feet below normal; 90% chance of normal water level; $0 user fee.

Wibbenmeyer et al. (2013) apply the DCE approach to a sample of U.S. wildfire managers to assess the importance of fire-protection-related attributes. They closely follow the empirical model developed by Hensher et al. (2011) in allowing probability weighting functions to play a role in the choice among several wildfire scenarios and protection strategies. However, their model assumes no curvature in the utility function with respect to key variables, so all managers are effectively assumed to be risk neutral.

The scenarios depict how many homes would be lost in a wildfire if the fire burns, the probability that fire will reach homes without suppression, the severity of the fire in the watershed (relating to tree density), and the probability that fire will reach the watershed, again without suppression. The strategies that the managers can choose from mix various levels of the following attributes (in Figure 3):

Figure 3: Attributes used in the wildfire risk study of Wibbenmeyer et al. (2013): Example choice matrix

Option A: protect residential homes, Yes; protect watershed, No; personnel exposure, 100 aviation person hours (aph)/3,000 person days direct line production (pddlp); wildfire duration, < 14 days; probability of success of strategy, 50%; cost of wildfire, $0.5 million.
Option B: protect residential homes, Yes; protect watershed, Yes; personnel exposure, 100 aph/3,000 pddlp; wildfire duration, > 30 days; probability of success of strategy, 90%; cost of wildfire, $4 million.
Option C: protect residential homes, Yes; protect watershed, No; personnel exposure, 100 aph/100 pddlp; wildfire duration, > 30 days; probability of success of strategy, 75%; cost of wildfire, $2 million.

The internet-based survey the authors implement was given in 2009 to district rangers and forest supervisors who work for the U.S. Forest Service, and a 28.4% response rate yielded 583 completed surveys. Their results indicate that their sample members are insensitive to differences across the probabilities that the wildfires will burn. An S-shaped probability weighting function is indicated by the data and model with respect to the probability of success in protecting homes. This is in contrast to the usual finding of an "inverse S" pwf. The authors admit that their data only support use of the single-parameter form of the probability weighting function, but each probability in their model is fitted with its own function, i.e., the parameters are potentially different. It is only the home protection probability parameter that is greater than one, yielding the S shape in this risk dimension.

This study is unusual in that the wildfire managers essentially make decisions on behalf of the community, and also with respect to agency leadership and political expectations. There may also be issues with probability weighting when there are multiple attributes at risk. It is not clear from the article whether the managers understood the "status quo" probabilities that might be involved, nor whether they could opt out of the three choice alternatives (A, B, C). That may be important because in more traditional risk choice tasks people are given a lottery, with probabilities clearly summing to one. Also, as the authors point out, the probabilities given to subjects in the choice experiment are limited, and never close to zero or one, which is where we often see problems with decision making arise. In any case, the authors conclude that their sample of wildfire managers do not make decisions in line with minimization of expected economic loss, as might be expected from such a group of agents.
A possible explanation of the results is that wildfire managers feel they might be able to control the probability of success in protecting homes, but have no such control over the probabilities of the burns. Another possible explanation, which cannot be tested with their data, is that there is a different emotional response or "affect" associated with damage to homes versus damage to watersheds.

In a similar study involving most of the same authors as Wibbenmeyer et al. (2013), Hand et al. (2015) also estimate a choice model for wildfire managers. Many elements of the study are the same as or similar to their earlier study, but there are important key differences. This newer study allows for CRRA-type curvature with respect to key risk-related variables, and thus a test of non-neutral risk preferences can be introduced. The study also explores the role of framing effects. Essentially, for a control group the fire personnel exposure is framed as the number of hours of time in fire-fighting airplanes (aviation). For the treatment group, it is directly recognized that more time in flight is correlated with fatalities. [U.S.F.S. statistics based on

the past 10 years of data suggest that there is an average of 4.801 fatalities for every 100,000 flight hours (see Hand et al. p. 1880).] The authors' thinking is that introducing fatalities into the risk choice scenario is affect inducing. The authors also allow for heterogeneity in estimation in this version of their choice model, following the latent class model approach that others have used (e.g. Conte et al. 2011; see other references in Hand et al. 2015). Last in this rich study, but certainly not least, the choice format utilizes a lottery type of presentation, so that the probabilities of a "good" versus "bad" outcome sum to one, and the managers can easily see this.

A key finding this time is that the usual "inverted S shape" probability weighting function is supported by the data from the treatment group. Recall that the S shape held in the previous study. There are several possible differences that may lead to this, but the authors attribute it mostly to the affect framing for the treatment group. The authors determine that the treatment group and control group have similar CRRA coefficients, so that risk aversion holds for their entire sample of wildfire managers, but maintain that having separate models is warranted. They do not estimate WTP for marginal changes in aviation hours or fatalities.

In quite a different application of the DCE, to recreational fishing choices in the Gulf of Mexico, Wielgus et al. (2009) include the probability that an attribute in the choice model (fish size) is improved, to see if this matters in their estimation of the probability that choices are made. They find that by doing so, choices seem to be more consistent than when certainty of the program's result is assumed.
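The general idea behind these studies, entering an outcome probability directly into a linear random-utility specification, can be sketched as follows. The coefficients and attribute levels are made up for illustration and are not any study's estimates:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit choice probabilities from deterministic utilities."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def utility(p_improve, cost, b_p=1.5, b_cost=-0.3):
    """Linear utility with the chance of an attribute improvement entered as
    just another attribute, alongside cost (illustrative coefficients)."""
    return b_p * p_improve + b_cost * cost

# Three hypothetical alternatives: (chance of improvement, cost)
opts = [utility(0.9, 10.0), utility(0.5, 5.0), utility(0.0, 0.0)]
probs = choice_probabilities(opts)
print([round(p, 3) for p in probs])  # probabilities sum to one
```

With these made-up coefficients, the costly-but-likely-improved option is penalized enough that the zero-cost alternative is chosen most often; the point is simply that the stated chance of an outcome enters utility like any other attribute.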
Uncertainty and Climate Change

It seems to me that the only environmental or resource issue these days that gets at least semi-regular play in the top journals in all of economics (the American Economic Review, say) is climate change or global warming (hereafter just CC, for climate change). I do not believe one can talk at length about this topic without bringing up risks, and I would argue here that the risk estimates themselves vary, may be quite imprecise, and involve at least some degree of uncertainty (e.g. see Stainforth et al. 2005; Riddel's (2014) work appears to indicate that she shares my view on this point). What is happening or will happen with CC remains fairly difficult for the public to understand (see Bostrom et al. 1994, 2002 and Reynolds et al. 2010), even with trends that changed after the release and promotion of the movie An Inconvenient Truth. This may be for good reason, as the "experts," the Intergovernmental Panel on Climate Change (the IPCC), are careful to use the word "risk" in describing the magnitude and timing of impacts. But I believe what they mean by risk is really what I have meant here by the word uncertainty, as many competing atmospheric and ocean current circulation models provide different estimates of the probabilities tied to outcomes (see discussion of the IPCC Guidance Notes on risk and uncertainty in Pyke et al. 2008).

As noted earlier in the chapter, Bargiacchi (2003) puts climate change squarely into the uncertainty category, stating that one might read the precautionary principle (in the 1992 Rio Declaration of the United Nations) as especially relevant when there is uncertainty, and thus relevant to the whole climate change debate. She defines prudence as a deviation from the EU framework, and uses the rank-dependent EU framework (Quiggin 1982) to develop her theoretical model, showing that paying attention to the worst possible outcomes is consistent with the structure of that model. Gangadharan and Nemes (2009) state that "...climate change is undoubtedly a case of uncertainty since less precise probability estimates exist in either science or economics..." [p. 146]. Note also that Cameron (2005b) conducts surveys of students, asking them to forecast future temperatures, and then explores how information affects their perceptions of these temperature-related risks. There is some slight evidence of Bayesian learning on the students' part, but she finds they place a large amount of weight on priors in certain situations. This too suggests that the climate change context is not one of straightforward risk.

If the reader still wants to put CC into the "risk" category, that might be defensible with respect to some aspects of it; however, two more points should be noted before doing so. First, much of what we think we know about climate depends on the assumption of stationarity in distributions of climate events such as historical temperature or rainfall patterns. Stationarity has been questioned as a valid assumption because climate change may in fact alter it (Milly et al. 2008), and thus economic analysis needs to consider this possibility when examining the impacts of climate change on agriculture (e.g. McCarl et al. 2008) or environmental conditions.
Second, as Pindyck (2007) points out, we do not know how human beings will respond to any of the changes contemplated in association with CC, or even how much demand or supply will respond to carbon taxes, or what will happen with CO2 emissions trading. So, even if the science gets better and uncertainty about risk is reduced, we are likely to be left with unknowns in the truest sense because of this (e.g. Curry et al. 2006).

5.5 Uncertainty in Natural Resource Problems

Natural resource extraction of course has connections to environmental problems, the most obvious being fossil fuel-related. Substitution away from fossil fuels to halt global warming could in fact be considered a resource problem or an environmental problem, as it involves both. Most of the applications of uncertainty modeling to natural resource problems have been theoretical, not empirical. Results from the theory allow for predictions (e.g. extraction of a resource might be faster or slower when certain risk conditions prevail), but it is rare that these can be tested using data. In fact it is fair to say that quite often in optimization problems, even under certainty, there are few or no empirics, or at least that the numerical simulation exercises

that one often sees are loosely connected to the dynamic optimization theory, at best. This is an area where new developments will help a great deal. As noted above, optimal control problems can certainly include risk, but also uncertainty, such as in the robust control framework.

Woodward and Shaw (2008) consider environmental aspects of water resource management under unknowns related to fish stocks, following the robust optimal control models that others initially developed. Their particular issue is how to manage streamflow to protect endangered species when little or nothing is known about the species' growth function, as is often the case. We might typically think that stock size and growth characteristics are simple measurement issues for biologists, but in fact habitat and predatory relationships may make prediction of these quite difficult. At extremes, of course, such as water levels below some threshold allowing survival, biologists know what the change in stock size will be (the fish all die), but at lesser extremes they may be in the dark. Illustrating a shortcoming typical of theoretical papers, Woodward and Shaw (2008) draw some unexpected conclusions from their results. They show that above the critical level where fish stocks would irreversibly crash, allowing for uncertainty (again, as distinct from the cases of risk or certainty) suggests keeping more water in situ to protect the stock than if no uncertainty is considered. This is consistent with supporting a safe minimum standard. However, these conclusions hold only within a limited range of the growth function, and for other ranges the implications of the model are inconsistent with what is observed in governmental policy reactions. What is often observed is that governmental policy responds greatly when stocks are nearly, or even thought to be, extinct, but their model predicts that at or below the critical level, optimal management under uncertainty is consistent with giving up on the stock.
Viscusi and Zeckhauser (2006) discuss risk perceptions of climate change and values, using temperature increases in Boston as an indicator, but there is no formal connection in any econometric modeling between perceived risks and the values that respondents (subjects taking an internet-based survey, all of whom were students at Harvard University) have for climate changes. The authors first ask questions about the chances of temperature increases at points in a distribution (upper 95%, lower 5%, and median estimate), and results suggest that these are quite different across their subjects. They also ask questions about perceptions of increased hurricane intensities that might be associated with climate change. The results suggest some significant correlation, for some subjects, between their assessed climate change risks and background risks such as heart attacks. Second, the authors use a limited CVM-style payment card, asking how much subjects would pay in a gasoline tax to help reduce global warming. The subjects who estimate higher median temperature increases in their distribution do state higher WTP in gas taxes, which appears rational.

Akter and Bennett (2009) estimate values for mitigation of uncertain climate change impacts and policies, attempting to incorporate ambiguity in the analysis. Their work builds on the ambiguity models that allow departure from the EU framework by incorporating the variance

term in the decision equation. As with Riddel and Shaw (2006), the larger the variance, the more ambiguity the decision maker faces. In the next and last section of the paper, I offer a summary of the newest trends involving risk and uncertainty, and suggest some future work.

6.0 Newest Trends and Future Work

Newest Trends

The focus of a great deal of mainstream economics these days is heterogeneity (unobserved and observed), and economists have begun to explore empirical models that allow for heterogeneity in risk preferences (e.g. Jianakoplos and Bernasek 1998) and risk perceptions (Riddel and Kolstoe 2013). The latest trends in empirical work also consider endogeneity and measurement error in subjective probabilities or risks. There are many approaches for dealing with endogeneity in explanatory variables, including instrumental variables (e.g. with normality, simple two-stage least squares (2SLS)), the use of control functions, and special regressor approaches (e.g. Lewbel, Dong and Yang 2012). There is controversy among econometricians over whether one approach is preferable to the others.

6.1 Endogeneity and Measurement Error in Subjective Risk

In explaining behaviors or willingness to pay (welfare measures), we may well wish to use subjective risks, as was discussed above. These may provide better explanations of behavior or of variations in WTP than science-based probabilities or risk measures. For example, Jakus et al. (2009) use the simple stated probability of mortality risk associated with arsenic in drinking water in a model to explain whether the household drinks bottled water. The idea of using subjective probabilities is appealing, but the door is then opened to the possibility of measurement error in the subjective risk estimate, and of endogeneity. Shaw, Jakus and Riddel (2012) use the same survey data, but focus on treatment of home drinking water and whether this is a function of subjective risks.
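The instrumental-variables logic behind the 2SLS corrections discussed here can be sketched with simulated data. The data-generating process below is entirely made up; it simply shows how instrumenting an endogenous subjective-risk variable removes the bias that naive OLS suffers:

```python
# Simulated illustration of 2SLS with one endogenous regressor (subjective
# risk S) and one excluded instrument Z. The DGP is an assumption for
# illustration, not any study's data.
import random

random.seed(2)
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]   # instrument, excluded from the WTP equation
u = [random.gauss(0, 1) for _ in range(n)]   # unobservable that creates endogeneity
# Subjective risk S depends on the instrument and on the unobservable:
s = [0.8 * zi + ui + random.gauss(0, 0.5) for zi, ui in zip(z, u)]
# WTP depends on S (true coefficient 1.0) and on the same unobservable:
wtp = [1.0 * si + 2.0 * ui + random.gauss(0, 0.5) for si, ui in zip(s, u)]

def ols_slope(x, y):
    """Simple OLS slope of y on x (with an implicit intercept)."""
    xb, yb = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    den = sum((xi - xb) ** 2 for xi in x)
    return num / den

naive = ols_slope(s, wtp)               # biased away from 1.0 by corr(S, u)
first_stage = ols_slope(z, s)           # first stage: regress S on the instrument
s_hat = [first_stage * zi for zi in z]  # fitted values (variables are mean-zero here)
two_sls = ols_slope(s_hat, wtp)         # second stage: close to the true 1.0

print(round(naive, 2), round(two_sls, 2))
```

Because Z shifts S but is unrelated to the unobservable, the second-stage slope recovers the true coefficient, while the naive regression conflates the effect of S with the omitted variable.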
In one version of their model, they make use of the predicted value of the risk, but note that this does not satisfactorily take care of the problems associated with measurement error and endogeneity. Alberini (2005) considers the possibility that unobserved variables affect both WTP (assumed to be log-normally distributed) and subjective risk (S) in an SP setting. To explore this she lets S = f(X, Z), where Z is a set of variables excluded from the WTP equation to ensure identification, and assumes S is normally distributed. Her context is road safety or traffic accidents, and she does find evidence that the implications of whether the respondent is paying attention to absolute, or only relative, risk differ depending on whether risk is assumed to be endogenous (and corrected for using 2SLS). This moved in the right direction, as compared to the naïve assumption that subjective risk is exogenous, but Alberini (2005) did not offer thoughts

on the nature of her welfare measure, nor did she consider whether it was appropriate to assume that the risk variable is normally distributed, allowing the use of the simple 2SLS correction. As noted in the VSL section above, Riddel and Shaw (2006) use the subjective probability in their model of WTA, which is an option price type of welfare measure. The WTA is for bearing increased mortality risks in their context, which involves the transportation and storage of high-level radioactive wastes, and they estimate a model of the subjective risk assuming these probabilities follow the beta distribution. Riddel (2011) uses part of the same data set as Riddel and Shaw (2006), but goes well beyond the earlier analysis in handling the potential problems with the subjective probability. She borrows the "very exogenous regressor," also called the "special regressor," approach to correct for the measurement error and endogeneity issues (see Lewbel, Dong and Yang 2012 for a discussion of the approach). Kalisa, Shaw and Riddel (2015) also apply the special regressor approach to the arsenic in drinking water data set used by Jakus et al. (2009). They find that failure to correct for measurement error/endogeneity is critical: it leads to very misleading results about the role of the subjective probability in their model of willingness to pay for the risk reduction.

Suggestions for Future Work

There remain many possible avenues of future work to integrate risk and uncertainty into resource economics. It should be clear from the above that there is a lot more to do to incorporate these into DCE modeling. Another important thread, which I overlooked due to space limitations, examines whether contracts or bond schemes can be used to reduce risks of environmental damage from firms (Schmitt and Spaeter 2005). Second, as mentioned in the introduction, much work is needed on valuing reductions in morbidity risk.
Third, more and more economists are turning to natural experiments to establish causality in empirical results, as well as looking to prediction markets. Another issue that is appearing, with high potential to help us learn about the effectiveness of different methods, is calibration, addressed first below. In addition, many now believe that one simply cannot examine preferences over risk without also considering time preferences. This is certainly intuitive when resource problems involve tradeoffs between the present and the future, because the future is, of course, "unknown." The time/risk interface is also briefly considered below.

6.2 Morbidity

As medical science has made advances in preventing deaths, there may be prolonged lives, but lives involving illness stemming from a disease that previously would have killed the patient. All economists know that cost-of-illness measures are lower-bound estimates of the benefits of avoiding illness. Cameron (2014) extensively reviews what we know and do not know about

estimating the value of reductions in morbidity, noting that we know far less about this than we know about valuing reductions in mortality. She also describes the need to consider the tradeoffs involved in living longer with an illness, versus possibly living a shorter, but healthier (less sick), life. Bosworth et al. (2009) explore such trade-offs in a national survey, using a community-level approach to valuation: they find that avoided deaths and avoided illnesses are substitutes, for their sample of observations. Much more needs to be done to develop models that value morbidity reduction.

6.3 Natural Experiments and Prediction Markets

Risk preferences and subjective risk estimates (probabilities) can potentially be revealed using natural experiments. Hurricanes were considered in this regard, as mentioned above, by Kelly et al. (2009) and Halstrom and Smith (2005). Reliance is on market transactions and changes in prices, which might be tied to variation in risk estimates, holding all other factors constant. The trick, of course, is in the latter. Global warming or carbon reduction permits are traded in several markets around the world. These might be examined for trends that reveal beliefs and preferences regarding global warming. In the United States, in any case, beliefs have fluctuated over time, according to polling numbers. Of course, speculators will watch regulatory efforts carefully, because these affect the value of the permits in pollution abatement schemes. These sorts of markets may lead, as described above, to predictions of probabilities, and in fact a new strand of literature has become known for focusing on prediction markets.

Prediction Markets

The internet and other means can be used to assess predictions of probabilities using trades in market commodities. The idea runs through the paper discussed above on hurricane futures (Kelly et al. 2009), and is discussed in general in Wolfers and Zitzewitz (2004).
Another example is trades on uncertain political events, such as who gets elected President. A contingent claim can be guaranteed, say for $100 if a Democrat is elected, and if the price of the claim is $10, the market seems to want to assign a 10% chance of this happening.

6.4 Calibration

Most economists might believe that preferences are best, or perhaps even only, revealed by responses to real incentives. As noted above, the most notable resource economics method that involves hypothetical scenarios is the SP approach, which includes the contingent valuation method (CVM). For at least forty years, resource economists have questioned the validity of the CVM in determining an individual's actual maximum willingness to pay (WTP). The basic issue, of course, is whether CVM questions, in one form or another, are incentive compatible (i.e. externally valid), the worry being that individuals can overstate, or understate, their true WTP
(i.e. the CVM leads to hypothetical bias). The discrete choice experiment (DCE) approach can be implemented to allow recovery of WTP by including cost or payment as one attribute of the choice. It is a bit different from the CVM, but criticism nevertheless involves the issue of hypothetical scenarios that may not indicate true choices among alternatives, and thus true WTP. For example, Lusk and Schroeder (2004) find that, using a DCE, hypothetical choices overstate the probability of choosing a well-known commodity (beef steaks) as compared to choices that involve actual payments, and thus overstate WTP. One approach to determining validity involves calibration of the hypothetical WTP by using values obtained in a laboratory auction. Fox et al. (1998, 2002) used a risk reduction associated with irradiated pork as an application of what they deemed "CVM-X," finding that the SP measures should be calibrated by factors that are less than one (supporting the notion that individuals may overstate WTP in SP approaches), and that the calibration factor varies with the commodity being valued. The calibration function is estimated using the laboratory auction bid as a dependent variable, as a function of the hypothetical WTP as well as some other factors, which do not appear to be significant. The coefficients from this model are used to predict each individual's auction bid, and the calibration factor is the ratio of the sample-average predicted bid to the sample-average survey bid. The authors do not mention potential endogeneity problems in using the stated hypothetical bid as a regressor. In what appears to be the first similar calibration exploration involving a stated-choice survey, Alfnes and Rickertsen (2007) use the marginal WTP based on the DCE modeling to explain or calibrate auction WTPs, i.e. the SC-based WTP is a right-hand-side variable in estimating the variation in the WTP obtained from a second-price Vickrey auction with real payments for beef.
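The CVM-X calibration step just described can be sketched as follows. The data are simulated and the simple linear specification is a stand-in for the actual Fox et al. estimation, so the coefficients and the resulting factor are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated stated (hypothetical survey) WTP for each subject, in dollars.
hypothetical_wtp = rng.uniform(1.0, 10.0, n)

# Assume real laboratory auction bids fall short of stated WTP on average
# (hypothetical bias); the 0.5 and 0.6 coefficients are made up.
auction_bid = 0.5 + 0.6 * hypothetical_wtp + rng.normal(0.0, 0.5, n)

# Regress the auction bid on the hypothetical WTP (plus a constant).
X = np.column_stack([np.ones(n), hypothetical_wtp])
beta, *_ = np.linalg.lstsq(X, auction_bid, rcond=None)

# Predict each subject's auction bid, then form the calibration factor as
# the ratio of the average predicted bid to the average stated (survey) WTP.
predicted_bid = X @ beta
calibration_factor = predicted_bid.mean() / hypothetical_wtp.mean()
print(calibration_factor < 1.0)  # True here: stated WTP overstates real bids
```

A factor below one, as here, is the pattern Fox et al. report: hypothetical responses scaled down to match real auction behavior.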
The Alfnes and Rickertsen setup seems a bit odd, because if one suspects bias in a hypothetical response, it would seem more natural to let the hypothetical WTP be a function of the true WTP, not the other way around (see, for example, Schulze et al. 1996). In any case, the beef products can be hormone-free or not, and can be domestic or imported from a variety of locations. Over 1,000 subjects were recruited from various locations in Norway. Their results indicate a difference between the mean WTPs for various types of beef, and for the auction versus the SC survey; the latter are substantially higher over all of the geographical regions the authors considered. However, there is more than one possible factor that explains this difference, so it cannot be completely attributed to hypothetical bias. For example, the auctions involve bidding to determine the price, whereas the choices people make during the SC survey involve a posted price. Mixing SP and laboratory results can be done vis-a-vis risk elicitation, but I know of no one who has tried this yet. A variety of SP models have indeed been mixed with laboratory experiments, but there is a limit to what one can do because of issues involved in playing out a game or task involving an environmental outcome. For example, we might well wish to do risk elicitation where the risks relate to impacts on resources because of global warming, but it is impossible to play out the impact in the laboratory. However, some are optimistic about use of laboratory
experiments in finding interesting and relevant results that could help predict behavior in the field. In fact, Shogren et al. (2010) go as far as to say that "...the lab versus field controversy is ultimately a red herring" (p. 134). The paper by Shogren et al. (2010) is focused on how laboratory experiments can best be used in non-market valuation, not in risk elicitation. However, the bulk of the paper describes experimental auctions, so it may be fair to say that when doing experiments in the lab, running auctions might be a key element. An auction of course involves the transfer or trade of a good or service, or rights to such a good or service; thus, to the extent that a resource issue can be couched as an auction, there is hope for calibration. The authors mention several situations involving food risks (irradiation, as in the Fox et al. 2002 study), but also wolf reintroduction, wetland restoration involving hunting leases, habitat preservation, and water quality improvements. Also, recall the conclusion, noted above, drawn by Harrison, List and Towe (2007): laboratory experiments and naturally occurring experiments (as might be experienced with environmental outcomes) might not jell particularly well, especially when there are multiple sources of risk (e.g. the background risk issue). The view that the experimental setting in the laboratory produces results, even when incentives are "real," that may not conform to "real" preferences outside the laboratory is also one shared by Horowitz and McConnell (2000). In three experiments involving endowments of physical items (mugs and binoculars) and actual resale of those items, some subjects are found to ignore total or marginal opportunity cost, leading to implausible bounds on values.
These results relate to buying and selling prices, with the thought that subjects should be reasonable regarding the compensation they demand when giving up a commodity, taking into consideration resale values and replacement, as well as transactions costs.

6.5 Time and Risk

Many economists have recognized that most or all intertemporal problems inherently involve risk or uncertainty. One of the most basic empirical explorations concerns finding individual discount rates (IDRs). Many studies now suggest that finding IDRs is not a pursuit independent of studying risk preferences. I cannot cover all of these here, so I am selective. The recent push to address both time and risk preferences uses a variety of approaches. Among the simplest is the application of the DCE framework to choices that reveal time and risk preferences by Ida and Goto (2009). The study pertains to heterogeneity between smokers and non-smokers, but the actual choice tasks do not directly involve smoking behavior; rather, they involve standard lottery and time-delay choices, as would be true in the studies that separate these two choices into tables with MPLs.
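To fix ideas, the MPL logic works roughly as follows, in the spirit of the Holt and Laury (2002) instrument: a subject faces ten rows of paired lotteries, with the probability of the high payoff rising down the list, and the row at which she switches from the "safe" to the "risky" lottery brackets her risk aversion. The payoffs below mirror the Holt and Laury design, but the rest is a simplified sketch:

```python
import math

def crra(x: float, r: float) -> float:
    """CRRA utility u(x) = x**(1-r)/(1-r), with r = 1 giving log utility."""
    return math.log(x) if abs(r - 1.0) < 1e-9 else x ** (1.0 - r) / (1.0 - r)

def switch_row(r: float) -> int:
    """First row (1-10) at which a CRRA agent prefers the risky lottery."""
    safe, risky = (2.00, 1.60), (3.85, 0.10)  # (high, low) payoffs in dollars
    for row in range(1, 11):
        p = row / 10.0  # probability of the high payoff rises down the list
        eu_safe = p * crra(safe[0], r) + (1 - p) * crra(safe[1], r)
        eu_risky = p * crra(risky[0], r) + (1 - p) * crra(risky[1], r)
        if eu_risky > eu_safe:
            return row
    return 10

# A risk-neutral agent switches at row 5; a more risk-averse (r = 0.8)
# agent holds out for better odds and switches at row 8.
print(switch_row(0.0), switch_row(0.8))  # 5 8
```

Observing a subject's actual switch row then brackets her r between the values that rationalize switching at that row and at the adjacent ones.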
Tanaka, Camerer and Nguyen (2010) conduct risk and time discounting field experiments for households in Vietnam. They use MPLs in a novel way that allows recovery of parameters that can identify a probability weighting function. For most of their sample, they do not find evidence in favor of standard EU, instead finding the usual inverted S shape that Tversky and Kahneman (1992) suggest may hold, mentioned above. Carlsson et al. (2012) adopt a similar approach to Tanaka, Camerer and Nguyen (2010), extending the framework to explore the influence of a spouse on choices, to determine if such choices seem to be made independently or jointly: their data for rural Chinese households suggest joint influences. Coble and Lusk (2010) also try to disentangle risk and time preferences, using a twist on the usual MPL approach. In their experiment the MPL has two attributes, not one, making it quite similar to the DCE. Instead of presenting a pair of lotteries that only differ in payments to the subject for different probabilities, the lottery pair also incorporates a time delay in payment. They conclude that the discounted expected utility model is not supported: one needs more than a single parameter to characterize preferences, and subjects demonstrate preferences to delay the resolution of risky outcomes. Andreoni and Sprenger (2012) suggest that the conventional framework of discounted expected utility (DEU) may be inappropriate because of differences in functional forms for certain and uncertain utility, which lead to violations of the framework's properties. Specifically, the DEU implies that intertemporal allocations should depend on relative intertemporal risk.
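The inverted S shape referred to above is usually written in Tversky and Kahneman's (1992) one-parameter form, w(p) = p^g / (p^g + (1-p)^g)^(1/g). A quick evaluation, using their reported median estimate of g = 0.61 for gains, shows small probabilities overweighted and large ones underweighted:

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function for gains;
    gamma = 0.61 is their median parameter estimate for gains."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(p, round(tk_weight(p), 3))
# w(p) > p for small p (rare events overweighted), w(p) < p for large p.
```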
Andreoni and Sprenger introduce convex time budget constraints rather than linear ones, which involves getting experimental subjects to choose how to allocate resources (tokens symbolic of money) to sooner and later payments. These allocations allow estimation of implied discount rates as well as curvature in the utility function that stems from risk preferences. Using these convex time budgets (CTB), discount rates are found to be high: about 30% annually. Evidence provides support for the notion that certain and uncertain utility should be treated differently: they are not "interchangeable," as the DEU dictates. Interestingly, the results of their model are also not consistent with a probability weighting model (non-EU), but their model can generate the same phenomena as probability weighting models do. Their paper has led to controversy over whether their CTB approach is better or worse than the MPL approach, and a number of "comment" types of papers have emerged (see references in Grijalva et al. 2015). Last, but not least, Drichoutis and Nayga (2013) join a "behavioral economics" trend in research to explore the role of emotions in forming and revealing preferences. They run an experiment where mood states are induced by having two groups of subjects believe they have done well, versus poorly, on a test, then compare the risk and time preferences between groups. The idea is that a positive or negative mood may influence implied discount rates or levels of risk aversion. This is similar to previous work that has explored optimistic versus pessimistic states of mind and responses relating to future outcomes (e.g. Bassett and Lumsdaine 2001). They find that subjects induced with a positive mood exhibit higher discount rates than a control group.
There may be some issues in simply having subjects engage in a cognitively demanding task (this appears to increase risk aversion alone, or independent of mood state), but the idea that mood states or emotions influence preferences during elicitation is an important area of future research.

6.6 Empirical Bayesian Models

Risk perceptions are quite important in the natural hazards area, as mentioned above. This may be because of the lack of precision in modeling hurricane events, although this has been getting better recently. When a hurricane is forecast for a particular area, with a particular magnitude, but doesn't pan out, then many costs associated with mitigation (including evacuation) can be incurred. Kelly et al. (2010) model how traders in the hurricane futures market update their perceptions of the risks of hurricane landfalls, finding that they behave in a manner consistent with Bayesian updating. In their context the securities market has prices that reflect traders' subjective beliefs about the probabilities of hurricane events. This echoes the notion that probabilities are indeed the prices for contingent claims. The authors note that most traders are actually meteorologists. They find that the credibility of the source of the hurricane information matters. Traders put weight on forecasts that come from sources other than the National Hurricane Center (NHC), which makes the official government forecasts. In fact, forecast accuracy changes as a hurricane gets closer to landfall, and they find that the traders are more accurate than the NHC for storms more than five days out from landfall, but less so when only two days or less out. An important point the authors raise is that quite often, a Bayesian averaging model would assume that forecasts are not correlated, but they may well be, making the information gleaned from one source correlated with that from another.
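The correlation point can be made concrete with a toy calculation: for n forecasts whose errors have common variance sigma^2 and pairwise correlation rho, the variance of their average is sigma^2 * (1 + (n-1) * rho) / n, so correlated forecasts reduce uncertainty far less than independent ones. The numbers below are hypothetical:

```python
def variance_of_average(n: int, sigma2: float, rho: float) -> float:
    """Variance of the mean of n equally correlated forecast errors."""
    return sigma2 * (1.0 + (n - 1) * rho) / n

indep = variance_of_average(n=4, sigma2=1.0, rho=0.0)  # four independent sources
corr = variance_of_average(n=4, sigma2=1.0, rho=0.6)   # four correlated sources
print(indep, corr)  # 0.25 vs roughly 0.7
```

The same logic is why a Bayesian averaging model that wrongly assumes independence will be overconfident when its information sources in fact overlap.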
Uncorrelated information may be of more value in forming an updated probability, and thus should perhaps be given more weight than a correlated source of information. The traders used in the analysis have high skill: they correctly predict hurricane landfall in about 84% of their trades. Thus, it remains to be seen whether an analysis like this might work for more uninformed traders, perhaps such as traders in global warming permits.

Acknowledgements

Thank you to the two reviewers and the editors for their comments on this paper. A lot, if not most, of what I know about risk and uncertainty has been learned from my collaborators and trusted colleagues and contacts over the years. I hope I did not forget any key people, but these start with Mary Riddel and Paul Jakus, who got me started on this exploration, largely because we pondered our own and others' motives and non-monetary rewards associated with the risky sport of rock climbing. Alphabetically, the group of people to thank also includes Bob Berrens, Trudy Cameron, Simone Cerroni, Catherine Eckel, Therese Grijalva, Paan Jindapon, Jim Hammitt, Jayson Lusk, Sandra Notaro, Bill Neilson, To Nguyen, Marco Palma, Roberta
Raffaelli, Lisa Robinson, Jason Shogren, Kerry Smith, Mark Walker, and Richard Woodward. Let this chapter serve as an expression of my thanks to them, as in a way, they are all coauthors of parts of what I have written here. It is quite possible that some of what I wrote is directly based on things that they have told me, or discussed with me, over the years. I also have to thank various people over the years who did risky things and liked to talk about it: John Hollberg and Edward Morey (who I imagine still ski the trees without wearing a helmet), Chip Lee (my most regular climbing partner for the past 30 years), and several other cyclists and climbing partners. Finally, I'm really not sure a "thank you" is appropriate, but I've certainly had to think hard about many things upon receiving some, uh, "interesting" comments from some reviewers of my past papers. Some of these reviewers have quite a characteristic and identifying tone, but are best left un-named.
References/Further Reading

Adamowicz, W., Dupont, D., Krupnick, A., & Zhang, J. 2011. Valuation of cancer and microbial disease risk reductions in municipal drinking water: An analysis of risk context using multiple valuation methods. Journal of Environmental Economics and Management, 61(2), 213-226.

Adler, M. D., Hammitt, J. K., & Treich, N. 2014. The social value of mortality risk reduction: VSL versus the social welfare function approach. Journal of Health Economics, 35, 82-93.

Åkerman, J., Johnson, F. R., & Bergman, L. 1991. Paying for safety: Voluntary reduction of residential radon risks. Land Economics, 435-446.

Akter, S. and J. Bennett. 2009. Estimating non-market values under scenario and policy ambiguity: The case of climate change mitigation in Australia. Australian National University, research report #32 (June).

Akter, S., Bennett, J., & Akhter, S. 2008. Preference uncertainty in contingent valuation. Ecological Economics, 67(3), 345-351.

Alberini, A. 2005. What is a life worth? Robustness of VSL values from contingent valuation surveys. Risk Analysis, 25(4), 783-800.

Alberini, A., Cropper, M., Krupnick, A., & Simon, N. B. 2004. Does the value of a statistical life vary with age and health status? Evidence from the US and Canada. Journal of Environmental Economics and Management, 48(1), 769-792.

Alberini, Anna, M. Cropper, A. Krupnick, N.B. Simon. 2006. "Willingness to pay for mortality risk reductions: Does latency matter?" J. of Risk and Uncertainty 32: 231-45.
Alfnes, F., & Rickertsen, K. 2003. European consumers' willingness to pay for US beef in experimental auction markets. American Journal of Agricultural Economics, 85(2), 396-405.

Allen, F. W. 1987. Towards a holistic appreciation of risk: The challenge for communicators and policymakers. Science, Technology, and Human Values, 138-143.

Andersen, S., J. Fountain, G.W. Harrison, A.R. Hole and E.E. Rutström. 2012. Inferring beliefs as subjectively imprecise probabilities. Theory and Decision 73: 161-84.

Andersen, S., Fountain, J., Harrison, G. W., & Rutström, E. E. 2014. Estimating subjective probabilities. Journal of Risk and Uncertainty, 48(3), 207-229.

Anderson, Lisa and J. Mellor. 2008. Predicting health behaviors with an experimental measure of risk. J. of Health Economics 27: 1260-74.

Andreoni, J., & Sprenger, C. 2012. Risk preferences are not time preferences. The American Economic Review, 102(7), 3357-3376.

Arrow, K. J. 1964. The role of securities in the optimal allocation of risk-bearing. The Review of Economic Studies, 91-96.

__________ 1971. Essays in the Theory of Risk-Bearing. Chicago, IL: Markham Publishing Company.

Baker, Justin; M. Riddel; W.D. Shaw; and R.T. Woodward. 2009. "Explaining changes in subjective hurricane risks as time passes: An analysis of a sample of Katrina evacuees." Journal of Risk Research, 12 (1/January): 59-74.
Bargiacchi, R. 2003. Climate change scenarios and the precautionary principle. In Risk and Uncertainty in Environmental and Resource Economics, ed. Justus Wesseler, Hans-Peter Weikard, and Robert D. Weaver. London: Edward Elgar Publishing.

Barnes, K. 2010. Clock ticking for an Istanbul earthquake: A wake-up call for seismic-hazards preparedness in Turkey. Nature (News), doi:10.1038/news.2010.121.

Barrieu, P. and B. Sinclair-Desgagné. 2006. On precautionary policies. Management Science 52: 1145-54.

Bassett, W. F., & Lumsdaine, R. L. 2001. Probability limits: Are subjective assessments adequately accurate? Journal of Human Resources, 327-363.

Ben-Tal, A., L. El Ghaoui, and A. Nemirovski. 2009. Robust Optimization. Princeton: Princeton University Press.

Berger, M. C., Blomquist, G. C., Kenkel, D., & Tolley, G. S. 1987. Valuing changes in health risks: A comparison of alternative measures. Southern Economic Journal, 967-984.

Bernknopf, R. L., Brookshire, D. S., & Thayer, M. A. 1990. Earthquake and volcano hazard notices: An economic evaluation of changes in risk perceptions. Journal of Environmental Economics and Management, 18(1), 35-49.

Bier, V. M., Haimes, Y. Y., Lambert, J. H., Matalas, N. C., & Zimmerman, R. 1999. A survey of approaches for assessing and managing the risk of extremes. Risk Analysis, 19(1), 83-94.
Bin, O., & Landry, C. E. 2013. Changes in implicit flood risk premiums: Empirical evidence from the housing market. Journal of Environmental Economics and Management, 65(3), 361-376.

Binswanger, H. P. 1980. Attitudes toward risk: Experimental measurement in rural India. American Journal of Agricultural Economics, 62(3), 395-407.

Bismut, J. M. 1975. Growth and optimal intertemporal allocation of risks. Journal of Economic Theory, 10(2), 239-257.

Blomquist, G. 1981. The value of human life: An empirical perspective. Economic Inquiry, 19(1), 157-164.

Blomquist, G. C. 2004. Self-protection and averting behavior, values of statistical lives, and benefit cost analysis of environmental policy. Review of Economics of the Household, 2(1), 89-110.

Bockstael, N. E., & Opaluch, J. J. 1983. Discrete modelling of supply response under uncertainty: The case of the fishery. Journal of Environmental Economics and Management, 10(2), 125-137.

Bostrom, A., Morgan, M. G., Fischhoff, B., & Read, D. 1994. What do people know about global climate change? 1. Mental models. Risk Analysis, 14(6), 959-970.

Bostrom, A., O'Connor, R. E., Böhm, G., Hanss, D., Bodi, O., Ekström, F., & Sælensminde, I. 2012. Causal thinking and support for climate change policies: International survey findings. Global Environmental Change, 22(1), 210-222.
Bosworth, R., Cameron, T. A., & DeShazo, J. R. 2009. Demand for environmental policies to improve health: Evaluating community-level policy scenarios. Journal of Environmental Economics and Management, 57(3), 293-308.

Boyce, R. R., Brown, T. C., McClelland, G. H., Peterson, G. L., & Schulze, W. D. 1992. An experimental examination of intrinsic values as a source of the WTA-WTP disparity. The American Economic Review, 1366-1373.

Breyer, Stephen. 1993. Breaking the Vicious Circle: Toward Effective Risk Regulation. Cambridge, MA: Harvard University Press.

Brier, G. W. 1950. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1), 1-3.

Brookshire, David S. et al. 1985. A test of the expected utility model: Evidence from earthquake risks. J. of Political Economy 93 (2): 369-89.

Buschena, David E. et al. 2001. Valuing non-marketed goods: The case of elk permit lotteries. J. of Environmental Economics and Management 41: 33-43.

Camerer, Colin & Martin Weber. 1992. "Recent developments in modeling preferences: Uncertainty and ambiguity." Journal of Risk and Uncertainty, 5, 325-370.

Cameron, T. A. 2005a. Individual option prices for climate change mitigation. Journal of Public Economics, 89(2), 283-301.

_______________ 2005b. Updating subjective risks in the presence of conflicting information: An application to climate change. Journal of Risk and Uncertainty, 30(1), 63-97.
_______________ 2010. Euthanizing the value of a statistical life. Review of Environmental Economics and Policy, 4 (2/summer): 161-178.

_______________ 2014. Valuing morbidity in environmental benefit-cost analysis. Annu. Rev. Resour. Econ., 6(1), 249-272.

Cameron, T. A., & DeShazo, J. R. 2013. Demand for health risk reductions. Journal of Environmental Economics and Management, 65(1), 87-109.

Carlsson, F., He, H., Martinsson, P., Qin, P., & Sutter, M. 2012. Household decision making in rural China: Using experiments to estimate the influences of spouses. Journal of Economic Behavior & Organization, 84(2), 525-536.

Casman, Elizabeth A.; B. Fischhoff; C. Palmgren; M. Small; F. Wu. 2000. An integrated risk model of a drinking-water-borne Cryptosporidiosis outbreak. Risk Analysis 20 (4): 495-502.

Cerroni, S., S. Notaro and W.D. Shaw. 2012. Estimating and eliciting valid subjective probabilities: An experimental investigation using the exchangeability method. Journal of Economic Behavior and Organization 84 (September): 201-15.

Cerroni, S., S. Notaro and W.D. Shaw. 2013. How many bad apples are in a bunch? Perceived risks of pesticide residues in apples: An experimental investigation. Food Policy, 41 (August): 112-123.

Cerroni, S.; S. Notaro; R. Raffaelli, and W.D. Shaw. 2015. The effect of good and bad news on risk and time preferences: A choice experiment model of reduction in pesticide residue risk. Revised discussion paper, University of Aberdeen (April).
Chavas, Jean-Paul and M.T. Holt. 1990. Acreage decisions under risk: The case of corn and soybeans. American J. of Agricultural Economics 72 (August): 529-38.

Chesley, G. R. 1978. Subjective probability elicitation techniques: A performance comparison. Journal of Accounting Research, 225-241.

Clark, C.W. and G.P. Kirkwood. 1986. Optimal harvesting of an uncertain resource stock and the value of stock surveys. J. of Environmental Economics and Management 13: 235-44.

Coble, K. H., & Lusk, J. L. 2010. At the nexus of risk and time preferences: An experimental investigation. Journal of Risk and Uncertainty, 41(1), 67-79.

Conrad, J. M., & Kotani, K. 2005. When to drill? Trigger prices for the Arctic National Wildlife Refuge. Resource and Energy Economics, 27(4), 273-286.

Conte, Anna; John D. Hey; P.G. Moffatt. 2011. "Mixture models of choice under risk." J. of Econometrics 162: 79-88.

Corso, Phaedra; James K. Hammitt, and John D. Graham. 2001. "Valuing mortality risk reduction: Using visual aids to improve the validity of contingent valuation." Journal of Risk and Uncertainty, 23(2): 165-84.

Cressie, N., & Morgan, P. B. 1993. The VPRT: A sequential testing procedure dominating the SPRT. Econometric Theory, 9(03), 431-450.

Cropper, M.; J.K. Hammitt; L. Robinson. 2011. Valuing mortality risk reductions: Progress and challenges. Annual Rev. of Resour. Econ., 3: 313-36.
Curley, Shawn P. and J. Frank Yates. 1985. The center and range of the probability interval as factors affecting ambiguity preferences. Organizational Behavior and Human Decision Processes 36: 273-87.

Curry, J. A., Webster, P. J., & Holland, G. J. 2006. Mixing politics and science in testing the hypothesis that greenhouse warming is causing a global increase in hurricane intensity. Bulletin of the American Meteorological Society, 87(8), 1025-1037.

Curtis, Kynda R. and K. Moeltner. 2007. The effect of consumer risk perceptions on the propensity to purchase genetically modified foods in Romania. Agribusiness 23 (2): 263-78.

Cutchin, Malcolm et al. 2008. Concern about petrochemical health risk before and after a refinery explosion. Risk Analysis 28 (3): 589-598.

Daigneault, A. J., Miranda, M. J., & Sohngen, B. 2010. Optimal forest management with carbon sequestration credits and endogenous fire risk. Land Economics, 86(1), 155-172.

Dekker, T.; R. Brouwer; M. Hofkes; K. Moeltner. 2011. The effect of risk context on the value of a statistical life: A Bayesian meta-model. Environmental and Resource Economics 49: 597-624.

Desvousges, W.H., V.K. Smith, and A. Fisher. 1987. Option price estimates for water quality improvements: A contingent valuation study for the Monongahela River. J. of Environmental Economics and Management 14: 248-67.

Diecidue, E., Schmidt, U., & Wakker, P. P. 2004. The utility of gambling reconsidered. Journal of Risk and Uncertainty, 29(3), 241-259.
Diecidue, E., & Wakker, P. P. 2001. On the intuition of rank-dependent utility. Journal of Risk and Uncertainty, 23(3), 281-298.

Dickie, Mark and Shelby Gerking. 1996. "Formation of risk beliefs, joint production and willingness to pay to avoid skin cancer." Review of Economics and Statistics, 451-63.

Dosman, Donna M.; W.L. Adamowicz; S.E. Hrudey. 2001. "Socioeconomic determinants of health- and food safety-related risk perceptions." Risk Analysis 21 (2): 307-17.

Drichoutis, A. C., & Nayga, R. M. 2013. Eliciting risk and time preferences under induced mood states. The Journal of Socio-Economics, 45, 18-27.

Edwards, S. F. 1988. Option prices for groundwater protection. Journal of Environmental Economics and Management, 15(4), 475-487.

Eeckhoudt, L., Gollier, C., & Schlesinger, H. 2005. Economic and Financial Decisions under Risk. Princeton University Press.

Eiswerth, M. E., & Van Kooten, G. C. 2002. Uncertainty, economics, and the spread of an invasive plant species. American Journal of Agricultural Economics, 1317-1322.

Ellsberg, D. 1961. Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics 75: 643-669.

Englin, Jeffrey, P. Boxall, and G. Hauer. 2000. An empirical examination of optimal rotations in a multiple-use forest in the presence of fire risk. J. of Agricultural and Resource Economics 25 (1): 14-27.
Epstein, Larry G. 1999. A definition of uncertainty aversion. Review of Economic Studies 66: 579-608.

Etner, J., M. Jeleva and J.M. Tallon. 2012. Decision theory under ambiguity. Journal of Economic Surveys 26: 234-270.

Faucheux, S. and G. Froger. 1995. Decision-making under environmental uncertainty. Ecological Economics 15: 29-42.

Fiore, S. M., Harrison, G. W., Hughes, C. E., & Rutström, E. E. 2009. Virtual experiments and environmental policy. Journal of Environmental Economics and Management, 57(1), 65-86.

Forsyth, M. 1997. The economics of site investigation for groundwater protection: Sequential decision making under uncertainty. Journal of Environmental Economics and Management, 34(1), 1-31.

Fox, J. A., Shogren, J. F., Hayes, D. J., & Kliebenstein, J. B. 1998. CVM-X: Calibrating contingent values with experimental auction markets. American Journal of Agricultural Economics, 80(3), 455-465.

Fox, John A.; D.J. Hayes; J.F. Shogren. 2002. Consumer preferences for food irradiation: How favorable and unfavorable descriptions affect preferences for irradiated pork in experimental auctions. J. of Risk and Uncertainty 21 (1): 75-95.

Friedman, Thomas L. 2009. Going Cheney on climate. New York Times, December 8th.

Gangadharan, L., & Nemes, V. 2009. Experimental analysis of risk and uncertainty in provisioning private and public goods. Economic Inquiry, 47(1), 146-164.
Gayer, T., Hamilton, J. T., & Viscusi, W. K. 2000. Private values of risk tradeoffs at superfund sites: Housing market evidence on learning about risk. Review of Economics and Statistics, 82(3), 439-451.

Gierlach, E., Belsher, B. E., & Beutler, L. E. 2010. Cross-cultural differences in risk perceptions of disasters. Risk Analysis, 30(10), 1539-1549.

Gilboa, I. 1987. Expected utility with purely subjective non-additive probabilities. Journal of Mathematical Economics, 16(1), 65-88.

Gilboa, Itzhak; A.W. Postlewaite; David Schmeidler. 2008. Probability and uncertainty in economic modeling. J. of Economic Perspectives 22 (3/Summer): 173-88.

Gilboa, I., & Schmeidler, D. 1989. Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18(2), 141-153.

Glenk, K. and S. Colombo. 2011. How sure can you be? A framework for considering delivery uncertainty in benefit assessments based on stated preference methods. J. Agricultural Economics 62: 25-46.

_____________________ 2013. Modeling outcome-related risk in choice experiments. Australian J. of Agricultural and Resource Economics 57: 559-78.

Goldstein, W. M., & Einhorn, H. J. 1987. Expression theory and the preference reversal phenomena. Psychological Review, 94(2), 236.

Gollier, Christian. 2001. The Economics of Risk and Time. Cambridge: MIT Press.
Gollier, C., Hammitt, J. K., & Treich, N. 2013. Risk and choice: A research saga. Journal of Risk and Uncertainty, 47(2), 129-145.

Gonzalez, Richard and George Wu. 1999. On the shape of the probability weighting function. Cognitive Psychology 38: 129-66.

Graham, D. A. 1981. Cost-benefit analysis under uncertainty. The American Economic Review, 715-725.

Grijalva, Therese; J. Lusk; R. Rong; and W.D. Shaw. 2015. Convex time budgets and individual discount rates in the long run. Discussion paper, Weber State University.

Grijalva, Therese; R. Berrens; W.D. Shaw. 2011. Species preservation versus development: An experimental investigation under uncertainty. Ecological Economics, 70 (March): 995-1005.

Hallstrom, D. G., & Smith, V. K. 2005. Market responses to hurricanes. Journal of Environmental Economics and Management, 50(3), 541-561.

Hammitt, James K. 2000. Valuing mortality risk: Theory and practice. Environmental Science and Technology 34: 1396-1400.

Hammitt, J.K. 2007. "Valuing changes in mortality risk: Lives saved versus life years saved." Review of Environmental Economics and Policy 1: 228-240.

Hammitt, James K. and John D. Graham. 1999. Willingness to pay for health protection: Inadequate sensitivity to probability? J. of Risk and Uncertainty 8: 33-62.
Hammitt, James K. and N. Treich. 2007. Statistical vs. identified lives in benefit-cost analysis. J. of Risk and Uncertainty 35: 45-66.

Hammitt, James K. and Y. Zhou. 2006. The economic value of air-pollution-related health risks in China: A contingent valuation study. Environmental and Resource Economics 33: 399-423.

Hand, M. S., Wibbenmeyer, M. J., Calkin, D. E., & Thompson, M. P. 2015. Risk preferences, probability weighting, and strategy tradeoffs in wildfire management. Risk Analysis, 35(10), 1876-1891.

Hanley, Nick; J.F. Shogren; B. White. 2007. Environmental Economics in Theory and Practice (2nd Edition). New York: Palgrave Macmillan.

Hanaoka, C. et al. 2014. Do risk preferences change? Evidence from panel data before and after the Great East Japan earthquake. Discussion paper.

Hansen, L. P., & Sargent, T. J. 2001. Robust control and model uncertainty. American Economic Review, 91(2), 60-66.

Harrison, G. W., & List, J. A. 2004. Field experiments. Journal of Economic Literature, 1009-1055.

Harrison, G. W., List, J. A., & Towe, C. 2007. Naturally occurring preferences and exogenous laboratory experiments: A case study of risk aversion. Econometrica, 75(2), 433-458.

Harrison, G. W., & Rutström, E. E. 2006. Eliciting subjective beliefs about mortality risk orderings. Environmental and Resource Economics 33: 325-46.

Harrison, G. W. and E.E. Rutström. 2008. Risk aversion in the laboratory. In: Cox, J. and G.W. Harrison (eds), Research in Experimental Economics 12: 41-196.
Hensher, D.A., W.H. Greene, and Z. Li. 2011. Embedding risk attitude and decision weights in non-linear logit to accommodate time variability in the value of expected travel time savings. Transportation Research Part B, 45(7): 954-972.
Heitz, Carine; S. Spaeter; A. Auzet; and S. Glatron. 2009. Local stakeholders' perception of muddy flood risk and implications for management approaches: A case study in Alsace (France). Land Use Policy 26: 443-51.
Henriques, I. and P. Sadorsky. 2010. Can environmental sustainability be used to manage energy price risk? Energy Economics 32: 1131-1138.
Henriques, I., & Sadorsky, P. 2011. The effect of oil price volatility on strategic investment. Energy Economics, 33(1), 79-87.
Ho, Jung-Chun, et al. 2014. Perceived environmental and health risks of nuclear energy in Taiwan after the Fukushima nuclear disaster. Environment International 73: 295-303.
Hoffrage, U.; S. Lindsey; R. Hertwig; and G. Gigerenzer. 2000. Communicating statistical information. Science 290: 2261-62.
Holt, C. A., & Laury, S. K. 2002. Risk aversion and incentive effects. American Economic Review, 92(5), 1644-1655.
Horowitz, J. K., & McConnell, K. E. 2000. Values elicited from open-ended real experiments. Journal of Economic Behavior & Organization, 41(3), 221-237.

Huang, Chao; M. Burris; W.D. Shaw. 2015. "Differences in probability weighting for individual travelers: A Managed Lane Choice application." Revised for the Journal of Transportation, July.
Hurd, M. D., McFadden, D., & Merrill, A. 2001. Predictors of mortality among the elderly. In Themes in the Economics of Aging (pp. 171-198). University of Chicago Press.
Hurley, T.M. and J.F. Shogren. 2005. An experimental comparison of induced and elicited beliefs. J. of Risk and Uncertainty 30(2): 169-88.
Ida, T., and Goto, R. 2009. Simultaneous measurement of time and risk preferences: Stated preference discrete choice modeling analysis depending on smoking behavior. International Economic Review, 50(4), 1169-1182.
Itaoka, K., Saito, A., Krupnick, A., Adamowicz, W., & Taniguchi, T. 2006. The effect of risk characteristics on the willingness to pay for mortality risk reductions from electric power generation. Environmental and Resource Economics, 33(3), 371-398.
Jakus, Paul M. and W.D. Shaw. 2003. "Perceived Hazard and Product Choice: An Application to Recreational Site Choice." J. of Risk and Uncertainty, Vol. 26, No. 1: 77-92.
Jakus, Paul M.; W.D. Shaw; T.N. Nguyen; and M. Walker. 2009. Risk perceptions of arsenic in tap water and bottled water consumption. Water Resources Research 45. DOI: 10.1029/2008WR007427.
Jianakoplos, Nancy, and Alexandra Bernasek. 1998. Are women more risk averse? Economic Inquiry 36: 620-30.

Jindapon, Paan and W.D. Shaw. 2008. "Option Price Without Expected Utility." Economics Letters, Vol. 100 (September): 408-410.
Jones-Lee, M. W. 1991. Altruism and the value of other people's safety. Journal of Risk and Uncertainty, 4(2), 213-219.
Kahneman, Daniel and Amos Tversky. 1979. "Prospect Theory: An Analysis of Decision under Risk." Econometrica, 47(2): 263-92.
Kalisa, T.; M. Riddel; and W. D. Shaw. 2015. Willingness to pay for arsenic-related risks: A special regressor approach. J. of Environmental Economics and Policy, published online 6-25-15.
Kelly, D. L., Letson, D., Nelson, F., Nolan, D. S., & Solís, D. 2012. Evolution of subjective hurricane risk perceptions: A Bayesian approach. Journal of Economic Behavior & Organization, 81(2), 644-663.
Kennedy, C. A. 2002. Revealed preference valuation compared to contingent valuation: radon-induced lung cancer prevention. Health Economics, 11(7), 585-598.
Klibanoff, P., Marinacci, M., & Mukerji, S. 2009. Recursive smooth ambiguity preferences. Journal of Economic Theory, 144(3), 930-976.
Knight, F. 1921. Risk, Uncertainty, and Profit. Hart, Schaffner, and Marx, Boston MA.
Konishi, Y. and K. Adachi. 2011. A framework for estimating willingness-to-pay to avoid endogenous environmental risks. Resource and Energy Economics 33: 130-154.
Krutilla, J. V. 1967. Conservation reconsidered. The American Economic Review, 777-786.

Kunreuther, H., Novemsky, N., & Kahneman, D. 2001. Making low probabilities useful. Journal of Risk and Uncertainty, 23(2), 103-120.
Lambert, N. S., Pennock, D. M., & Shoham, Y. 2008 (July). Eliciting properties of probability distributions. In Proceedings of the 9th ACM Conference on Electronic Commerce (pp. 129-138). ACM.
Leung, B., Lodge, D. M., Finnoff, D., Shogren, J. F., Lewis, M. A., & Lamberti, G. 2002. An ounce of prevention or a pound of cure: bioeconomic risk analysis of invasive species. Proceedings of the Royal Society of London, Series B: Biological Sciences, 269(1508), 2407-2413.
Lewbel, A., Y. Dong and T. Yang. 2012. Comparing features of convenient estimators for binary choice models with endogenous regressors. Canadian Journal of Economics, 45(3): 809-29.
Liu, E. M. 2013. Time to change what to sow: Risk preferences and technology adoption decisions of cotton farmers in China. Review of Economics and Statistics, 95(4), 1386-1403.
Lobb, Alexandra. 2011. Risk and uncertainty in environmental economics: From theory to practice. Presented at the 55th annual conference of the AARES, Melbourne, Australia.
Lusk, J.L. and K.H. Coble. 2005. Risk perceptions, risk preference, and acceptance of risky food. Amer. J. of Agri. Econ. 87(2/May): 393-404.
Lusk, J. L., & Schroeder, T. C. 2004. Are choice experiments incentive compatible? A test with quality differentiated beef steaks. American Journal of Agricultural Economics, 86(2), 467-482.

Macmillan, D., Hanley, N., & Buckland, S. 1996. A contingent valuation study of uncertain environmental gains. Scottish Journal of Political Economy, 43(5), 519-533.
Magat, Wesley A., W. Kip Viscusi, and Joel Huber. 1996. "A Reference Lottery Metric for Valuing Health." Management Science 42(8), 1118-1130.
Manski, Charles F. 2004. Measuring expectations. Econometrica 72(5/Sept.): 1329-1376.
Margolis, Michael and E. Naevdal. 2008. Safe minimum standards in dynamic resource problems: Conditions for living on the edge of risk. Environmental and Resource Economics 40: 401-23.
McCarl, B. A., Villavicencio, X., & Wu, X. 2008. Climate change and future analysis: is stationarity dying? American Journal of Agricultural Economics, 90(5), 1241-1247.
McKelvey, R., Miller, K., & Golubtsov, P. 2003. Fish-wars revisited: a stochastic incomplete-information harvesting game. Risk and Uncertainty in Environmental and Natural Resource Economics, 93-112.
Meinshausen, M.; N. Meinshausen; W. Hare; S.C.B. Raper; K. Frieler; R. Knutti; D.J. Frame; and M.R. Allen. 2009. Greenhouse-gas emission targets for limiting global warming to 2°C. Nature 458: 1158-62.
Michel-Kerjan, E. et al. (8 coauthors). 2013. Catastrophe risk models for evaluating disaster risk reduction investments in developing countries. Risk Analysis 33(6): 984-999.

Milly, P.C.D., J. Betancourt, M. Falkenmark, R.M. Hirsch, Z.W. Kundzewicz, D. P. Lettenmaier, and R.J. Stouffer. 2008. "Climate Change: Stationarity is Dead: Whither Water Management?" Science 319: 573-574.
Mueller, M. J. 1989. Natural resource economics under uncertainty: Effects of futures markets. Natural Resource Modeling, 3, 261-287.
Mrozek, J. R., & Taylor, L. O. 2002. What determines the value of life? A meta-analysis. Journal of Policy Analysis and Management, 21(2), 253-270.
Naoi, Michio et al. 2009. Earthquake risk and housing prices in Japan: Evidence before and after massive earthquakes. Regional Science and Urban Economics 39: 658-69.
Nelson, R. G., & Bessler, D. A. 1989. Subjective probabilities and scoring rules: Experimental evidence. American Journal of Agricultural Economics, 71(2), 363-369.
Nguyen, T.N.; W.D. Shaw; R.T. Woodward; R. Paterson; and K. Boyle. 2007. "An Empirical Study of Option Prices for Moose Hunting Permits: Lotteries and Guarantees." Ecological Economics, Vol. 63 (Issues 2-3, August): 476-84.
Nguyen, T.N., P.M. Jakus, M. Riddel, and W.D. Shaw. 2010. An empirical model of perceived mortality risks for selected United States arsenic hot spots. Risk Analysis 30(10): 1550-1562.
Nilim, A. and L. El Ghaoui. 2005. Robust control of Markov decision processes with uncertain transition matrices. Operations Research 53(5): 780-98.
Nyarko, Y. and A. Schotter. 2002. An experimental study of belief learning using elicited beliefs. Econometrica 70: 971-1005.

Oechssler, J. and A. Roomets. 2015. A test of mechanical ambiguity. J. of Economic Behavior and Organization 119: 153-62.
Offerman, T., Sonnemans, J., Van de Kuilen, G., & Wakker, P. P. 2009. A truth serum for non-Bayesians: Correcting proper scoring rules for risk attitudes. The Review of Economic Studies, 76(4), 1461-1489.
Olszewski, W. 2007. Preferences over sets of lotteries. Review of Economic Studies 74: 567-95.
Palfrey, T. R., & Wang, S. W. 2009. On eliciting beliefs in strategic games. Journal of Economic Behavior & Organization, 71(2), 98-109.
Peltzman, S. 1975. The effects of automobile safety regulation. The Journal of Political Economy, 677-725.
Petrolia, D. R., Landry, C. E., & Coble, K. H. 2013. Risk preferences, risk perceptions, and flood insurance. Land Economics, 89(2), 227-245.
Pindyck, R. S. 1980. Uncertainty and exhaustible resource markets. The Journal of Political Economy, 1203-1225.
_______________. 1984. Uncertainty in the theory of renewable resource markets. Review of Economic Studies 51: 289-303.
________________. 2007. Uncertainty in environmental economics. Review of Environmental Economics and Policy 1(1/Winter): 45-65.
________________. 2012. Uncertain outcomes and climate policy. J. of Environmental Economics and Management 63: 289-303.

Pollak, Robert A. 1998. Imagined risks and cost-benefit analysis. American Economic Review (Papers and Proceedings), 88(2): 376-379.
Pollan, Michael. 2001. Precautionary Principle. New York Times, December 9th.
Portney, Paul. 1992. Trouble in Happyville. J. of Policy Analysis and Management, 11(1/Winter): 131-32.
Pratt, J. 1964. Risk aversion in the small and in the large. Econometrica 32(1): 122-36.
Pratt, John W. and R.J. Zeckhauser. 1996. Willingness to pay and the distribution of risk and wealth. J. of Political Economy 104(4): 747-63.
Prestemon, J. P., Pye, J. M., Butry, D. T., Holmes, T. P., & Mercer, D. E. 2002. Understanding broadscale wildfire risks in a human-dominated landscape. Forest Science, 48(4), 685-693.
Provencher, Bill; D.J. Lewis; K. Anderson. 2012. Disentangling preferences and expectations in stated preference analysis with respondent uncertainty: The case of invasive species prevention. J. of Environmental Economics and Management 64: 169-82.
Pyke, C. R., Bierwagen, B. G., Furlow, J., Gamble, J., Johnson, T., Julius, S., & West, J. 2007. A decision inventory approach for improving decision support for climate change impact assessment and adaptation. Environmental Science & Policy, 10(7), 610-621.
Quiggin, John. 1982. A theory of anticipated utility. J. of Economic Behavior and Organization, 3: 323-43.

Rabin, Matthew. 2000. Risk aversion and expected-utility theory: A calibration theorem. Econometrica 68(5): 1281-1292.
Ramsey, Frank P. 1931. Truth and probability. In R.B. Braithwaite (ed.), The Foundations of Mathematics and Other Logical Essays. Routledge, London: 156-98.
Randall, A. 2009. We Already Have Risk Management--Do We Really Need the Precautionary Principle? International Review of Environmental and Resource Economics, 3(1), 39-74.
Randall, Alan. 2011. Risk and Precaution. Cambridge University Press, New York.
Ranger, N. and F. Niehörster. 2012. Deep uncertainty in long-term hurricane risk: Scenario generation and implications for future climate experiments. Global Environmental Change, 22(3): 703-12.
Ready, Richard C.; P.A. Champ; J.L. Lawson. 2010. Using respondent uncertainty to mitigate hypothetical bias in a stated choice experiment. Land Economics 82(2/May): 363-81.
Reed, W.J. 1984. The effects of the risk of fire on the optimal rotation of a forest. J. of Environmental Economics and Management 11: 180-90.
Reed, W.J. and H.R. Clarke. 1990. Harvest decisions and asset valuation for biological resources exhibiting size-dependent stochastic growth. International Economic Review 31: 147-69.
Reynolds, T. W., Bostrom, A., Read, D., & Morgan, M. G. 2010. Now what do people know about global climate change? Survey studies of educated laypeople. Risk Analysis, 30(10), 1520-1538.

Riddel, M. 2009. Risk perception, ambiguity, and nuclear-waste transport. Southern Economic Journal 75: 781-97.
Riddel, M. 2011. Uncertainty and measurement error in welfare models for risk changes. Journal of Environmental Economics and Management 61: 341-54.
Riddel, M. 2012. Comparing risk preferences over financial and environmental lotteries. Journal of Risk and Uncertainty, 45(2), 135-157.
Riddel, M. 2014. How do long-shot outcomes affect preferences for climate-change mitigation? Southern Economic Journal 80(3): 752-771.
Riddel, Mary and D. Hales. 2014. Risk misperceptions and selection in insurance markets: An application to demand for cancer insurance. Discussion paper, Univ. of Nevada, Las Vegas.
Riddel, M., & Kolstoe, S. 2013. Heterogeneity in life-duration preferences: Are risky recreationists really more risk loving? Journal of Risk and Uncertainty, 46(2), 191-213.
Riddel, M., & Schwer, R. K. 2006. Winners, losers, and the nuclear-waste dilemma. Environmental and Resource Economics, 34(2), 317-338.
Riddel, Mary and W.D. Shaw. 2006. A theoretically-consistent empirical non-expected utility model of ambiguity: Nuclear waste mortality risk and Yucca Mountain. J. of Risk and Uncertainty 32(2): 131-150.
Roberts, David C.; T.A. Boyer; J.L. Lusk. 2008. Preferences for environmental quality under uncertainty. Ecological Economics 66: 584-93.

Rodgers, Gregory B. 1996. Bicyclist risks and helmet use patterns: an analysis of compensatory behavior in a risky recreational activity. Managerial and Decision Economics, 17: 493-507.
Rolfe, John and Jill Windle. 2015. Do respondents adjust their expected utility in the presence of an outcome certainty attribute in a choice experiment? Environmental and Resource Economics 60(1): 125-142.
Roseta-Palma, C. and A. Xepapadeas. 2004. Robust control in water management. J. of Risk and Uncertainty 29(1): 21-34.
Saha, Atanu. 1993. "Expo-power Utility: A Flexible Form for Absolute and Relative Risk Aversion." American J. of Agricultural Econ. 75 (November): 905-13.
Salanié, F. and N. Treich. 2009. Regulation in Happyville. Economic Journal 119 (April): 665-679.
Sadorsky, P. 2001. Risk factors in stock returns of Canadian oil and gas companies. Energy Economics 23: 17-28.
Savage, Leonard J. 1954. The Foundations of Statistics. 2nd Edition 1972. Wiley, New York.
Schmeidler, D. 1989. Subjective probability and expected utility without additivity. Econometrica, 571-587.
Schmitt, A., & Spaeter, S. 2005. Improving the prevention of environmental risks with convertible bonds. Journal of Environmental Economics and Management, 50(3), 637-657.

Schulze, W., McClelland, G., Waldman, D., & Lazo, J. 1996. Sources of bias in contingent valuation. In The Contingent Valuation of Environmental Resources: Methodological Issues and Research Needs, edited by D. Bjornstad and J. Kahn. Cheltenham, UK: Edward Elgar.
Scrogin, D.; R.P. Berrens; and A.K. Bohara. 2000. Policy changes and the demand for lottery-rationed big game licenses. J. of Agricultural and Resource Economics 25(2): 501-19.
Segal, Uzi. 1987. The Ellsberg paradox and risk aversion: An anticipated utility approach. International Econ. Review 28: 175-202.
__________. 1990. Two-stage lotteries without the reduction axiom. Econometrica 58: 349-77.
Segal, Uzi and A. Spivak. 1988. Non-expected utility risk premiums: the case of probability ambiguity. J. of Risk and Uncertainty, 1: 333-347.
Shaw, W.D., P.M. Jakus and M. Riddel. 2012. Perceived arsenic-related mortality risks for smokers and non-smokers. Contemporary Economic Policy 30 (July): 417-29.
Shaw, W. D., Nayga, R. M., & Silva, A. 2006. Health benefits and uncertainty: an experimental analysis of the effects of risk presentation on auction bids for a healthful product. Economics Bulletin, 4(20), 1-8.
Shaw, W. Douglass and R.T. Woodward. 2011. Water management, risk and uncertainty: Things we wish we knew in the twenty-first century. Western Economic Forum 9

(Winter), published online at the website for the Western Agricultural Economics Association.
______________________________. 2008. On why environmental and resource economists should care about non-expected utility models. Resource and Energy Economics 30: 66-89.
Shogren, J.F. and T.D. Crocker. 1991. Risk, self-protection, and ex ante economic value. J. of Environ. Econ. and Manage. 20: 1-15.
Shogren, J. F., & Crocker, T. D. 1999. Risk and its consequences. Journal of Environmental Economics and Management, 37(1), 44-51.
Sileo, P. W. 1995. Intransitivity of indifference, strong monotonicity, and the endowment effect. Journal of Economic Theory, 66(1), 198-223.
Slovic, Paul. 1987. Perception of risk. Science 236: 280-85.
____________. 1993. Perceived risk, trust, and democracy. Risk Analysis 13(6): 675-682.
____________. 2001. "Cigarette Smokers: Rational Actors or Rational Fools?" Chapter 6 in Smoking: Risk, Perception and Policy, Paul Slovic (ed.). London: Sage Publications.
Smith, J.B. 1980. Replenishable resource management under uncertainty: A reexamination of the U.S. Northern fishery. J. of Environmental Economics and Management 7: 209-19.
Smith, V.K. Option value: a conceptual overview. Southern Economic Journal: 654-68.

Smith, V.K. and W.H. Desvousges. 1987. An empirical analysis of the economic value of risk changes. J. of Political Economy 95(1/Feb): 89-114.
Smith, V. K., & Desvousges, W. H. 1988. The valuation of environmental risks and hazardous waste policy. Land Economics, 211-219.
Smith, Joyotee and A.M. Mandac. 1995. "Subjective versus Objective Yield Distributions as Measures of Production Risk." American J. of Agricultural Econ. 77: 152-61.
Stainforth, D. A. et al. 2005. Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433: 403-406.
Starmer, Chris. 2000. Developments in non-expected utility theory: the hunt for a descriptive theory of choice under risk. J. of Economic Literature 38(2).
Tanaka, Tomomi; C.F. Camerer; Q. Nguyen. 2010. Risk and time preferences: Linking experimental and household survey data from Vietnam. American Economic Review 100(1): 557-71.
Thiene, Mara; R. Scarpa; W.D. Shaw. 2015. "Perceived Risks of Mountain Landslides in Italy: A Stated Choice Model of Subjective Risk Reductions." Unpublished, revised discussion paper, December 2015.
Treich, Nicolas. 2010. The value of a statistical life under ambiguity aversion. Journal of Environmental Economics and Management 59: 15-26.
Tsuge, T., A. Kishimoto, and K. Takeuchi. 2005. A choice experiment approach to the valuation of mortality. Journal of Risk and Uncertainty 31: 73-95.

Tversky, A. and D. Kahneman. 1992. Advances in prospect theory: Cumulative representation of uncertainty. J. of Risk and Uncertainty 5: 297-323.
Van Houtven, G., F.R. Johnson, V. Kilambi, and A.B. Hauber. 2011. Eliciting benefit-risk preferences and probability-weighted utility using choice-format conjoint analysis. Medical Decision Making 31(3): 584-93.
Van Kooten, G. C., Krcmar, E., & Bulte, E. H. 2001. Preference uncertainty in non-market valuation: a fuzzy approach. American Journal of Agricultural Economics, 83(3), 487-500.
Vardas, G. and A. Xepapadeas. 2010. Model uncertainty, ambiguity and the precautionary principle: Implications for biodiversity management. Environmental and Resource Economics 45(3): 379-404.
Vásquez, W.F. 2012. Reliability perceptions and water storage expenditures: Evidence from Nicaragua. Water Resources Research 48(10). DOI: 10.1029/2011WR011024.
Vassanadumrongdee, S., & Matsuoka, S. 2005. Risk perceptions and value of a statistical life for air pollution and traffic accidents: evidence from Bangkok, Thailand. Journal of Risk and Uncertainty, 30(3), 261-287.
Viscusi, W. K. 1993. The value of risks to life and health. Journal of Economic Literature, 1912-1946.
Viscusi, W. K., & Aldy, J. E. 2003. The value of a statistical life: a critical review of market estimates throughout the world. Journal of Risk and Uncertainty, 27(1), 5-76.

Viscusi, W.K. and T. Gayer. 2006. Quantifying and valuing environmental health risks. In Handbook of Environmental Economics, Vol. 2, Valuing Environmental Changes, K. Mäler and J. Vincent (eds). Amsterdam: Elsevier: pp. 1029-1103.
Viscusi, W. K., & Hakes, J. 2003. Risk ratings that do not measure probabilities. Journal of Risk Research, 6(1), 23-43.
Viscusi, W. Kip; W.A. Magat; and J. Huber. 1999. Smoking status and public responses to ambiguous scientific risk evidence. Southern Economic Journal 66(2/Oct.): 250-270.
Viscusi, W. K., & Zeckhauser, R. J. 2006. The perception and valuation of the risks of climate change: a rational and behavioral blend. Climatic Change, 77(1-2), 151-177.
Von Neumann, J. and O. Morgenstern. 1947. Theory of Games and Economic Behavior. Princeton University Press.
Wakker, Peter. 1990. Under stochastic dominance Choquet expected utility and anticipated utility are identical. Theory and Decision 29: 119-32.
____________. 1994. Separating marginal utility and probabilistic risk aversion. Theory and Decision 36: 1-44.
Walters, C. 1975. Optimal harvest strategies for salmon in relation to environmental variability and uncertain production parameters. J. Fisheries Research Board of Canada 32(10): 1777-85.

Watanabe, Masahide and T. Fujimi. 2015. Evaluating change in objective ambiguous mortality probability: Valuing reduction in ambiguity size and risk level. Environmental and Resource Economics 60(1): 1-15.
Wiener, J. B., & Rogers, M. D. 2002. Comparing precaution in the United States and Europe. Journal of Risk Research, 5(4), 317-349.
Weitzman, M. L. 2009. On modeling and interpreting the economics of catastrophic climate change. The Review of Economics and Statistics, 91(1), 1-19.
Wesseler, J., Weikard, H. P., & Weaver, R. D. (editors). 2003. Risk and Uncertainty in Environmental and Natural Resource Economics. Edward Elgar Publishing Ltd. [A collection of conference papers on the topic.]
Wibbenmeyer, M. J., Hand, M. S., Calkin, D. E., Venn, T. J., & Thompson, M. P. 2013. Risk preferences in strategic wildfire decision making: a choice experiment with US wildfire managers. Risk Analysis, 33(6), 1021-1037.
Wielgus, Jeffrey et al. 2009. "Including risk in stated-preference economic valuations: Experiments on choices for marine recreation." J. of Environmental Management 90: 3401-09.
Wolfers, J., & Zitzewitz, E. 2004. Prediction markets. National Bureau of Economic Research working paper No. w10504.
Woodward, Richard T. and Richard C. Bishop. 1997. How to decide when experts disagree: uncertainty-based rules in environmental policy. Land Economics 73(4/Nov): 492-507.

Woodward, R.T. and W.D. Shaw. 2008. "Allocating Resources in an Uncertain World: Water Management and Endangered Species." American Journal of Agricultural Economics, 90(3/August): 593-605.
Woodward, R.T. and D. Tomberlin. 2014. Practical precautionary resource management using robust optimization. Environmental Management 54: 828-839.
Wu, George and Richard Gonzalez. 1996. Curvature of the probability weighting function. Management Science 42: 1676-90.
Yoder, J.K. 2004. Playing with fire: Endogenous risk in resource management. American J. of Agricultural Economics 86: 933-948.
Yoder, J.K.; A.M. Ohler; and H.H. Chouinard. 2014. What floats your boat? Preference revelation from lotteries over complex goods. J. of Environmental Econ. and Management 67: 412-30.
Zhang, J., & Adamowicz, W. L. 2011. Unraveling the choice format effect: A context-dependent random utility model. Land Economics, 87(4), 730-743.

A. Appendix: Risk Economics Frameworks

A.1 Some Definitions and Behavioral Axioms

Prospects

Many risk modelers use the word prospect in defining and explaining lotteries. Prospects are contingent payments tied to events. Many decision theorists use the notation x E y to denote an outcome x when event E occurs and outcome y when the complementary event (not E) occurs. The outcomes in prospects are money (y can be zero, of course), typically paid to a subject when the lottery is played out. As in the microeconomics of certainty, there are several desirable features that preferences under risk or uncertainty might, or should, have. Some of these are axioms of preference with close counterparts in the case of certainty, such as completeness and transitivity. One of the most important under risk is independence.

Independence

Consider a paired-outcome situation, where a simple way of thinking of the outcome pertains to a prize with an objectively known probability of winning it, as in a lottery. Let one prize be x, with probability p of winning it; in the second case the outcome is z, with probability 1 − p. Suppose that x is preferred to a third outcome y (x is a larger monetary prize), which also has probability p of happening. The independence axiom is simply that when the z prize is common to two lotteries, so that the lotteries differ only in the x and y outcomes, the common z prize should not affect preferences, or:

Eq. 2.1   p x + (1 − p) z ≻ p y + (1 − p) z whenever x ≻ y

Note that in laboratory experiments, and in the construction of some variants of risk models, the independence axiom may not hold.

Stochastic Dominance[28]

An intuitively attractive concept is that a risk-averse person would prefer the distribution for a good (say, the return on a risky investment) that has the smaller variance, where that distribution describes the risk. That is, if a larger variance explains the higher risk of two risk profiles that have the same expected value (mean return), then a risk-averse person will want to avoid the profile with the larger variance, or take a pass on the one with more risk. However, this does not tell us when one distribution is more generally preferred to another, because the expected values might in fact differ. For that we turn to the concept of first order stochastic dominance.

[Footnote 28: Again, many of my thoughts here on stochastic dominance were largely inspired, a long time ago, by notes I had obtained which were written by some economics professor. I honestly cannot recall who it was, and would otherwise credit him or her for many of these. Thank you, whoever you are! However, the concepts are quite standard and could be found in any number of textbooks on risk.]

Suppose over a compact support [a, b] we have two random variables, one associated with cumulative distribution F and another with cumulative distribution G. When can one say that F is "better" than G for a particular agent? Expected utility (EU) can be constructed as an integral over this support via the cumulative distribution function. Specifically, expected utility from the random variable with cumulative distribution F can be written U(F) = ∫_a^b u(x) dF(x), and expected utility of the random variable with cumulative distribution G is U(G) = ∫_a^b u(x) dG(x).
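As a rough numerical sketch of these integrals (my own illustration: the utility function u(x) = √x and the two CDFs below are assumptions, not from the text), U(F) can be approximated by a Riemann-Stieltjes sum over a fine grid:

```python
import math

def expected_utility(cdf, u, a, b, n=10_000):
    """Approximate U(F) = integral over [a, b] of u(x) dF(x) as a sum of
    midpoint utilities weighted by the probability mass F(x1) - F(x0)."""
    step = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * step, a + (i + 1) * step
        total += u((x0 + x1) / 2) * (cdf(x1) - cdf(x0))
    return total

u = math.sqrt                    # an assumed concave (risk-averse) utility
F = lambda x: x ** 2             # hypothetical CDF with mass toward high returns
G = lambda x: 1 - (1 - x) ** 2   # hypothetical CDF with mass toward low returns

print(round(expected_utility(F, u, 0.0, 1.0), 3))  # 0.8
print(round(expected_utility(G, u, 0.0, 1.0), 3))  # 0.533
```

Both integrals have closed forms here (0.8 and 8/15), so the sketch doubles as a check on the discretization.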

What if utility cannot be observed? Then, by only observing the properties of the cumulative distribution functions, can one know whether F dominates G or not? The answer is yes, one can know this, but only in certain cases. It could be expected, for example, that just about everyone will consider a prospect which has a lot of probability mass skewed towards higher returns to be better than a prospect which has a lot of probability mass skewed towards low returns. If such is the case, then we have what is called "first order stochastic dominance". This is defined as follows:

First Order Stochastic Dominance: F is said to dominate G according to first order stochastic dominance if

Eq. 2.2   F ≽₁ G ⟺ F(x) ≤ G(x) for all x ∈ [a, b]

This can be seen in Figure 1 below when comparing the cumulative distribution function H with either F or G. Specifically, note that for any x ∈ [a, b], F(x) ≤ H(x) and G(x) ≤ H(x), since H lies uniformly above F and G. Considering any x ∈ [a, b], we see immediately that there is more "area" under the H curve between a and x than there is under the F or G curve between a and x. Thus, there is greater probability mass under H(x) than under F(x) or G(x) for any x ∈ [a, b]; i.e., the probability that any t is less than x under H is greater than the probability that any t is less than x under F or G. Thus both F and G dominate H.
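Eq. 2.2 is easy to check numerically on a grid. A minimal sketch (the three CDFs are hypothetical stand-ins for the F, G, and H of Figure 1, not taken from the text):

```python
def fosd(F, G, a, b, n=1001):
    """True if F first-order stochastically dominates G, i.e. F(x) <= G(x)
    at every grid point of [a, b] (Eq. 2.2, checked on a finite grid)."""
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]
    return all(F(x) <= G(x) + 1e-12 for x in xs)

# Hypothetical CDFs on [0, 1]; H lies above both F and G, as in Figure 1.
F = lambda x: x ** 2
G = lambda x: x
H = lambda x: x ** 0.5

print(fosd(F, H, 0.0, 1.0))  # True: F dominates H
print(fosd(G, H, 0.0, 1.0))  # True: G dominates H
print(fosd(H, F, 0.0, 1.0))  # False: H dominates neither
```

A grid check like this cannot prove dominance over the continuum, but it is a quick way to screen candidate distributions.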

Figure 1 - First and Second Order Stochastic Dominance

However, in Figure 1 it also seems as if the probability mass of F is "less dispersed" than that of G, and thus, in a sense, F ought to be considered "less risky". An alternative criterion, then, would be to argue that a particular distribution is better than another (at least for risk-averse people) if it has unambiguously "lower" risk. This is the purpose of the second order stochastic dominance criterion, which helps rank distributions according to relative riskiness in terms of the spread of the probability mass of the cumulative distribution functions. Define T(x) as the area between the two curves F and G over [a, x], specifically T(x) = ∫_a^x [G(t) − F(t)] dt. In Figure 1, everywhere below z, G is above F, so the area between the curves below z is T1 = ∫_a^z [G(t) − F(t)] dt > 0. However, above z, G is below F, so the (negative of the) area between the curves above z is ∫_z^b [G(t) − F(t)] dt < 0. Note that T(x) is cumulative, i.e., T(x) = ∫_a^z [G(t) − F(t)] dt + ∫_z^x [G(t) − F(t)] dt for all x ∈ [a, b]; thus we add up the areas where G is above F and subtract the areas where G is below F (note: we can allow G and F to rise above and dip below each other several times over the range). Thus, in Figure 1, T(y) = ∫_a^y [G(t) − F(t)] dt is the sum of the areas T1 and T2 where, notice, T1 > 0 and T2 < 0. If T(x) ≥ 0 for all x ∈ [a, b], then the cumulative area where G is above F is at least as great as the area where F is above G, i.e., the probability mass of G is more spread out than the probability mass of F, which we may say implies that G is "riskier" than F; this is what we see in Figure 1. Conversely, if T(x) ≤ 0 for all x ∈ [a, b], then the mass of F is more spread out than that of G. Note that spread is not the same as "variance". There are many situations where, for two distributions F and G, the distribution F has the same mean and less variance than G and yet also


yields lower expected utility than G. To see this, imagine a prospect with three returns (x1, x2, x3), where x1 < x2 < x3 and x2 happens to be the mean. Then decrease x1 by a small amount and also increase x3 by a small amount so that the same mean holds. The probability of either x1 or x3 occurring can also be decreased (thereby increasing the probability of x2) so that variance is reduced. With a concave utility function, it is still entirely possible that expected utility with these changes is actually smaller as a consequence. Alternatively, imagine two skewed distributions with the same mean, the one with higher variance being skewed upwards and thus potentially preferable.

The point is that the reduction in variance, keeping the mean constant, does not necessarily mean a rise in expected utility (there are cases when a reduction in variance does increase expected utility unambiguously, but we must then restrict the kinds of utility functions we can accept, e.g. quadratic utility, or the distributions of returns must be completely describable by mean and variance).
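The three-outcome thought experiment above can be made concrete with a small numeric sketch (the outcomes, probabilities, and log utility below are my own assumed numbers, not from the text): prospect B keeps A's mean, pushes the extreme outcomes further apart while shrinking their probabilities so that the variance falls, yet expected utility falls too.

```python
import math

def moments_and_eu(outcomes, probs, u):
    """Return (mean, variance, expected utility) of a discrete prospect."""
    mean = sum(p * x for p, x in zip(probs, outcomes))
    var = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
    eu = sum(p * u(x) for p, x in zip(probs, outcomes))
    return mean, var, eu

u = math.log  # an assumed concave utility function

# Prospect A: returns (1, 4, 7). Prospect B pushes x1 down and x3 up but
# shrinks the tail probabilities, so the mean stays at 4 and variance falls.
mean_a, var_a, eu_a = moments_and_eu([1, 4, 7], [0.25, 0.5, 0.25], u)
mean_b, var_b, eu_b = moments_and_eu([0.1, 4, 7.9], [0.1, 0.8, 0.1], u)

print(round(mean_a, 6), round(var_a, 6), round(eu_a, 3))  # 4.0 4.5 1.18
print(round(mean_b, 6), round(var_b, 6), round(eu_b, 3))  # 4.0 3.042 1.085
```

Because log utility punishes the very low outcome 0.1 so heavily, B's lower variance does not rescue its expected utility, which is exactly the point of the paragraph above.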

We want to "rank" distributions F and G according to whether T(x) is positive or negative over the entire range. Thus, now the following can be defined:

Second Order Stochastic Dominance: we say F dominates G according to second order stochastic dominance, or F ≽₂ G, if T(x) = ∫_a^x [G(t) − F(t)] dt ≥ 0 for all x ∈ [a, b].

Notice in Figure 1 that it appears that F ≽₂ G by this criterion, thus F dominates G according to second order stochastic dominance. Notice also that if T(x) is computed comparing H with F or with G, then it is indeed true that F ≽₂ H and G ≽₂ H. Thus, first order stochastic dominance implies second order stochastic dominance, but not vice versa.
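For discrete distributions on a common grid, the T(x) test can be sketched numerically. The support points and probabilities below are illustrative assumptions, chosen so that F's mass is concentrated while G's is spread out with the same mean (a mean-preserving spread), as in Figure 1:

```python
import numpy as np

# Illustrative discrete distributions on a common support (hypothetical numbers).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
pF = np.array([0.0, 0.3, 0.4, 0.3, 0.0])  # F: mass concentrated near the middle
pG = np.array([0.2, 0.2, 0.2, 0.2, 0.2])  # G: same mean, mass more spread out

F_cdf = np.cumsum(pF)
G_cdf = np.cumsum(pG)

# First order: F dominates G iff F's CDF never lies above G's.
fosd = bool(np.all(F_cdf <= G_cdf + 1e-12))

# Second order: T(x) = cumulative integral of [G(t) - F(t)]; the grid step
# here is 1, so a running sum approximates the integral.
T = np.cumsum(G_cdf - F_cdf)
ssd = bool(np.all(T >= -1e-12))

print(fosd, ssd)  # no first order dominance, but F second-order dominates G
```

The CDFs cross (so neither distribution first-order dominates), yet T(x) ≥ 0 everywhere, so F ≽₂ G, mirroring the relationship between F and G in Figure 1.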

SEU Axioms

Several important axioms for SEU include:

1. Completeness: For any two bets, an individual can say which is weakly preferred (weak preference means that one bet is at least as good as the other).

2. Transitivity: A weak preference for gamble A over B, and for B over C, results in a weak preference for A over C. This is akin to transitivity in preferences under certainty.

3. The Sure Thing Principle is quite similar to the Independence Axiom for lotteries, but here pertains to unknown probabilities. Let there be four gambles (G1-G4) with the following definitions, as an example:

G1: If Horse A wins a race, win trip to Paris; if Horse A loses, win trip to Philadelphia

G2: If Horse A wins a race, win trip to London; if Horse A loses, win trip to Philadelphia

G3: If Horse A wins a race, win trip to Paris; if Horse A loses, win trip to Montreal

G4: If Horse A wins a race, win trip to London; if Horse A loses, win trip to Montreal.

Note that if Horse A loses, then G1 and G2 have in common the prize of a trip to Philadelphia, and G3 and G4 share the prize of a trip to Montreal. There is no difference within either of these pairs when Horse A loses. In both pairs the difference arises only for a win, and it is Paris versus London. This principle requires that G1 is preferred to G2 if and only if G3 is preferred to G4. The fact that the losing prize for the two pairs is Philadelphia versus Montreal should not matter, i.e. Savage (1954) assumed that one would not prefer G1 to G2 (Paris to London) and then flip preferences and prefer G4 to G3 (London to Paris). To summarize this important principle, it may be helpful to restate it: Klibanoff, Marinacci and Mukerji (2005) simply say that the sure thing principle states that when two acts are equal on a given event, then it should not matter (in terms of an individual ranking the acts based on his or her preferences) what they are equal to on that event.
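Under SEU itself the sure thing principle holds mechanically, because the common losing prize cancels out of each comparison. A minimal sketch of the horse-race example, where the utilities and the subjective win probability are hypothetical numbers of my own choosing:

```python
# Hypothetical utilities for each prize and a hypothetical subjective
# probability that Horse A wins.
u = {"Paris": 10.0, "London": 8.0, "Philadelphia": 3.0, "Montreal": 5.0}
p_win = 0.4

def seu(win_prize, lose_prize):
    # subjective expected utility of a gamble on Horse A
    return p_win * u[win_prize] + (1 - p_win) * u[lose_prize]

G1 = seu("Paris", "Philadelphia")
G2 = seu("London", "Philadelphia")
G3 = seu("Paris", "Montreal")
G4 = seu("London", "Montreal")

# The common losing prize cancels, so both differences equal
# p_win * (u["Paris"] - u["London"]), and the rankings must agree.
print(G1 - G2, G3 - G4)
print((G1 > G2) == (G3 > G4))  # True for any utilities and any p_win
```

Whatever numbers are substituted, the two differences are identical, which is why a decision maker obeying SEU cannot prefer G1 to G2 while also preferring G4 to G3.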
