Environmental Decision Analysis: Meeting the Challenges of Making Good Decisions at CALFED

We present a methodology to support environmental decision-making based on the principles of decision analysis, an analytical approach to decisions that is designed to handle both complexity and uncertainty. The examples and applications are developed in the context of CALFED, a consortium of federal and state agencies in California formed to manage the Bay-Delta to meet common resource goals. We discuss departures from traditional decision analysis in an environmental setting, which include the need to represent multiple dimensions of value and the differing values expressed by stakeholders, scientists, resource managers, and members of the public. The impetus for such an approach at CALFED is a recognized need to enhance communication between scientists and management and between program elements within the organization. In addition, the environmental decision analysis framework supports both the explicit representation of uncertainty in the decision problem and communication about risk, important elements of most environmental management decisions. We discuss the implementation of each phase of the decision analysis cycle: 1) formulation, 2) evaluation, 3) appraisal, and 4) decision-making. In Phase 1, we focus on the need to define specific metrics to capture broad environmental objectives. Specifically, we recommend the formulation of a set of common metrics for CALFED, to enhance communication and allow side-by-side comparison of projects. We introduce examples from a decision analysis pilot study undertaken at CALFED, in which common metrics for salinity, winter-run Chinook salmon survival, and habitat health were developed. We present decision support tools in accordance with Phases 2 and 3 of the cycle. In the final phase, we discuss how these tools—including expected cost-benefit analyses, value contribution diagrams, and 3-D trade-off plots—can be used to convey and compare alternatives.
While decision analysis provides a spectrum of decision support tools, we emphasize that it does not dictate a solution but rather enhances communication among all involved parties and all those affected by the trade-offs associated with different actions.

Key words: risk management, environmental planning, multi-objective decisions


INTRODUCTION
In order to meet the challenges of environmental management in the upcoming decades, in an increasingly strained and constrained environment, we will arguably need new, innovative approaches to problem-solving. Just such an innovative approach emerged in August 2000, with the creation of the CALFED Bay-Delta Program. A consortium of 25 state and federal member agencies, CALFED is one of the broadest, if not the broadest, U.S. environmental collaborations, formed with the express goal of improving ecological health and water management in the San Francisco Bay-Sacramento/San Joaquin River Delta watershed. The Director described the mission of CALFED as "[a] grand experiment in collaborative decision-making" (CALFED 2002). How is this experiment in collaborative decision-making faring? How might it be improved?
This paper focuses on the latter question and presents a methodology and set of tools designed to support decision-making at CALFED. No such decision-support framework currently exists within the organization. The potential benefit of such a framework is great, from enhancing communication between the different program elements and management, to conveying uncertainty surrounding a given decision and its outcomes, to providing the ability to perform side-by-side comparisons of numerous different actions. Each member agency of CALFED retains its sovereignty and, hence, the discretion to make final decisions. However, agencies cooperate in the different CALFED program areas. For instance, there are mechanisms within CALFED through which agencies can pool funds designated for similar purposes, which they then administer jointly. While they do not have expressly conflicting objectives, working in isolation the program areas (and agencies) could undermine each other's work or, worse perhaps, compound problems. CALFED recognizes that interdependency; a formalized decision-support methodology, which provides a consistent approach to modeling values of interest and presenting trade-offs, would facilitate coordination between groups. Such a methodology would also enhance communication, again by introducing consistency and common metrics and visualization tools, between the 11 programmatic areas, the higher-level management of the California Bay Delta Authority (CBDA), an oversight committee comprised of leaders from key state and federal member agencies, public members from key regions appointed by the Governor, two at-large members appointed by the legislature, and a member from the Bay-Delta Public Advisory Committee (BDPAC), and BDPAC itself.
CALFED grew out of a recognized need for coordination in the Bay-Delta: "[The current] structure [of the nearly two dozen state and federal agencies with independent regulatory and management responsibilities] poses a challenge to the efficient management of the Bay-Delta's resources" (CALFED 2001). The environmental decision analysis methodology we advocate here addresses that challenge.
It is important to note that while we present a number of decision support tools, including models, and simplified examples of applications of these tools, we do not intend to imply that the models are easily come by or, in some sense, "perfect" representations of reality. As with all modeling projects, there will be simplifying assumptions, and, as we will discuss, these assumptions, as always, color the results. Nor do we intend to ignore the fact that it may be difficult to reach consensus. What happens if there is general disagreement about how to proceed with the decision analysis? While this is a difficult situation, it is neither uncommon in decision analyses nor insurmountable, and we suggest there are two forces that can act to stabilize the process: (1) a strong commitment by CALFED's members to implement decision analysis and (2) the flexibility of decision analysis. Without the first-a commitment by CALFED's members to implement a formalized decision support methodology-the process is likely to be abandoned when it falters, as has happened in the past. Therefore, this commitment is crucial. Second, and one of the reasons we argue that this methodology is well suited to CALFED, decision analysis is a flexible framework. For example, the model has a number of drivers, or scenarios-and these can be varied by parties with different "futures" in mind. One party can run the model under one hypothetical future scenario, while another party tests a different set of scenarios. Parameterizations can also be adjusted to reflect different beliefs. (For a specific example, see the discussion of the decision diagram.) In what follows, we discuss the challenges of environmental management, key concepts that provide the foundation for environmental decision analysis, specific tools to support its application, and, finally, an implementation plan for use at CALFED. 
Our discussions are firmly rooted in the context of the CALFED experience and draw on work from a recently completed pilot project on decision-making, supported by members of the Delta Cross Channel-Through Delta Facility (DCCTDF) technical team. The pilot project developed common metrics and a pilot decision model; however, the methodology was not fully developed to support a complete analysis and final decision. Although the examples herein are developed in the context of CALFED, the concepts presented are generally applicable. The major challenges in environmental decision-making-uncertainty, complexity, integration, and communication-are those faced by all engaged in environmental management, including the member agencies of CALFED itself.

Decision Analysis
The methodology we develop relies on the precepts of decision analysis. Decision analysis is an approach to decision-making, pioneered in the late 1960s and early 1970s, that specifically addresses the need to incorporate uncertainty, or risk, into the process of making decisions. Viewed as one of the "quantitative advances in management" that accompanied advances in computing and technology, it can be described as the marriage of systems analysis and statistical decision theory (Howard and Matheson 1977). The central tenets of decision analysis include the systematic identification, through sensitivity analysis, of the key drivers in the decision at hand and the quantification of variables impacting the decision, where probability distributions represent the uncertainty surrounding these variables. Although there are different approaches to implementing decision analysis, the "decision analysis cycle" can be described in four specific phases, discussed in detail in the implementation section: 1) Formulate: identify objective(s) and all possible alternatives, or actions, for evaluation; 2) Evaluate: quantify the possible impacts of alternatives on metrics, using existing data, models, expert opinion, and probability distributions; 3) Appraise: employ decision analysis tools, such as decision trees, to assess the consequences of actions; and 4) Decide. Many early applications focused on decision problems in corporate settings, where the baseline for comparison between alternatives is their impact on some profit function. At the time, there were some accompanying discussions of applications to social decision-making (e.g., Howard and North 1972). Today, according to North and Renn (2005), there is still "more experience in applications to business decisions in private industry (Clemen 1996; Howard and Matheson 1989) and in areas of engineering such as safety analysis (NAE 2004)."
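As a minimal sketch of the appraisal phase, a one-stage decision tree can be "rolled back" by computing the expected value at each chance node and selecting the alternative with the best expected outcome. The alternatives, probabilities, and payoffs below are invented for illustration and are not drawn from any CALFED analysis:

```python
# Hypothetical one-stage decision tree: two alternatives, each leading to
# a chance node with uncertain outcomes. All numbers are illustrative.

def expected_value(outcomes):
    """Expected value at a chance node; `outcomes` is a list of
    (probability, value) pairs whose probabilities must sum to 1."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in outcomes)

# Decision node: each alternative maps to its chance-node outcomes
# (probability, benefit score). Invented placeholder values.
alternatives = {
    "restore_habitat": [(0.6, 80.0), (0.4, 20.0)],   # higher payoff, riskier
    "reinforce_levee": [(0.9, 50.0), (0.1, 30.0)],   # lower payoff, safer
}

# Roll back the tree: score each alternative by its expected value.
scores = {name: expected_value(outs) for name, outs in alternatives.items()}
best = max(scores, key=scores.get)
```

With these invented numbers, the riskier alternative happens to have the higher expected value; changing the probabilities (the "information" side of the model) can reverse the ranking without touching the payoffs (the "values" side).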
Nonetheless, there have been an increasing number of applications of decision analysis to environmental decisions, providing some specific examples of successful implementation in an environmental setting. In a setting not dissimilar to CALFED—involving a large number of often contentious stakeholders, a science advisory team, resource managers, and members of the public—the California Marine Life Protection Act (MLPA) Initiative relied on decision analysis, and its accompanying decision-support tools, to guide its efforts. The Initiative was formed in 2004 to revitalize the 1999 MLPA, which required the design and implementation of Marine Reserve Areas (MRAs) off California's coast. Two previous efforts, the first led by a team of scientists and the second giving precedence to a stakeholder group, ended in conflict and eventual dissolution. This new effort, using decision analysis, led to a consensus recommendation for the design of a pilot MRA, presented to the California Department of Fish and Game for final review.
A number of other examples come from the literature. Anderson et al. (2001) applied decision analysis to the problem of setting phosphorus loading levels in Lake Erie. In another example, decision-analytic methods were implemented to assess a water quality management issue in the Neuse River Watershed (Borsuk et al. 2001), where the analysis examined the potential benefits, and costs, of a mandated 30% reduction in nitrogen inputs to the watershed. Other examples of applications of decision analysis to environmental management problems include the siting of a coal mine in an ecologically sensitive region of the South Pacific, by Gregory and Keeney (1994), environmental conflict resolution over wildlife management in Africa, by Maguire and Boiney (1994), and a fisheries management problem on the Fraser River, by McDaniels (1995).
Although they comprise a set of diverse environmental problems, the applications above all have one thing in common. Namely, they have a set of disparate objectives, as represented by different stakeholders, and they involve a complex and uncertain environmental setting. The decision analysis methodology provides tools to address both of these components. As North and Renn (2005) observe, "With increasing levels of complexity and statistical uncertainty, analytical tools become increasingly important." North and Renn also report, in their background "state of the science" paper on decision-analytic techniques, the recommendations of the Office of Management and Budget (OMB) in their 2003 report to Congress: "The OMB report cites its new revised regulatory analysis guidelines, which recommend that agencies use formal probabilistic methods in all large (over $1 billion) regulatory decisions." Similar recommendations, advocating the use of probabilistic analysis, were made to the EPA in a 2002 National Research Council report. The adoption of the decision methodology we are recommending at CALFED is in keeping with these policy recommendations at the federal level.

The Nature of the Challenge: 'Can't See the Forest for the Trees'
Environmental managers seek insight into how to manage the forest—that is, how to manage a complex ecosystem, which entails balancing social, economic, and ecological interests. Often, managers are presented with data on, or opinions about, individual "trees"—various species of fauna and flora, constituents in water and air, and sediments, to name a few of the detailed elements for consideration in an environmental management context. This can lead to a pressing problem in which the big picture is obscured by a mass of data or details, as captured by the popular idiom: 'can't see the forest for the trees.' Managers therefore need a framework for identifying the important elements, measuring or assessing them, and then integrating that information to produce a comprehensive view. This is the approach we advocate in the next section. First, below, we briefly discuss four defining challenges in environmental management problems: complexity, uncertainty, integration, and communication.

Complexity and Uncertainty
Complexity in environmental problems emanates from two chief sources: the physical complexity of the system and the complexity of the objectives. There are multiple objectives, or values, which can generally be grouped into three categories: social, economic, and ecological (or environmental). In the Concepts and Tools section, we explore the distinction between "information" (pertaining to the physical system) and "values" as a central tenet of environmental decision analysis. The uncertainty in environmental management problems originates from lack of information, imperfect monitoring and modeling, and the unpredictability of future conditions. Figure 1 illustrates four categories of decision-types, defined along the axes of complexity and uncertainty: simple-deterministic, simple-uncertain, complex-deterministic, and, finally, complex-uncertain. The first two types of decisions can be addressed through an informal reasoning process, which might be accompanied by a list of cost-ordered alternatives, as in the case of example I. For the complex-deterministic decision, a model is required. Once the model is constructed, the decision is then obvious due to the deterministic nature of the problem; in our example, cost has been minimized. The main challenge in the complex-deterministic decision type is constructing a physically accurate model; assuming that this can be accomplished, the answer to "What to do?" is forthcoming.
Environmental managers are constantly faced with 'Type IV' decisions, which are both complex and uncertain and encompass nearly all environmental management decisions. Here we face the joint problem of complexity and uncertainty: complexity requires sophisticated assessment and modeling tools, whereas the presence of uncertainty implies the additional need to discuss trade-offs between high-risk/high-payoff and low-risk/low-payoff alternatives. In the context of risk management, Klinke and Renn (2002) emphasize the need to elicit and consider "stakeholder concerns, economic budgeting, and social evaluations" in our effort to balance the costs of overly cautious actions against the costs of not being cautious enough. In the environmental management context, even without the presence of uncertainty, decisions entail complicated trade-offs between different objectives. Under uncertainty, these trade-offs require an additional assessment of risk, where risk can be divided into two parts: the probability, or likelihood, of a given outcome, and its magnitude, or cost.
For instance, understanding the risk of levee failure in the Delta is paramount to CALFED implementing effective environmental management policies (Mount and Twiss 2004). The "cost" of a levee failure has to be assessed along multiple dimensions: what is the impact on the native fauna and flora, on salinity levels in the Delta, and on the operation of pumps in the south Delta? Costs to be assessed also include the economic losses from property damage and the cost of water supply disruption. The probability, or likelihood, of such an event can be assessed with the help of technical models and expert input. This is an example of an uncertainty where neither the probability of the event nor its consequences are yet well understood. For other uncertainties, the probability of an event may be well understood while the magnitude is not, or vice versa. Klinke and Renn (2002) identify five components of uncertainty to be considered in an analysis: variability, systematic errors, random errors, indeterminacy, and lack of knowledge.
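The decomposition of risk into probability and magnitude, assessed along multiple dimensions at once, can be sketched numerically. The failure probability and consequence values below are invented placeholders, not assessed Delta figures:

```python
# Hypothetical sketch of risk = probability x magnitude for a levee
# failure event, with the "cost" assessed along several dimensions.
# All numbers are illustrative assumptions.

failure_prob = 0.05  # assumed annual probability of a levee failure

# Magnitude of the event along multiple dimensions (e.g., in millions
# of dollars, or in normalized metric units).
consequences = {
    "property_damage": 400.0,
    "water_supply_disruption": 250.0,
    "habitat_loss": 120.0,
}

# Expected annual cost per dimension, and in total.
expected_costs = {dim: failure_prob * c for dim, c in consequences.items()}
total_expected_cost = sum(expected_costs.values())
```

Keeping the per-dimension expected costs separate, rather than only the total, preserves the multi-dimensional character of the assessment for the later trade-off discussion.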

Integration
Another major challenge for environmental managers is integration. Rarely does an environmental agency have only one possible action, or project, to consider; generally there are alternatives. The need to compare the alternatives suggests the development of a set of metrics. Additionally, it may be useful to consider permutations of projects, i.e., the combined effect of executing two or more of the originally proposed projects. Furthermore, there may be multiple projects proposed under different initiatives. In this case, it does not suffice to examine the impacts of a given project on the current system. Rather, an action's impact must be examined on the future system, that is, the system as it would be if other simultaneously proposed actions were taken. In other words, an integrated assessment of the alternatives is required. This is particularly salient at CALFED, where there are 11 "Program Elements" with overlapping goals and proposals: Water Management, Storage, Conveyance, Water Use Efficiency, Water Transfers, Environmental Water Account, Drinking Water Quality, Watershed Management, Levee System Integrity, Ecosystem Restoration, and Science. Examples of overlapping program proposals include a comprehensive ecosystem restoration plan, North-Delta Flood Ecosystem (proposed by the Ecosystem Restoration Program (ERP)), and the Franks Tract project (proposed by the Levee System Integrity team). Without an integrated assessment methodology, managers at agencies with overlapping interests are left to discuss the merits of the two projects, as presented by the respective teams, without insight into the possible (positive or negative) interactions of the projects; and, as stated earlier, program actions risk undermining each other or missing opportunities for synergistic actions.
The tools we introduce to help address integration include a "system map," or influence diagram, that captures the relationship between the variables in the system (as impacted by different projects), a decision model that supports comparison of different projects as well as permutations of projects, as well as a set of common metrics. In addition, we note that the development of "scenarios of interest," discussed below, can be used to guide modeling efforts; i.e., model runs can be performed using pre-specified input conditions as defined under the scenarios.
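The idea of evaluating permutations of projects, rather than each project against the current system alone, can be sketched as follows. The project names, standalone scores, and pairwise interaction terms are hypothetical placeholders:

```python
from itertools import combinations

# Hypothetical sketch: score portfolios (combinations) of proposed
# projects, where pairwise interaction terms capture synergy (positive)
# or interference (negative). All values are invented.

standalone = {"ecosystem_restoration": 10.0, "franks_tract": 7.0, "conveyance": 5.0}

# Pairwise interactions between projects; absent pairs are neutral.
interaction = {
    frozenset(["ecosystem_restoration", "franks_tract"]): 3.0,   # synergy
    frozenset(["franks_tract", "conveyance"]): -2.0,             # interference
}

def portfolio_score(projects):
    """Combined score: standalone contributions plus pairwise interactions."""
    score = sum(standalone[p] for p in projects)
    for a, b in combinations(projects, 2):
        score += interaction.get(frozenset([a, b]), 0.0)
    return score

# Evaluate every non-empty combination of projects.
portfolios = {
    combo: portfolio_score(combo)
    for r in range(1, len(standalone) + 1)
    for combo in combinations(sorted(standalone), r)
}
best = max(portfolios, key=portfolios.get)
```

The point of the sketch is structural: a portfolio's score is not simply the sum of its members' standalone scores, so ranking projects in isolation can miss both synergies and conflicts.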

Communication
Finally, there is the omnipresent challenge of communication. How do scientists and program managers communicate effectively with decision-makers? The CBDA can make recommendations to individual agencies, or program elements, and agencies cooperate in the programmatic areas to which they belong. To the extent that CALFED provides a forum for discourse on the various alternative actions, and the pros and cons of each, it closely matches the definition of a "deliberative process," described in the National Academies report as one that "relies on mutual exchange of arguments and reflections rather than decision-making based on the status of the participants, sublime strategies of persuasion, or social-political pressure…and include[s] a debate about the relative weight of each argument and a transparent procedure for balancing pros and cons" (North and Renn 2005). CALFED aims to provide an inclusive setting for collaborative decision-making, the innovative approach we recognized at the outset. However, the quality of discourse, and the ability to make good decisions, hinges on the ability of the 11 CALFED programs and CBDA to access current information and attain a comprehensive view of the system. In other words, the presentation, as well as the quality, of the analysis performed in each program element is critical. In the next section, we present tools to facilitate communication between the program elements, CBDA, and BDPAC.
These tools are organized around a central concept consisting of two parts. First, the tools provide managers with a comprehensive view of the system. Second, they are designed to allow managers to access and understand trade-offs between alternative actions. To achieve this, each alternative is assessed against a set of common metrics, allowing comparison between any two alternatives along a single dimension. The trade-offs between alternatives are then captured in two- and three-dimensional graphical interfaces.
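Assessing every alternative against a set of common metrics supports simple screening steps, such as identifying alternatives that are dominated (worse on every dimension). The metric names below loosely echo the pilot study; all values, and the convention that higher is better on each metric, are assumptions for illustration:

```python
# Hypothetical sketch: alternatives scored against common metrics,
# plus a simple check for dominated alternatives. Values are invented;
# assume higher is better on every metric.

metrics = ["salmon_survival", "salinity_compliance", "habitat_health"]

alternatives = {
    "action_A": {"salmon_survival": 0.7, "salinity_compliance": 0.9, "habitat_health": 0.5},
    "action_B": {"salmon_survival": 0.6, "salinity_compliance": 0.8, "habitat_health": 0.4},
    "action_C": {"salmon_survival": 0.8, "salinity_compliance": 0.6, "habitat_health": 0.7},
}

def dominates(a, b):
    """True if alternative `a` is at least as good as `b` on every
    metric and strictly better on at least one."""
    at_least = all(alternatives[a][m] >= alternatives[b][m] for m in metrics)
    strictly = any(alternatives[a][m] > alternatives[b][m] for m in metrics)
    return at_least and strictly

# Alternatives worth carrying into the trade-off discussion.
non_dominated = [
    a for a in alternatives
    if not any(dominates(b, a) for b in alternatives if b != a)
]
```

Screening of this kind narrows the field without imposing any weighting of values; the remaining (non-dominated) alternatives are exactly those whose ranking depends on the trade-offs the deliberation phase must resolve.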

METHODS: CONCEPTS AND TOOLS
Below we introduce four concepts integral to the application of environmental decision analysis. These concepts address the issues of complexity, uncertainty, integration, and communication identified above. The concepts originate in the decision analysis and risk management literature; here we emphasize their adoption in the environmental management context. For instance, the distinction between values and information is a central tenet in both decision analysis and risk management. Klinke and Renn (2002) emphasize the need to separate measurable physical elements, or outcomes, from the criteria being used in the evaluation process: "The physical elements should be measured independent of the social and psychological criteria unless there is clear evidence of a common link." The point here is that public values and social concerns often provide the impetus for studies of risk, and these concerns, in turn, inform the decision-making criteria. However, the technical assessment of physical phenomena remains the domain of technical experts.
In an environmental management context, two aspects of this process of separating values and information are distinctive. First, in a clear departure from other applications of decision analysis, including both medical and corporate, the single decision-maker is replaced by multiple stakeholders. This poses the challenge of multiple objectives, which we address under the section on trade-off analysis.
Second, the assessment of information will almost always incorporate a degree of uncertainty, due to, among other things, variability in the environment. The environmental uncertainties that we assess fall into one of two categories. In the first category are physical processes that we understand well and for which we have a good sense of the distribution of outcomes (including the mean and variability), even though we still cannot predict the outcome in a given time period. An example is historical streamflow (ignoring for now the impact of climate change), for which we have over 100 years of data allowing us to characterize averages and interannual variability. In the second category are physical processes of which we have only limited understanding and little to no data or reliable simulations. Decision analysis allows us to characterize both: each category can be represented in our decision model by a probability distribution, obtained from models and data in the first case and from a more abstract expert assessment in the second.
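Both categories of uncertainty enter a decision model the same way, as probability distributions to sample from. A minimal sketch, with invented parameter values: a normal distribution standing in for a fit to the historical streamflow record (category one), and an expert-elicited triangular distribution for a data-poor process such as a subsidence rate (category two):

```python
import random

# Hypothetical sketch of the two uncertainty categories entering a
# decision model as sampling distributions. All parameters are invented.

random.seed(0)  # reproducible illustration

def sample_streamflow():
    """Category one: well characterized from ~100 years of record; here
    approximated by a normal distribution (mean, sd in thousand acre-feet)."""
    return random.gauss(500.0, 120.0)

def sample_subsidence_rate():
    """Category two: little data; an expert supplies low / high /
    most-likely values, encoded as a triangular distribution (cm/yr)."""
    return random.triangular(0.5, 5.0, 2.0)  # (low, high, mode)

# Both feed the model identically: as draws from a distribution.
flows = [sample_streamflow() for _ in range(10_000)]
rates = [sample_subsidence_rate() for _ in range(10_000)]
mean_flow = sum(flows) / len(flows)
mean_rate = sum(rates) / len(rates)
```

The design point is that the downstream decision model does not care whether a distribution came from a century of gauge data or from an expert elicitation; what differs is how defensible the distribution is, which is exactly what the deliberation phase can debate.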

Values vs. Information
The separation of values and information is fundamental to the decision analysis methodology-and, one can argue, to making good decisions. Values ultimately allow us to select a course of action, but the information we have dictates what we assess as the possible outcomes of these actions we are selecting between. The distinction may seem obvious: information is what we know (or don't know) and values represent what we desire. Why then introduce a methodology that makes a formal distinction between the two?
We separate these two for the sake of analysis and deliberation. The "analytic-deliberative" framework discussed by North and Renn (2005) pairs an analytic technique, such as a decision analysis, which is used to structure and present information on the decision, with a formal deliberation phase. Deliberation denotes a "style and procedure of decision-making," which includes "mutual exchange of arguments and reflections" rather than individual power-plays (North and Renn 2005). Deliberative, or participatory, processes also emphasize "rational discourse" and the importance of a transparent procedure for identifying and presenting topics for debate. To facilitate this process of open debate, or deliberation, we will introduce common metrics in the next section. Here, we use the term deliberation more generally to denote the process of reaching a decision; however, we note that the decision analytic tools we present support the style and procedure of deliberation that constitute the "analytic-deliberative" framework.
In environmental decision-making, there may be strong conflicts regarding both information and values. As North and Renn suggest, the deliberation phase in most environmental decision-making contexts requires exploring "why people disagree about what to do-that is, which decision alternative should be selected," and they note that this disagreement can center on the evidence, or information, regarding both what is at stake and what will transpire under various actions. For complex decisions with many variables, decision analysis provides tools to represent the expected impact of each possible action on a set of metrics of interest. One such tool is a decision tree, which stores information on the likelihood of different outcomes under different actions. This analysis, usually informed by the most recent data, studies, and the input of scientists, or experts, provides the foundation of the deliberation phase.
The advantages of having separated information from values are evident in the deliberation phase: if there are disagreements regarding what outcomes will occur under different actions, this can be addressed as an information issue. The common assumptions and data are accessible in the decision model, and their merits can be debated and the inputs ultimately changed if new, or conflicting, information surfaces. The informational issues become transparent in a way that is not possible without a tool to structure, present, and in most cases quantify, the available information. Furthermore, as discussed earlier, the structuring of the problem in the analysis phase provides a tool for clearly communicating between technical experts and environmental managers, or the public.
The deliberation phase in many ways distinguishes environmental decision-making from decision-making in other settings, such as the corporate or medical world, where there is ultimately one decision-maker and/or aligned values. The values associated with environmental decision-making are generally multidimensional; in other words, there is no single, clear "value function." Reaching a decision requires discussion and eventual consensus (or majority vote) on the appropriate trade-offs; this is discussed more fully under the concept of trade-off analysis.
We have argued for the benefits of separating information from values. Still, we can ask, "Is this separation truly possible? Don't our values infuse our information gathering?" Indeed, there is much discussion in the literature of the ways in which our values are intertwined with the information we collect and of the potential "artificiality" of the distinction between the two. For instance, in simply devising a set of alternative actions to compare, we are imposing our values, since the alternatives are designed to meet some objective—and that objective, as discussed, is a representation of our values. Additionally, a number of "biases" associated with assessing probabilities in the information-gathering phase have been explored (Kahneman et al. 1982; Ross 1977; Koehler 1996).
However, we justify the formal distinction of values and information in our analysis by observing that most institutions have a guiding objective: at a firm it may be "profit maximization," whereas CALFED has four goals outlined in the Record of Decision (CALFED 2000): water supply reliability, levee system integrity, water quality, and ecosystem restoration. These guiding objectives provide a template for the development of metrics, which comprise the value side of the model. Then, working backwards we can ask what information we need. We do not suggest that this will necessarily be a straightforward task, but it is an important investment of time and resources for an organization looking to implement decision-support tools.
The values above will indeed impact information choices; however, as stated, they are both broad and common values. What we are hoping to avoid is the selection of information based on narrow values not commonly held. We are attempting to distinguish between what we know versus what we desire as an outcome, to facilitate the deliberation phase. This leads us to a discussion of common metrics but, first, an example to illustrate the structuring of information, separately from values, using a common decision analysis tool: the decision diagram.

Decision Support Tool: The "Decision Diagram"
The "decision diagram" serves as our map, guiding us through the decision process. Like any useful map, it charts our progression from our starting location (our current state of information), past the "landmarks" of interest on the journey (the alternate actions we consider), to our destination (the possible outcomes). In Figure 2 we present a decision diagram constructed in the pilot project.
Two features are central to navigating our decision problem using the map above: the nodes, which represent the four elements in a decision problem identified in the key, and the arrows, which represent the concept of "influence." 1) Elements: There are four elements depicted above. The first are the "key uncertainties," or drivers in the system; then we have what are termed the "deterministic nodes." The difference between these two is how we ultimately represent them in our model. As will be discussed, uncertainties are represented with probability distributions. The deterministic nodes, such as salinity levels, may be represented by point estimates from model runs. All of the circled entities, both the uncertainties and the deterministic nodes, are really variables in the model, and, as such, we could choose to represent them either with probability distributions or with point estimates.
The two remaining elements are the set of actions we are choosing among and the set of metrics. As depicted above, the set of metrics appears identical to the deterministic nodes; in actuality, the metrics require a precise definition (the deterministic nodes represent general model output or data). This will be discussed in the next section.
2) Influence: The arrows in the diagram represent "influence" between the different elements. For example, global climate change influences, among other variables, water year type, in that under a "drier and warmer" global climate change scenario, the chance of a "dry year" may increase. Similarly, above we have illustrated the belief that seismic activity and subsidence influence levee integrity, that climate change impacts levee integrity, subsidence, salinity and water year type, and that, in turn, water year type affects water supply. The actions we take in the Delta impact all of the variables, which ultimately influence the metrics we care about.
Our decision diagram helps us navigate the decision problem in several important ways: first, it identifies the elements of concern. Second, it addresses the question, "What information do we need?" The answer to this question is not necessarily complete, in the following sense: whatever simplifications have been adopted in the decision diagram represent (intentional) omissions of potentially relevant information. According to the simplified diagram in Figure 2, we do not need information for our decision model on, for example, the direct relationship between winter-run Chinook salmon and climate change: there is no arrow between these two variables. (The indirect relationship through changes in water supply is represented, however.) Whether or not we actually do need this information to make an informed decision is a point that needs to be debated in the construction phase of the diagram; ultimately it is a subjective, if consensual, decision on what relationships to include. Also, we note here that in an initial phase it may be helpful to include all potential arrows and then systematically eliminate those which are found, through sensitivity analysis, to have a limited impact. Finally, as we illustrate below, the decision diagram also serves as a template for the construction of a "decision model," in which we attempt to quantify the various elements.
The simplified decision diagram in Figure 2 is intended as an example of a diagram for the decision on alternatives for the Delta, including two alternatives aimed at enhancing water quality (Through Delta Facility, or "TDF3," and Franks Tract), and one aimed at enhancing ecosystem health and productivity (North Delta Flood Eco). The simplifications entail a number of omissions, including the following: additional fish metrics (only impacts on survival rates for winter-run Chinook salmon are included when, in fact, there are several endangered fish species of prime interest, including additional Chinook salmon runs and the Delta smelt), additional species of interest, and other conditions such as the creation of habitat that is favorable, or unfavorable, to invasive species.
It is also important to note the impact of any simplifying assumptions made in the construction of an influence diagram. By not including an arc between two nodes we are assuming independence between the two variables, and, importantly, the results in our decision model will only inform the relative performance of alternate actions under this assumption. Consider, for example, the dashed arcs connecting climate change to salinity levels, levee integrity, and subsidence, which illustrate this point. If we don't include these dashed arcs, then our decision model will provide us insight into the trade-offs between actions assuming (1) that the subsidence rate does not increase or decrease due to sea level rise (climate change); (2) that the integrity of the levees is not directly impacted by sea level rise; and (3) that average salinity is not impacted by sea level rise.
Why might we choose, then, not to include the arcs? For one, if we are only interested in the short-term value of a project and we deem temperature increase under climate change as relevant, but judge that other mechanisms associated with climate change, such as sea level rise, will not have a significant impact in the relevant time span, we may choose to ignore these interdependencies between variables. Alternately, we may agree, for example, that while sea level rise threatens levee integrity it is a second-order effect and subsidence is the first-order effect; hence, in an effort to simplify the assessments required in our decision model we do not include the arc. However, a decision to remove a relevance arc should be carefully considered, as it represents the discard of information regarding a relationship between two variables that are themselves considered relevant to the problem. The decision diagram above may lead to different conclusions with and without the arrows connecting climate change to salinity levels, levee integrity, and subsidence. In the context of CALFED it may prove very difficult to achieve consensus on what arrows should be removed. The inclusion of all possible arrows may seem a favorable alternative, although it may prove analytically intractable. This is what makes the structuring and refinement, or framing, of the decision problem using an influence diagram one of the most difficult, and yet arguably most important, phases of the decision analysis.
What if there is disagreement as to whether or not to include an arc-for example, the arc from climate change to subsidence? As discussed in the introduction, the decision analysis framework is flexible enough to handle this disagreement. The arrow remains in the diagram, indicating a possible relationship between the variables. However, in one version of the model, the values of subsidence given climate change can be held constant, i.e. they can still be modeled as if they aren't impacted by climate change even though we've built a structure to allow for the possible relationship.
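The structure described above, including the handling of a disputed arc, can be sketched in code. This is a minimal illustrative sketch, not CALFED's actual modeling software; the class, method, and node names are all hypothetical, chosen to mirror the variables in Figure 2.

```python
# Hypothetical sketch: an influence diagram as an adjacency map, with the
# ability to "deactivate" a disputed arc (e.g., climate change -> subsidence)
# while leaving it in the diagram's structure, as described in the text.
from typing import Dict, Set, Tuple


class InfluenceDiagram:
    def __init__(self) -> None:
        self.arcs: Dict[str, Set[str]] = {}          # parent -> set of children
        self.inactive: Set[Tuple[str, str]] = set()  # arcs retained but held constant

    def add_arc(self, parent: str, child: str) -> None:
        """Record that `parent` influences `child`."""
        self.arcs.setdefault(parent, set()).add(child)

    def deactivate(self, parent: str, child: str) -> None:
        """Keep the arc in the diagram but model the child as if
        unaffected by this parent (its values held constant)."""
        self.inactive.add((parent, child))

    def active_parents(self, child: str) -> Set[str]:
        """Parents whose influence on `child` is currently modeled."""
        return {p for p, kids in self.arcs.items()
                if child in kids and (p, child) not in self.inactive}


d = InfluenceDiagram()
d.add_arc("climate_change", "water_year_type")
d.add_arc("climate_change", "subsidence")        # the disputed arc
d.add_arc("subsidence", "levee_integrity")
d.add_arc("seismic_activity", "levee_integrity")

# One version of the model: hold subsidence constant with respect to climate
d.deactivate("climate_change", "subsidence")
print(d.active_parents("subsidence"))            # -> set()
```

In a second model version one would simply skip the `deactivate` call, allowing a side-by-side comparison of conclusions with and without the disputed relationship, as the text suggests.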

Decision Support Tool: Common Metrics
An institution-any institution-has a charter in accordance with the objectives, however loosely defined, of its members. As discussed, for a firm or company it may be the objective of "profit maximization," at least in the long run. For CALFED, the objective is defined in the Record of Decision as water supply reliability, levee system integrity, water quality, and ecosystem restoration. These broad institutional objectives can be broken down, often quite naturally, into specific metrics. The metrics are the indicators that inform a decision-maker, or manager, how he's doing on the "profit-maximization" front. For instance, to assess a project's impact on profits, a manager may want to assess revenues and costs as two metrics. At CALFED, examples of metrics include salinity levels at specific measurement points, such as the Contra Costa Canal in the Delta, repair costs for levee failures, and juvenile Chinook salmon survival indices.
In decision analysis, these common metrics are referred to as a set of "decision criteria" that meet the "clarity test" (Howard 1998). As noted above, the decision criterion may be a profit function, versus a set of common metrics, as will be the case in environmental decision-making. The clarity test refers to a process of defining each criterion, or metric, such that it is well understood by all and, hence, passes the clarity test. This process, helpful for metrics and variables alike, mitigates disagreements arising due to different definitions associated with ambiguous metrics.
CALFED has recognized the need to define a set of "performance measures" for common use across the 11 program elements in evaluating different projects, or actions, in the Delta. This effort has been led by the CALFED Science Program, charged with the task of performance measure development "to inform and guide adaptive management," as part of the "Science Integration" efforts at CALFED (CALFED 2002). Defining common metrics, or performance measures, facilitates inter-comparison between different actions under potentially different program areas: one can perform a side-by-side comparison of the alternatives using the common metrics. The effort already launched by CALFED to establish performance measures supports the implementation of environmental decision analysis.
What serves as a good metric? In his 1992 book on decision-making, Value-Focused Thinking, Keeney discusses three desirable characteristics of metrics, or, as he terms them, "attributes": measurability, operationality, and understandability. The first, measurability, increases the level of detail associated with an objective and allows direct measurement of the objective. For instance, the broad objective of "water supply reliability" may require several measurable metrics for assessment. One such metric might be "number of days in a year that the pumps are shut down due to high salinity conditions," which provides detail on what exactly we care about (operation of the pumps) and is also measurable (number of days they are shut down). Note that Keeney does not restrict measurability to refer to quantitative measures; he admits qualitative measures, such as levels "poor, fair, and good," with the caveat that these levels must be clearly defined. The Likert scale, introduced in 1932 by Rensis Likert, is a psychometric scale, traditionally with five points (although seven and nine points are not uncommon) having an inherent order: "smaller to bigger" or "less to more." Often used in surveys, this scale can be used to collect qualitative data for decision models.
The second characteristic of a good metric is operationality, defined as the ability of a metric to accurately describe the outcome of a proposed action in a way that facilitates value judgments; i.e., if one cares about salinity reduction in a specific season, does the metric describe the potential reduction (or increase) in net salinity during the season (vs., say, over the whole year)? Keeney offers three tests of operationality: is it clear where the metric is measured, how often it is measured, and how multiple measures are aggregated over space and time? Understandability is a third desirable characteristic of a metric: there is no loss of information from the time a value is assigned to a metric to the time that value is interpreted by another person.
Keeney's definitions of measurability, operationality, and understandability overlap with the concept embedded in our "clarity test." The essential idea is to create consensus on the definitions of the variables or metrics-what each represents and how it is measured-to avoid later conflicts and ensure consistent assessments among different experts. This requires careful definitions of each metric to remove ambiguity. Below we illustrate the establishment of a common water quality metric, which incorporates Keeney's three desirable qualities and therefore also passes the clarity test.
The enhancement of water quality is a stated objective under the CALFED ROD. How should this objective be measured? First, we need to define the attributes of "water quality" that interest us, as well as our objective with respect to these attributes. The Water Quality Program at CALFED has come up with the following ideas: they identify the "key concern" as "understanding the changes in concentration and timing of key drinking water constituents at drinking water intakes" (Contra Costa Water District 2004). These drinking water constituents are limited to salinity and dissolved organic carbon, as the evolution of these two constituents can be modeled in the hydrodynamic model currently employed for CALFED studies.
The simplified metric proposed by the drinking water quality program, for use in communication with decision-makers, is the following: "Comparison of baseline DSM2 [model] run with [model] runs incorporating various projects. Plot of base case and with-project average monthly salinities (or carbon if that is constituent of concern) over year at Old River, Hwy 4 with minimum, average, and maximum change quantified on graph" (Contra Costa Water District 2004). The justification for this metric is that it both addresses the magnitude of water quality changes, in terms of a net increase or decrease in the measured salinity, and that the plot itself will capture variability. Also, the location selected accurately reflects water quality shifts in the south and central Delta.
Discussions during our pilot study led to an additional simplification, based on what is thought of as a "critical period." The critical period for salinity is in the late summer/early autumn, after the spring flows and before the late autumn rains, when conditions in the Delta are at their driest. Therefore, in our pilot project we proposed just comparing average salinities under different plans of action for the critical period, August-October. This simplification has the following advantage: it resolves the ambiguity surrounding the aggregation over space/time.
In the original measure, it is unclear how to evaluate two projects where one might, say, decrease salinity in the spring and the other decrease it in the autumn. Note that although the concern over the critical period is well-known to water quality experts, it may not be apparent to all decision-makers. By explicitly presenting a metric defined for a "critical period," this ambiguity is resolved. The importance of this will become evident as we discuss the trade-off analysis and implementation. Below, we present our metric and evaluate it along the dimensions established above. See Table 1.
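The critical-period metric described above reduces to a simple computation: average the monthly salinities over August-October for a base-case run and a with-project run, then report the change. The sketch below illustrates this; all salinity values are hypothetical placeholders, not DSM2 output.

```python
# Illustrative sketch of the critical-period salinity metric: average the
# monthly salinities over the critical period (Aug-Oct) for a base-case run
# and a with-project run, then report the net change. Numbers are invented.

CRITICAL_MONTHS = ("Aug", "Sep", "Oct")


def critical_period_average(monthly_salinity: dict) -> float:
    """Average salinity over the critical period, August through October."""
    return sum(monthly_salinity[m] for m in CRITICAL_MONTHS) / len(CRITICAL_MONTHS)


# Hypothetical monthly salinities (ppm) from base-case and with-project runs
base = {"Aug": 120.0, "Sep": 130.0, "Oct": 125.0}
project = {"Aug": 105.0, "Sep": 115.0, "Oct": 110.0}

change = critical_period_average(project) - critical_period_average(base)
print(f"Change in critical-period salinity: {change:+.1f} ppm")
```

Because the metric aggregates only over the critical period, two projects that shift salinity in different seasons are compared on the season that matters, resolving the ambiguity noted above.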
We have focused our attention here on common metrics, without attention to whether or not the other variables in the model need to also share common definitions. Before moving on to the next section, we pause to briefly address this point. The metric above is precise, in keeping with our definitions of measurability, operationality, and understandability; even so, we can imagine a case where, for two projects, one hydrodynamic model is used to evaluate the first and a different model the second. The common metric is the same, but the measurement of the variable is not. This is an issue of consistency. Consistency is a generally desirable quality, and, following the establishment of common metrics, a CALFED panel may wish to consider this issue and specify ideal models or tools for variable measurement. However, when the need to make a decision is imminent, data from differing models may be better than no data, and while consistency is desirable, in that it allows one to conduct a more detailed analysis of how/where one alternative outperforms another, it is not strictly necessary.

Explicit Representation of Uncertainty
As discussed above, uncertainty is a critical component of environmental decision-making. Hence, explicit representations of uncertainty are a central element of environmental decision analysis. Managers face uncertainty on several fronts. There are large systemic uncertainties, examples of which at CALFED include climate change, hydrologic conditions (or "year type," i.e. critical, normal, or wet), and seismic risk. There are additional informational uncertainties regarding the quality, or accuracy, of specific model predictions, i.e. point salinity estimates in a hydrodynamic model. These models typically have some error margin, and point estimates might be better represented by a range of possible numbers, such as salinity: 110 ppm ± 10%.
As we noted earlier in our discussion of complexity and uncertainty, there are several dimensions of uncertainty. In a risk management setting, the type of uncertainty, or risk, dictates different strategies: risk-based, precautionary, and discursive (Klinke and Renn 2002). When both the probability of occurrence and the magnitude of the damage are relatively well-known, a risk-based strategy, which emphasizes optimal means for mitigating and containing the risk, is appropriate; a precautionary strategy would be favored in the case where the consequences are not well understood. The discursive strategy is reserved primarily for risks where there is misperception (on the part of the public) regarding the risk, or there is acceptance of a risk but no consensus on how to manage it as the impacts may not be fully realized until a much later future date. The destruction of mangroves is an example of a risk demanding a discursive strategy, since the increased vulnerability of coastal areas is recognized by the scientific community but not (previous to the tsunami in South Asia and Hurricane Katrina) widely accepted by communities, who face(d) the long-term consequences. Another example, offered by Klinke and Renn, is that of electromagnetic radiation, which is harmless and yet perceived as dangerous by the public. A discursive strategy entails raising public awareness and, often, increasing public confidence in public agencies or regulatory bodies. Klinke and Renn identify "resilience" as a key objective when dealing with uncertainty and therefore advocate strategies that incorporate "diversity and flexibility." At CALFED, there is a good deal of uncertainty regarding the potential impacts of actions in the Delta, which suggests the value of actions that enhance future flexibility or adaptability. 
This value can be captured in part through assessments of actions under a range of scenarios, since an "inflexible" option will produce a large negative outcome under one or more of the scenarios. These outcomes are "weighted" by the likelihood of the scenarios they are associated with.
How do we explicitly capture uncertainty in our analysis? The field of decision analysis, in its approach to decision-making under uncertainty, tackles precisely this question. The basic mechanism for representing uncertainty is the probability distribution, which describes the likelihood of each outcome for a particular variable. In the "probabilistic phase" of a decision analysis, experts go through the exercise of "encoding uncertainty in each of the crucial state variables" (Howard 1968).
Probabilities can be thought of as weights: given three possible scenarios, we assign probability of 0.40 to the first, 0.40 to the second, and 0.20 to the third. What do our assignments tell us? The first scenario and the second are equally likely, and both are twice as likely as the third scenario. These assignments are a useful way to communicate uncertainty, since they offer an environmental manager a comprehensive picture of what might occur in terms of the relative likelihood of each event.
Arriving at the point where we can assign probability distributions to represent uncertainty requires some earlier refinements of the decision problem. First, we must have a set of stated variables of importance, which are well defined and meet the clarity tests introduced above for the common metrics. Then we must assemble all prior knowledge, which includes data, model output, and expert opinion. The actual process of eliciting the probability distributions is discussed in detail in the literature and emphasizes among other things the avoidance of cognitive biases, as discussed under values vs. information. Rather than deal in the abstract, we move straight to an example.

Decision Support Tool: The 'Decision Tree'
Decision analysis requires that we assemble all relevant information and distill it into a set of representative scenarios; we then present these scenarios in terms of their "likelihood" and consequences. These scenarios can then be represented in a decision-flow diagram or tree (Raiffa 1968).
We begin by identifying two key uncertainties impacting decisions at CALFED: climate change and hydrologic conditions in a given year. Both these uncertainties are relevant to assessing the performance of proposed actions in the Delta. In order to represent the uncertainty surrounding the two, we would like to assign probabilities, or weights, to a set of plausible outcomes. First, we must answer the question, what are plausible outcomes, or scenarios? This is where the data, models, and experts inform our analysis.
Drawing on available climate change data and historical data reporting the frequency of hydrologic conditions in the Delta, and the input of experts, we can devise representative scenarios for the two uncertainties, "climate change" and "water year type." Scenarios for water years are taken from an index developed by the California Department of Water Resources (DWR). The climate change scenarios are derived from results presented by Dettinger (2005). Specifically, the scenarios come from Dettinger's joint distribution on precipitation and temperature for 2050. (See, for example, Figure 5 in Dettinger 2005.) The scenarios are intended to be exhaustive; the probabilities sum to one.
Each scenario is labeled for easy reference and includes a description (see Table 2). The scenarios should meet the same three definitional criteria as the common metrics, defined above-measurability, operationality, and understandability. We offer sufficient detail below to identify each of the climate change scenarios in the context of the current literature, i.e. year, magnitude of precipitation and temperature change, and seasonal and geographic distribution of effects. For more detailed information, we refer the reader to Dettinger (2005).
DWR assigns water year type indices to each hydrologic year. There are five indices denoting the different hydrologic conditions: critical, dry, below average, above average, and wet. We have collapsed these five indices into just three representative indices: critical, normal, and wet. Table 3 illustrates our simplified scenarios, or indices, for the water year types. Note that we have included representative years from the historical record, which can be used in model runs for the three scenarios. (The representative years for the pilot study came from 17 years of hydrologic data available from the DSM2 hydrologic model.) We pause here to discuss one notable difference between the "critical" scenario and the other two scenarios, normal and wet: a critical year categorization requires the occurrence of successive dry years, i.e. it is not one dry year but several in a row. Assuming independence of year types, and assuming that we know the probability of a dry year (denoted p_dry), we could compute the probability of a critical year as (p_dry)^n, where n is the number of dry years in a row necessary for a "critical" year designation. However, this approach does not guarantee that the above probabilities sum to one. If, instead, we rely on data of historical water year types (accessible from DWR), we can compute each probability as a frequency: the percent of the time it was a critical, normal, or wet year (where it is assumed that these are exhaustive categories, i.e. the designations "below normal" and "above normal" have been combined in the "normal" category). This approach eliminates the complication posed by the "successive years" definition.
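The frequency approach just described can be sketched in a few lines. The 17-year sequence below is invented for illustration and does not reproduce the actual DWR record; by construction, the resulting probabilities are exhaustive and sum to one.

```python
# Minimal sketch of the frequency approach: compute the probability of each
# water year type as its share of the historical record. The year sequence
# here is hypothetical, standing in for 17 years of DWR classifications.
from collections import Counter

historical_years = ["wet", "normal", "critical", "normal", "wet", "normal",
                    "critical", "wet", "normal", "normal", "wet", "critical",
                    "normal", "wet", "normal", "wet", "normal"]

counts = Counter(historical_years)
n = len(historical_years)
probabilities = {year_type: count / n for year_type, count in counts.items()}

print(probabilities)  # frequencies over exhaustive categories sum to one
```

Unlike the (p_dry)^n construction, nothing here needs adjusting to make the probabilities sum to one: each year falls into exactly one category.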
We have now defined two key uncertainties, the first with seven degrees, or levels, and the second with three; in total there are 21 possible scenarios or "states of the world." Imagine, before moving on, that we are willing to discard several of the climate scenarios with low probability. Dettinger (2005) observes that the joint probability distribution for precipitation and temperature in 2050 is bimodal: the warmer projections are accompanied by drier conditions, and the cooler projections are accompanied by wetter conditions. The "wetter and warmer 4°C+" and the "drier and warmer 0-2°C+" scenarios, as they represent the warmest (coolest) predictions paired with a precipitation increase (decrease), are the least likely. Hence, we discard these two and are left with five possible climate scenarios and 15 total scenarios: we can have a "critical," "normal," or "wet" year under each of the five climate scenarios.
Discarding these two low probability events as potential outcomes of climate change simplifies the decision model illustrated below. To the extent that observations from our physical models can be comfortably adapted to provide simplifications, we encourage such steps. Our choice to discard two potential climate change scenarios stands as an example of a possible simplification. Whether or not it would be a prudent simplification depends on a number of factors including our confidence in the probability model for climate change scenarios (in this case, Dettinger's model), the sensitivity of the other variables to the climate scenario, and the actual joint probabilities for the two scenarios, which we have not presented here. As cautioned earlier, the decision not to include information, such as relationships between variables or possible outcomes of uncertainties, represents a strong assumption. On the one hand, including all potentially relevant information, without regard to its relative importance, may lead to an intractable decision model; on the other hand, not including such information will certainly constrain the model outcomes, and potentially bias the results in favor of an alternative that performs well only according to the admitted (model) scenarios and not the actual (real-world) scenarios. Now, moving on, we want to assess the probability of being in a given state of the world. First, however, we note the following: the chance of being in a given water year type is not independent of climate change. In other words, climate change conditions give us information on the likelihood of a water year type. Under the climate scenario "warmer and wetter," we would expect the probability of a wet year to increase from the baseline (historical) probability. Alternately, under the "warmer and drier" scenario, the probability of a critical year increases.
Recall that this relationship was captured above in our decision map: global climate change influences the water year type. In this instance, then, we cannot simply rely on the historical record as an indicator for the relative likelihood of a wet, normal, or dry year. Instead, we must rely on models and expert opinions to produce a conditional distribution of relative weights, or probabilities. In Figure 3, we depict a decision tree with the two uncertainties discussed above.
Our uncertainty is explicitly captured in the scenarios we have created: each scenario has a probability, or weight, representing how likely it is to occur relative to the other scenarios, based on current evidence. (Note that the probabilities for the five climate change scenarios would have been renormalized to sum to one after discarding the two additional scenarios.) For instance, the likelihood of a wet year given that we are in climate change scenario "wetter and warmer 4°C+" is p1 multiplied by p6; the likelihood that we experience a wet year under the climate change scenario "same precipitation and warmer 2-4°C" is p4*p6. The actions we are comparing can now be evaluated in terms of their performance under a range of likely scenarios. Specifically, we can analyze the expected outcomes under different actions, an important concept discussed in more detail in the next section on trade-off analysis.
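The renormalization, joint-probability, and expected-outcome calculations just described can be sketched as follows. This is an illustrative toy with only two climate scenarios; every probability and salinity value is a hypothetical placeholder standing in for the p1...p9 weights and model outputs of the pilot study.

```python
# Sketch of the decision-tree arithmetic: renormalize retained climate
# scenario weights, form joint probabilities p(climate) * p(year | climate),
# and compute a probability-weighted (expected) metric outcome.
# All probabilities and salinity values below are hypothetical.

# Climate scenario weights before renormalization (two scenarios discarded,
# so the retained weights no longer sum to one)
raw = {"wetter_warmer_4C": 0.24, "same_precip_warmer_2_4C": 0.36}
total = sum(raw.values())
p_climate = {k: v / total for k, v in raw.items()}   # renormalized to sum to one

# Conditional water year probabilities given each climate scenario
p_year = {
    "wetter_warmer_4C":        {"wet": 0.55, "normal": 0.30, "critical": 0.15},
    "same_precip_warmer_2_4C": {"wet": 0.35, "normal": 0.45, "critical": 0.20},
}


def p_state(climate: str, year: str) -> float:
    """Joint probability of one branch of the tree (one state of the world)."""
    return p_climate[climate] * p_year[climate][year]


# Hypothetical metric outcome (e.g., critical-period salinity, ppm) per branch
salinity = {
    ("wetter_warmer_4C", "wet"): 95.0,
    ("wetter_warmer_4C", "normal"): 120.0,
    ("wetter_warmer_4C", "critical"): 160.0,
    ("same_precip_warmer_2_4C", "wet"): 100.0,
    ("same_precip_warmer_2_4C", "normal"): 125.0,
    ("same_precip_warmer_2_4C", "critical"): 170.0,
}

expected = sum(p_state(c, y) * v for (c, y), v in salinity.items())
print(f"Expected salinity across scenarios: {expected:.2f} ppm")
```

Running the same computation for each alternate action yields the expected outcomes compared in the trade-off analysis of the next section.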

Decision Support Tool: Trade-off Analysis
Environmental management problems are ultimately about trade-offs: trade-offs between different objectives, trade-offs between costs and benefits, and trade-offs between risks and rewards. This is not to say that there aren't ever "win-win" situations, but given the multiple dimensions of the problem-from diverse objectives, to costly alternatives, to underlying uncertainty, or risks-it seems too much to hope that these dimensions will be perfectly aligned. In fact, the structuring of the decision problem and the notion of the deliberation phase, in the analytic-deliberative model, are predicated on this idea: there will be some trade-offs entailed in the final decision.
What does engaging in a trade-off analysis entail for CALFED? First we need to consider where this trade-off analysis will take place. Decision-making power is retained by the individual agencies, which cooperate under the 11 program areas within CALFED. Each program area has one or more agencies serving as a "lead." (Agencies are leads in areas where they have statutory obligations, as well as legal authority and funding.) It is ultimately the leaders of these lead agencies who make the decisions. CBDA serves as an administrative umbrella, coordinating program interaction and providing recommendations; it does not have official decision-making power. That said, as the leaders of key agencies sit on CBDA, and it is they who ultimately have (or influence those with) decision-making power within their respective organizations, CBDA plays, at the very least, an auxiliary role in decision-making. Therefore, the trade-off analysis may be most meaningfully carried out at the interagency programmatic level, with the analysis and the final trade-off decisions likely to be discussed and influenced by CBDA. It is important to note that the decision analysis is iterative and requires feedback from decision-makers, and stakeholders, to refine the analysis. That is, a preliminary decision diagram should be circulated for feedback, as should the defined set of metrics for evaluation.
The execution of a trade-off analysis within a program area, such as the ERP, begins with the tasks described earlier-the creation of a decision diagram, the description of key uncertainties, or scenarios, and the defining of common metrics. (The formulation of CALFED-wide common metrics, ideal for fostering communication and interaction between program elements, is discussed in our concluding section.) The next step-the analysis-relies in part on tools described below: expected cost-benefit plots, value contribution diagrams, and 3-D plots to frame the trade-offs. Recommendations can be presented to decision-makers in terms of specific trade-offs, such as the example below that illustrates the trade-off between fish saved and cost.
How these trade-offs should be made requires an analysis of our values, or preferences, as identified in the common metrics. Traditional decision analysis prescribes the construction of a value function, such as a utility function (which translates dollar amounts into units of value). In decision problems with a single decision-maker, or a group of decision-makers whose values are aligned, and the ability to represent outcomes in terms of dollar amounts, the task is greatly simplified. Early examples of applications of decision analysis incorporate both of these simplifications, assigning dollar values to all outcomes and discussing profit as the metric of interest (Howard 1968; Raiffa 1968).
Environmental decision-making cannot exploit the simplifications above: there are multiple objectives and there is no consensus on values. In fact, it is precisely the opposite-there are a number of stakeholders with different, if not directly opposing, values. Additionally, the objectives (water quality, ecosystem restoration, etc.) often do not readily translate into dollar amounts. The complexities associated with moving from the context of an individual decision-maker to that of society were broached in an early paper by Howard (1975), entitled "Social Decision Analysis," wherein a loose framework was proposed for gathering information on values or preferences from the public to guide the assignment of monetary values to metrics such as death, injury, sickness, or property destruction, which may collectively measure the social cost of an action. The mechanism for gathering this information and then structuring it was not specified explicitly.
There are now established methodologies for both representing multiple objectives as a value function and assimilating different value functions; methodologies also exist for translating quantities such as ecosystem restoration, or ecosystem health, into dollar amounts. Value-tree analysis and multi-attribute utility analysis are two examples of such methodologies. The translation of measures for non-dollar metrics into dollar values can be done through direct assignment of dollar amounts or by employing methodologies in environmental economics, such as travel cost models, that assess societal values for environmental protection (Freeman 2003).
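To make the multi-attribute idea concrete, the sketch below shows the simplest form such a value function can take: an additive model in which each metric is normalized to [0, 1] and combined with preference weights. This is a generic illustration of the technique, not a recommended CALFED value function; the metric names, weights, and scores are all hypothetical.

```python
# Hedged sketch of an additive multi-attribute value function: normalize each
# metric onto [0, 1] (1 = most preferred) and take a weighted sum. Weights
# and metric values are purely illustrative.

def normalize(x: float, worst: float, best: float) -> float:
    """Map a metric value onto [0, 1], where 1 is the most preferred level."""
    return (x - worst) / (best - worst)


# Hypothetical preference weights over three common metrics (sum to one)
weights = {"salinity": 0.3, "salmon_survival": 0.4, "cost": 0.3}


def overall_value(scores: dict) -> float:
    """Weighted additive value of one alternative's normalized scores."""
    return sum(weights[m] * s for m, s in scores.items())


# Hypothetical scores for one alternative: lower salinity and lower cost are
# better, so their scales run from worst (high) to best (low)
alt = {
    "salinity": normalize(110.0, worst=160.0, best=90.0),
    "salmon_survival": normalize(0.62, worst=0.40, best=0.80),
    "cost": normalize(450.0, worst=800.0, best=200.0),
}

print(f"Overall value: {overall_value(alt):.3f}")
```

As the text goes on to argue, committing to a single function like this forces value judgments (the weights) that may be better left to deliberation; the tools presented below instead display the trade-offs without collapsing them to one number.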
Rather than advocate the use of one of the above methodologies, we instead choose to present tools that illuminate the different trade-offs, leaving the question of how to make those trade-offs to the ensuing debate among environmental managers. There are several reasons for this. The primary one is that we are introducing a methodology for structuring and communicating information to support decision-making, not a methodology that dictates what decision to make. Another consideration is that such an effort may not be necessary within an institution such as CALFED, given the familiarity and fairly well-defined interests of the various member agencies. Given the right tools to understand the trade-offs associated with different decisions, environmental managers may then be able to identify effective strategies. Additionally, an approach such as those discussed above may prove entirely counterproductive in a collaborative setting where discussion, bargaining, or negotiated rule-making are important elements of the decision-making process.
The concept of trade-off analysis features in the risk management literature. Indeed, Klinke and Renn (2002) observe that where costs are impossible to quantify due to surrounding uncertainties, "painful value trade-offs" are inevitable. The trade-offs within CALFED are not only between risk and cost; there are also trade-offs along numerous other social, economic, and ecological dimensions. Trade-off analysis, as opposed to the assessment of a single objective function, also features in specific applications of decision analysis. Anderson et al. (2001) employ decision analysis to address the question: "to what extent do important trade-offs exist between different objectives in phosphorus management for the lake?" They also present a view of ecosystem management as "more than an issue of better science," requiring broad consensus on trade-offs between alternate management plans.
Here our primary interest is in building on the decision analysis framework to present tools that allow managers and stakeholders to understand, indeed to visualize, the trade-offs inherent in the decision they face. These tools aim to capture the relative performance of metrics, to illustrate the impact of uncertainty, and to allow the visualization of trade-offs along several dimensions. We present three such tools below.

Decision-Analytic Tools: Expected Cost-Benefit Analysis, Value Contribution Diagrams, and 3-D Trade-off Plots

Expected Cost-Benefit Analysis
The standard engineering tool for project evaluation is cost-benefit analysis. A required component of environmental project assessment in many cases, cost-benefit analysis systematically identifies and then presents the costs and the benefits of a given project. The same problem of disparate units arises here as in decision analysis: costs, presumably in dollars, cannot be directly subtracted from environmental benefits for a net assessment. Furthermore, a traditional cost-benefit analysis often presents a single estimate of costs and benefits, neglecting the uncertainty surrounding such estimates. In many instances, sensitivity analysis provides some insight into possible shifts in costs and benefits under different assumptions. In Figure 4 we present an expected cost-benefit diagram, in which both the range of outcomes (computed as the standard deviation) and the "expected," or average, outcome are visible. Here benefits and costs are explicitly presented as uncertain, with the degree of uncertainty visible. Note that although it is not possible to represent the probabilities of the possible outcomes in such a plot, the average value indicates whether high or low outcomes are more likely.
The metric, increase in fish population, is measured here as the number of fish predicted above the baseline population. The ratio essentially assumes that the number of fish per dollar is constant, i.e., that there are constant, not diminishing, returns on the investment. This may not be accurate; nonetheless, if the components of the project cannot be decoupled, the metric provides insight into the average number of fish saved per dollar invested, and the resulting ordering of alternatives still serves as a relative performance comparison.
We can summarize the information in the expected cost-benefit plot as follows: Alternative 1 has the highest expected value but also the greatest downside risk. If we are risk averse, we might not prefer Alternative 1 to Alternative 2. Alternative 2 has a slightly lower expected value but less downside risk. Finally, Alternative 3 provides the lowest expected fish enhancement per dollar spent, and we are relatively certain about its performance (as indicated by the narrow standard deviation). If we are concerned only with fish enhancement, we can discard Alternative 3, as it is dominated by Alternatives 1 and 2.
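For concreteness, the expected value and spread underlying a plot like Figure 4 can be computed directly from per-scenario model outputs. The following sketch uses hypothetical per-scenario fish-per-dollar figures; the alternative names and all numbers are illustrative assumptions, not CALFED results.

```python
from statistics import mean, stdev

# Hypothetical per-scenario outcomes: fish gained above the baseline
# population per million dollars spent, for each alternative.
# All values are illustrative assumptions, not CALFED model output.
fish_per_dollar = {
    "Alt 1": [120, 40, 200, 90, 150],  # high average, wide spread
    "Alt 2": [110, 80, 130, 95, 105],  # slightly lower average, narrower
    "Alt 3": [60, 55, 65, 58, 62],     # lowest average, very narrow
}

# Expected ("average") outcome and one standard deviation per alternative:
# the two quantities displayed in an expected cost-benefit diagram.
summary = {alt: (mean(v), stdev(v)) for alt, v in fish_per_dollar.items()}

for alt, (mu, sd) in summary.items():
    print(f"{alt}: expected = {mu:.0f} fish/$M, spread (1 sd) = {sd:.0f}")
```

As in the discussion above, the alternative with the highest expected value need not be preferred once the spread, i.e., the downside risk, is taken into account.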

Value Contribution Diagrams
Another, more comprehensive, tool for assessing relative value trade-offs is the value contribution diagram. The value contribution diagram, similar in design to a standard McKinsey waterfall diagram, is essentially a "cascade of value." (For more on McKinsey waterfall diagrams see, for example, Rasiel 1999.) The various metrics of interest, in disparate units, are translated into common units: percent improvement in the metric relative to the base case, or existing conditions. This is useful for reviewing the relative contribution of each action (see Figure 5).
The value contribution diagram should be read as follows. First, intuitively, the bigger the stack above the line (positive gains), the greater the relative gains of a given alternative; similarly, the bigger the stack below the line, the greater the relative costs. This does not necessarily mean that the favored alternative has the biggest stack above the line or, conversely, the smallest stack below the line, since the diagram does not weight the metrics. To be more specific, consider the first alternative. There is a roughly 50% improvement in salinity conditions, i.e., a decrease from the base case. However, fish are negatively impacted by this alternative, as can be seen from the red-striped box below the line, corresponding to a roughly 25% decrease in fish survival relative to the base case. Whether or not this is a good trade-off clearly depends on the perspective of the decision-maker or stakeholder. (In the context of CALFED's goal to pursue only actions that have positive impacts on all target areas, it does not appear acceptable.) The costs for Alternative 1 (capital, O&M, and levee failure) are all positive, which is illustrated by the stack below the line. (Here we use the convention that costs are below the line, benefits are above the line, and a negative cost is a benefit.) The 45, 15, and 25 percent boxes correspond to a 45, 15, and 25% increase in capital, O&M, and levee failure costs, respectively, relative to the base. What if an alternative being considered actually lowers annual O&M costs from the current (base case) costs? Then the O&M cost box would appear above the line, representing the percentage of cost savings.
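The percent-improvement translation that underlies the diagram is simple to state precisely. In the sketch below, the base-case and Alternative 1 values are hypothetical placeholders chosen to reproduce the percentages discussed above (a 50% salinity improvement, a 25% fish decrease, and a 45% capital cost increase).

```python
# Hypothetical base-case values: salinity in ppm, fish population count,
# and capital cost in millions of dollars. Placeholders for illustration.
base = {"salinity_ppm": 200.0, "fish": 1000.0, "capital_cost": 10.0}
alt1 = {"salinity_ppm": 100.0, "fish": 750.0, "capital_cost": 14.5}

def pct_change(new, old):
    """Percent change of a metric relative to the base case."""
    return 100.0 * (new - old) / old

# For salinity, an improvement is a *decrease* in ppm, so the sign is
# flipped to make positive values mean "better," per the diagram's rules.
salinity_gain = -pct_change(alt1["salinity_ppm"], base["salinity_ppm"])
fish_gain = pct_change(alt1["fish"], base["fish"])
capital_increase = pct_change(alt1["capital_cost"], base["capital_cost"])

print(salinity_gain, fish_gain, capital_increase)  # 50.0 -25.0 45.0
```

The sign convention is what places each box above or below the line: positive values for benefit metrics stack above, positive cost increases stack below.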
The power of the value contribution diagram is its ability to convey, at a glance, the relative breakdown of the alternatives under consideration. For instance, as discussed above, we see that Alternative 1 derives most of its value from its improvement of salinity conditions and has a negative impact on fish. Alternative 2 is the least costly (its capital costs are the lowest of the three alternatives, as are its levee failure costs, and its O&M costs are the same as the other two) and also contributes the most to fish and habitat. Note also that its salinity enhancement is about one-third of that of Alternative 1, and that Alternative 2 is roughly one-third of the total cost. Alternative 3 involves a large capital outlay but improves all three metrics and improves salinity more than Alternative 2 does. The value contribution diagram conveys the relative performance of alternatives and allows comparison of alternatives across a suite of metrics.
However, the simplification comes at some cost: the expected percentage improvement in the metrics does not show the actual magnitude of such improvements. When comparing costs, for instance, we are only able to deduce that Alternative 2 is the least costly because all of its cost components are less than, or equal to, those of Alternatives 1 and 3. If, say, Alternative 1 had lower O&M costs (as indicated by a smaller box) and Alternative 2 had lower levee failure costs, it would be impossible to say from the diagram which had the lower total costs, since we would then need the magnitudes for that comparison. Nonetheless, the diagram is an important tool because it presents a great deal of crucial information and provides a comprehensive view of actions. It should ideally be used in conjunction with other tools, such as the expected cost-benefit analysis and the 3-D trade-off plots shown below.

3-D Trade-off Plots
The 3-D trade-off plot is a visualization tool for comparing alternatives along three dimensions (see Figure 6). (Although there are often more than three dimensions of value for consideration, and it would be possible to create plots with additional dimensions, the 3-D plots are easy to read and interpret and can be used in succession to compare additional dimensions.) A 3-D trade-off plot is a tilted surface on which coordinates are plotted in three-dimensional space; in this case, the three axes are salinity, habitat, and fish.
Unlike in the value contribution diagram above, we are not constrained by the need for common units. The axes, in this case, have disparate units: ppm of salinity, number of fish, and acres of habitat. We are constrained, however, to ensure that the increasing direction of the axes corresponds to an improvement in the metric of interest. This allows us to interpret the 3-D trade-off plots intuitively: points with higher values along the x, y, or z-axes, respectively, reflect enhancements for these variables. Hence, for the salinity metric, the axis measures "reduction in ppm of salinity," so that an increasing positive value corresponds to an improvement in this metric. Similarly, the metric for fish is increase in survival, measured in number of fish, and for habitat, an increase in the number of acres of favorable habitat.
In our example (Figure 6), we see that Alternative 3 enhances salinity conditions, habitat, and fish, and is the "preferred alternative" (as indicated by the black marker). Alternatives 1 and 2 trade salinity improvements off against habitat improvements.
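One mechanical step that can precede the debate over trade-offs is screening out dominated alternatives, as was done informally for Alternative 3 in the expected cost-benefit discussion. A minimal sketch of such a dominance screen over 3-D trade-off coordinates follows; all coordinates are hypothetical, and a clearly dominated fourth alternative is added only to show the screen at work.

```python
def dominates(a, b):
    """True if a is at least as good as b on every axis and strictly
    better on at least one (all axes oriented so higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical (salinity reduction in ppm, habitat acres, fish count)
# coordinates; values are illustrative only, not CALFED data.
points = {
    "Alt 0": (10.0, 90.0, 400.0),   # worse than every other option
    "Alt 1": (60.0, 100.0, 500.0),  # strongest on salinity, weakest habitat
    "Alt 2": (20.0, 400.0, 500.0),  # strongest on habitat, weakest salinity
    "Alt 3": (45.0, 350.0, 800.0),  # gains on all three axes
}

non_dominated = [
    name for name, p in points.items()
    if not any(dominates(q, p) for other, q in points.items() if other != name)
]
print(non_dominated)  # Alt 0 is screened out; the rest require trade-offs
```

Note that the screen removes only options no one should prefer; choosing among the remaining, mutually non-dominated alternatives still requires the value judgments the tools above are meant to inform.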

Implementation
In this section, we briefly discuss the actual implementation of environmental decision analysis to support decision-making at CALFED. First, it should be noted that decision analysis can be employed by individual program elements, or project teams, to decide on recommendations for, and enhance communication with, environmental managers. The tools we have presented above are designed to facilitate such communication between program elements and managers. Additionally, executing the following two steps will facilitate integration and communication within CALFED: 1) define a set of common metrics for use across programs and 2) develop assessments of the key uncertainties facing CALFED and use these to develop scenarios of interest. The latter, the scenarios of interest, can then be used to define the modeling runs and evaluations performed by individual program elements. For instance, consider the 15 scenarios defined in our example in Figure 3. Each program element would complete model runs, or provide alternate assessments, of the proposed actions under the 15 scenarios defined as a set of "common scenarios of interest" for CALFED. Each variable of interest (salinity, winter-run Chinook salmon survival, etc.) would be evaluated under each water supply scenario ("critical," "normal," or "wet") for each proposed action in the Delta. Finally, the performance of each alternative would be assessed using the common metrics and the evaluation tools presented above.
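The evaluation grid implied by this procedure can be sketched as a simple shared data structure. The action and metric labels below are hypothetical placeholders, and we assume, for illustration only, that the 15 scenarios span the three water-year types.

```python
from itertools import product

# A sketch of a shared evaluation grid for "common scenarios of interest."
# Scenario, metric, and alternative names are hypothetical placeholders.
water_years = ["critical", "normal", "wet"]
scenarios = [f"S{i + 1} ({wy})" for i, wy in
             enumerate(wy for wy in water_years for _ in range(5))]
metrics = ["salinity", "chinook_survival", "habitat"]
alternatives = ["Alt 1", "Alt 2", "Alt 3"]

# One empty cell per (alternative, scenario, metric), to be filled in by
# each program element's model runs or expert assessments.
grid = {
    (alt, s): {m: None for m in metrics}
    for alt, s in product(alternatives, scenarios)
}

def record(alt, scenario, metric, value):
    """Post one model result into the shared grid."""
    grid[(alt, scenario)][metric] = value

record("Alt 1", scenarios[0], "salinity", 185.0)  # hypothetical ppm
```

Once every program element has posted its results, the common metrics and the tools above can be applied uniformly across the filled-in grid.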
The set of common metrics should logically support assessment of the four main objectives at CALFED: water supply reliability, levee system integrity, water quality, and ecosystem restoration. Specific metrics may include those such as the average point salinity metric proposed earlier. Not all of the metrics will necessarily be applicable to every project evaluation; nonetheless, the development of such metrics will facilitate side-by-side comparison of alternatives across programs. Steps one and two above would ideally be completed by a panel representing the 11 program elements, including technical experts in the various areas, or possibly within the Science Program. In Figure 7, we present an integrated view of environmental decision analysis within CALFED: as illustrated, execution of the actual decision analysis cycle falls within the purview of each program element. Each program element, by constructing its own decision diagrams, will be mapping part of the complex decision domain. It would be daunting to construct, in a single pass, a complete decision diagram for the Bay-Delta illustrating all of the interconnections between its variables; however, by sharing these diagrams, and eventually piecing them together where they overlap, CALFED can produce a highly detailed map.

CONCLUSIONS
Environmental decision analysis is presented here as a tool to support decision-making at CALFED. In this context, we suggest that decision analysis performs vital functions: structuring the problem and ensuring thorough analysis; capturing uncertainty; and facilitating communication between program elements and management. Decision analysis also introduces a degree of formalism and transparency into decision-making, which facilitates communication and may be deemed appropriate for public institutions.
Implementing environmental decision analysis requires assessment of key uncertainties and their likely resolutions, representation of the "current state of knowledge," and definition of a common set of metrics for evaluation. As such, it can be an involved process. To be effective as a decision-support tool, decision analysis must remain tractable. Examples of the successful application of decision analysis to complicated real-world problems, including hazardous waste disposal and other environmental problems, suggest that it is both tractable and beneficial. The two steps recommended at the institutional level, defining a set of scenarios of interest as well as common metrics, will improve tractability.
There are additional impediments to successfully implementing decision analysis. North and Renn (2005) acknowledge the difficulty in assigning values to represent outcomes in a setting of incomplete information and potential biases. Also, to be effective, there must be general buy-in, or consensus, that the important elements for consideration are captured in the model; this suggests the importance of an inclusive process, and review, in the initial formulation phase. As with the introduction of any new methodology in an institutional setting, there are hurdles to implementation. While these hurdles need to be addressed, they should not preclude the adoption of a new methodology for enhancing effective decision-making.
The concepts and tools introduced above focus on communicating information about important trade-offs in environmental management. We have argued that these trade-offs are at the heart of environmental decision-making. Rather than present a single "decision criterion," we have provided tools to facilitate communication and debate among environmental managers. We recognize the central importance of bargaining and negotiated rule-making in the decision process. Charged with the collective resource management objectives of water supply reliability, levee system integrity, water quality, and ecosystem restoration, CALFED faces the challenges of complexity and uncertainty. As an innovative, cooperative interagency body, CALFED is positioned to tackle these challenges. However, in the wake of the restructuring that led to the formation of CBDA and BDPAC, and the recently held Little Hoover Commission review of CALFED governance, it may be a good time to ask whether it is worth investing in a formalized process to support the task around which all other activities revolve: making good decisions in the Bay-Delta.