APPENDIX 4
Description of Weight of Evidence Process

This appendix provides more detail on the Weight of Evidence Process developed for spring/summer chinook (Fig. 8). We did not have time to initiate an analogous approach for fall chinook, steelhead, or sockeye. We first narrowed the problem by using sensitivity analysis to identify the key uncertainties affecting the choice of management action (see Appendix 3). These analyses indicated that only seven of 14 uncertainties had a significant effect on outcomes, and that three of the seven were particularly critical. The next steps involved an iterative series of written submissions, workshops, and syntheses to examine the evidence for and against alternative hypotheses for the seven key uncertainties. There were 25 submissions from PATH scientists (about 350 pages), which we synthesized into a 150-page document (Marmorek et al. 1998c). We used four criteria to evaluate alternative hypotheses:

    1) the clarity of the hypothesis (i.e., clear specification of stressors affecting survival, without confounding);
    2) existence of a reasonable mechanism or set of mechanisms by which the hypothesis operates;
    3) consistency with empirical evidence (i.e., Do stock survival indices and hypothesized stressors vary across space and time in a manner consistent with the hypothesis? How well do different hypotheses fit empirical data such as reach survivals and recruits per spawner?); and
    4) validity of the method of projecting the hypothesis into the future (i.e., are mathematical methods consistent with hypotheses and mechanisms that they were meant to represent? Are projections under current operations reasonable given recent measurements not used in model calibration?).
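The sensitivity screening described at the start of this appendix (varying each uncertainty and asking whether it materially changes outcomes) can be sketched as a simple one-at-a-time analysis. The uncertainty names, the outcome function, and the threshold below are hypothetical placeholders for illustration; they are not PATH's actual models or results.

```python
# One-at-a-time sensitivity screening: vary each uncertainty across its
# alternative hypotheses while holding the others at a baseline, and flag
# as "key" any uncertainty whose outcome spread exceeds a threshold.
# All names and effect sizes are illustrative, not PATH's actual models.

def outcome(settings):
    # Placeholder outcome model: a weighted sum of hypothesis settings.
    # A real analysis would run the full life-cycle and passage models.
    effects = {"transport": 0.30, "extra_mortality": 0.25, "habitat": 0.02}
    return sum(effects.get(name, 0.01) * value for name, value in settings.items())

def screen(uncertainties, baseline, threshold=0.05):
    key = []
    for name, alternatives in uncertainties.items():
        results = []
        for alt in alternatives:
            settings = dict(baseline)   # hold everything else at baseline
            settings[name] = alt
            results.append(outcome(settings))
        if max(results) - min(results) > threshold:
            key.append(name)
    return key

# Each uncertainty has two alternative hypothesis settings (illustrative).
uncertainties = {
    "transport": [0.0, 1.0],
    "extra_mortality": [0.0, 1.0],
    "habitat": [0.0, 1.0],
}
baseline = {name: 0.0 for name in uncertainties}
print(screen(uncertainties, baseline))  # → ['transport', 'extra_mortality']
```

In this toy example, only the two uncertainties with large effect sizes survive the screen, mirroring how the 14 uncertainties were reduced to the seven that mattered for the choice of action.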

Arguments and counter-arguments under each of these criteria were laid out systematically, with reference to supporting evidence in the submissions and other literature. All of the documentation (i.e., submissions and synthesis document) was provided to the Scientific Review Panel (SRP) for review. Three weeks later, the four SRP members attended a workshop in Vancouver at which experts in elicitation (people not involved in the PATH process) led them through the following steps (Peters et al. 1998):

    1) training regarding process and judgmental biases;
    2) clear definition of the judgments to be obtained, and explicit exclusion of any recommendations of specific management actions;
    3) independent elicitation of the relative probability of alternative hypotheses, and the rationale for each SRP member’s conclusions;
    4) aggregation and discussion of differences among experts; and
    5) documentation.
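Step 4 (aggregation of the independently elicited probabilities) can be illustrated with a simple equal-weight averaging scheme. The source does not specify the aggregation rule used, and the expert judgments below are invented for illustration; the actual SRP elicitations differ.

```python
# Aggregate independently elicited probabilities across experts by
# equal-weight averaging, then renormalize so the hypothesis weights
# sum to 1. The expert judgments here are hypothetical examples.

def aggregate(elicitations):
    # elicitations: list of dicts, one per expert,
    # mapping hypothesis name -> elicited probability.
    n = len(elicitations)
    pooled = {}
    for judgment in elicitations:
        for hyp, p in judgment.items():
            pooled[hyp] = pooled.get(hyp, 0.0) + p / n
    total = sum(pooled.values())
    return {hyp: p / total for hyp, p in pooled.items()}

# Four hypothetical experts weighing two alternative hypotheses.
experts = [
    {"H1": 0.7, "H2": 0.3},
    {"H1": 0.6, "H2": 0.4},
    {"H1": 0.8, "H2": 0.2},
    {"H1": 0.5, "H2": 0.5},
]
consensus = aggregate(experts)
print({hyp: round(p, 3) for hyp, p in consensus.items()})
```

Inspecting the individual judgments before pooling them (as in the workshop's "aggregation and discussion of differences" step) is what reveals where experts disagree, rather than hiding disagreement inside the average.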

Step 4 led the SRP members to recommend that PATH explore strong management experiments as an alternative approach to determining the probability of alternative hypotheses.

Subsequent to the workshop, the facilitation team applied the individual judgments of SRP members to the hypotheses in the decision analysis, calculated weighted-average outcomes, and compared those weighted averages to the case in which all hypotheses were weighted equally (Peters et al. 1998). We found that applying the weights did not change the relative ranking of actions (A3 generally performed best) and had only a small effect on weighted-average results, primarily because all four SRP members assigned similar weights to the passage/transportation models (which had large effects on the outcomes). These results were presented to the Implementation Team and a large contingent of interested public and media at a meeting in October 1998.
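The comparison between elicited and equal hypothesis weights can be sketched as a weighted-average ranking check. The action names echo the A3 label used above, but the hypothesis weights and outcome values are invented for illustration, not the actual PATH results.

```python
# For each candidate action, compute the probability-weighted average of
# its projected outcomes across alternative hypotheses, then rank actions
# under (a) elicited weights and (b) equal weights and compare rankings.
# All numbers are hypothetical illustrations.

def expected_outcome(action_outcomes, weights):
    # action_outcomes: hypothesis -> projected outcome for one action
    # weights: hypothesis -> probability (summing to 1)
    return sum(weights[hyp] * value for hyp, value in action_outcomes.items())

def rank_actions(outcomes, weights):
    scores = {action: expected_outcome(o, weights) for action, o in outcomes.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Projected outcomes for two actions under two hypotheses (hypothetical).
outcomes = {
    "A2": {"H1": 0.40, "H2": 0.55},
    "A3": {"H1": 0.60, "H2": 0.65},
}
elicited_weights = {"H1": 0.65, "H2": 0.35}
equal_weights = {"H1": 0.50, "H2": 0.50}

print(rank_actions(outcomes, elicited_weights))  # → ['A3', 'A2']
print(rank_actions(outcomes, equal_weights))     # → ['A3', 'A2']
```

In this toy example the ranking is identical under both weightings, which is the kind of robustness check the facilitation team performed: if the top-ranked action does not change when weights move from equal to elicited, the decision is insensitive to that disagreement among experts.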