
Copyright © 2001 by The Resilience Alliance

The following is the established format for referencing this article:
Schiller, A., C. T. Hunsaker, M. A. Kane, A. K. Wolfe, V. H. Dale, G. W. Suter, C. S. Russell, G. Pion, M. H. Jensen, and V. C. Konar. 2001. Communicating ecological indicators to decision makers and the public. Conservation Ecology 5(1): 19. [online] URL: http://www.consecol.org/vol5/iss1/art19/


Report

Communicating Ecological Indicators to Decision Makers and the Public

Andrew Schiller1, Carolyn T. Hunsaker2, Michael A. Kane3, Amy K. Wolfe4, Virginia H. Dale4, Glenn W. Suter5, Clifford S. Russell6, Georgine Pion6, Molly H. Jensen6, and Victoria C. Konar6


1Clark University; 2Pacific Southwest Research Station, USDA Forest Service; 3University of Tennessee, Knoxville; 4Oak Ridge National Laboratory; 5U.S. Environmental Protection Agency, NCEA; 6Vanderbilt University


ABSTRACT

Ecological assessments and monitoring programs often rely on indicators to evaluate environmental conditions. Such indicators are frequently developed by scientists, expressed in technical language, and target aspects of the environment that scientists consider useful. Yet setting environmental policy priorities and making environmental decisions requires both effective communication of environmental information to decision makers and consideration of what members of the public value about ecosystems. However, the complexity of ecological issues, and the ways in which they are often communicated, make it difficult for these parties to engage fully in such a dialogue. This paper describes our efforts to develop a process for translating the indicators of regional ecological condition used by the U.S. Environmental Protection Agency into common language for communication with public and decision-making audiences. A series of small-group sessions revealed that people did not want to know what these indicators measured, or how measurements were performed. Rather, respondents wanted to know what such measurements can tell them about environmental conditions. Most positively received were descriptions of the kinds of information that various combinations of indicators provide about broad ecological conditions. Descriptions that respondents found most appealing contained general reference to both the set of indicators from which the information was drawn and aspects of the environment valued by society to which the information could be applied. These findings can assist with future efforts to communicate scientific information to nontechnical audiences, and to represent societal values in ecological programs by improving scientist-public communication.

KEY WORDS: common language, communication, decision making, ecological indicators, ecological monitoring, environmental assessments, environmental values, public input.

Published: June 29, 2001


INTRODUCTION

Ecological assessments and monitoring programs often rely on indicators to evaluate environmental conditions. An ecological indicator is any expression of the environment that provides quantitative information on ecological resources; it is frequently based on discrete pieces of information that reflect the status of large systems (Hunsaker and Carpenter 1990). These include the condition of resources, magnitude of stresses, exposure of biological components to stress, or changes in resource condition. Because selecting and measuring indicators is a human cognitive and cultural act of observing the environment in a particular way, under certain premises and preferences, indicator information implicitly reflects the values of those who develop and select the indicators.

Ecological indicators are most often developed by scientists, expressed in technical language, and target aspects of the environment that scientists consider useful for understanding ecological conditions. Yet, setting environmental policy priorities and making environmental decisions involves considering public values for ecosystems (e.g., public participation under the National Environmental Policy Act). To adequately include societal values for ecosystems, the public and decision makers must be informed participants in the dialogue about what is important for an assessment and what should be measured. However, the complexity of ecological issues and the ways in which they are often communicated make it difficult for these parties to be involved in this dialogue.

The need for better ways to communicate technical indicator information has been mentioned recently in a diverse set of literature (e.g., U.S. EPA 1987, Pykh and Malkina-Pykh 1994, Hart 1995, Scruggs 1997). Ecologists have long sought concise and cost-effective measures that can characterize ecosystem conditions (Rapport et al. 1985, Shaeffer et al. 1988, McKenzie et al. 1992, Griffith and Hunsaker 1994, Munasinghe and Shearer 1995, Karr and Chu 1997), but data collection and analysis are only part of an effective effort using indicators. With the increased use of indicators for community sustainability initiatives (Walter and Wilkerson 1994), national economic and policy decision making (Straussfogel 1997), international development work (O’Conner et al. 1995), and environmental assessments (Suter 1990, 1993), one key element that emerges repeatedly is the need for effectively communicating technical information from and about indicators to diverse users (McDaniels et al. 1995, Slovic 1995, Ward 1999). Because understandability is a key attribute of indicators, determining how best to communicate them to varied users is, in our view, part of the process of finalizing indicator development.

Established in the early 1990s, the U.S. Environmental Protection Agency’s (EPA’s) Environmental Monitoring and Assessment Program (EMAP - http://www.epa.gov/docs/emap/) was designed as a long-term program using indicators to assess the status and trends in ecological conditions at regional scales for the conterminous United States (Hunsaker and Carpenter 1990, Hunsaker 1993, Lear and Chapman 1994). One goal of EMAP has been to disseminate information from and about ecological indicators to members of the public and to policy makers (Risk Assessment Forum 1992, Environmental Protection Agency 1997). Referring specifically to EMAP, the EPA recently stated that “A useful indicator must produce results that are clearly understood and accepted by scientists, policy makers, and the public” (Jackson et al. 2000: 4).

EMAP wanted to provide information on, for example, the changing conditions of regional forests, landscapes, and surface waters. EPA also wanted EMAP indicators to specifically address aspects of the environment valued by the American public in various regions. Such concerns about reflecting public values and communicating technical environmental information to diverse public and decision making audiences are not limited to EPA or the scope of EMAP (Harwell et al. 1999). Internationally, a number of researchers concerned with various aspects of environmental assessment continue to grapple with these issues (e.g., Burgess et al. 1999, Patel et al. 1999, Ward 1999, Kundhlande et al. 2000).

We took up the subject of indicator communication in the context of an initial goal: to develop a survey methodology for determining the extent to which the indicators selected by scientists for EMAP provide information about societally valued elements of ecosystems. This goal proved larger and more difficult than we originally thought. To begin to develop such a survey methodology, we first needed to translate ecological indicators into understandable language, as well as to develop methods for articulating environmental values such that respondents could associate indicators with what they value about the environment.

This paper focuses entirely on our efforts to translate ecological indicators into common language for communication with public and decision-making audiences. Although various decision makers and members of the public have different backgrounds and levels of understanding of the environment, people across a broad spectrum are now interested and engaged in topics relevant to environmental assessments (Johnson et al. 1999). As such, a single conceptual and methodological approach to communicating indicators of regional environmental conditions would be advantageous.

We use EMAP indicators as a case study for this effort. EMAP indicators make a good case study because they are representative of the types of indicators proposed by a number of ecological monitoring programs (McKenzie et al. 1992, Organization for Economic Cooperation and Development 1994, Syers et al. 1995, Levy et al. 1996). Furthermore, EMAP indicators are attractive for our purposes because they come from different disciplinary lineages, focus on divergent aspects of ecosystems, and use different aggregation and naming conventions. These incongruences provide numerous problems for communicating the science to lay persons, but also afford a rich case with which to begin developing ways to improve communication of environmental information between scientists and nonscientific audiences.

Because communication of ecological indicators is indeed an interdisciplinary problem, we assembled a multidisciplinary team for this research, including experts in forest ecology, landscape ecology, geography, aquatic ecology, ecological risk assessment, anthropology, economics, and psychology. Team member affiliations were Oak Ridge National Laboratory, the Vanderbilt Institute for Public Policy Studies, and the University of Tennessee.

We found that effective communication of ecological indicators involved more than simply transforming scientific phrases into easily comprehensible words. Rather, we needed to develop language that simultaneously fit within both scientists’ and nonscientists’ different frames of reference, such that resulting indicators were at once technically accurate and understandable.

This paper describes how we struggled with concepts of relating societal values to ecological indicators, including the process of shifting from describing what is measured by the indicators to depicting the kinds of information that various combinations of indicators provide about societally valued aspects of the environment. We include descriptions of two preliminary tests of the newly developed indicator translations. One assesses general understandability as well as comparability between indicator translations and valued aspects of the environment, and the other specifically centers on participant reactions to indicator translations when tested in an early draft survey. We end the paper with a discussion of potential applications of our findings and the relevance of this effort to the development of useful ecological indicators.

Details of our research to elicit values are addressed separately (A. Wolfe, A. Schiller, V. H. Dale, C. T. Hunsaker, M. A. Kane, G. W. Suter II, C. S. Russell, and G. Pion, unpublished manuscript) because of the explicit focus of this paper on indicator communication, as well as space limitations. The research reported here is based on our preliminary development and testing of indicator translations, but is not based on statistical tests. We present these initial findings with the goal of helping scientists to communicate more effectively with nontechnical audiences.


EMAP’S INDICATORS

We selected a diverse set of EMAP indicators for forests, streams, and landscapes to conduct this research (Lewis and Conkling 1994, O’Neill et al. 1994, Paulsen and Linthurst 1994). These three resource groups included 54 indicators titled by short scientific phrases (Appendix 1). Written support materials and interviews with members of the EMAP indicator development teams provided background on the indicators. From such information, we determined that the indicator “names” could be described more appropriately as “designations” for sets of measures or series of indicators that were used chiefly for communication among EMAP scientists.

Underlying these short-phrase names are single or multiple measurable items (for examples, see Bechtold et al. 1992, Burkman et al. 1992, Hughes et al. 1992, Lewis and Conkling 1994, O’Neill et al. 1994, Paulsen and Linthurst 1994, Riitters et al. 1995). Many of these indicators grew out of discipline-specific usage and were adapted to EMAP purposes by resource group scientists. Indicator form and content are not parallel across resource groups. Coming from different sources and often having a strong historical legacy, the indicators varied by areas of focus (e.g., physical, chemical, or biological attributes), location within ecological hierarchies (e.g., organism, population, community, landscape), and amount of aggregation under single indicator ‘names.’ Some indicators (e.g., fractal dimension) refer to individual measures, whereas others are groups of measures taken from similar portions of a tree or ecosystem that can reveal different things about ecosystem condition (e.g., branch evaluations, soil classification and physiochemistry). Other indicators consist of multiple measures that, together, can address a specific ecosystem condition (e.g., wildlife habitat).

These incongruities of emphasis, history, detail, and naming among the indicators make it difficult to communicate the science to lay persons. They also present potential hurdles for assessing conditions in complex natural systems, although some degree of difference among resource groups is understandable, given the multiple disciplines from which measures were drawn. This paper deals specifically with the issue of indicator communication and does not emphasize a critique of the indicators themselves. Nevertheless, we briefly discuss how our findings may be useful in reforming existing EMAP indicators to reduce some of their initial incongruities, and in developing new ecological indicators.


A REGION AS A CASE STUDY

We focused on the Southern Appalachian Man and the Biosphere (SAMAB) region for this research. We chose a regional scale because of EMAP’s primary emphasis on assessing and reporting regional environmental conditions. We selected the SAMAB region, specifically, because the research team was located within or adjacent to this region, and because a number of EMAP indicator development tasks had been initiated in the SAMAB region.

Following the Blue Ridge Mountains, the Ridge and Valley Province, and the Cumberland Mountains, the SAMAB region trends northeast to southwest, and includes parts of West Virginia, Virginia, North and South Carolina, Tennessee, Georgia, and Alabama (SAMAB 1996). Major environmental concerns in the region include acid deposition, reduced visibility because of air pollution, forest management to balance public interests (recreation, timber harvest, water quality), sprawl, and maintenance of the region’s characteristically high biodiversity. The region is quite diverse. It contains the largest concentration of federal lands in the eastern United States, including the Great Smoky Mountains National Park, the Blue Ridge Parkway, and the Chattahoochee, Sumter, Cherokee, Nantahala, Pisgah, Jefferson, and George Washington National Forests. The region also contains more than three million people. Primary cities in the area include Knoxville, Chattanooga, and Johnson City, Tennessee; Roanoke, Virginia; and Asheville, North Carolina.


DEVELOPMENT OF COMMON-LANGUAGE INDICATORS

Figure 1 is a flow chart outlining the fundamental steps taken to develop indicator translations. Forest indicators are used to illustrate interim steps in this process throughout the paper. We also developed common-language indicators for the six surface water and 32 landscape indicators proposed by EMAP (Appendix 1). Admittedly, the approach used was partially iterative, evolving as we received feedback from research participants, and as the collective thinking of our broadly multidisciplinary team matured and ultimately coalesced.


Fig. 1. Steps in the common-language indicator (CLI) development process. This paper focuses on the blue-highlighted area on the left. Arrows between the left and right sides of the figure illustrate the relationship between the development of indicator language and the concurrent process of constructing a working list of valued aspects of the environment for the SAMAB region. Each helped to inform the other through analysis of small-group session responses. The values component of our research will be reported in a separate paper (Wolfe et al., in preparation).



We considered the following factors during development of common-language indicators from EMAP’s technical indicators:

1. ability to be understood by persons without ecological expertise;
2. technical accuracy;
3. appropriate level of aggregation for nonscientists to logically associate the relevancy of indicators with valued environmental aspects.

We used both informal focus groups of nonscientists, called small-group sessions (Fontana and Frey 1994, Krueger 1994, Fowler 1998, Stewart and Shamdasani 1998), and frequent review by EMAP indicator experts to guide our decisions on developing common-language indicators. EMAP indicator experts provided feedback at each step about the technical accuracy of the indicator translations, and participants in the small-group sessions helped to guide us toward indicator language that was both understandable to nonscientists and easily related to environmental characteristics with which respondents were most comfortable (namely, valued aspects of the environment as expressed by participants in a separate series of small-group sessions). Small-group sessions will be described in more detail later in this section. Once the common-language indicators were developed, we tested them using an additional small-group session, as well as a series of five “think-aloud” interviews, described later.

Before we engaged either EMAP indicator experts or small-group sessions, our efforts centered on exploring ways to theoretically link ecological indicators and human values for ecosystems. Deciding how to build this conceptual bridge was the subject of considerable debate among our multidisciplinary team members. We began by developing a list of indicators and associated potential “values” related to ecosystems (Table 1, which focuses on forests). On the left side of Table 1 are some potential values for forests; on the right side are corresponding EMAP indicator names.


Table 1. This table represents how we began exploring potential relationships between EPA indicators and human values for ecosystems. The table follows an environmental risk framework with assessment endpoints tied to values, and direct and indirect indicators for these endpoints. Assessment endpoints are the operational expression of values that have been identified for the system. On the table's left are a few potential values for forests, and on the right are listed potentially related forest indicators proposed for use by the Forest Resource Group of EMAP to assess regional forest conditions.

Value: Timber Production
    Assessment Endpoint: Yield of Timber for Species 1, 2, 3...n
    Direct Indicator: Tree Growth
    Indirect or Diagnostic Indicators: Dendrochronology; Soil Classification & Physiochemistry; PAR – Leaf Area; Canopy Diversity; Crown Measures; Branch Evaluations; Lichen Chemistry; Foliar Chemistry; Regeneration; Dendrochemistry; Mortality; Visible Damage

Value: Game Production
    Assessment Endpoint: Yield of Game for Species 1, 2, 3...n
    Direct Indicator: None
    Indirect or Diagnostic Indicators: Wildlife Habitat; Soil Classification & Physiochemistry; Crown Measures; Vegetation Structure; Visible Damage; Canopy Diversity; Foliar Chemistry; Regeneration; Mortality

Value: Hiking and Camping
    Assessment Endpoint: Ability to Move through the Forest
    Direct Indicator: None
    Indirect or Diagnostic Indicators: Vegetation Structure; Crown Measures; Soil Classification & Physiochemistry; Regeneration

    Assessment Endpoint: Miles of Trail per User
    Direct Indicator: None
    Indirect or Diagnostic Indicators: None

    Assessment Endpoint: Forest Conducive to Pitching a Tent
    Direct Indicator: None
    Indirect or Diagnostic Indicators: Vegetation Structure; Soil Classification & Physiochemistry; Crown Measures; Visible Damage

    Assessment Endpoint: Potable Water
    Direct Indicator: None
    Indirect or Diagnostic Indicators: None

    Assessment Endpoint: Users per Campsite
    Direct Indicator: None
    Indirect or Diagnostic Indicators: None

Value: Bird and Wildlife Observation
    Assessment Endpoint: Bird Abundance and Diversity
    Direct Indicator: Wildlife Habitat
    Indirect or Diagnostic Indicators: Wildlife Habitat; Crown Measures; Canopy Diversity; Visible Damage; Mortality; Soil Classification & Physiochemistry; Vegetation Structure; Foliar Chemistry

    Assessment Endpoint: Large-Mammal Abundance and Diversity
    Direct Indicator: None
    Indirect or Diagnostic Indicators: Wildlife Habitat; Foliar Chemistry; Mortality; Visible Damage; Crown Measures; Canopy Diversity; Soil Classification & Physiochemistry; Vegetation Structure

We realized that these two extremes of Table 1, values and EMAP indicator names, could not be easily compared by nontechnical audiences for several reasons, including a mismatch in detail and scale. We related a potential public value for the environment (timber production) to indicators of forest condition that ecologists can use to assess timber production (branch evaluations, tree growth, overstory diversity, regeneration, vegetation structure, dendrochronology, and others). For example, we reasoned that it would be difficult for someone to determine what the term “branch evaluations” means because it is nonspecific and only indirectly related to a value (as is the case for many of the indicator names). Further, because this indicator appears to focus on tree branches only, we believed that, even with added detail of the underlying measures that make up “branch evaluations,” non-ecologists would have considerable difficulty relating it to timber production. The recognition of this incomparability moved us to develop more useful translations for the indicators.
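For readers who think in terms of data structures, the following is a minimal sketch (ours, not part of the original study) of how one row of Table 1 could be encoded under the assessment-endpoint framework described in the table caption. The indicator and endpoint names come from the table; the class layout, and the split between direct and indirect indicators, follow our reading of it.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssessmentEndpoint:
    """An operational expression of a value (Table 1, column 2)."""
    name: str
    direct_indicators: List[str] = field(default_factory=list)
    indirect_indicators: List[str] = field(default_factory=list)

@dataclass
class ForestValue:
    """A potential public value for forests and its assessment endpoints."""
    value: str
    endpoints: List[AssessmentEndpoint] = field(default_factory=list)

# One row of Table 1, encoded for illustration only.
timber_production = ForestValue(
    value="Timber Production",
    endpoints=[
        AssessmentEndpoint(
            name="Yield of Timber for Species 1, 2, 3...n",
            direct_indicators=["Tree Growth"],
            indirect_indicators=[
                "Dendrochronology",
                "Soil Classification & Physiochemistry",
                "PAR - Leaf Area",
                "Canopy Diversity",
                "Crown Measures",
                "Branch Evaluations",
                "Lichen Chemistry",
                "Foliar Chemistry",
                "Regeneration",
                "Dendrochemistry",
                "Mortality",
                "Visible Damage",
            ],
        )
    ],
)

print(timber_production.endpoints[0].direct_indicators)  # ['Tree Growth']
```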

We then began working with EMAP indicator experts to develop descriptions of what each indicator measures in the field, and what each can tell scientists about the environment. A similar evolution of detail occurred for the values side of our work (for details, see A. Wolfe, A. Schiller, V. H. Dale, C. T. Hunsaker, M. A. Kane, G. W. Suter II, C. S. Russell, and G. Pion, unpublished manuscript). The bridging between initial indicator names and values is schematically depicted in Fig. 2A.


Fig. 2. (A) Schematic depiction of our conceptual approach to "bridging" the relationship between potential values for the environment and technical indicators. (B) Examples of bridging steps between indicators and values for forests, streams, and landscapes.



From this conceptual starting place, we used a series of five small-group sessions, each consisting of three to six nonscientists, to evaluate and guide further development of common-language indicator descriptions. Sessions were run by an anthropologist, a geographer, and an aquatic ecologist from our research team. Session participants were support staff at Oak Ridge National Laboratory (ORNL). We engaged this group of persons because (1) as residents of the SAMAB region, they had some degree of familiarity with it; (2) we believed that their responses would be useful proxies for broader SAMAB publics in this initial phase of indicator language development; and (3) they were available at no cost. Volunteer participants were 25 to 50 years old, and all but one were female (a reflection of ORNL support staff demographics). All had completed high school and several had some college courses, but no college degrees. There was no overlap of participants among the sessions.

We conducted a total of 10 small-group sessions: five for developing indicator language translations, four for the side of our work involved with valued aspects of the environment, and one final session to evaluate the resulting common-language indicators. In this paper, we describe only the five groups involved with developing indicator language, as well as the additional small-group session and the five “think-aloud” interviews used to evaluate the common-language indicators resulting from the primary set of five small-group sessions.

In these five primary small-group sessions, we first described the project and the SAMAB region (the latter so that participants would be reminded to think more broadly about the region than their own town or neighborhood). Then participants were provided the opportunity to ask clarifying questions. Next, we elicited discussion on the following three topics: (1) whether indicator descriptions were understandable; (2) what participants thought when reading indicator descriptions; and (3) whether participants thought that they could explain the meaning and relevance of the indicators to others.

Table 2 presents examples of the progression of indicator language development through the first four of the five small-group sessions. Specifically, feedback from session participants led us sequentially from descriptions of what each indicator measures, to what each indicator measures and what it indicates, to what each indicator conveys about the environment (Table 2).


Table 2. Interim steps in the development of common-language indicators (CLIs) are shown here. Four indicators from forests are used for illustration. The steps listed for each column refer to Fig. 1. Small-group session numbers refer to which small-group sessions worked with which language iteration.

EMAP Indicator Name (Step 1): Bio-indicator Plants - Ozone
    What it measures (Step 2; presented to Small-Group Session 1): Visible damage to plant leaves caused by air pollution.
    What it measures and indicates (Step 3; presented to Small-Group Sessions 2 & 3): Measures the visible damage to leaves of particular plants, which provides information about the level of air pollution in forests.
    What it conveys about the environment (Step 4; presented to Small-Group Sessions 3 & 4): Air pollution effects on plants.

EMAP Indicator Name (Step 1): Photosynthetically Active Radiation - Leaf Area
    What it measures (Step 2; presented to Small-Group Session 1): The amount of sunlight captured and used by a tree’s leaves for energy and growth.
    What it measures and indicates (Step 3; presented to Small-Group Sessions 2 & 3): Measures the amount of sunlight captured and used by trees for energy, which provides information about total leaf area and density, and how efficiently energy is being gathered by trees for growth.
    What it conveys about the environment (Step 4; presented to Small-Group Sessions 3 & 4): How efficiently energy is being gathered by trees for growth, which helps to determine tree health.

EMAP Indicator Name (Step 1): Lichen Communities
    What it measures (Step 2; presented to Small-Group Session 1): The number and different types of lichens living on woody debris.
    What it measures and indicates (Step 3; presented to Small-Group Sessions 2 & 3): Counts the different kinds and total number of small, moss-like plants that live on trees and rocks in a forest, which provides information about one aspect of plant diversity in the forest, and about possible effects of air pollution on the forest.
    What it conveys about the environment (Step 4; presented to Small-Group Sessions 3 & 4): The effects of some air pollution on the forest, including changes in the numbers and types of certain plants.

EMAP Indicator Name (Step 1): Root Ecology
    What it measures (Step 2; presented to Small-Group Session 1): The condition of trees at the root/soil interface.
    What it measures and indicates (Step 3; presented to Small-Group Sessions 2 & 3): Measures the condition of tree roots, which can help identify reasons for observed tree growth and vegetation structure, including soil condition (wetness, compaction, etc.), and disease and other stresses to trees.
    What it conveys about the environment (Step 4; presented to Small-Group Sessions 3 & 4): How soil condition, vegetation structure, disease, and other stresses affect trees.


A suite of important findings emerged from these first four small-group sessions. We learned that (1) asking respondents to consider all indicators created an excessive burden on them; (2) respondents did not want or need descriptions of what indicators measure; (3) respondents preferred to be presented with the types of information that the indicators could provide about the environment; and (4) based on evolving lists of valued aspects of the environment (developed concurrently in a separate series of small-group sessions), there was a mismatch between the detailed level of specificity of the information provided by individual indicators and the more general level at which respondents were comfortable relating indicators to environmental characteristics. To our surprise, respondents were content to let “the scientists” choose what to measure, as long as measures provided information that respondents could understand and considered valuable. Further, although ecologists frequently use a hierarchical framework to order ecosystem attributes and processes (Angermeier and Karr 1994, Harwell et al. 1999), session participants did not respond positively when we grouped individual indicator descriptions by such a framework; their frame of reference differed from that of the scientists. These results guided the multidisciplinary team to develop a different approach.

Scientists developed the indicators to be used in various combinations to assess the condition of ecosystems. Thus, it is natural to consider how combinations of indicators could provide information to members of the public about valued environmental conditions. Combining the indicators into broader narratives means that fewer indicator descriptions would be needed. This simultaneously allows indicator language to be more easily related to valued aspects of the environment by reducing incongruities in detail between the two. In sum, participants seemed to be asking for indicator language that could describe the broad types of information that various combinations of indicators can provide about valued environmental conditions, regardless of where in the ecological hierarchy the indicators focus.

Based on these findings, a subset of the research team combined the indicators using this conceptual framework. Their work was reviewed by other members of the team as well as EMAP resource group leaders, and was then presented to participants in the fifth small-group session. Indicator language was carefully constructed to refer generally both to the set of indicators from which the information was drawn and to aspects of the environment valued by society to which the information could be applied. These translations based on combined indicators became our common-language indicators (CLIs).

The process for combining the forest indicators is depicted in Fig. 3. All of the CLIs that we developed for forests, streams, and landscapes are listed in Appendix 2. The combined indicator descriptions, like all previous translation attempts, had to pass the dual tests of being understandable to a nontechnical audience and maintaining technical accuracy with respect to the individual indicators from which they were crafted. Like those in preceding sessions, participants in the fifth small-group session were asked whether indicator descriptions were understandable, what they thought of while reading them, and whether they thought they could explain to others what the indicators mean and how they are relevant. Participants in this fifth session had a much easier time understanding and discussing these combined indicator descriptions than any of the descriptions used in the foregoing sessions. For the first time, participants began to move away from grappling with understanding the indicators, to spontaneously talking among themselves about changes in environmental conditions that they had noticed. Instead of struggling with interpretation, participants were engaging directly with the environmental issues that EMAP indicators could help to illuminate.


Fig. 3. Relationships between individual indicators for forests and the common-language indicators (CLIs) developed during this project. The left-hand column was presented to and discussed with small-group session five, and then was tested using a subsequent small-group session that made comparisons between CLIs and valued aspects of the environment. Finally, a series of five ‘think-aloud’ interviews further tested the CLIs using a preliminary draft survey.



TESTING THE COMMON-LANGUAGE INDICATORS

Next, we began testing the CLIs we had developed, both for understandability and for comparability with a list of valued aspects of the environment being developed through a separate series of small-group sessions. Tests included an additional small-group session, as well as a set of five separate “think-aloud” interviews, both of which are described in this section. So that the reader can better follow the tests of comparability between CLIs and valued aspects of the environment, we first provide a very brief description of what we mean by valued aspects of the environment, and how we chose to use this practical construction of values in this study.

From “values” to “valued aspects”

A parallel process to the development of common-language indicators was the construction of a suite of “valued aspects of the environment” from “values” (Fig. 2A schematically depicts this transition). It is not our goal to detail the process in this paper, but to present its context and results such that the reader can better follow our tests of comparability between CLIs and the draft list of valued aspects of the environment that resulted from it (for details of the values side of our work, see A. Wolfe, A. Schiller, V. H. Dale, C. T. Hunsaker, M. A. Kane, G. W. Suter II, C. S. Russell, and G. Pion, unpublished manuscript).

To the average person, “value” may not be a confusing word; it is popularly used to mean the general importance or desirability of something. However, a literature review on values relating to the environment (Bingham et al. 1995) demonstrated that different definitions of value have emerged in different disciplines. A number of classification schemes for human values have also been developed (e.g., Maslow 1954, Ralston 1988, Hargrove 1989, Paehlke 1989, Callicott 1992). The diversity of and conflicts among the “value” taxonomies in the literature make it clear that no single agreed-upon taxonomy or system of organization exists. Additionally, the taxonomies provide little guidance for presenting the concept of environmental “values” to members of the public or decision makers.

Initially, we derived a list of held values that were potentially relevant for the environment, based on extant literature and the team’s expertise. However, such deeply held, core values (such as freedom and justice) proved extremely difficult for session participants to identify with, to articulate, or to associate with the environment in any meaningful way. We then developed categories in an attempt to assist participants in a second small-group session on values to organize and relate held values to the environment. We thought that, by placing held values into categories, participants could more readily make the necessary links between such broad values and the environment. However, participants in the second small-group session on values found the idea of categories confusing, ambiguous, and unworkable. For example, session participants disagreed about categorization, partly because they thought that some values fit equally well into multiple categories.

Participants unanimously stated that a longer, more explicit list of valued aspects of the environment was superior to a shorter, broader list of value categories that attempt to organize vaguely conceptualized held values in relation to the environment. Drawing from this guidance, the literature, and the team’s expertise, we crafted a more specific list of valued aspects of the environment, in which “aspects” are defined as elements that people value about the environment, and “valued” indicates importance assigned to these elements. Stated simply, we sought to identify and refine a list of those aspects of the environment that people deem important in the SAMAB region. In subsequent small-group sessions, we continued the iterative process of refining this list. The resulting list of valued aspects of the environment expressed by session participants is presented in Appendix 3.

Testing CLIs in relation to valued aspects of the environment

CLIs were tested using both a small-group session and a set of five separate “think-aloud” interviews. We first describe the small-group session and its results, and then turn to the “think-aloud” interviews.

The small-group session was run like previous sessions, but focused on assessing comparability between CLIs (Appendix 2) and valued aspects of the environment (Appendix 3), in addition to understandability of indicator translations. Participants were provided lists of the CLIs and valued aspects of the environment, and were asked to use a pen to relate items in one list to those in the other. Then, participants were asked how understandable they found the CLIs, if they could explain the CLIs easily to other participants, and how difficult they found the process of associating CLIs with valued environmental aspects for the region.

Respondents in this small-group session noted that they found the CLIs understandable and easy to describe to others. Respondents also clearly indicated that the CLIs could be related, with relative ease, to valued aspects of the environment. Examples of the kinds of relationships that participants noted between indicators and values for forests, streams, and landscapes are illustrated in Fig. 2B. This figure shows relationships in the context of translation from EMAP indicator names to CLIs, on one hand, and from values to valued aspects, on the other.

We next evaluated an initial draft of a written survey based on CLIs and valued aspects of the environment using five one-on-one “think-aloud” interviews (Ericsson and Simon 1984), specifically to further test the ability of nonscientists to associate CLIs (built from EMAP indicators) with valued aspects of the environment for the SAMAB region. “Think-aloud” interviews help to reveal how individuals with varying expertise in a subject area reason and solve problems related to that subject; they are particularly useful because respondents are cued to verbalize as they reason through the questions, illuminating their thought processes while answering (Forsyth and Lessler 1991, Mingay et al. 1991).

“Think-aloud” interviews were conducted at Vanderbilt University by a psychologist member of our team. The subjects were graduate students, none of whom were in programs focusing on environmental topics. Interview techniques were designed to keep the participants talking as they worked on the tasks requested of them, based on procedures typically used for administering concurrent verbal protocols (e.g., Bishop 1992, Bolton and Bronkhorst 1995).

The survey listed CLIs and valued aspects of the environment for the SAMAB region, and asked respondents to (1) associate each CLI with every aspect of the environment for which the CLI could provide information; and then (2) provide separate importance ratings for each CLI, based on its importance in assessing conditions of each valued environmental aspect with which it had been associated.
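To make these two survey tasks concrete, the sketch below records one hypothetical respondent's answers. The CLI and valued-aspect labels are drawn from examples quoted elsewhere in this paper; the data layout and the 1-5 rating scale are our own assumptions, not features of the draft instrument.

```python
# Illustrative sketch only. The CLI and valued-aspect labels come from examples
# in the text; the record layout and the 1-5 scale are assumptions, not part of
# the draft survey itself.

clis = [
    "Contamination of forest plants by air pollution",
    "The overall health of forest plants",
]
valued_aspects = [
    "Healthy plants grow in forests",
    "The air is clean and unpolluted in forests",
]

# Task 1: associate each CLI with every valued aspect it could inform.
associations = {
    "Contamination of forest plants by air pollution": [
        "The air is clean and unpolluted in forests",
        "Healthy plants grow in forests",
    ],
    "The overall health of forest plants": [
        "Healthy plants grow in forests",
    ],
}

# Task 2: rate the importance of each CLI separately for each aspect with
# which it was associated (None = not yet answered; 1 = least, 5 = most).
ratings = {
    (cli, aspect): None
    for cli, aspects in associations.items()
    for aspect in aspects
}
ratings[("Contamination of forest plants by air pollution",
         "The air is clean and unpolluted in forests")] = 5
```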

In general, respondents indicated that the wording of the questions, CLIs, and valued aspects of the environment was reasonably clear, and that associations between CLIs and valued environmental aspects were easy to make. Together with the last small-group session held at Oak Ridge, this finding suggests that the translation of technical indicators into common language was successful.

Respondents did note several problems specifically with the draft survey instrument. Some doubted that the list of aspects of the environment comprised a “coherent whole,” and confusion surfaced about whether they should provide “importance” ratings in terms of personal desires or in terms of what they perceive to be important for society generally.

Further, it was apparent that the length of a comprehensive survey might be problematic. Some respondents found the survey monotonous, and some were frustrated by having to rate each CLI for each valued aspect, rather than rating a CLI once. In addition, combining technical indicators to create common-language indicators resulted in some CLIs being perceived by interviewees as too similar to some valued aspects of the environment. Several respondents also were perplexed when asked to rate items that they considered to be definitive elements of forests, such as “forests cover large areas of land,” “the air is clean and unpolluted in forests,” and “healthy plants grow in forests.” This kind of reaction is problematic because it may dissuade respondents from answering the questions. In sum, respondents compared CLIs with valued aspects of the environment effectively; however, more experimentation on both the structure and length of a survey built from these components is needed to address these issues.


DISCUSSION

Our approach to translating scientific indicators into common language originated with the larger ambition of developing a survey method to determine whether indicators proposed by EMAP provide information considered relevant by members of the public. In this paper, we have focused entirely on our efforts to develop common-language translations of environmental indicators to improve dialogue between scientists and nonscientists, using a subset of EMAP’s indicators as a case study.

In this fairly specific context, we discovered that the best approach was to describe the kinds of information that various combinations of indicators could provide about environmental conditions, rather than to describe what in particular was being measured or how measurements were performed. Study participants responded best to indicator translations that contained general reference to both the set of indicators from which the information was drawn, and aspects of the environment valued by society to which the information could be applied. Although more testing and refinement are needed, our findings may have relevance for a variety of applications in communication between scientists and nonscientists that have proven to be challenging (e.g., U.S. EPA 1987, Pykh and Malkina-Pykh 1994, Ward 1999).

CLIs could be used to help convey status and trends in ecological conditions to the public and decision makers in reports, on television, and through other media. As an example, our investigation suggests that nonscientists may find information on the changing levels of “contamination of forest plants by air pollution” in a region to be more salient than specific information on the individual measures used by EMAP that together inform that topic (e.g., foliar chemistry, lichen chemistry, dendrochemistry, and branch evaluations). If our findings are any indication, descriptions of individual scientific measures or their results may receive less weight in the minds of nonscientists than the environmental implications that can be discerned from a combined set of these measures described in a few-word narrative focused on changes in broader environmental conditions. Without a reporting mechanism such as CLIs, much environmental information presented as discrete facts or findings may be ignored by the public and minimally used by decision makers, regardless of scientific relevance.

Changes in environmental conditions reported through CLIs reflect changes in combinations of measures; this provides a link, albeit generalized and not expressly quantitative, to underlying data. The CLIs were never intended to be quantitative expressions of the combination of measures from which they are built. Rather, they were developed to communicate about broader environmental conditions for which specific proposed indicators could provide certain types of information. More detail about each of the CLIs could be made available at the request of information users, including quantitative data, by moving back toward the original set of EMAP indicators from which CLIs were crafted. The relevancy of specific data for the user would likely improve, however, because such data would be presented in the context of the broader CLI that they inform.
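A minimal sketch of this arrangement, under our own assumptions (an illustration, not an EMAP product): each CLI serves as a label for the group of underlying indicators that inform it, and a reader can move back from the CLI to those measures on request. The air-pollution grouping follows the example given above; the helper function is hypothetical.

```python
# Illustrative mapping only; the single entry follows the example in the text,
# and the helper function is an assumed interface, not part of EMAP.
cli_to_indicators = {
    "Contamination of forest plants by air pollution": [
        "Foliar Chemistry",
        "Lichen Chemistry",
        "Dendrochemistry",
        "Branch Evaluations",
    ],
}

def indicators_behind(cli: str) -> list:
    """Return the underlying EMAP measures that inform a given CLI."""
    return cli_to_indicators.get(cli, [])

print(indicators_behind("Contamination of forest plants by air pollution"))
```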

Changes in time or spatial scale for the underlying data upon which the CLIs are based simply change the information presented in the CLIs. In essence, the CLIs describe environmental conditions and how they are changing over time, or how they differ when spatial scale or assessment location varies, for broader environmental elements than could be reported with an individual EMAP indicator. Changes in conditions from one place to another or from one time to another do not get hidden or “contaminated” in the process of being communicated via a CLI. Rather, changing conditions are merely reported with regard to environmental aspects that make sense to people. For instance, instead of reporting on changes in “lichen chemistry” over time in one region, or comparing “lichen chemistry” between regions, this indicator information is brought together with complementary pieces of information from other indicators to describe broader environmental conditions to which people can better relate, such as “the contamination of forest plants by air pollution,” or “the overall health of forest plants.”

As a consequence of this unique positioning, CLIs provide conceptual links between monitoring programs such as EMAP and formal ecological risk assessment by bridging current indicators (measurement endpoints) and valued aspects of the environment for which people desire information (these are directly connected with specific related assessment endpoints; Suter 1990, 1993). Harwell et al. (1999) also provide a framework called “essential ecosystem characteristics” that, like our CLIs, may assist in linking ecological monitoring and risk assessment.

Scientists themselves may find some value in reconnecting to the broader picture when developing ecological indicators. With its focus on technical accuracy and methodological refinement, the science of the environment can too often become myopic. Focusing only on individual indicators can unintentionally move the spotlight away from the information society is looking for, and into a realm disconnected from the information’s intended utility. Stepping back from the science with informed public input, as we have done, can, perhaps counterintuitively, clarify the science by reconnecting means with ends.

The CLI process may also help scientists refine existing ecological indicators. The process that we used to create CLIs reflects the nature of the EMAP indicators initially provided to us. As mentioned earlier, substantial incongruities exist in EMAP indicators with regard to the type of measures used, how those measures are combined under indicator headings, and naming conventions. CLIs cannot themselves correct disparities in what is measured by the various EMAP indicators, although they do facilitate dialogue with the public on this topic. The CLI process could, however, be applied to help standardize data aggregations and develop indicator descriptions that are more useful. In this important way, our approach to developing CLIs could be used to improve existing indicators and bring them to a more useful form.

The performance of common-language indicators can be evaluated based on straightforward and practical criteria. Such criteria should evaluate CLIs on their links back to scientific measures as well as forward toward communication ability and relevance to the information needs of users. Each of the groups involved in CLI development and use would likely have a different role in evaluating the CLIs. Environmental scientists, for example, should not be expected to agree with nontechnical members of the public regarding understandability of the translated indicators. Instead, scientists could most advantageously focus on determining if the CLIs are true to the underlying measures from which they are built. Members of the public could be polled to determine if the CLIs relate to valued aspects of the environment in respective regions or nations, and if so, to what degree. And lastly, a broad selection of both decision makers and members of the public could be polled to determine if the CLIs effectively communicate to audiences in different regions and with different backgrounds.

It is likely that CLIs resulting from our process would be somewhat different in various regions of the United States, and particularly likely that CLIs would differ in various countries in which cultures and values may be quite different. Generally, however, we believe that the process of CLI development would work well in these disparate contexts, but that the resulting CLIs would vary from place to place and culture to culture, even if based on similar environmental indicators. In fact, that they would likely differ means that the translation process functions as it should. The degree to which science is involved in the decision-making process, however, varies greatly between countries and could have an influence on the success of this approach, depending on the nature of such science–policy–public relationships.

Final thoughts

Although the findings of this project are preliminary and not based on statistical tests, our results suggest that CLIs are a promising concept. Both the notion of CLIs and the approach used to develop them require further testing and refinement. A primary part of such testing is the non-trivial challenge of developing a survey, based on CLIs and valued aspects of the environment, that is complete enough to represent the holistic nature of ecosystems (e.g., forest, desert, wetland, stream, landscape), yet concise enough to hold respondents’ interest. Some members of our team have recently developed and tested an experimental approach to such a survey, focused on forest ecosystems, that evolved out of the work described here (Russell et al. 2001).

We hope that others find the CLI process helpful as a template for further work on developing effective communication tools for technical environmental information. Effective communication on environmental status and trends has significance for the scientific community, decision makers, members of the public, and governmental agencies charged with collecting and reporting such information. As the Ecological Society of America stated, “To be useful to decision-makers, ecological information must be both accessible and relevant to their mandates and responsibilities” (Lubchenco et al. 1991). We look forward to advances in this area of inquiry that may further illuminate value-indicator relationships, as well as more general advances in communicating technical ecological information to policy makers and members of the public.




Acknowledgments:

We would like to thank the EMAP resource group personnel who helped on this project, including Timothy Lewis, Robert Hughes, Phil Larsen, Robert O’Neill, and Kurt Riitters. We would also like to thank H. Kay Austin, Eric Hyatt, Michael Brody, and Marjorie Holland for their assistance with this project, and Roger Kasperson, Rob Goble, and six anonymous reviewers for extremely helpful reviews of earlier versions of this manuscript. We also wish to thank C. S. Holling and Lee Gass for believing in the value of the research on which this paper was based, and for their unvarying support. Finally, we would like to thank those people who participated in the group sessions.

This research was supported by the U.S. Environmental Protection Agency under IAG DW89936506-01 with the U.S. Department of Energy under contract DE-AC05-84OR21400 with Lockheed Martin Energy Systems. Although the research described in this article has been funded by the U.S. EPA, it has not been subjected to the agency’s peer and administrative review, and, therefore, may not necessarily reflect the views of the agency; no official endorsement should be inferred.


LITERATURE CITED

Angermeier, P., and J. R. Karr. 1994. Biological integrity versus biological diversity as policy directives: protecting biotic resources. BioScience 44(4):690-697.

Bechtold, W. A., W. H. Hoffard, and R. L. Anderson. 1992. Forest health monitoring in the South in 1991. USDA Forest Service General Technical Report SE-81.

Bingham, G., M. Brody, D. Bromley, E. Clark, W. Cooper, R. Costanza, T. Hale, G. Hayden, S. Kellert, R. Norgaard, B. Norton, J. Payne, C. Russell, and G. Suter. 1995. Issues in ecosystem valuation: improving information for decision making. Ecological Economics 14:73-90.

Bishop, G. S. 1992. Qualitative analysis of question-order and context effects: The use of think-aloud responses. In N. Schwarz and S. Sudman, editors. Context effects in social and psychological research. Springer-Verlag, New York, New York, USA.

Bolton, R. N., and T. M. Bronkhorst. 1995. Questionnaire pretesting: computer-assisted coding of concurrent protocols. In N. Schwarz and S. Sudman, editors. Answering questions: methodology for determining cognitive and communicative processes in survey research. Jossey-Bass, San Francisco, California, USA.

Burgess, J., C. M. Harrison, and P. Filius. 1999. Environmental communication and the cultural politics of environmental citizenship. Environment and Planning A 30(8):1445-1460.

Burkman, W. A., W. H. Hoffard, and R. L. Anderson. 1992. Forest health monitoring. Journal of Forestry 90(9):25-26.

Callicott, J. B. 1992. Aldo Leopold’s metaphor. Pages 42-56 in R. Costanza, G. G. Norton, and B. D. Haskell, editors. Ecosystem health. Island Press, Washington, D.C., USA.

Environmental Protection Agency. 1997. Environmental Monitoring and Assessment Program: research strategy. EPA/620/R-98/001. Office of Research and Development, U.S. EPA, Washington, D.C., USA.

Ericsson, K. A., and H. A. Simon. 1984. Protocol analysis: verbal reports as data. Massachusetts Institute of Technology Press, Cambridge, Massachusetts, USA.

Fontana, A., and J. H. Frey. 1994. Interviewing: the art of science. Pages 361-376 in N. K. Denzin and Y. S. Lincoln, editors. Handbook of qualitative research. Sage, Thousand Oaks, California, USA.

Forsyth, B. H., and J. T. Lessler. 1991. Cognitive laboratory methods: a taxonomy. In P. B. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, and S. Sudman, editors. Measurement errors in surveys. John Wiley, New York, New York, USA.

Fowler, F. J. 1998. Design and evaluation of survey questions. Pages 343-374 in L. Bickman and D. J. Rog, editors. Handbook of applied social research methods. Sage, Thousand Oaks, California, USA.

Griffith, J. A., and C. T. Hunsaker. 1994. Ecosystem monitoring and ecological indicators: an annotated bibliography. EPA/620/R-94/021. U.S. Environmental Protection Agency, Athens, Georgia, USA.

Hargrove, E. 1989. Foundations of environmental ethics. Prentice Hall, Englewood Cliffs, New Jersey, USA.

Hart, M. 1995. Guide to sustainable community indicators. Quebec-Labrador Foundation (QLF)/Atlantic Center for the Environment, Ipswich, Massachusetts, USA.

Harwell, M. A., V. Myers, T. Young, A. Bartuska, N. Gassman, J. H. Gentile, C. C. Harwell, S. Appelbaum, J. Barko, B. Causey, C. Johnson, A. McLean, R. Smola, P. Templet, and S. Tosini. 1999. A framework for an ecosystem integrity report card. BioScience 49(7):543-556.

Hughes, R. M., T. R. Whittier, S. A. Thiele, J. E. Pollard, D. V. Peck, S. G. Paulsen, D. McMullen, J. Lazorchak, D. P. Larsen, W. L. Kinney, P. R. Kaufman, S. Hedtke, S. S. Dixit, G. B. Collins, and J. R. Baker. 1992. Lake and stream indicators for the U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program. Pages 305-335 in D. H. McKenzie, D. E. Hyatt, and V. J. McDonald, editors. Ecological indicators. Volume 1. Elsevier Applied Science, New York, New York, USA.

Hunsaker, C. T. 1993. New concepts in environmental monitoring: the question of indicators. Science of the Total Environment Supplement part 1:77-96.

Hunsaker, C. T., and D. E. Carpenter, editors. 1990. Ecological indicators for the Environmental Monitoring and Assessment Program. EPA/600/3-90/060. Office of Research and Development, US EPA, Research Triangle Park, North Carolina, USA.

Jackson, L., J. Kurtz, and W. Fisher, editors. 2000. Evaluation guidelines for ecological indicators. EPA/620/R-99/005. U.S. Environmental Protection Agency, National Health and Environmental Effects Research Laboratory, Research Triangle Park, North Carolina, USA.

Johnson, K. N., J. Agee, R. Beschta, V. Dale, L. Hardesty, J. Long, L. Nielsen, B. Noon, R. Sedjo, M. Shannon, R. Trosper, C. Wilkinson, and J. Wondolleck. 1999. Sustaining the people's lands: recommendations for stewardship of the national forests and grasslands into the next century. Journal of Forestry 97(5):6-12.

Karr, J. R., and E. W. Chu. 1997. Biological monitoring and assessment: using multimetric indexes effectively. EPA 235-R97-001. University of Washington, Seattle, Washington, USA.

Krueger, R. A. 1994. Focus groups: a practical guide for applied research. Second edition. Sage, Thousand Oaks, California, USA.

Kundhlande, G., W. L. Adamowicz, and I. Mapaure. 2000. Valuing ecological services in a savanna ecosystem: a case study from Zimbabwe. Ecological Economics 33:401-412.

Lear, J. S., and C. B. Chapman. 1994. Environmental Monitoring and Assessment Program (EMAP) cumulative bibliography. EPA/620/R-94/024. U.S. Environmental Protection Agency, Research Triangle Park, North Carolina, USA.

Levy, K., T. F. Young, R. M. Fujita, and W. Alevizon. 1996. Restoration of the San Francisco Bay-Delta River system: choosing indicators of ecological integrity. Prepared for the Cal Fed Bay-Delta Program and the U.S. Environmental Protection Agency. Center for Sustainable Resource Development, University of California, Berkeley, California, USA.

Lewis, T. E., and B. L. Conkling. 1994. Forest health monitoring: Southeast loblolly/shortleaf pine demonstration report. Project Report EPA/620/SR-94/006. U.S. Environmental Protection Agency, Washington, D.C., USA.

Lubchenco, J., A. M. Olson, L. B. Brubaker, S. R. Carpenter, M. M. Holland, S. P. Hubbell, S. A. Levin, J. A. MacMahon, P. A. Matson, J. M. Melillo, H. A. Mooney, C. H. Peterson, H. R. Pulliam, L. A. Real, P. J. Regal, and P. G. Risser. 1991. The Sustainable Biosphere Initiative: an ecological research agenda. Ecology 72:371-412.

Maslow, A. H. 1954. Motivation and personality. Harper, New York, New York, USA.

McDaniels, T., L. J. Axelrod, and P. Slovic. 1995. Characterizing perception of ecological risk. Risk Analysis 15(5):575-588.

McKenzie, D. H., D. E. Hyatt, and V. J. McDonald, editors. 1992. Ecological indicators. Elsevier, New York, New York, USA.

Mingay, D. J., W. Carter, and K. A. Rasinski. 1991. Respondent rating strategies: evidence from cognitive interviews. Unpublished paper presented at the American Association for Public Opinion Research Conference, Phoenix, Arizona, USA.

Munasinghe, M., and W. Shearer. 1995. Defining and measuring sustainability: the biological foundations. The World Bank, Washington, D.C., USA.

O’Conner, J. C., K. Hamilton, C. Sadoff, K. Canby, A. Knute, and P. Bogdonoff. 1995. Monitoring environmental progress: a report on work in progress. The International Bank/World Bank, Washington, D.C., USA.

O’Neill, R. V., K. B. Jones, K. H. Riitters, J. D. Wickham, and I. A. Goodman. 1994. Landscape monitoring and assessment research plan. EPA/620/R-94/009. U.S. Environmental Protection Agency, Las Vegas, Nevada, USA.

Organization for Economic Cooperation and Development. 1994. Environmental indicators: OECD core set. Organization for Economic Cooperation and Development (OECD), Paris, France.

Paehlke, R. C. 1989. Environmentalism and the future of progressive politics. Yale University Press, New Haven, Connecticut, USA.

Patel, A., D. J. Rapport, L. Vanderlinden, and J. Eyles. 1999. Forests and societal values: comparing scientific and public perception of forest health. Environmentalist 19(3):239-249.

Paulsen, S. G., and R. A. Linthurst. 1994. Biological monitoring in the Environmental Monitoring and Assessment Program. Pages 297-322 in S. L. Loeb and A. Spacie, editors. Biological monitoring of aquatic systems. CRC Press, Boca Raton, Florida, USA.

Pykh, Y. A., and I. G. Malkina-Pykh. 1994. Environmental indicators and their applications (trends of activity and development). Working Paper of the International Institute for Applied Systems Analysis, Laxenburg, Austria.

Rolston, H., III. 1988. Environmental ethics: duties to and values in the natural world. Temple University Press, Philadelphia, Pennsylvania, USA.

Rapport, D. J., H. A. Regier, and T. C. Hutchinson. 1985. Ecosystem behavior under stress. American Naturalist 125:617-640.

Riitters, K. H., R. V. O’Neill, C. T. Hunsaker, J. D. Wickham, D. H. Yankee, S. P. Timmins, K. B. Jones, and B. L. Jackson. 1995. A factor analysis of landscape pattern and structure metrics. Landscape Ecology 10:23-39.

Risk Assessment Forum. 1992. Framework for ecological risk assessment. EPA/630/R-92/001. U.S. Environmental Protection Agency, Washington, D.C., USA.

Russell, C. S., V. H. Dale, J. Lee, M. H. Jensen, M. A. Kane, and R. Gregory. 2001. Experimenting with multi-attribute utility survey methods in a multi-dimensional valuation problem. Ecological Economics 36(1):87-108.

SAMAB. 1996. The Southern Appalachian Assessment: Summary Report [one of five]. U.S. Forest Service, Atlanta, Georgia, USA.

Scruggs, P. 1997. Colorado Forum on National Community Indicators. Conference Proceedings from 1996. Redefining Progress, San Francisco, California, USA.

Shaeffer, D., E. E. Herricks, and H. W. Kerster. 1988. Ecosystem health. I. Measuring ecosystem health. Environmental Management 12(4):445-455.

Slovic, P. 1995. The construction of preference. American Psychologist 50(5):364-371.

Stewart, D. W., and P. N. Shamdasani. 1998. Focus group research: exploration and discovery. Pages 505-526 in L. Bickman and D. J. Rog, editors. Handbook of applied social research methods. Sage, Thousand Oaks, California, USA.

Straussfogel, D. 1997. Redefining development as humane and sustainable. Annals of the Association of American Geographers 87(2):280-305.

Suter, G. W., II. 1990. Endpoints for regional ecological risk assessments. Environmental Management 14:9-23.

Suter, G. W., II. 1993. Ecological risk assessment. Lewis Publishers, Boca Raton, Florida, USA.

Syers, J. K., A. Hamblin, and E. Pushparajah. 1995. Indicators and thresholds for the evaluation of sustainable land management. Canadian Journal of Soil Science 75:423-428.

U.S. EPA. 1987. Unfinished business: a comparative assessment of environmental problems. Volume 1. Overview Report. Science Advisory Board, U.S. EPA, Washington, D.C., USA.

Walter, G. R., and O. L. Wilkerson. 1994. Information strategies for state-of-environment and state-of-sustainability reporting. International Journal of Sustainable Development and World Ecology 1:153-169.

Ward, R. C. 1999. Monitoring progress toward 'sustainable development': implications for water quality monitoring. European Water Pollution Control 7(4):17-21.


APPENDIX 1

EPA Environmental Monitoring and Assessment Program (EMAP) indicators (Lewis and Conkling 1994, O'Neill et al. 1994, Paulsen and Linthurst 1994).

FORESTS

Foliar Chemistry
Lichen Chemistry
Dendrochemistry
Bio-indicator Ozone
Visible Plant Damage
Branch Evaluations
Lichen Communities
Regeneration
Soil Classification and Physiochemistry
Overstory Diversity
Vegetation Structure
Mortality
Root Ecology
Tree Growth
Crown Condition
Dendrochronology
Wildlife Habitat
Scenic Beauty


STREAMS

Water Quality
Microbial Metabolism
Bird Assemblage
Physical Habitat Quality
Periphyton Assemblage
Macrophytic Assemblage
Fish Assemblage
Fish Tissue Contamination
Benthic Macroinvertebrate Assemblage


LANDSCAPES

Flood Indicator
Riparian Zones
Loss of Wetlands
Agriculture Near Water
Amount of Edges
Watershed/Water Quality Indicator
Dominance
Miles of Roads
Amount of Agriculture and Urban Area
Contagion
Fractal Dimension
Recovery Time
Edge Amount per Patch Size
Land Cover Transition Matrix
Corridors between Patches
Diffusion Rates
Inter-Patch Distances
Actual vs. Potential Vegetation
Percolation Thresholds
Largest Patch
Scales of Pattern
Loss of Rare Land Cover
Cellular Automata
Habitat for Endangered Species
Change of Habitat
Wildlife Potential
Percolation Backbone

APPENDIX 2

A complete list of the common-language indicators (CLIs) created for forests, streams, and landscapes.

FORESTS

1. Contamination of forest plants by air pollution.

2. The health of forest plants.

3. Habitat quality for birds and deer.

4. Woodland productivity for forest products.

5. Forest structure scenic rating.


STREAMS

1. The chemical characteristics of stream water that help determine how water can be used by plants, animals, and people.

2. The kind and number of living things, other than fish, in a stream.

3. The kind, number, and edibility of fish present in the stream.


LANDSCAPES

1. The environment’s ability to increase or decrease the effects of fire and wind on the region.

2. The environment’s ability to provide habitat for different kinds of wildlife, including game and rare species.

3. The environment’s ability to resist and recover from a variety of disturbances.

4. The environment’s ability to filter and maintain water quality, and to reduce flooding.

5. The diversity and pattern of land cover types (forest, water, agriculture, cities, etc.), and which land cover types are dominant.

APPENDIX 3

Working list of valued aspects of the environment for the SAMAB region, developed from the small-group sessions that focused on this issue.

Clean air
Clean water
Forests that provide timber and paper
Lands that provide coal and minerals
Productive farms and pastures
Balance between man-made and natural areas
Water recreation (boating, swimming, fishing, and others)
Land recreation (hiking, hunting, organized sports, and others)
Forests covering large areas
Wilderness and other natural areas for public use
Easy access between outdoor areas and cities
Scenic beauty
Historically and culturally important areas
Spiritually and emotionally important areas
Wildlife (birds, bears, fish, and all others)
The region’s ability to support many different kinds of plants and animals
Healthy trees, flowers, and other plants
Wild plant foods and medicinals (ginseng, mushrooms, herbs, and others)
Less damage from floods, fires, and erosion due to the moderating effects of the environment


Address of Correspondent:
Andrew Schiller
George Perkins Marsh Institute
Clark University
950 Main Street
Worcester, MA 01610-1477 USA
Phone: (508) 751-4602
Fax: (508) 751-4600
aschille@black.clarku.edu


