

Copyright © 2008 by the author(s). Published here under license by The Resilience Alliance.

The following is the established format for referencing this article:
Fernandez-Gimenez, M. E., H. L. Ballard, and V. E. Sturtevant. 2008. Adaptive management and social learning in collaborative and community-based monitoring: a study of five community-based forestry organizations in the western USA. Ecology and Society 13(2): 4. [online] URL: http://www.ecologyandsociety.org/vol13/iss2/art4/


Research

Adaptive Management and Social Learning in Collaborative and Community-Based Monitoring: A Study of Five Community-Based Forestry Organizations in the Western USA

Maria E. Fernandez-Gimenez 1, Heidi L. Ballard 2 and Victoria E. Sturtevant 3


1Colorado State University, 2University of California - Davis, 3Southern Oregon University



ABSTRACT


Collaborative and community-based monitoring are becoming more frequent, yet few studies have examined the process and outcomes of these monitoring approaches. We studied 18 collaborative or community-based ecological assessment or monitoring projects undertaken by five community-based forestry organizations (CBFs), to investigate the objectives, process, and outcomes of collaborative ecological monitoring by CBF organizations. We found that collaborative monitoring can lead to shared ecological understanding among diverse participants, build trust internally and credibility externally, foster social learning and community-building, and advance adaptive management. The CBFs experienced challenges in recruiting and sustaining community participation in monitoring, building needed technical capacity for monitoring, and communicating monitoring results back to the broader community. Our results suggest that involving diverse and sometimes adversarial interests at key points in the monitoring process can help resolve conflicts and advance social learning, while also strengthening the link between social and ecological systems by improving the information base for management and increasing collective awareness of the interdependence of human and natural forest communities.


Key words: adaptive management; collaborative monitoring; multiparty monitoring; community-based monitoring; resilience; social-ecological systems; social learning



INTRODUCTION

In this paper, we examine the roles of collaborative and community-based ecological monitoring and assessment in community-based forestry organizations (CBFs) in the western USA. Community-based forestry in the USA seeks to achieve the triple goals of ecological sustainability, social equity, and economic prosperity for social–ecological systems located in forested landscapes (Gray et al. 2001, Baker and Kusel 2003). Collaborative or participatory monitoring involves multiple individuals or organizations with different interests and forms of expertise in the design and implementation of monitoring. Multiparty monitoring is a form of collaborative monitoring involving representatives of opposing stakeholder groups, such as environmental organizations and timber industry representatives. Community-based monitoring implies the direct involvement of local community members in monitoring, either through their participation in collaborative monitoring efforts, or by training and contracting local workers to carry out monitoring projects.

Although programs such as New Mexico’s Collaborative Forest Restoration Program (CFRP; Secure Rural Schools and Community Self-Determination Act of 2000, Title IV Community Forest Restoration) and the Pilot Stewardship Program (Pinchot Institute for Conservation 2005) require multiparty monitoring, and several handbooks provide guidelines on how to develop a multiparty monitoring project (Savage 2003, Pilz et al. 2005), few published studies have examined the process and outcomes of collaborative monitoring. The objective of this research was to investigate the nature, benefits, and challenges of collaborative and community-based monitoring in CBFs. Based on the existing literature and our preliminary findings, we posed five initial propositions about the nature of collaborative monitoring in CBFs:
  1. Collaborative monitoring leads to shared understanding of the ecosystem.
  2. Collaborative monitoring fosters social learning and builds community.
  3. Collaborative monitoring builds trust and credibility within and outside CBFs.
  4. Community involvement in monitoring leads to communication of monitoring findings to the broader community.
  5. Community involvement in monitoring increases the likelihood that monitoring information will be acted upon and used to make decisions.
The paper is organized as follows. First we place CBF monitoring efforts in the broader context of citizen involvement in ecological monitoring and research by reviewing the literature on community-based, multiparty and volunteer monitoring specifically, and civic science more broadly. We also introduce adaptive management and social learning as frameworks for understanding the key objectives and outcomes expressed by the CBF organizations we studied. We then present our findings as they relate to the descriptive objectives of our study, and evaluate the evidence to support or refute our initial propositions about the nature of collaborative monitoring in CBF organizations. Finally, we discuss the implications of this study for community forestry, and collaborative and community-based natural resource management more broadly.

Community Participation in Science and Monitoring

Although collaborative and multiparty monitoring are relatively new terms, these are only the most recent iterations of a long tradition of citizen participation in natural history and other formal scientific endeavors (Withers and Finnegan 2003). A litany of terms has evolved to describe the various ways that individual citizens or groups of non-scientists may participate in designing and carrying out scientific research or environmental monitoring. Citizen science and volunteer or community monitoring refer to actions taken by individuals or organized groups of citizens to collect data for scientific research or management-oriented environmental monitoring, such as recording rainfall and temperature in their back yards to help scientists better understand spatial variability in weather patterns (Cifelli et al. 2005) or sampling water quality in lakes and rivers across the USA (Lopez and Dates 1998, Nicholson et al. 2002, Greve et al. 2003, Boylen et al. 2004). In citizen science and volunteer monitoring, the data-gathering objectives and protocols are usually established by scientists or management agencies, and the citizens who gather data are not usually involved in planning the research or analyzing or interpreting the results. These efforts give citizens a meaningful role in data gathering, but are seldom initiated or controlled by non-scientists, often are not geared to address community-defined problems, and rarely challenge the role or methods of conventional science in resource management.

In contrast, civic science (Lee 1993) refers to the democratization of science and its reorientation toward public dialog and interpretation. According to Shannon and Antypas (1996), “Civic science seeks to reunite these divided roles and responsibilities [science and democracy],” and challenges the traditional stance of science as objective knowledge situated outside of, rather than part of, society. Carr (2004) advocates for what she calls community science, which she describes as more inclusive than citizen science and more radical than civic science. She defines community science as, “the interaction between conventional (institutional, professional) and community-based (unaffiliated, volunteer) scientific knowledge systems. Community science is driven by community issues and concerns rather than theoretical frameworks or basic research questions.”

In its orientation toward community-defined questions, community science shares features with participatory research. Participatory research includes a spectrum of community involvement from functional participation in data gathering to empowering participation of community members as full partners in all parts of the research process (Johnson et al. 2001). Like participatory research, collaborative and multiparty monitoring involve community members and other stakeholders throughout the monitoring process, including developing monitoring objectives and protocols, gathering data, and analyzing and interpreting the results (Kusel et al. 2000, Bliss et al. 2001). Unlike volunteer monitoring or participatory research, which are well-studied phenomena, little research has investigated the process and outcomes of collaborative or multiparty monitoring in natural resource management (Kusel et al. 2000, Bliss et al. 2001, Hartanto et al. 2002, Mungai et al. 2004), and many of the existing studies are from an international development context.

In the USA, community participation in monitoring is increasing due to government cuts in monitoring programs, the growing need for information on local environmental changes, increasing recognition of the value and importance of including stakeholders in management processes, and a corresponding desire on the part of citizens to participate in management decisions that affect them (Moir and Block 2001, Weber 2003, Fernandez-Gimenez et al. 2005a, 2005b). Recent research suggests that community participants can help identify indicators and develop monitoring plans that are meaningful and credible to local people (Gasteyer and Flora 2000). Collaborative monitoring projects have also yielded significant social benefits, such as increased trust and improved relationships (Kusel et al. 2000, Fernandez-Gimenez et al. 2005b). We expected that community involvement in monitoring by CBF organizations would also lead to greater sharing of monitoring results throughout the local community, and a greater likelihood that monitoring results would be used in future decision making.

Adaptive Management, Social Learning, and Resilience

Recent advances in the theory and practice of natural resource management have focused on strategies for learning and applying new knowledge in the context of complex and uncertain environments. Adaptive management (Holling 1978) strives to overcome the limitations of conventional natural resource science and command-and-control resource management by treating management actions as structured experiments, and attempting to document and learn from both planned actions and unplanned environmental surprises. The concepts of social and organizational learning emerged in the late 1970s (Bandura 1977, Argyris and Schon 1978), and their subsequent development and application to resource management have been influenced heavily by systems thinking (Argyris and Schon 1978) and educational theory (Kolb 1984). Following Keen et al. (2005), we define social learning in the natural resources context as an intentional process of collective self-reflection through interaction and dialog among diverse participants (stakeholders). This definition emphasizes learning through interactions in a group setting embedded in a particular biophysical and sociocultural context, and the nature of learning as a conscious act of collective self-reflection.

Drawing on the language of systems analysis and organizational learning (Argyris and Schon 1978, Senge 1990), Keen et al. (2005) and Keen and Mahanty (2006) describe how social learning can occur at several levels, from learning about the consequences of specific actions (single-loop learning), to learning about the assumptions underlying our actions (double-loop learning), to learning that challenges the values and norms that underpin our assumptions and actions (triple-loop learning). This emphasis on higher-order learning is echoed by others such as Woodhill (2003, cited in Bouwen and Taillieu 2004), who contends that social learning, “is more than just ‘community participation’ or learning in a group setting. It involves understanding the limitations of existing institutions and mechanisms of governance and experimenting with multi-layered, learning-oriented and participatory forms of governance,” (p.143). Several recent studies have evaluated evidence to determine whether specific collaborative resource management processes led to social learning and identified factors that facilitated and hindered learning (Buck et al. 2001, Schusler et al. 2003, Ison and Watson 2007, Mostert et al. 2007). In this study, we looked for evidence of social learning as an outcome of collaborative and community-based monitoring.

Both adaptive management and social learning are thought to enhance the flexibility and responsiveness of social–ecological systems, enabling these linked human and natural systems to better cope with and adapt to stress and change, without changing their fundamental nature. Together, these approaches should lead to more resilient social–ecological systems (Berkes and Folke 1998). Resilience here means the ability of a system to absorb stress and disturbance without changing its underlying structure and controlling processes (Carpenter et al. 2001). This view of resilience incorporates the notion of a dynamic system where disturbance and natural variation play integral roles, and human ability to learn from, adapt to, and maintain these dynamic systems is the key to their long-term persistence. According to this view, resilience is a value-neutral characteristic, as systems in an undesirable state can be resilient. In the context of community-based forestry, the goal of many CBF organizations in the USA is to restore or maintain healthy, interdependent relationships between human communities and forested ecosystems, and resilience of such healthy systems is seen as a desirable attribute.

In this study, we investigated the role of collaborative and community-based monitoring in facilitating adaptive management and social learning in CBF organizations. We looked for evidence of adaptive management by examining whether and how monitoring information was or was not used to make management decisions, and sought evidence of the different levels of social learning described by Keen et al. (2005).



METHODS

Sampling Frame and Study Sites

The Ford Community-Forestry Demonstration project funded 13 CBF organizations for 5 years beginning in 1999. In the last year of the program (2004), we were invited to develop a research program based on the experiences of the 13 demonstration projects. Due to the short duration of the Ford program, and the diversity of funded groups and their environmental settings, we did not attempt to measure or make inferences about direct ecological impacts. Because assessment is a key element in natural resource planning, and monitoring is essential to measuring short- and long-term environmental outcomes, we focused our analysis on ecological assessment and monitoring in CBF organizations.

We purposely selected seven of the 13 funded groups for study based on each group’s interest and willingness to participate in the research, and its involvement in on-the-ground ecological stewardship, assessment, and monitoring activities. Of these seven groups, five engaged in some form of community-based or collaborative monitoring and were the focus of this analysis. The five studied groups were all located in the western USA, and worked on public lands exclusively or on a mix of public and private lands. The groups were: the Alliance of Forest Harvesters and Workers (AFHW), the Jobs and Biodiversity Coalition (JBC), the Public Lands Partnership (PLP), Wallowa Resources (WR), and the Watershed Research and Training Center (WRTC). Table 1 provides a summary of each group’s social and ecological setting, the ecological threats to the system, and the group’s primary social and ecological objectives related to ecological stewardship of forests.

Data Collection and Analysis

We collected data on the ecological stewardship, assessment, and monitoring activities of each group using a combination of on-site interviews and participant observation, telephone interviews, and document review. We visited each group for a minimum of 3–5 days of interviews and field tours. Initial interviews were with CBF staff, agency partners, community participants, and others potentially affected (e.g., environmental groups, industry representatives). We made additional site visits to three of the study groups (AFHW, PLP, WRTC) as participant observers in monitoring or related stewardship activities in order to observe interactions among multiparty and community participants in these projects. Participant observation focused on these three groups because of their proximity to the researchers’ institutions, and the timing of the groups’ monitoring activities, which coincided with the study period. After completing our initial analyses, we conducted additional interviews to seek potentially contradictory evidence and substantiate or reject our initial findings. We also took part in a “ground-truthing” workshop that included group representatives, during which we presented our preliminary conclusions and interpretations, and gaps in our information about projects. Participants provided information to fill the gaps, and either validated or disputed our initial interpretations from their perspectives. At this workshop, none of our major findings was challenged, although members corrected or added some of the project-specific information found in Tables 1–3.

In all, we conducted formal interviews with 51 individuals in the five groups. Documents reviewed included project proposals, internal reports, and reports to the Ford Foundation; ecological assessment and monitoring project protocols, interim and final reports, and meeting minutes; public presentations by CBF organizations about their stewardship and monitoring projects; and existing case studies and published literature on the study organizations. The data formally analyzed to test our propositions thus consisted of texts: interview transcripts and project-related documents. Field notes from participant observation and informal interviews provided a broader understanding of the context and function of each group and project.

Formal interviews were audiorecorded, transcribed, and coded using QSR N*VIVO software (QSR International Pty. 1999–2000. NVivo 1.2 Software, QSR revision 1.2, Victoria, Australia). This qualitative data analysis software facilitates coding texts for specific themes by allowing the researcher to mark passages that relate to a particular theme, assign the marked text a code (e.g., monitoring objectives, trust building), and sort the resulting database by code. Codes addressed our descriptive research questions (e.g., ecological stewardship and monitoring objectives, processes, and outcomes), evidence related to our propositions (e.g., trust, social learning, community-building, shared ecological understanding, communication, and application of monitoring results), and the stages and types of community participation in ecological assessment and monitoring. The resulting coding reports were synthesized within and across CBF organizations to assess the evidence in relation to our propositions and identify emergent themes in the data. For example, we assembled all the text passages from interviews with a particular group that related to a particular code, and synthesized the overall evidence with respect to that code and group. We then compared evidence across groups with respect to each proposition. Throughout this analysis process, we actively sought discrepant data that contradicted our preliminary propositions, and were alert for emergent themes that were not part of our initial analytical framework. This process of iterative analysis of qualitative data, through which propositions are developed and tested, is referred to as grounded theory (Strauss and Corbin 1990), and is a widely accepted approach to qualitative research in the social sciences (Miles and Huberman 1994).
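To make the code-and-sort logic described above concrete, the short sketch below (in Python) groups coded passages by theme and by organization. It is an illustration only, not the authors' workflow: the study used QSR NVivo rather than custom code, and the passage records, code names, and group labels shown here are hypothetical stand-ins for the actual interview data.

from collections import defaultdict

# Hypothetical coded passages; in the study these came from interview
# transcripts and project documents coded in NVivo.
passages = [
    {"group": "PLP", "code": "trust building", "text": "..."},
    {"group": "WR",  "code": "trust building", "text": "..."},
    {"group": "PLP", "code": "shared ecological understanding", "text": "..."},
]

# Build a "coding report": code -> group -> list of passages, i.e., the
# database sorted by code that the text describes.
by_code = defaultdict(lambda: defaultdict(list))
for p in passages:
    by_code[p["code"]][p["group"]].append(p["text"])

# Synthesis step: for each proposition-related code, review the evidence
# assembled for each organization, then compare across organizations.
for code, groups in by_code.items():
    print(code)
    for group, texts in groups.items():
        print(f"  {group}: {len(texts)} passage(s)")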

In the Results section and the associated tables and appendices, we present our findings in three ways. Table 1 presents summary information on the characteristics of our study groups and their social–ecological contexts, and Tables 2 and 3 summarize the aims and outcomes of the monitoring projects they undertook, and the ways in which communities participated in each phase of each project. The text reports basic descriptive data based on these tables, and presents a synthesis of the qualitative analysis findings supported by representative quotations from interview texts. The appendix offers brief narrative descriptions of three monitoring projects that typify the most prominent patterns of community involvement we identified.



RESULTS

Community Roles in CBF Ecological Assessments and Monitoring

The five CBF groups studied engaged in a wide range of ecological assessment and monitoring activities that involved community members in a variety of roles throughout the monitoring process (Tables 2 and 3). Four of the five groups we studied conducted or contributed to ecological assessments or inventories, including two landscape-scale ecological assessments and an inventory of non-timber forest products. Two groups conducted compliance monitoring (monitoring to see whether management actions were implemented as prescribed) and all groups were involved in some form of effectiveness monitoring (monitoring to assess the effectiveness of management relative to management objectives).

The CBF participants and staff stated a variety of reasons or objectives for initiating specific monitoring projects (Table 3). All 18 projects monitored to learn about the system, 13 monitored to build trust, 11 monitored to determine the effects of management actions, nine monitored to help manage conflict, six monitored to train local people for monitoring jobs, which the CBFs then provided to community members, and three monitored in part to promote civic engagement.

In the 18 projects studied, community members were involved in a variety of ways (Table 3). The most common forms of community involvement were: (1) combined multiparty and community involvement, that is, projects that involved both representatives of opposing interest groups and unaffiliated citizens, and (2) projects that involved community members who did not necessarily represent multiple opposing interest groups. Two other types of community involvement observed were: (3) multiparty involvement of interest group and agency representatives and CBF staff, without broad community involvement, and (4) involvement of CBF staff as representatives of the broader community.

In all of the 18 assessment and monitoring projects we studied, community members were involved at some stage of the monitoring process, but few projects engaged community members in most or all phases of monitoring (Table 3). Multiparty involvement (with or without unaffiliated community members) was most common in the objective-setting (nine projects), design (eight projects), interpretation (seven projects) and communication (eight projects) phases of monitoring. Unaffiliated citizen involvement was most common in the data collection phase (10 projects). Only one project involved community members in formal data analysis, which was more often conducted by third-party consultants or researchers (six projects) or agencies (six projects).

Overall, three general patterns of community participation emerged: (1) community involvement primarily in the objective-setting, design, and interpretation phases, (2) community involvement primarily in the data-gathering phase, and (3) community involvement in most or all phases of monitoring. In the Appendix, we describe a representative case of each of these three patterns.

As illustrated in Table 3, we found that community involvement took different forms and occurred at different phases in the monitoring process depending on the project objectives. When CBFs used monitoring as a strategy to manage conflict or build trust, monitoring projects emphasized multiparty and community involvement in the objective-setting, design, and interpretation phases, or throughout all phases of monitoring. When the primary goal of the project was to provide job training and employment opportunities, participation of individual community members in data collection was most important.

Benefits and Outcomes of Community Involvement in Monitoring

In this section, we evaluate evidence from our interviews and document review to assess support for our five initial propositions. We provide representative quotations from the analyzed texts as examples of the evidence we used. In the final subsection, we assess the relationships between CBF monitoring project objectives, the kind and stage of community participation, and documented outcomes.

Proposition 1: Understanding Ecosystems

In all the groups studied, monitoring by CBFs led to new knowledge about the impacts and effectiveness of specific management practices (Table 2), but also to a greater appreciation on the part of CBF participants of the complexity of ecosystems and the difficulty of obtaining complete and reliable information about their behavior. As one PLP participant observed, “I think there’s a better understanding of how complicated that ecosystem is. It isn’t just take one thing out or add one thing and everything goes back to paradise. And that’s a common conception.” The same individual went on to reflect on what community members learned about the process of monitoring, “You heard people from the community express how difficult that monitoring is and I think that was an eye-opener for them. That it wasn’t just the agencies not doing their job as far as monitoring. I mean that’s not an easy process.” When a CBF was engaged in numerous inventory and monitoring projects over time, the collective sum of its work advanced understanding of the local ecosystem overall.

The PLP’s role in the Uncompahgre Plateau Project’s ecological assessment and project design was to bring a “community perspective.” An agency participant noted that PLP did this in part by facilitating a collaborative approach that included broader citizen participation in the discussion of project objectives and treatments, “It’s been good, because we’ve been forced to have dialog and I think probably [as a result of] those meetings they have, a lot of people have a common understanding.”

Similarly, an agency staff member observed that the collaborative process of the Upper Joseph Creek Watershed Assessment led to a greater shared understanding of that ecosystem, “It was really neat to see what had come out of the collaborative process. The entire county seems to be pretty much on the same page. Not everybody is agreeing with everybody else, but there’s enough agreement and enough common ground and commonality to know what is out there.”

Even when broad consensus was reached on important issues, differences in ecological assumptions or understanding remained among stakeholders. For example, in WR’s Upper Joseph Creek Watershed Assessment, there was broad agreement on the historic conditions in the warm dry sites and the measures needed to restore these areas, but similar consensus was lacking for the cool dry sites. In JBC, the U.S. Forest Service and environmentalists agreed on the overall objective of restoring a fire-adapted ecosystem, but debate continued on the appropriate remaining basal area required to achieve this goal. In these cases, collaborative ecological assessment and monitoring helped to clarify conflicting ecological assumptions held by various stakeholders, increasing overall understanding of convergent and divergent mental models and system function, even when agreement about some aspects of the ecosystem was not reached. Overall, we found strong evidence for increased ecological knowledge and shared understanding across all the studied groups (Tables 2 and 3).

Proposition 2.1: Social Learning

The literature on social and organizational learning defines three levels of learning, often described as single-, double-, and triple-loop learning. We found evidence of social learning at all three levels among the CBFs we studied. With respect to single-loop learning, all of the studied CBFs clearly described what they learned about the impacts or effectiveness of management from their monitoring efforts (Table 2). For example, JBC’s monitoring showed that their thinning prescriptions achieved the desired basal area with little impact to the forest understory or soils, and WRTC learned from the Chopsticks project that piling and burning slash caused more soil damage than other slash treatments or than using a yarder to harvest. The herbicide trials and monitoring conducted by WR led to recommendations about the type of chemical, rates, and timing of application for optimal treatment of specific invasive weed species.

Collaborative assessment or monitoring sometimes changed participants’ assumptions about ecological processes, as well as their social assumptions—examples of double-loop learning. One participant in PLP’s Burn Canyon Monitoring effort described the shift in participants’ ecological assumptions and beliefs as follows. “I think the expectations were completely different from the two parties. The environmentalists knew that salvaging timber was going to be damaging and that it would be better to leave it unsalvaged. That was the unspoken, or even the spoken, expectation of the environmental community. And almost the opposite of the business timber industry was that salvage logging had no impact at all. What we see is, well it’s right in the middle, it’s not either. It’s not a huge impact, but there is an impact.”

Another PLP member spoke more generally about changing his views as a result of the collective learning process, “Everyone is going to come to the table with their opinions, but you are able to actually learn about stuff that you may have had preconceived notions about but may not be true. Maybe you will learn that. I know I have.” An agency staff person who interacts with PLP on the UP Project remarked that she had seen attitudes and beliefs of participants change as a result of the combination of research evidence and dialog about research results among diverse interest groups facilitated by PLP. For example, recent research conducted on fire frequency in piñon-juniper communities demonstrated that the historic fire intervals in this type were much longer (i.e., fires were less frequent) than previously thought. This finding was controversial and unpopular with some PLP participants, but they were eventually convinced and changed their beliefs to reflect this new knowledge.

We also found examples of social attitudes and assumptions that were altered through the participation in CBF field activities. One Native American AFHW member recounted, “I went on a field trip and I have my own way of thinking about things—this is not right, all this commercial harvesting of products out of the woods. But there was a man, I think Laotian maybe, on the bus they chartered for our field trip. Just hearing his story, hearing his life and what occurred before he came here and how important it was for him to go out and harvest mushrooms. It was a real human thing, it wasn’t about the money, and that made me stop and look at it a little different.”

Changes in norms and values (triple-loop learning) were more difficult to attribute to monitoring alone, and reflected the larger collaborative dialogs that the study CBFs engaged in. One example was a shift in values on the part of an environmentalist participant in PLP who came to better appreciate the role of land-based livelihoods in the economic vitality of his community. In AFHW, mushroom monitoring helped shift values of both harvesters and agency staff toward protecting mushroom fields from unnecessary disturbance and promoting sustainable harvesting.

The most significant indicator of social learning we observed may be the intentional approach to learning that many CBFs take. Four of the five studied groups demonstrated an explicit commitment to monitoring and adaptive management, and three of these groups actively promoted organizational learning and critical self-reflection in their staff, participants, and communities. This commitment was demonstrated in the public “lessons learned” meetings that WR facilitated following the Upper Joseph Creek Watershed Assessment process and the “learning workshop” PLP organized to promote broad community dialog about what was learned from the restoration and monitoring projects the group carried out.

In sum, we found clear evidence of single-loop learning in all studied groups, and strong examples of double- and triple-loop learning in several groups. However, evidence of double- and triple-loop learning was more difficult to observe, so we are less certain about its prevalence. The examples of multiple-loop learning described here, together with the overt commitment to learning taken by three of the groups, point to the transformative potential of collaborative monitoring.

Proposition 2.2: Community Building

We found good evidence that CBFs used collaborative monitoring as a community-building strategy. In the words of a WR staff member, “Getting out and doing collaborative monitoring is a very strong partnership building exercise. And it has, certainly on the Upper Joseph Creek Watershed, allowed for a lot more convergence of opinions and perceptions of what the current concerns and appropriate strategies we could implement to achieve a diverse set of values. And that was a result of getting out there and working together on the assessment, rather than each having our own data sets and science that we’re using in contentious debate and argument. Basically, learning together in the place we’re interested in.”

In addition, several CBFs saw collaborative monitoring as a direct and tangible way to “re-connect” people with the land, strengthening awareness of the interconnectedness of ecological and human communities. The PLP used the Burn Canyon monitoring project to bring local high school youth into the woods and expose them to the complex ecological and socioeconomic issues related to the debate over salvage logging. Discussions in the monitoring work group often focused on the need to connect newcomers to the area to the cultural and social significance of traditional land-based livelihoods. The PLP members also viewed monitoring as a means to educate citizens about and engage them in natural resource issues, fostering civic engagement and environmental citizenship. The WRTC took a similar approach in their Post Mountain thinning project.

Members of the AFHW reported that monitoring instilled harvesters with a sense of ownership over the resources they use and care for, and empowered them as stewards of the land. “The empowerment that’s going on, the ownership. ... The mushroom monitors are so excited about the pictures. Why are they so excited? I think because they spell ownership.”

Thus, for at least three of the CBF groups, community building was an explicit objective of collaborative monitoring, whereas for another, community building was a valued outcome even though it was not initially a project objective.

Proposition 3: Trust

Similar to community-building, some CBFs embarked on collaborative monitoring projects specifically to gain trust and credibility with a wide range of community members, federal agencies, and outside environmental organizations. As a WR staffer recalled, “It was really important to establish the organization as an entity in itself and also important to align ourselves with ecological monitoring, ecological projects. ...So we began to work on these ecological demonstration projects to build trust, to build the understanding, and to increase our own knowledge.”

In other cases, trust building was an outcome, if not an explicit goal, of collaborative monitoring. Sometimes trust, or at least greater respect and understanding, developed among diverse stakeholders participating together in a multiparty monitoring process. In other cases, monitoring results led to greater trust in the CBF on the part of agencies and environmentalists. A JBC participant talked of the importance of the collaborative aspect of project design and monitoring in building trust and credibility with outside environmental groups: “It’s really the process we went through that we found is important to talk about. You know, there are no outliers, nobody taking pot shots at us now, we can show other people that are trying to do something like this that hey, you can do this, you can get something done.” The monitoring data JBC gathered were also important in maintaining credibility with the Forest Service, as it showed that the JBC project came close to typical Forest Service prescriptions in the basal area remaining after thinning.

For 13 of the 18 projects, trust building between stakeholders was an explicit goal. For all but two projects, participants reported that increased trust was a key outcome of their work. Thus, we found strong evidence that collaborative monitoring increases trust, whether or not it is an initial goal of community involvement.

Proposition 4: Communication of Monitoring Results

We expected that involving diverse stakeholders and community members directly in monitoring would increase the likelihood that monitoring results would be communicated back to and throughout the community. Although CBFs frequently reported project results to the community (Table 3), some groups felt that communication remained a challenge for them, and that they did not do enough to share their findings within their broader communities. Sometimes communication took place formally, through community meetings, websites, or publications. In the case of WR’s Upper Joseph Creek Watershed Assessment, some 70 participants were involved in the process, a significant fraction of the community. The PLP held a public “learning workshop” on restoration and monitoring, in part to showcase and discuss their experience with the Burn Canyon Monitoring project.

Often, however, monitoring results were communicated informally, as when local contractors involved in ecological surveys shared their observations with friends and neighbors. As one WRTC staffer reflected, “What we’ve discovered is ... when you have an ex-logger sitting at the bar and telling someone about the cool fisher tracking plates, that gets into the community a lot faster than the scientists who have been here every year studying the red-legged frogs, but nobody in town knows anything about red-legged frogs.” A PLP participant referred to the informal conversations that PLP members have with other people in the community as a “ripple effect” that “inoculates the community against ignorance.”

One WR staffer admitted that, in general, the group’s monitoring results were not well communicated. Occasionally, the contractors or community members who collected the data gave a public lecture. A web page for monitoring results had been discussed, but had not yet been developed. A major constraint to formal communication back to the community was lack of funding. In some cases, there were not yet results to report. Overall, most CBF groups communicated monitoring results formally or informally to their membership, and in some cases, to the broader community of CBF organizations, but these groups also wished that they communicated more often and effectively with the general population of their local communities and regions.

Proposition 5: Application of Monitoring Results

We expected that involving stakeholders directly in monitoring would help ensure that monitoring results were used to complete the adaptive management cycle by altering future management actions based on new knowledge about the system. In slightly less than half of the projects we found moderate or strong evidence that monitoring or assessment data were applied to future management and monitoring (Table 3). We found that CBFs used the knowledge gained through collaborative monitoring in several ways. For some CBFs, the emphasis was on learning from the multiparty community monitoring process and attempting to improve upon, expand, or apply this process to other projects and settings. For example, a PLP leader reflected on the application of collaborative monitoring to other monitoring projects and forest planning generally, “And I’m thinking about the transferability of this process in two specific ways. Number one, we have the power line project, assuming that ever goes. And secondly, in the forest plan revision, and under the new rule, monitoring is written all the way through that and it’s not well defined. And so I’m thinking that community monitoring groups might in fact play a role in defining that.”

For others, learning documented in ecological monitoring reports or from visual inspections of demonstration projects has led to modification of future project designs. As one WRTC staff member explained, “What we did at Chopsticks changed the prescriptions. ... we had enough decision space to change based on what we learned. That was pretty fun.” In the case of mushroom monitoring by the AFHW, monitoring designed and carried out by mushroom harvesters led the U.S. Forest Service to alter the location of a timber sale to protect mushroom fields, and increased harvesters’ compliance with permit regulations.

When WRTC facilitated the Post Mountain multiparty monitoring project, they found that the collaborative process helped them think in advance about how the data would be used, and thus narrow down the data that they would collect. “And that’s been a very important dialog [about ecological objectives] during our multiparty monitoring meetings, because people brought pages and pages of information that they wanted to monitor, but when we asked how will it be used, what will we do with it, why do we want it... boy the list shrunk. People [realized] oh, I guess we don’t need all that information.”

All groups also learned about the technical aspects of monitoring design and analysis, and are applying this knowledge to future projects. There are also a number of instances in which it is not clear whether or how monitoring data were used. In at least one case, the monitoring data were so voluminous and complex that the CBF was not sure how to analyze them. In another (JBC Mill Creek Projects #1 and #2), monitoring was used to validate current practices, and thus did not trigger a change in management, but rather demonstrated the acceptability of existing management. In the case of wildlife population surveys (WR eagle and grouse surveys), the results were documented in agency reports, but it was not clear how they were used for management decision making.

Do Types and Stages of Community Involvement Influence Collaborative Monitoring Outcomes?

Ecological learning was a universal objective of the CBF monitoring processes we studied, and occurred in all projects. But significant advances or changes in shared ecological understanding occurred in only a few instances (Table 3). The greatest increase or change in shared ecological knowledge occurred in projects of extensive spatial scope where fairly rigorous data collection and analysis methods were applied (WR and PLP watershed assessments), and a diverse cross-section of stakeholders participated in most or all phases of the assessment. Other projects with moderately strong ecological learning outcomes were those with a more narrow focus or large uncertainties initially, and relatively robust sampling and analysis methods (e.g., PLP Burn Canyon, WR weed monitoring, WRTC Chopsticks and NTFP projects). In these projects, clear objectives and design, rather than the type or phase of community participation, seem to determine the level of ecological learning. Less learning occurred in projects with less broad-based community involvement or where monitoring was viewed primarily as a way to validate existing knowledge rather than address uncertainties or conflicting assumptions.

Social learning, community-building, and building trust and credibility were interrelated outcomes, and the patterns of participation we observed in relation to these three outcomes were similar. Social learning and trust building were not always explicit objectives, but were an outcome of most collaborative monitoring projects we studied. In general, the cases in which a diversity of citizens or multiparty group of resource users, agency personnel, and environmental advocacy group representatives were involved in the design of the monitoring project resulted in more social learning and community building, and stronger relationships of trust between the stakeholders in the project. Projects designed by the agencies or researchers alone (WRTC Chopsticks and NTFP projects, WR eagle and grouse surveys) resulted in less social learning, community building, and trust, as did projects designed by citizens or resource users alone (AFHW weed project). These patterns do not hold across all of our case studies, but represent the prevailing relationships.

Our data do not allow for an in-depth analysis of why trust was built more in some projects than others, but we offer the following potential explanation. We observed two ways in which trust and credibility were built through the monitoring process. The first occurred through the repeated interactions among diverse stakeholders over the course of a multiparty or collaborative monitoring process. These interactions enabled participants to get to know one another as individuals, moving beyond initial responses and assumptions based on stereotypes or positions (double-loop social learning). They also allowed individuals to demonstrate qualities such as reliability, consistency, transparency, and respect for others’ viewpoints, which may be the foundations upon which trust is built (Wagner and Fernandez-Gimenez 2008). Such repeated interactions among diverse participants are also important for both social learning and community building, possibly explaining why we observed higher levels of these outcomes in projects that involved more diverse stakeholders over multiple phases in the monitoring process. Contrary to the dominant pattern just described, we also observed that trust or credibility was built in some projects where community members gathered data, and design was developed by either the agency or the community alone rather than collaboratively (e.g., WR lynx survey, WRTC ecosystem surveys, AFHW mushroom survey). Here, credibility of community monitoring data increased (in the agency’s eyes) when a skeptical agency saw concrete evidence that local people were able to collect data that met existing agency standards (WR, WRTC), or were otherwise persuasive in quality (AFHW mushroom survey).

Although monitoring results from nearly half the projects were clearly applied to the design of later management or monitoring projects, it is difficult to say if application of results was a function of who was involved when. In many cases, the nature and purpose of the project may have been more important in determining whether the results were used rather than the type or timing of community participation. For example, when there was a clear need to learn which management practice was most effective, results were more likely to be applied to subsequent projects (WR aspen, weed monitoring, WRTC Chopsticks). The large-scale assessment projects also led to high levels of application. In these projects, a primary objective was learning to inform future project planning, and results were widely applied.

Finally, in many projects, the type and reach of communication was related to the type of community participation. When there was strong citizen participation in multiple phases of the project, results were more likely to be communicated, formally or informally, to CBF participants and the local community. Informal communication within the local community often occurred when citizens participated in data collection. When CBF staff were primary participants, communication to CBF participants, other CBF groups and partner agencies was strong, but results were not always formally communicated to the local community. Once again, the strongest examples of widespread communication within the local community came from the two large assessment projects (PLP UP Watershed Assessment and WR Upper Joseph Creek Watershed Assessment). We attribute this to two main factors. First, a broad cross-section of the local community participated in each project, as well as a large number of community members overall. Second, each project took an intentional social learning or civic engagement approach, convening community meetings at intervals to reflect on the lessons learned from the data gathered and the collaborative assessment effort.

Challenges and Barriers to Collaborative Monitoring in CBF

Challenges to collaborative monitoring fall into several categories: resources; participation and communication; and technical and institutional hurdles. Funding, time, and labor were often the limiting resources in conducting any kind of monitoring, and because collaborative monitoring took more time, it often demanded more funding and labor as well. Nevertheless, most of the CBF groups we studied were carrying out monitoring at least in part to fill a gap in agency monitoring programs.

Participation challenges included difficulty in getting or keeping key stakeholders involved (e.g., environmental groups, tribes, some agencies), over-reliance on specific individuals (e.g., a dedicated volunteer with specific knowledge, a visiting scientist), and general difficulty in mobilizing and maintaining long-term volunteer commitment to monitoring. In a large multiparty collaborative ecological assessment, internal power differentials also presented a challenge to balanced and equitable participation. Communication challenges included keeping all members of a large group up to date on project process and discussions when not everyone comes to every meeting, as well as distributing monitoring results broadly throughout the community.

Technical challenges in collaborative monitoring can be significant. Many CBFs struggled to determine an acceptable level of scientific rigor for community monitoring projects, and lacked technical expertise in monitoring design and protocols. Involving many people in monitoring design occasionally led to “too many cooks in the kitchen,” resulting in an untenable monitoring design or data that could not be analyzed. Outside consulting scientists were not always helpful in resolving these issues, and sometimes made impractical recommendations. On the other hand, some groups found consultants and researchers to be an important resource in helping them design monitoring projects and analyze data.

One indicator of the technical weakness of many CBF groups was the general lack of community participation in data analysis. Although contracting out data analysis to consultants or researchers may be more efficient and effective than training or hiring in-house staff, a participant of one group that did involve community members in some of the data analysis felt this involvement was an important element in the group’s success in conducting a collaborative ecological assessment. “Having them involved in the analysis makes them understand it better. If you can see the raw data, [and the protocols], you have more ownership and better understanding. Some of the success of where we got with the environmental groups was because they didn’t feel like somebody was making it up.”

Lack of technical capacity to analyze data and write technical reports can lead to perceived lack of credibility on the part of some CBF partners. For example, the minutes from PLP’s Learning Day reported, “People like        have a hard time with community monitoring if it isn’t reported in a manner that befits data—i.e., a report. For it to be legitimate in his eyes, it must look legitimate.”

One CBF participant highlighted the importance of having clear management and monitoring objectives to help overcome some of these technical challenges: “I think that an important thing is to be sure that they are really clear about their goals are, you know, what kinds of stuff they want to learn from the monitoring. And then, to make sure that they tie whatever measurements they do to what they want to learn, I think that’s really important. And another thing, too, I’m not sure how well I’ve done on this yet because I haven’t gotten to this phase, is to kind of think about the analysis of all the data that you’re collecting, because I think that that’s a downfall of a lot of monitoring projects is that, they collect all this data, and then at some point nobody really knows what to do with it, it’s just a bunch of papers in a notebook somewhere.”

The CBFs working across agency jurisdictions faced the institutional challenge of differing monitoring standards and methods and differing vegetation classification systems among agencies. In the case of the AFHW weed monitoring project, the agency would not accept or use the information gathered by CBF members. In one project conducted by WR, the CBF had to go to great lengths to protect the confidentiality of data gathered on private land. Other institutional hurdles included frequent agency staff turnover, shifting agency priorities that reduced funding and staff available for monitoring and assessment projects, and short-term stewardship projects with no funding for monitoring long-term ecological impacts.



DISCUSSION AND CONCLUSION

Overall, we observed the greatest benefits, in terms of ecological and social learning, trust, and community building, and application and communication of results, in the two projects that involved a large number and diverse cross-section of community members throughout all or most phases of a collaborative ecological assessment process (PLP UP and WR Upper Joseph Creek Watershed Assessments). These projects were also supported by commensurately large budgets from philanthropic and agency sources, and thus may not represent a realistic model for many smaller-scale CBF collaborative assessment or monitoring efforts. Nevertheless, they point to the potential significant benefits of collaborative ecological assessment and monitoring on this scale. They also highlight the benefits of their overt approach to civic engagement and social learning—a lesson that can be transferred to smaller-scale efforts.

Many of the other monitoring projects we studied were relatively young or were short-duration projects from the outset, and faced on-going challenges. However, a number of these also achieved important ecological and social learning, and trust-building outcomes with much lower levels of financial and institutional support than the two landscape-scale assessments. The projects that stand out in this category were once again those where community members participated in many phases of the monitoring process. Still, not all projects that involved community members throughout the monitoring process were highly successful. In some cases, this was because the project was among the first collaborative monitoring efforts the CBF undertook. Consequently, significant learning about the technical and organizational aspects of collaborative monitoring occurred, and that learning has been applied to subsequent monitoring projects by the same CBF with clearer beneficial outcomes.

Finally, although our propositions focused on the learning, trust-building, application, and communication aspects of collaborative and community-based monitoring, it is important not to overlook the many projects in our study that were focused in large part on training and employing an ecological monitoring workforce. The main objectives of these projects—providing training and jobs in the woods for local people—may not have aligned with our initial propositions. Thus, we have perhaps undervalued in our analysis the importance of such projects to the larger mission of many CBF organizations: nurturing sustainable forest-based economies and communities in the rural West.

Two of the major challenges faced by CBFs doing collaborative monitoring were: (1) obtaining broad-based and sustained community participation in long-term ecological monitoring projects, and (2) determining and securing the needed level of technical assistance and science capacity to ensure the validity and credibility of CBF-led collaborative monitoring efforts. The challenges of participation in community-based monitoring are not unlike those facing all kinds of environmental volunteer organizations, and methods of attracting and sustaining volunteers have been well discussed in the literature (Byron and Curtis 2002, Whitelaw et al. 2003). The decision of whether to develop science and technical capacity internally, form partnerships with researchers or consultants who can perform these tasks for the CBF, or forgo formal data analysis and reporting depends in part on the objectives of monitoring. Among the CBFs we studied, trust building was often an important monitoring objective, and technical capacity played a role in establishing and maintaining trust and credibility, especially with partner agencies and environmental organizations. The one project with sufficient capacity to engage community members directly in data analysis considered this engagement an important aspect of its collaborative process.

Despite the challenges of participation and technical capacity, our findings suggest that community involvement in monitoring advanced the overall CBF goals of understanding and transforming relationships among ecosystems, communities, and local economies in several ways. First, it tangibly reconnected people with the landscape and with each other by getting diverse community members to work and learn together on the land. Second, it facilitated single-, double-, and triple-loop learning, encouraging participants to question their assumptions and underlying norms and values through the reflective processes of adaptive management and social learning. A key question posed in the literature is how to advance social learning in collaborative forest management and other co-management settings (Buck et al. 2001, Schusler et al. 2003, Mostert et al. 2007). Our study suggests that community involvement in monitoring can be an effective mechanism to promote multiple-loop social learning. Third, collaborative monitoring helped build trust among diverse participants within CBFs and establish the credibility of CBFs with other organizations and agencies. This trust and credibility provide an important foundation for future collaborative projects between organizations. This finding supports the few earlier studies that have documented beneficial social outcomes of collaborative monitoring, including improved relationships and trust (Kusel et al. 2000, Fernandez-Gimenez et al. 2005b). It also accords with much of the literature that examines the role of trust specifically, and social capital more broadly, as both an input to and a product of collaborative natural resource management (Ostrom 1997, Pretty and Ward 2001, Adger 2003, Leach and Sabatier 2005).

Finally, the intentional approach to learning espoused by over half the CBFs studied should in theory enhance the resilience of local social–ecological systems by helping communities anticipate and adapt to changing conditions, and better appreciate the complexity of linked social and ecological systems (Walker et al. 2002, Walker and Salt 2006). Taken together, the multiple and intertwined dimensions of intentional learning that these CBFs advance—adaptive management to learn about ecosystems, social and collaborative learning about the cultural and social significance of forest-based livelihoods, and critical self-reflection to advance organizational and community learning and development—can be understood as a renegotiation of the meaning of the people–land connection. As the evidence from PLP, WR, and WRTC suggested, increased understanding of ecological complexity and uncertainty gained from collaborative ecological monitoring may lead community members to question their ability to manage natural systems in any simple prescriptive manner. This deepened understanding may also reinforce the need to monitor to avoid ecological harm and its social and economic consequences, to develop locally workable and effective stewardship practices, and to continue learning about these complex and unpredictable systems.

We speculate that using a deliberative, transparent, and collaborative process to collect and interpret data makes it less likely that monitoring or assessment results will be overlooked. If this is true, collaborative monitoring may empower communities and agencies to respond more quickly and flexibly to new information. The increased civic engagement, respect, trust, and appreciation of interdependent human and natural systems that collaborative monitoring fosters may instill in some participants a greater sense of civic responsibility toward their community and environment. This study illustrated the potential for CBF organizations to play a key role in facilitating adaptive management and social learning in the forests of the western USA. We do not claim that all CBFs do this. Furthermore, more study is needed to determine whether CBF-facilitated adaptive management and social learning do indeed lead to tighter feedback loops within human and forest communities—and more responsive and flexible management of them—and ultimately more resilient social–ecological systems.

This study contributes to the scant research on the process and outcomes of collaborative and community-based monitoring. We acknowledge that our results represent a small and potentially unrepresentative sample of CBF groups and their ecological assessment and monitoring projects, primarily on public lands in the western USA. Future large-sample studies of CBF organizations and their monitoring activities are needed to confirm or reject these preliminary findings about the relationships between the type and stage of community involvement in monitoring, and monitoring objectives and outcomes. Although difficult to design, comparative case-control studies are needed to compare the outcomes of collaborative community-based monitoring with conventional monitoring by agencies and consultants. Despite the possible limitations of our sample, our findings illustrate why the CBFs we studied monitored and what they achieved through collaborative monitoring, as well as the constraints to community involvement in ecological monitoring. Through their collaborative and community-based monitoring, the CBF organizations we studied engaged ordinary citizens and diverse interests in examining ecological complexity and reclaiming collective responsibility for the welfare of their communities and landscapes.







ACKNOWLEDGMENTS

We would like to thank the staff and partners of the community-based organizations who worked with us to develop and implement this study, the Ford Foundation, the Aspen Institute, and the Ford CBF Research Team. We would also like to thank the anonymous reviewers whose helpful feedback improved the manuscript.




LITERATURE CITED


Adger, W. N. 2003. Social capital, collective action, and adaptation to climate change. Economic Geography 79:387–404.

Argyris, C., and D. Schon. 1978. Organizational learning: a theory of action perspective. Addison-Wesley, Reading, Massachusetts, USA.

Baker, M., and J. Kusel. 2003. Community forestry in the United States; learning from the past, crafting the future. Island Press, Washington, D.C., USA.

Bandura, A. 1977. Self-efficacy: toward a unifying theory of behavioral change. Psychological Review 84:191–215.

Berkes, F., and C. Folke. 1998. Linking social and ecological systems: management practices and social mechanisms for building resilience. Cambridge University Press, Cambridge, UK.

Bliss, J., G. Aplet, C. Hartzell, P. Harwood, P. Jahnige, D. Kittredge, S. Lewandowski, and M. L. Soscia. 2001. Community-based ecosystem monitoring. Journal of Sustainable Forestry 12:143–167.

Bouwen, R., and T. Taillieu. 2004. Multi-party collaboration as social learning for interdependence: developing relational knowing for sustainable natural resource management. Journal of Community and Applied Social Psychology 14:137–153.

Boylen, C. W., E. A. Howe, J. A. Bartkowski, and L. W. Eichler. 2004. Augmentation of a long-term monitoring program for Lake George, NY by citizen volunteers. Lake and Reservoir Management 20:121–129.

Buck, L., E. Wollenberg, and D. Edmunds. 2001. Social learning in the collaborative management of community forests: lessons from the field. Pages 1–19 in L. Wollenberg, D. Edmunds, J. Fox, and S. Broch, editors. Social learning in community forest management: linking concept and practice. Center for International Forestry Research and East West Center, Bogor, Indonesia.

Byron, I., and A. Curtis. 2002. Maintaining volunteer commitment to local watershed initiatives. Environmental Management 30:59–67.

Carpenter, S., B. Walker, J. M. Anderies, and N. Abel. 2001. From metaphor to measurement: resilience of what to what? Ecosystems 4:765–781.

Carr, A. J. L. 2004. Why do we all need community science? Society and Natural Resources 17:841–849.

Cifelli, R., N. Doesken, P. Kennedy, L. D. Carey, S. Rutledge, C. Gimmestad, and T. Depue. 2005. The community collaborative rain, hail, and snow network. Bulletin of the American Meteorological Society 86:1069–1077.

Fernandez-Gimenez, M. E., S. Jorstad McClaran, and G. Ruyle. 2005a. Arizona permittee and land management agency attitudes toward rangeland monitoring by permittees. Rangeland Ecology and Management 58:344–351.

Fernandez-Gimenez, M. E., G. Ruyle, and S. Jorstad McClaran. 2005b. An evaluation of Arizona Cooperative Extension’s rangeland monitoring program. Rangeland Ecology and Management 58:89–98.

Gasteyer, S., and C. B. Flora. 2000. Measuring ppm with tennis shoes: science and locally meaningful indicators of environmental quality. Society and Natural Resources 13:589–597.

Gray, G. J., M. J. Enzer, and J. Kusel, editors. 2001. Understanding community-based forest ecosystem management. Food Products Press, New York, New York, USA.

Greve, A. I., J. C. Loftis, J. B. Brown, R. R. Buirgy, and B. Alexander. 2003. Design and implementation of a cooperative water quality monitoring program in Colorado's Big Thompson watershed. Journal of the American Water Resources Association 39:1409–1418.

Hartanto, H., M. C. B. Lorenzo, and A. L. Frio. 2002. Collective action and learning in developing a local monitoring system. International Forestry Review 4:184–195.

Holling, C. S. 1978. Adaptive environmental assessment and management. Wiley, Toronto, Ontario, Canada.

Ison, R., and D. Watson. 2007. Illuminating the possibilities of social learning in the management of Scotland’s water. Ecology and Society 12(1):21. [online] URL: http://www.ecologyandsociety.org/vol12/iss1/art21/.

Johnson, N., H. M. Ravnborg, O. Westermann, and K. Probst. 2001. User participation in watershed management and research. Water Policy 3:507–520.

Keen, M., V. A. Brown, and R. Dyball. 2005. Social learning in environmental management: towards a sustainable future. Earthscan, London, UK.

Keen, M., and S. Mahanty. 2006. Learning in sustainable natural resource management: challenges and opportunities in the Pacific. Society and Natural Resources 19:497–513.

Kolb, D. A. 1984. Experiential learning: experience as the source of learning and development. Prentice-Hall, Englewood Cliffs, New Jersey, USA.

Kusel, J., L. Williams, C. Danks, J. Perttu, L. Wills, D. Keith, and L. P. Group. 2000. A report on all-party monitoring and lessons learned from the pilot projects. Forest Community Research and The Pacific West National Community Forestry Center, Taylorsville, California, USA.

Leach, W., and P. Sabatier. 2005. Are trust and social capital the keys to success? Pages 223–258 in P. Sabatier, W. Focht, M. Lubell, Z. Trachtenberg, A. Vedlitz, and M. Matlock, editors. Swimming upstream: collaborative approaches to watershed management. MIT Press, Cambridge, Massachusetts, USA.

Lee, K. N. 1993. Compass and gyroscope: integrating science and politics for the environment. Island Press, Washington, D.C., USA.

Lopez, C., and G. Dates. 1998. The effects of community volunteers in assessing watershed ecosystem health. In D. Rapport, R. Costanza, P. R. Epstein, C. Gaudet, and R. Levins, editors. Ecosystem health. Blackwell Science, Oxford, UK.

Miles, M. B., and A. M. Huberman. 1994. Qualitative data analysis, second edition. Sage Publications, Thousand Oaks, California, USA.

Moir, W. H., and W. M. Block. 2001. Adaptive management on public lands in the United States: commitment or rhetoric. Environmental Management 28:141–148.

Mostert, E., C. Pahl-Wostl, Y. Rees, B. Searle, D. Tabara, and J. Tippett. 2007. Social learning in European river-basin management: barriers and fostering mechanisms from 10 river basins. Ecology and Society 12(1):19. [online] URL: http://www.ecologyandsociety.org/vol12/iss1/art19/.

Mungai, D. N., C. K. Ong, B. Kitame, W. Elkaduwa, and R. Sakthivadivel. 2004. Lessons from two long-term hydrological studies in Kenya and Sri Lanka. Agriculture, Ecosystems and Environment 104:135–143.

Nicholson, E., J. Ryan, and D. Hodgkins. 2002. Community data—where does the value lie? Assessing confidence limits of community collected water quality data. Water Science and Technology 45:193–200.

Ostrom, E. 1997. Investing in capital, institutions, and incentives. Pages 153–181 in C. Clague, editor. Institutions and economic development: growth and governance in less-developed and post-socialist countries. Johns Hopkins University Press, Baltimore, Maryland, USA.

Pilz, D., E. T. Jones, and H. Ballard. 2005. Manager’s manual for participatory biological monitoring projects. Institute for Culture and Ecology, Portland, Oregon, USA.

Pinchot Institute for Conservation. 2005. Implementation of multiparty monitoring and evaluation: final perspectives on the USDA Forest Service stewardship end results contracting program. A report to the USDA Forest Service. Pinchot Institute for Conservation, Washington, D.C., USA.

Pretty, J., and H. Ward. 2001. Social capital and the environment. World Development 29:209–227.

Savage, M. 2003. Multiparty monitoring and assessment guidelines for community based restoration in southwestern ponderosa pine forests. The Forest Trust, Santa Fe, New Mexico, USA.

Schusler, T. M., D. J. Decker, and M. J. Pfeffer. 2003. Social learning for collaborative natural resource management. Society and Natural Resources 16:309–326.

Senge, P. M. 1990. The fifth discipline: the art and practice of the learning organization. Doubleday, New York, New York, USA.

Shannon, M. A., and A. R. Antypas. 1996. Civic science is democracy in action. Northwest Science 70:66–69.

Strauss, A. L., and J. Corbin. 1990. Basics of qualitative research: grounded theory procedures and techniques. Sage Publications, Newbury Park, California, USA.

Wagner, C. L., and M. E. Fernandez-Gimenez. 2008. Does community-based collaborative resource management increase social capital? Society and Natural Resources 21: in press.

Walker, B., S. Carpenter, J. M. Anderies, N. Abel, G. Cumming, M. Janssen, L. Lebel, J. Norberg, G. D. Peterson, and R. Pritchard. 2002. Resilience management in social–ecological systems: a working hypothesis for a participatory approach. Conservation Ecology 6(1):14. [online] URL: http://www.ecologyandsociety.org/vol6/iss1/art14/.

Walker, B., and D. Salt. 2006. Resilience thinking: sustaining ecosystems and people in a changing world. Island Press, Washington, D.C., USA.

Weber, E. P. 2003. Bringing society back in. The MIT Press, Cambridge, Massachusetts, USA.

Whitelaw, G., H. Vaughan, B. Craig, and D. Atkinson. 2003. Establishing the Canadian community monitoring network. Environmental Monitoring and Assessment 88:409–418.

Withers, C. W. J., and D. A. Finnegan. 2003. Natural history societies, fieldwork and local knowledge in nineteenth-century Scotland: towards a historical geography of civic science. Cultural Geographies 10:334–353.

Woodhill, A. J. 2003. Dialogue and transboundary water resources management: towards a framework for facilitating social learning. Pages 44–59 in S. Langaas and J. G. Timmerman, editors. The role and use of environmental information in European transboundary river basin management. IWA Publishing, London, UK.


Address of Correspondent:
Maria E. Fernandez-Gimenez
Dept. of Forest, Rangeland, and Watershed Stewardship
200A Natural Resources Building
Colorado State University
Fort Collins, Colorado 80523-1472
USA
gimenez@warnercnr.colostate.edu
