Ecology and Society
The following is the established format for referencing this article:
Cravens, A. E., and N. M. Ardoin. 2016. Negotiating credibility and legitimacy in the shadow of an authoritative data source. Ecology and Society 21(4):30.
https://doi.org/10.5751/ES-08849-210430
Research

Negotiating credibility and legitimacy in the shadow of an authoritative data source

A. E. Cravens 1 and N. M. Ardoin 2
1Gould Center for Conflict Resolution, Stanford Law School, Stanford, CA, USA, 2Graduate School of Education & Woods Institute for the Environment, Stanford University, Stanford, CA, USA

ABSTRACT

Environmental agencies designate certain datasets as “authoritative,” or official datasets for use in decision making. Although this is a common administrative term, the notion of certain sources being authoritative has received minimal attention in the social science literature. Science translates into environmental decisions when it is perceived as being salient, credible, and legitimate. But the actual process by which data come to be viewed as credible and legitimate has received little attention. Drawing on 58 semistructured interviews, we examine the mutual negotiation and social learning that occurred during the course of a planning process focused on the development of new marine protected areas in California, under the auspices of the Marine Life Protection Act Initiative. A geospatial decision support tool, MarineMap, was viewed by scientists and state agency staff as an authoritative data source. Stakeholder acceptance of certain data, however, required extended dialogue and trust building over time. Acceptance of the data and tool influenced participant views of the planning process as a whole. This case reveals that the ways in which conversations about ambiguous or missing data are conducted influence stakeholders’ trust in scientific analysis, as well as their belief in the legitimacy of decision making.
Key words: best available science; boundary object; decision support tool; MarineMap; marine protected areas; social learning

INTRODUCTION

Contradictory ideals exist in scientific decision making related to environmental management. In one ideal, decisions are made on the basis of objective scientific evidence, a framing that can obscure the ways in which scientific consensus emerges, in practice, from negotiation (Wynne 1992, Francis et al. 2005). In another, governmental agencies increasingly use collaboration to gain insight and support from those affected by resource-use decisions (Frame et al. 2004, Benson et al. 2013). Successful collaboration in environmental management generally requires integrating local understandings with views derived from standard scientific practice, and these perspectives may not always align (Berkes 2009, Morgan and Grant-Smith 2015). Past research and practice have suggested that science translates into environmental management when it is salient, credible, and legitimate (Cash et al. 2003), yet scientists, stakeholders, and decision makers may not perceive the salience, credibility, and legitimacy of data in the same way.

To aid decision making in participatory contexts that involve a range of perspectives and stakeholders, environmental and natural resource managers are increasingly turning to information technology, which can serve as a boundary object bridging scientists, stakeholders, and decision makers (Star and Griesemer 1989). Two key, overlapping categories of collaborative technology are (1) decision support tools (DSTs), i.e., software applications that structure a decision-making process and may have a variety of user interfaces, many of which employ maps to facilitate visualization, and (2) geospatial systems, i.e., software tools that display information on a map interface and allow users to toggle various data layers, e.g., Esri ArcGIS and Google Maps (Malczewski 2006, Matthies et al. 2007, Cravens 2014). Technological boundary objects provide a variety of benefits in translation, including helping explain technical information and encouraging shared understanding (Cravens 2016), yet realizing this promise requires a method for determining what information to include, especially in cases of scarce or ambiguous data. The credibility of technology tools and, by extension, the legitimacy of decision-making processes that use them, derive from agreement among stakeholders over the quality of the underlying data.

Environmental agencies (see, for example, U.S. Department of the Interior 2006) guide their staff by designating certain datasets as authoritative data sources (or, in British English usage, authoritative datasets); these datasets are to be used when constructing geospatial systems and in subsequent decision making. Government agencies such as the U.S. Geological Survey and U.S. Bureau of Land Management have rules specifying the use of authoritative data sources unless a manager can prove that the dataset does not meet a particular need (USGS [date unknown]). Although widely used as an administrative term, the idea of an authoritative data source has received little attention in the academic literature. Scholars have given minimal consideration to what happens, for example, when an authoritative data source, which derives its credibility and legitimacy in one framework, usually that of the government agency, is integrated into a boundary-spanning process designed to include multiple views.

To address this gap, we investigate the process by which scientific data in a designated authoritative data source come to be seen as credible, using the case of California’s Marine Life Protection Act (MLPA) Initiative. Using mixed methods, the study examines how a particular boundary object, a DST called MarineMap, influenced participants’ views of scientific integrity. Although government employees and contractors viewed the tool as the process’s authoritative data source, broader acceptance of scarce or uncertain data by other stakeholders required sustained dialogue. Studying MarineMap highlights the importance of developing and maintaining stakeholder trust through social learning. Although individual concerns can sometimes be ignored at minimal cost, unaddressed apprehensions about data sufficiency or accuracy may foster mistrust among individual stakeholders, encouraging them to question the legitimacy of a process.

Theoretical framing: authoritative data sources versus negotiated science

Recently, managers and agencies have become increasingly interested in using information technology tools such as DSTs, geospatial systems, and models to aid environmental decision making, particularly when it involves large amounts of technical information (Balram and Dragicevic 2006, Malczewski 2006, Matthies et al. 2007, Wright et al. 2009, Voinov and Bousquet 2010, Bourget 2011). Geospatial systems and DSTs are seen as especially promising because they help organize and visualize complex information for decision making under multiple constraints or for siting in geographic space (MacEachren 2004). As simplifications of the world they represent, however, both DSTs and geospatial systems implicitly or explicitly frame information to privilege certain points of view (Cravens 2016).

Mapping scholars emphasize that most users of geospatial systems lack the experience to critically examine what a map highlights or obscures (Wood and Fels 1992). One strength of geospatial interfaces is that they make data gaps visible to all viewers. At the same time, as a result of being visually displayed, uncertain or ambiguous data may appear more certain than they actually are (MacEachren 2004). Similarly, the underlying design of geographic information systems (GIS) often is seen as having a bias toward displaying quantitative information (Sheppard 1995). As a result, more experientially based data, which are often qualitative in nature, may not be represented or may be perceived as more anecdotal. Once a map is created, however, those who did not participate in discussions to create the map tend to view it as “reality” and are unaware of the fuzziness of a given data layer. Thus, for participants in environmental decision-making processes, knowing that decisions are based on a map that matches their experience is a priority.

When the data appearing in a geospatial system are designated as the official authoritative data source by a government agency, both the messiness that may have contributed to their creation and any other viewpoints are further obscured by virtue of the data being designated factually accurate. In particular, the “authoritative data” designation is juxtaposed with the tradition of social science research, which unpacks the negotiations that go into creating policy, documents, and other artefacts (cf. Gieryn 1999). Researchers form a community of practice whose norms and assumptions shape their conclusions and analytic assessments (Carolan 2008), and whose quality control is institutionalized through peer review (Bornmann 2008). Because environmental decisions are increasingly made collaboratively with citizen participation, determining whether scientific information is of sufficient quality and quantity to make a decision is increasingly negotiated, or “coproduced,” with citizens as well as other scientists (Dunsby 2004, Roux et al. 2006).

Like other aspects negotiated collaboratively, scientific information becomes part of the social learning process that facilitates collaboration (Muro and Jeffrey 2008, Gerlak and Heikkila 2011). Social learning, also referred to as “working through” (Daniels and Walker 2001), knowledge coproduction (Roux et al. 2006, Dale and Armitage 2011), and collective learning (Heikkila and Gerlak 2013), is the process whereby participants in a collaborative decision-making process find common understanding of a problem and ways of evaluating solutions by deliberating, challenging assumptions, and reframing the problem (Pahl-Wostl 2006, Emerson et al. 2012). Factors known to facilitate social learning include open communication, diverse participation, sufficient time, constructive conflict, democratic structure, multiple sources of knowledge, and facilitated dialogue (Schusler et al. 2003). Reed et al. (2010) caution, however, that the supporting conditions of social learning or the resulting outcomes are often confused with the process of social learning. As such, the authors highlight the interaction between individual learners and their communities, defining social learning as “a change in understanding that goes beyond the individual to become situated within wider social units or communities of practice through social interactions between actors within social networks” (Reed et al. 2010). Not all social learning leads to outcomes that managers or agencies might desire or expect; certain situations that Vinke-de Kruijf et al. (2014), for example, describe as “unconstructive learning processes” may result in decreased trust or less willingness to collaborate in the future, depending on what is learned by whom.

Boundary objects, such as DSTs and geospatial systems, can play a bridging role in the process of determining what is credible or useful information (Star and Griesemer 1989, Hegger et al. 2012). In participatory decision making, where agencies have delegated a portion of their decision-making authority (Emerson et al. 2012), data must be credible and legitimate to stakeholders, as well as to scientists and managers. In other words, participants in a decision-making process must learn about the content of an issue, and also about the underlying data and scientific methods upon which decision making will be based.

In this study we consider what happens in participatory decision making when a designated “authoritative” data source does not match stakeholders’ experiential knowledge. We argue that, when authoritative data sources are recognized as the outcome of a social learning process, and not only as technological objects, agency staff and managers will be more aware of the process of developing credibility and legitimacy within the social setting of the participatory process. The authoritative data source can be viewed from two simultaneous perspectives: (1) as the focus of participants’ learning about and negotiations over scientific credibility and trust in the data, and (2) as a basis for observing how negotiation over data influences the perceived legitimacy of larger planning processes.

METHODS

Case study: MarineMap

The goal of California’s Marine Life Protection Act (MLPA), passed in 1999, was to use the best available science to develop a network of marine protected areas (MPAs) along the California coast (California Fish and Game Code 1999). “Best available science” commonly governs how much science is required for decision making, but this is a fluid, and sometimes ambiguous, legal standard; its meaning is influenced by agency practice and local circumstance (Francis et al. 2005, Gerlach et al. 2013).

The MLPA Initiative that implemented the law was a public-private partnership (Gleason et al. 2013, Kirlin et al. 2013) designed to model a participatory and transparent planning process (Sayce et al. 2013). The MLPA Initiative and implementing agencies divided the coast into four study regions (see Fig. 1) and appointed an advisory Regional Stakeholder Group (RSG) for each region. Each study region’s RSG developed proposals locating MPAs through monthly meetings over a period of a year. The 33-member North Coast RSG, for example, had 13 days of meetings during 2010; each meeting was facilitated by a professional neutral facilitator. A Science Advisory Team (SAT), comprising approximately 20 natural and social scientists, developed design guidelines, made decisions about scientific validity, and provided criteria for evaluating proposals generated by the RSG members (see Saarman et al. 2013). Figure 2 provides more detail about how the RSG members and SAT fit into the overall structure of the planning process. It is important to note that the MLPA Initiative structure (and, in the case of the SAT, the legislation itself) defined distinctive roles for participants in decision making. Although it was highly participatory, the process was not consensus based; the appointed stakeholders developed recommendations that were evaluated by the scientists, with stakeholder input into evaluation criteria. Thus the negotiations over scientific credibility described here took place within a context in which the scientists remained officially responsible for ensuring the use of the “best available science.”

In this paper, “stakeholders” refers to appointed RSG members as well as members of the public who participated in discussions related to data accuracy. “Scientists” refers to appointed members of the Science Advisory Team (SAT). We base these designations on roles played in the planning process; thus, RSG members with formal scientific training were not included in the “scientist” grouping. “Participants” include everyone who played a role in the MLPA Initiative and who were subjects of this research. We realize this is a narrower definition of “stakeholder” than is often used in social science research, and we concurrently recognize that the scientists, agency employees, and contract staff who are excluded from our definition of “stakeholder” also have particular perspectives. We have chosen to use the word stakeholder in this way, however, because it aligns with the usage of those involved in the decision-making process being studied. It is also consistent with the terminology of other authors who have written about the MLPA Initiative (Gleason et al. 2013, Kirlin et al. 2013, Merrifield et al. 2013, Sayce et al. 2013, Cravens 2016) and reflects the division of roles within the process. For participants in the MLPA Initiative, “stakeholder” was a category distinct from scientist and staff; it referred to the RSG members and other citizens providing comment on marine protected area locations.

A consortium of university scientists, nongovernmental organization (NGO) scientists (not the same as the SAT), and technology professionals created the geospatial DST MarineMap for use in the third and fourth study regions to aid stakeholders in negotiating the location of MPAs (see Merrifield et al. 2013). Consisting of a map-based user interface and an analytic model backend, MarineMap allowed users to propose an MPA location and receive near-real-time feedback about how their proposed geography compared with the scientific criteria the SAT would use to evaluate proposals (Saarman et al. 2013). Focusing on locations where they were proposing protection, users could toggle layers of spatial data on and off (Table 1) to view characteristics, including commercial and recreational fishery data, coastal access points, habitat data, sea floor maps, and ocean navigation charts, among others. (To protect fishermen’s proprietary information, only RSG or SAT members logged into the MarineMap application could access the fishery heat maps on the map interface. Members of the general public, or those not logged in, could access data through reports on specific MPA proposals, but could not see fishing data for the whole study region at one time.)
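To make concrete the kind of near-real-time feedback described above, the following is a minimal, hypothetical sketch in Python of how a backend might compare a proposed MPA polygon against habitat-representation guidelines. It is not MarineMap’s actual code, data, or criteria; the layer geometries, habitat names, guideline thresholds, and function names are invented for illustration.

# Hypothetical sketch (not MarineMap code): intersect a proposed MPA polygon with
# habitat layers and report how much of each habitat it would protect, compared
# against an illustrative guideline threshold. All geometries, habitat names, and
# thresholds below are invented for illustration.
from shapely.geometry import Polygon

# Illustrative habitat layers; in practice these would be loaded from GIS data files.
habitat_layers = {
    "kelp": Polygon([(0, 0), (4, 0), (4, 2), (0, 2)]),
    "rocky_reef": Polygon([(2, 1), (6, 1), (6, 5), (2, 5)]),
}

# Illustrative guideline: fraction of each habitat's total area a proposal should cover.
guideline_fraction = {"kelp": 0.10, "rocky_reef": 0.15}

def evaluate_proposal(proposal):
    """Return, per habitat, the fraction protected and whether the guideline is met."""
    report = {}
    for habitat, layer in habitat_layers.items():
        protected_area = proposal.intersection(layer).area
        fraction = protected_area / layer.area if layer.area else 0.0
        report[habitat] = {
            "fraction_protected": round(fraction, 3),
            "meets_guideline": fraction >= guideline_fraction[habitat],
        }
    return report

# A stakeholder sketches a candidate MPA and receives immediate feedback.
candidate_mpa = Polygon([(1, 0), (5, 0), (5, 3), (1, 3)])
print(evaluate_proposal(candidate_mpa))

The sketch also makes plain a dynamic that recurs in the sections below: the feedback a user receives is only as good as the data layers loaded into the tool.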

MarineMap was used in a variety of ways to complement participants’ different roles in the process and was widely considered by stakeholders to be an effective tool for negotiation and decision making about where to site marine protected areas (Merrifield et al. 2013, Cravens 2016). RSG members used it to understand the geography and scientific criteria being used, identify shared or diverging interests, and jointly find solutions to negotiation challenges. Among the RSG members and in their communication with other participants, the tool facilitated the creation of a common vocabulary. Scientists used MarineMap primarily for informational purposes, although by the end of the process much of the evaluation of MPAs was also being conducted using the tool. Decisions about what data to include in the tool were made by scientists, with stakeholder input.

Our study focused on MarineMap because the decision making addressed by the MLPA Initiative—siting MPAs subject to multiple constraint criteria—is similar to other environmental and planning decisions. These decisions often require relating understanding of spatial data to given design criteria. Thus, dynamics observed when using this tool are relevant to the use of technology in a variety of environmental management settings. Earlier work on MarineMap indicated that the tool’s value relied on it being perceived by users as an authoritative data source (Cravens 2016); this previous research, however, did not illuminate the social process by which users came to see MarineMap as an authoritative data source. Our research addresses that gap.

Data sources and analysis

Our analysis draws primarily on 48 full-length semistructured interviews, conducted during 2013 with process participants, including appointed RSG members (27), involved members of the public (2), appointed policy advisors on the Blue Ribbon Task Force (BRTF) (5), SAT members (4), and contracted staff (10). These interviews represent approximately 23% of the RSG members in the two study regions where MarineMap was used, 20% of the appointed SAT members, and nearly all key staff (as identified by snowball sampling). We chose interviewees to represent the range of interests and concerns present in the MLPA Initiative, such as recreational and commercial fishing, nonconsumptive users, tribal communities, and so on. We spoke with involved members of the public when we were able to identify them, but they are under-represented in our sample because they were difficult to locate, unlike the appointed RSG members, appointed scientists, and staff, whose names are part of the public record.

We developed the interview protocol (see Table 2) based on iterative analysis of the following: (1) exploratory interviews with 10 MLPA Initiative and agency staff; (2) an online survey of participants (n = 105); and (3) analysis of videotaped meetings of the North and South Coast study region SATs and RSGs (Fig. 1), which met from 2008 to 2009 and 2009 to 2010, respectively.

We used an open-coding process to analyze interview data, including both exploratory and full-length interviews (Strauss and Corbin 1998). Key emergent themes included MarineMap’s role as an authoritative data source; participants’ scientific literacy and understanding of metadata; perceptions of data accuracy or uncertainty; perceptions of data credibility and legitimacy; and trust in the decision-making process. Data related to kelp and nearshore environments emerged as particularly important in participants’ descriptions. Review of documents (including meeting minutes, scientist and staff presentations, and outreach materials) provided context for analyzing interview data.

RESULTS AND DISCUSSION

Scientists and staff viewed MarineMap as an authoritative data source, but a social learning process (Fig. 3) was critical to engaging stakeholders in accepting the data in the tool as the “best available” science. This learning, which led to greater perceived credibility and legitimacy of data among stakeholders, scientists, and staff, resulted either in collaborative data refining or in scientists maintaining their view of the data and stakeholders coming to accept the underlying rationale. At times, however, breakdowns in dialogue left certain stakeholders mistrustful not only of analyses based on MarineMap, but also of the overall decision-making process. In other words, through engagement with the tool and the process, those individuals did not learn about scientific data but, rather, learned to be skeptical of the decision-making process. The social learning process depicted in Figure 3 is described in detail in the sections below. All names used in quotes are pseudonyms.

MarineMap as an authoritative data source

The MLPA legislation specified that designating MPAs was to be based on the “best readily available science” (California Fish and Game Code 1999:§ 2855 (a)). Staff and scientists were explicit that “best available” meant simply “the data currently available.” A stakeholder acknowledged, “In any science-driven process, the knowledge will never be perfect. It’s not like gravity, you know?” Most stakeholders and scientists were well aware that “best available” did not mean complete, but rather meant finding data of high enough quality to accomplish the siting task.

Coupled with the firm timelines of the legislation, the best-available standard, generally, allowed the MLPA Initiative to continue progressing, even when faced with critics who claimed that not enough data were available. Even when data were imperfect or scarce, the best-available standard often allowed participants to justify action because, as one participant noted, “It’s the best we’ve got.”

Staff and scientists treated MarineMap as the authoritative data source for the MLPA Initiative. Most participants accepted this authority without question much of the time, allowing MarineMap to provide a common platform for users to understand, compare, and assess proposed MPAs (Cravens 2016). Participants in various roles also relied on MarineMap for translation and communication; in this way, stakeholders could view data and draft proposals, as well as negotiate with one another. Whether data had been entered into the software became a key indicator of whether those data had met the scientists’ threshold for acceptance as best available science. Thus, the tool created a standard for defining which data should be included in decision making. One stakeholder who represented an environmental NGO, for example, described how she used inclusion in MarineMap as an indicator of whether data were credible during a discussion with a fisherman: “I didn’t listen to my friend Jake telling me, ‘no, there really is hard rock right in that spot.’ [I said,] ‘That’s great Jake, I don’t see it on this map.’” Because the rock features that the fisherman was discussing did not appear in MarineMap, they essentially did not exist in the negotiating process.

However, before MarineMap could provide the functions of an authoritative data source, creating a common platform, and, thus, bounding the space in which negotiation occurred (Cravens 2016), a critical mass of stakeholders had to accept the data within it as an accurate representation of the coastal environment. One state agency employee who participated in multiple planning regions summarized the dynamic this way:

A program like MarineMap is only as good as the information that you put into it. In most areas we had lots of information, but there were some areas like in southern California where the location of rock and kelp, especially around some of the islands, we just didn’t know. Of course that [uncertainty] was reflected in MarineMap.

Participants agreed that the data in the tool created the representation of the coast that mattered for siting MPAs. Concurrently, MarineMap’s visual interface illuminated data gaps and highlighted places where the tool’s data did not match experiential knowledge some users had of the coast. Thus, arguments about scarce or uncertain data were generally expressed as concerns about the accuracy of data in MarineMap. (Many interviewees also mentioned the law’s adaptive management framework, citing it as a safety valve to facilitate midcourse corrections, should the best available data at the time of initial decision making prove to be wrong. However, others were skeptical whether it would be politically feasible to significantly alter boundaries after the siting negotiations were complete, even if the monitoring data later showed the MPAs were not working as intended.)

Individual concerns

As the authoritative source of data for the MLPA Initiative, MarineMap displayed a variety of data layers that derived from a range of original sources (such as the California Department of Fish and Game, the California Coastal Commission, the National Oceanic and Atmospheric Administration, and scientific researchers) with mixed levels of certainty and resolution. (See Table 1; also see Merrifield et al. 2013.) Scientists and staff tended to distinguish between the origins of datasets, understanding that information about the provenance of data can give important context as to its quality. They also tended to look within the tool for this information about the data, which technology professionals term “metadata.” In contrast, most stakeholders tended to view the data that appeared within MarineMap homogeneously, as a true-enough representation of the ocean and coast. One staff member explained that few people wanted to know more about the information’s source, or how it was collected, saying, “We got requests every now and then for a little more access into the metadata, but those were not very frequent. For most people, what was visible in MarineMap was enough.”

For certain individuals, however, underlying questions about data sufficiency or accuracy were important. Our analysis identified three circumstances in which these questions came to the fore. First, some participants had specialized training that led them to understand the problem space by considering the origins of data. A few RSG members who worked professionally as scientists or geospatial analysts, for instance, provided detailed feedback to both the scientists and the staff about the content of data layers, as well as the calculation and evaluation methods used.

Second, stakeholders paid greater attention to how data layers had been created when MarineMap’s depiction of areas they knew intimately did not match their personal experience. One RSG member from the North Coast study region described digging into details of the scientists’ evaluation algorithm when MarineMap’s bathymetry (sea floor) data showed no rocky bottom:

Then there’s one particular area up near Redding Rock... Basically, the science advisors treated that area as if there were zero rocks because the boat hadn’t been able to go in there and map it. I thought it was...quite compelling that, when you looked at the aerial imagery, you could see lots of rock...just based upon, well, my knowledge of the area and other people, we knew there’s a lot of rock in there.

Based on experience with the area, this stakeholder used aerial imagery to correct data in MarineMap he believed were inaccurate. He argued this inaccuracy was the result of lack of seafloor mapping in areas inaccessible by boat.

Finally, information about how data were generated became important when stakeholders perceived the data to have political or strategic value in discussions. At times, this may have been a stalling tactic; one staff member described certain stakeholders who “knew just enough about metadata to question the data” in ways counterproductive to reaching agreement. However, most of the time, discussions around data certainty, sufficiency, accuracy, and provenance emerged when an individual’s personal experience and concern were persuasive enough to be considered in group discussion. A member of the facilitation team described the dynamic:

We’re basically using data layers in MarineMap, and that became the source of the data... There were times when [stakeholders] explicitly...would say, ‘Hey, we’re having a discussion around kelp, and I want to remind everybody, we really don’t know where the kelp are.’ That was an important part of the negotiations... I think it was reflected in the tool, but it’s sort of hidden unless you look for it or ask it.

Concerns about the accuracy of kelp data, thus, became important in group discussion because of the persistence of individual stakeholders.

Translating individual concern into group concern

Whether discussions about data in MarineMap mattered in the planning process’s trajectory depended on concern translating to the group level. When an individual’s concerns were not taken up by other group members, they had little impact on the eventual decision-making process, and stakeholders mostly abandoned them. One southern California fisherman recalled asking why certain Scripps Institution of Oceanography data were not included, noting that, “It wasn’t going anywhere. At a certain point, you give up.”

At other times, individual concerns coalesced into a group-level concern (Heikkila and Gerlak 2013). When individual concerns about data sufficiency or accuracy were taken up by the larger stakeholder group, the collective could compel the staff and scientists to change direction. In the South Coast process, for instance, concerns about kelp data led to changes in how kelp was accounted for in the scientists’ evaluation.

Respondents mentioned three factors that influenced whether individual concerns about data became a group concern. The first factor was the number of people in agreement. When one person challenged MarineMap data, the challenge was sometimes dismissed as anecdotal, but when multiple people raised the same issue, that concern was more likely to be viewed as valid. As a result, scientists and staff seem to have been particularly likely to address concerns when they heard from multiple RSG members. Although the facilitators were careful not to let simple majority votes drive the process, they did use straw polls and similar techniques to gain a general sense of the group’s will, which contributed to a greater sense of legitimacy in acting on group concerns.

Second, although the facilitators structured meetings to minimize power differentials as much as possible, certain individuals played leadership roles within the stakeholder group by virtue of their personal knowledge of a specific geography, social status within a subcommunity (such as fishermen or environmental NGOs), specialized training, or even ability to use MarineMap (see description of expert users in Cravens 2016). Among these various claims to authority in negotiations, personal experience of a place was seen as especially important (Lukacs and Ardoin 2014, Oakes et al. 2016). One NGO stakeholder explained:

The most compelling story you can possibly tell in a situation like that generally is, ‘I have fished in/used this space for the last 65 years. I’ve observed the following trends, and I have this piece of data to back me up, and therefore this place should or should not be reserved.’ No one is going to argue...about that for the most part. It’s hard to argue about that coming from a nonlocal resident perspective.

Finally, besides characteristics that gave individuals greater status within the group, concerns about data uncertainty or insufficiency were more salient to the group in areas where the questionable data influenced decision-making outcomes. The issue of kelp in the South Coast, for instance, became important not only because the data were ambiguous, but also because kelp beds were a limiting habitat type that often determined whether a given MPA met the required science criteria.

Group concern leads to collaborative data refining

When RSG members, staff, and scientists collaboratively examined scientific data and reconciled them with other forms of evidence, including stakeholders’ experiential knowledge, a process of social learning ensued. In some cases, this created an opportunity for collaborative refinement of data layers or methods for evaluating MPAs.

Joint production of knowledge that scientists and stakeholders accepted as credible required the scientists to understand their role within the MLPA Initiative in a collaborative way, despite their formal responsibility for ensuring scientific rigor (Corburn 2007, Hegger et al. 2012). The SAT’s openness to learning from stakeholder knowledge facilitated the process of iterative data refinement. The overall culture of the initiative was one of learning and adapting (Gleason et al. 2013), which created a dynamic of encouraging public comment throughout the process. One staff member reflected that credibility was enhanced by constant, rather than one-off, dialogue because it made challenges expected: “[The science] can pretty easily lose credibility once someone challenges one aspect of it and their argument has credence with the rest of the group. Then, suddenly everything comes into question. Whereas, if you’re constantly having that conversation, that’s not as big of a deal.” By the third and fourth study regions, the scientists and RSG members engaged in fluid interactions that shaped the data used, including those that appeared in MarineMap. As a result, scientific authority, as reflected in MarineMap, emerged from a process in which scientists worked actively with stakeholders to refine their respective understandings.

In the MLPA Initiative, two kinds of knowledge, scientific and local, often intersected when RSG members questioned scientific studies portraying ocean ecosystems. Science is formalized knowledge, given validity through careful data collection procedures scrutinized in the peer-review process (Bornmann 2008). By contrast, local knowledge, the knowledge held by residents of a place, which may also be termed traditional or traditional ecological knowledge (Dale and Armitage 2011), is gained in a variety of ways, including experientially, e.g., through direct experience with a resource or place, or through social learning and interaction. Observations underpinning local knowledge may be episodic and spatial scales are generally smaller (Riedlinger and Berkes 2001, Dale and Armitage 2011). One RSG member, representing nonconsumptive users, described the contrast as follows:

I think it’s one of those things where there’s legitimacy on both sides. The fishermen have one experience and their experience is day-in-and-day-out...here’s what we see. The scientists are like, ‘Well, here’s a protocol we use to monitor’... There has to be a method. It can’t just be, ‘Look for all the kelp wherever you can find it and count it up.’

Although local knowledge provided valuable perspective when data were scarce, reconciling information from scientific studies that appeared in MarineMap with knowledge from individual users was challenging at times. In these negotiations, MarineMap made the data less of a black box, which, while a powerful source of transparency, was also a challenge, as the tool made visible the tension between scientific data and local knowledge. One staff member described its effect: “One of the downsides of being empowered [with software] is to see all the warts and flaws of analysis.”

While the data in MarineMap were initially defined as authoritative by the staff and scientists, the tool was also the focus of a social learning process that allowed it to function as an authoritative data source. As one South Coast RSG member described, “[The tool created] a lot of to-and-fro among the scientists...by the end, most everybody agreed on the data that was in there.” MarineMap became a venue for finding a version of “scientific truth” with which most participants could agree. Viewing MarineMap as a site of negotiation also breaks down the dichotomy between local and scientific knowledge. Both ways of knowing, ultimately, rely upon acceptance by a community. The communities differ in their norms, including what gives someone credibility to speak with authority, e.g., time spent in experiential activities in the case of local knowledge; time spent conducting rigorous, controlled studies, in the case of scientific knowledge; but ultimately, in either case, evidence is not “true” until the community accepts it as such.

Group concern influences learning and legitimacy

In other cases, social learning resulted in education of stakeholders about the rationale underlying original data choices, rather than collaborative data refinement. When a group of RSG members raised concerns about perceived inaccuracies, or submitted “new” data for consideration, the scientists provided a scientific rationale for using the original version to make decisions and/or rejecting the suggested data as unsuitable. In those cases, RSG members were essentially challenging the scientists’ definition of what best available science meant, and the scientists did not agree. Although they might acknowledge the existing data’s imperfections, sometimes the answer to RSG members was, as one scientist summarized, “it’s the best we’ve got.” Although the data in the tool remained unchanged in these cases, social learning occurred when extended dialogue about nuances of the scientific method led to the stakeholder group as a whole accepting data that RSG members had previously challenged.

Despite being open to stakeholder feedback, under the MLPA Initiative structure, scientists remained responsible for determining whether stakeholder criticisms of data were founded. The community that included the RSG members, the public, staff, and scientists was not a flat hierarchy with equal social power; rather, the scientists retained much greater formal authority, mandate, and responsibility for data choices. Thus, the final data in MarineMap reflect the formal authority scientists maintained to overrule stakeholder objections. When scientists did exercise this authority, however, they generally engaged in methodological conversations about their rationale, resulting in learning that contributed to greater perceived legitimacy of data and, ultimately, trust in decision making.

In often-extended exchanges that included nuanced details about data collection or processing, scientists and staff played simultaneous roles as gatekeepers and educators. One staff member who worked closely with the SAT described the discussions regarding MPAs north of San Diego:

The stakeholders really wanted that area to count [as an MPA]...They challenged the numbers that we had on availability of rock there, and then actually brought in some data that they had somebody collect using some kind of remote sensing... I dove into [the methods], ‘cause I’m going, ‘We only have this in a small area, and it’s totally not matching the data that we have comprehensive[ly] across the study region.’ It was counting everything...so that, basically, pebbles were counting as rock and little algae was counting as kelp...We went through this whole, ‘Okay, what are these methods?’ We did it with the stakeholders who submitted the information, and we did it with the whole group; [we] talk[ed] about what that data meant and how it was or was not applicable. Ultimately, they came to grudgingly understand the rationale for not substituting in this very different data source to make there be enough rock there.

Through dialogue, the RSG members and scientists eventually reached agreement about which data represented the best available science. The above example illustrates the extent to which discussions focused on the details of the scientific method as the scientists, staff, and many of the stakeholders realized that how those data were collected and analyzed strongly influenced decision-making outcomes. Some RSG leaders, as well as scientists, initiated conversations about the nuances of data collection and processing. Most of the habitat data for the MLPA Initiative were provided by the California Seafloor Mapping Program (COPC 2007), which used bathymetry (underwater topography) to classify potential habitat types. Translating the mapping results to the habitat classifications used in planning, however, required the scientists to make judgement calls with which stakeholders did not always agree. One South Coast RSG member with extensive data analysis experience, for example, described his frustration around certain methodological choices of the scientists:

[Data gaps] wouldn’t be filled in with the next best data. It would just be left as a hole. There’s a couple of different ways that you can handle it. You can use the whole data, or you can use a pretty well-developed theory, called spatial autocorrelation, that says that stuff nearby is likely to be similar to what you have data on, and you fill all those holes in like that, but we didn’t do that. That caused some heartache, down south, in a couple of areas, for my group.

At best, such concerns about data accuracy led to nuanced discussion of how science and data analysis happens and what makes it valid. The Seafloor Mapping Program performed ground-truthing surveys in sample areas (e.g., Cochrane et al. 2015), but the scale of the effort and resources available meant that most substrate data were based on the remote mapping of the seafloor. In other cases, data such as kelp surveys might only exist for certain locations or at certain times of year; the timeline of the planning effort often did not allow additional data to be collected. As a result, judgments about validity often had to be made based upon how the data were collected, which made discussions about methods especially important.
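The stakeholder quoted above contrasts leaving unmapped areas as holes with filling them on the assumption that nearby seafloor is likely to be similar (spatial autocorrelation). The following is a purely illustrative sketch of that alternative, not the method used in the MLPA Initiative: each unmapped cell in a toy substrate grid is assigned the class of its nearest mapped neighbor, and the grid values and class labels are invented.

# Illustrative sketch of gap filling based on spatial autocorrelation: assign each
# unmapped cell (None) the substrate class of its nearest mapped cell. The toy grid
# and classes are invented; the MLPA scientists left such gaps unfilled, as the
# stakeholder quoted above notes.
from math import hypot

grid = [
    ["rock", "rock", None,   "soft"],
    ["rock", None,   None,   "soft"],
    ["soft", "soft", "soft", None],
]

def fill_gaps(grid):
    mapped = [(r, c, value) for r, row in enumerate(grid)
              for c, value in enumerate(row) if value is not None]
    filled = [row[:] for row in grid]
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            if value is None:
                # Nearest mapped cell by Euclidean distance in grid coordinates.
                nearest = min(mapped, key=lambda m: hypot(m[0] - r, m[1] - c))
                filled[r][c] = nearest[2]
    return filled

for row in fill_gaps(grid):
    print(row)

In the MLPA process such gaps were instead left unfilled, which is part of why the provenance and coverage of each layer mattered so much in the discussions described here.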

One challenge was that discussions related to data accuracy and methods in MarineMap required significant time and energy. Some participants were frustrated with the amount of time devoted to these discussions because they did not perceive them as significantly impacting the final decisions. However, most scientists and staff highlighted the utility of this dialogue for improving the quality of the scientific input. Perhaps more importantly, participants from all three roles pointed out that using a transparent system such as MarineMap meant “everybody could have an opinion about whether those data were real or factual or the best available data or not, so you had to have the fights over the data that were in the system” to ensure participants felt their voices had been heard.

A staff member emphasized that those conversations were important to building credibility. She also emphasized the role MarineMap played in ensuring transparent dialogue:

There is always a point where the RSG members...have information that disagrees with the information the scientists are using [and then there is] a tendency to say, ‘The scientists haven’t been out there. They don’t know what they’re talking about.’ If that kind of rhetoric stands without a conversation that is open and transparent and listens to both sides, then you can end up with a complete lack of respect and credibility for everything that the science team is saying. MarineMap opened that conversation... I think [using MarineMap] ultimately strengthens the science. It means you have to be on your toes. You have to be ready to dive into the methods and to read this information, to respond to sometimes crazy-sounding requests from RSG members that certain pieces of information they think are important are included... Just even having those conversations openly...enhances credibility.

Unaddressed individual concern can create mistrust

Not all conversations about data resulted in constructive shared learning processes for all stakeholders. Two RSG members we interviewed described feeling disillusioned after these dialogues about the accuracy and quality of MarineMap data. In other interviews, staff members and other RSG members also reported situations in which certain stakeholders who were not interviewed had been frustrated that their data concerns were not addressed. (Not only data accuracy was challenged; some participants also challenged the whole decision-making framework and underlying scientific assumptions of creating protective MPAs [Osmond et al. 2010]. The analysis in this paper focuses on the discussions around data accuracy and uncertainty.) In some cases, stakeholders attempted to discuss data quality and validity but ended up feeling they had not been heard; in other cases, stakeholders were not willing to participate in such dialogues because they did not feel their interests would be met.

Interactions with scientists in response to individual RSG member concerns could lead to a breakdown in dialogue, with particular RSG members or constituencies frustrated that their concerns had not been addressed. One South Coast RSG member representing recreational fishermen recalled the following:

The response was, ‘No. We’re not gonna adjust MarineMap to show that.’ I mean, because our papers show that there is no surfgrass there. That’s where it came down to [staff member] Justin’s infamous quote of, ‘This is the best available bad science that we have, and we’re gonna go with it.’

For this individual, the response received to concerns about surfgrass (as well as other types of data) created a perception that the overall process was based on “bad science.” Mistrust in data, and, by extension, the decision-making process, was sometimes linked with skepticism about using MarineMap, although disentangling the cause from the effect is difficult. It might be that stakeholders tried to genuinely engage, yet their experiences in the process created a sense of disillusionment; or it might be that certain individuals were predisposed to mistrust the process and, therefore, participated in ways that confirmed their own skeptical expectations. One South Coast stakeholder discussed how distrust of the underlying data eventually discouraged other stakeholders from using the tool:

The people that were so focused on [perceived inaccuracies of data in the tool] were starting to alienate other people, because it was like a broken record, where it was the same argument over and over again. As soon you came up with a MarineMap display, they would just totally tune you out and get pissed off. After a while, it was not an effective tool because it was associated with a particular argument that was wrong.

This last linkage is significant because it was not just the data, but the tool that delivered the data, that became the source of frustration (White et al. 2010). For these stakeholders, MarineMap was not a source of learning about scientific data through a social process, but rather a source as well as a focus for their disillusionment with the planning process itself. In other words, what the stakeholders learned through engagement was not to engage again. In interviews conducted after the planning process had ended, two stakeholders expressed feeling distrustful; another stakeholder reported that conversations about data had led to others feeling “jaded.” One South Coast RSG member representing recreational fishermen stated, “I thought going into this that the SAT team was functioning from a place of integrity, and had I known the outcome, I would have pursued an entirely different strategy.” Another stakeholder representing fishermen commented, “Our goal was to convince [others in the constituent group] that, if we participate fairly, the process will be okay...That turned out to be our biggest problem...ultimately, we couldn’t trust the process.” A comment from one of the initiative staff puts these interviewee comments in broader perspective: “I think there were concerns, especially by folks in the consumptive uses sector...that the data were being incorporated [into MarineMap], the layers were being used, and that there was not necessarily a sufficient discussion around data quality [of information in the tool].” Although numerically, participants with perspectives such as these were in the minority, for these individuals, concerns related to data accuracy and credibility were intimately related to perceived legitimacy of the overall decision-making process. It is impossible, however, to disentangle whether these individuals were more skeptical of this often-controversial planning process or whether events during the MLPA Initiative led to their disillusionment. Their experience suggests that discussions related to data accuracy can be either the cause, or expression of, suspicions resulting in perceptions of a process as illegitimate.

In the MLPA Initiative, these individuals were scarce enough that their mistrust appears to have had relatively little impact on the perceived success of the process as a whole. One RSG member described his view of how concerns about data impacted the MLPA Initiative: “Eventually people came to a level of comfort with, okay, this is what we have...I don’t think it was a real significant issue, at the end of the day. We’re not acting with perfect information, but this is the best available information and it is still enough to move forward.” Although decision making in the South Coast study region in particular was acrimonious at times, MPAs ultimately were sited and the law implemented, though individuals who saw decisions to site those MPAs as less legitimate may be less likely to respect the resulting boundaries (Stern 2008). Had there been more stakeholders who ended the process with concerns about the credibility of the science, however, the outcome might have been different. The State of California initially tried to implement the Marine Life Protection Act in 1999, but that process disintegrated because of fishermen’s concerns about the legitimacy of decisions based on science into which they had not had input (Osmond et al. 2010), providing a cautionary tale.

CONCLUSION: CREDIBILITY AND LEGITIMACY AS PROCESSES

MarineMap functioned as a social learning venue in which stakeholders, MLPA Initiative staff, and scientists negotiated the meaning of “best available science” when data were scarce, uncertain, or ambiguous. In the end, most participants viewed the data as credible and legitimate, whether or not the scientists ultimately changed the data in the authoritative data source to reflect the stakeholders’ diversity of views. Our analysis indicates that the act of discussion itself was more critical in the process of negotiating what was perceived to be credible and legitimate data than whether one’s point of view was adopted or incorporated in the end. By contrast, when social engagement and dialogue were lacking, those who did not feel that their voices were heard or perspectives were honored became skeptical of the credibility and/or legitimacy of the data within MarineMap; by extension, they were mistrustful of the decision-making process. Similar frustration has been observed elsewhere when participatory forums resulted in participants feeling their concerns had not been addressed (Risvoll et al. 2014).

In this study, thus, we emphasize that notions of best available science result from a process; in particular, they are the outcome of a social learning dialogue among participants. Although retrospective methods do not allow full causal understanding of why certain individuals ended the process feeling excluded from the learning community, future research using interviews at multiple time points could do so. By extension, elicitation interviews, where participants watch video of a process, narrate the dynamics, and describe their feelings, could also help elucidate why some individuals, and not others, were mistrustful of the process and outcomes (Harper 2002, Henry and Fetters 2012). In addition, comparing the dynamics uncovered in this study with dynamics in future cases could help clarify which aspects are specific to the MLPA Initiative and which represent more general trends. The MLPA Initiative was a well-resourced public-private partnership that undertook unusual efforts to ensure stakeholder concerns were incorporated into the final marine reserve network (Kirlin et al. 2013, Sayce et al. 2013). As such, it can be viewed as a critical case (Flyvbjerg 2006) where the extent of mistrust and dissatisfaction might be expected to be as low as in any participatory planning setting. The fact that we found certain individuals who left the process feeling unheard in this case suggests similar dynamics will likely be present in planning efforts that do not have the same resources to devote to ensuring public participation.

Despite some particular characteristics, the MLPA Initiative experience suggests several lessons for helping build a shared view of the credibility of data in future participatory planning processes using similar tools or data sources. First, agencies should realize that stakeholders may not share official views of what makes data “authoritative.” Although there may be official policies on authoritative data, in a participatory planning process, the more important goal may be a shared understanding of what is salient and credible information for making the decisions at hand. Agencies might even consider explicitly setting an intermediate, process-oriented goal (Monroe et al. 2013) to develop a shared view of credible data. Second, while the MLPA Initiative had a highly transparent and defined process for evaluating MPA proposals, the way decisions were made about which data would or would not be included in MarineMap or the timing for when new data layers would be added were less clear to many participants. Staff as well as stakeholders pointed out that being more explicit and transparent about who made these choices and how they decided might have alleviated some concerns about data accuracy or sufficiency. Third, the learning process that occurred among scientists and stakeholders required scientists to work collaboratively and to allow the public to challenge their data and conclusions, although the SAT remained responsible for final decisions about scientific evaluation. Fourth, it is important to remember that the mistrust described here was observed among a relatively small group of stakeholders. We highlight the dynamic because we believe it reveals a connection between seeing data as credible and viewing a decision-making process as legitimate. But the majority of stakeholders in the MLPA Initiative participated generatively and productively in a shared learning process about data. One staff member pointed out that the data quality was, overall, of a higher standard than he had seen “in nearly all the other processes I’ve been involved in...to me, one of the things that the MLPA Initiative did was set very high standards for information-based decision making and it was criticized, in a sense, against those standards.” Future efforts can learn from the time and energy these scientists and stakeholders spent understanding the nuances of the scientific method through dialogue, which resulted in a group understanding of data that most of them considered sufficient for making decisions to the “best available science” standard.

Our findings suggest, then, that an authoritative data source used in a participatory setting should not be considered as an object, but rather as a process. Interactions surrounding data accuracy within an authoritative data source are central to negotiating scientific authority in decision making related to environmental management. Credibility and legitimacy of data are not outcomes so much as processes of negotiation that, in turn, influence the perceived legitimacy of decision making. Conversations about data and how they are generated anchor the development of trust, which is important not only in the planning phases, but also in helping understand how those in surrounding areas will interact with protected areas once they are established (Stern 2008). Scientists, stakeholders, managers, and conveners of participatory decision-making processes form learning communities where the ways that conversations about data accuracy are conducted influence the success of the decisions being made.

ACKNOWLEDGMENTS

Thank you to Richard White, Janet Martinez, Meg Caldwell, Will McClintock, Nicola Ulibarri, Rebecca Nelson, and Dan Reineman for comments on earlier versions of the manuscript. This research was supported by Stanford University’s Emmett Interdisciplinary Program in Environment and Resources, a Stanford School of Earth Sciences McGhee Research Grant, and a Stanford Law School Goldsmith Research Grant.

LITERATURE CITED

Balram, S., and S. Dragicevic. 2006. Collaborative geographic information systems. Idea Group Publishing, Hershey, Pennsylvania, USA. http://dx.doi.org/10.4018/978-1-59140-845-1

Benson, D., A. Jordan, and L. Smith. 2013. Is environmental management really more collaborative? A comparative analysis of putative ‘paradigm shifts’ in Europe, Australia, and the United States. Environment and Planning A 45:1695-1712. http://dx.doi.org/10.1068/a45378

Berkes, F. 2009. Evolution of co-management: role of knowledge generation, bridging organizations and social learning. Journal of Environmental Management 90:1692-1702. http://dx.doi.org/10.1016/j.jenvman.2008.12.001

Bornmann, L. 2008. Scientific peer review: an analysis of the peer review process from the perspective of sociology of science theories. Human Architecture: Journal of the Sociology of Self-Knowledge 6(2):3. [online] URL: http://scholarworks.umb.edu/humanarchitecture/vol6/iss2/3

Bourget, L., editor. 2011. Converging waters: integrating collaborative modelling with participatory processes to make water resources decisions. IWR Press, Alexandria, Virginia, USA.

California Fish and Game Code. 1999. § 2850-2863. Marine Life Protection Act. California Department of Fish and Wildlife, Sacramento, California, USA.

California Ocean Protection Council (COPC). 2007. Staff recommendation: California Seafloor Mapping Program. October 25th. U.S. Geological Survey, Washington, D.C., USA. [online] URL: http://walrus.wr.usgs.gov/mapping/csmp/COPC07SeafloorMapping.pdf

Carolan, M. S. 2008. The bright- and blind-spots of science: why objective knowledge is not enough to resolve environmental controversies. Critical Sociology 34:725-740. http://dx.doi.org/10.1177/0896920508093365

Cash, D. W., W. C. Clark, F. Alcock, N. M. Dickson, N. Eckley, D. H. Guston, J. Jäger, and R. B. Mitchell. 2003. Knowledge systems for sustainable development. Proceedings of the National Academy of Sciences 100:8086-8091. http://dx.doi.org/10.1073/pnas.1231332100

Cochrane, G. R., J. T. Watt, P. Dartnell, H. G. Greene, M. D. Erdey, B. E. Dieter, N. E. Golden, S. Y. Johnson, C. A. Endris, S. R. Hartwell, R. G. Kvitek, C. W. Davenport, L. M. Krigsman, A. C. Ritchie, R. W. Sliter, D. P. Finlayson, and K. L. Maier. 2015. California State waters map series—offshore of Pigeon Point, California. Open File Report 2015-1232, pamphlet 40. U.S. Geological Survey, Washington, D.C., USA. http://dx.doi.org/10.3133/ofr20151232

Corburn, J. 2007. Community knowledge in environmental health science: co-producing policy expertise. Environmental Science & Policy 10:150-161. http://dx.doi.org/10.1016/j.envsci.2006.09.004

Cravens, A. E. 2014. Needs before tools: using technology in environmental conflict resolution. Conflict Resolution Quarterly 32(1):3-32. http://dx.doi.org/10.1002/crq.21071

Cravens, A. E. 2016. Negotiation and decision making with collaborative software: how MarineMap ‘changed the game’ in California’s Marine Life Protection Act Initiative. Environmental Management 57(2):474-497. http://dx.doi.org/10.1007/s00267-015-0615-9

Dale, A., and D. Armitage. 2011. Marine mammal co-management in Canada’s Arctic: knowledge co-production for learning and adaptive capacity. Marine Policy 35:440-449. http://dx.doi.org/10.1016/j.marpol.2010.10.019

Daniels, S. E., and G. B. Walker. 2001. Working through environmental conflict: the collaborative learning approach. Praeger, Westport, Connecticut, USA.

Dunsby, J. 2004. Measuring environmental health risks: the negotiation of a public right-to-know law. Science, Technology, & Human Values 29:269-290. http://dx.doi.org/10.1177/0162243904264482

Emerson, K., T. Nabatchi, and S. Balogh. 2012. An integrative framework for collaborative governance. Journal of Public Administration Research and Theory 22:1-29. http://dx.doi.org/10.1093/jopart/mur011

Flyvbjerg, B. 2006. Five misunderstandings about case-study research. Qualitative Inquiry 12:219-245. http://dx.doi.org/10.1177/1077800405284363

Frame, T. M., T. Gunton, and J. C. Day. 2004. The role of collaboration in environmental management: an evaluation of land and resource planning in British Columbia. Journal of Environmental Planning and Management 47:59-82. http://dx.doi.org/10.1080/0964056042000189808

Francis, T., K. Whittaker, V. Shandas, A. V. Mills, and J. K. Graybill. 2005. Incorporating science into the environmental policy process: a case study from Washington State. Ecology and Society 10(1):35. [online] URL: http://www.ecologyandsociety.org/vol10/iss1/art35/

Gerlak, A. K., and T. Heikkila. 2011. Building a theory of learning in collaboratives: evidence from the Everglades Restoration Program. Journal of Public Administration Research and Theory 21:619-644. http://dx.doi.org/10.1093/jopart/muq089

Gerlach, J. D., L. K. Williams, and C. E. Forcina. 2013. Data selection for making biodiversity management decisions: best available science and institutionalized agency norms. Administration & Society 45:213-241. http://dx.doi.org/10.1177/0095399712451886

Gieryn, T. F. 1999. Cultural boundaries of science: credibility on the line. University of Chicago Press, Chicago, Illinois, USA.

Gleason, M., E. Fox, S. Ashcraft, J. Vasques, E. Whiteman, P. Serpa, E. Saarman, M. Caldwell, A. Frimodig, M. Miller-Henson, J. Kirlin, B. Ota, E. Pope, M. Weber, and K. Wiseman. 2013. Designing a network of marine protected areas in California: achievements, costs, lessons learned, and challenges ahead. Ocean & Coastal Management 74:90-101. http://dx.doi.org/10.1016/j.ocecoaman.2012.08.013

Harper, D. 2002. Talking about pictures: a case for photo elicitation. Visual Studies 17:13-26. http://dx.doi.org/10.1080/14725860220137345

Hegger, D., M. Lamers, A. Van Zeijl-Rozema, and C. Dieperink. 2012. Conceptualising joint knowledge production in regional climate change adaptation projects: success conditions and levers for action. Environmental Science & Policy 18:52-65. http://dx.doi.org/10.1016/j.envsci.2012.01.002

Heikkila, T., and A. K. Gerlak. 2013. Building a conceptual approach to collective learning: lessons for public policy scholars. Policy Studies Journal 41:484-512. http://dx.doi.org/10.1111/psj.12026

Henry, S. G., and M. D. Fetters. 2012. Video elicitation interviews: a qualitative research method for investigating physician-patient interactions. Annals of Family Medicine 10:118-125. http://dx.doi.org/10.1370/afm.1339

Kirlin, J., M. Caldwell, M. Gleason, M. Weber, J. Ugoretz, E. Fox, and M. Miller-Henson. 2013. California’s Marine Life Protection Act Initiative: supporting implementation of legislation establishing a statewide network of marine protected areas. Ocean & Coastal Management 74:3-13. http://dx.doi.org/10.1016/j.ocecoaman.2012.08.015

Lukacs, H. A., and N. M. Ardoin. 2014. The relationship of place re-making and watershed group participation in Appalachia. Society and Natural Resources 27(1):55-69. http://dx.doi.org/10.1080/08941920.2013.840876

MacEachren, A. M. 2004. How maps work: representation, visualization, and design. Guilford, New York, New York, USA.

Malczewski, J. 2006. GIS‐based multicriteria decision analysis: a survey of the literature. International Journal of Geographical Information Science 20:703-726. http://dx.doi.org/10.1080/13658810600661508

Matthies, M., C. Giupponi, and B. Ostendorf. 2007. Environmental decision support systems: current issues, methods and tools. Environmental Modelling & Software 22:123-127. http://dx.doi.org/10.1016/j.envsoft.2005.09.005

Merrifield, M. S., W. McClintock, C. Burt, E. Fox, P. Serpa, C. Steinback, and M. Gleason. 2013. MarineMap: a web-based platform for collaborative marine protected area planning. Ocean & Coastal Management 74:67-76. http://dx.doi.org/10.1016/j.ocecoaman.2012.06.011

Monroe, M. C., R. Plate, and A. Oxarart. 2013. Intermediate collaborative adaptive management strategies build stakeholder capacity. Ecology and Society 18(2):24. http://dx.doi.org/10.5751/ES-05444-180224

Morgan, E. A., and D. C. C. Grant-Smith. 2015. Tales of science and defiance: the case for co-learning and collaboration in bridging the science/emotion divide in water recycling debates. Journal of Environmental Planning and Management 58(10):1770-1788. http://dx.doi.org/10.1080/09640568.2014.954691

Muro, M., and P. Jeffrey. 2008. A critical review of the theory and application of social learning in participatory natural resource management processes. Journal of Environmental Planning and Management 51(3):325-344. http://dx.doi.org/10.1080/09640560801977190

Oakes, L. E., N. M. Ardoin, and E. F. Lambin. 2016. “I know, therefore I adapt?” Complexities of individual adaptation to climate-induced forest dieback in Alaska. Ecology and Society 21(2):40. http://dx.doi.org/10.5751/ES-08464-210240

Osmond, M., S. Airame, M. Caldwell, and J. Day. 2010. Lessons for marine conservation planning: a comparison of three marine protected area planning processes. Ocean & Coastal Management 53:41-51. http://dx.doi.org/10.1016/j.ocecoaman.2010.01.002

Pahl-Wostl, C. 2006. The importance of social learning in restoring the multifunctionality of rivers and floodplains. Ecology and Society 11(1):10. [online] URL: http://www.ecologyandsociety.org/vol11/iss1/art10/

Reed, M. S., A. C. Evely, G. Cundill, I. Fazey, J. Glass, A. Laing, J. Newig, B. Parrish, C. Prell, C. Raymond, and L. C. Stringer. 2010. What is social learning? Ecology and Society 15(4):r1. [online] URL: http://www.ecologyandsociety.org/vol15/iss4/resp1/

Riedlinger, D., and F. Berkes. 2001. Contributions of traditional knowledge to understanding climate change in the Canadian Arctic. Polar Record 37:315-328. http://dx.doi.org/10.1017/S0032247400017058

Risvoll, C., G. Fedreheim, A. Sandberg, and S. Burnsilver. 2014. Does pastoralists’ participation in the management of national parks in northern Norway contribute to adaptive governance? Ecology and Society 19(2):71. http://dx.doi.org/10.5751/ES-06658-190271

Roux, D. J., K. H. Rogers, H. Biggs, P. J. Ashton, and A. Sergeant. 2006. Bridging the science-management divide: moving from unidirectional knowledge transfer to knowledge interfacing and sharing. Ecology and Society 11(1):4. [online] URL: http://www.ecologyandsociety.org/vol11/iss1/art4/

Saarman, E., M. Gleason, J. Ugoretz, S. Airamé, M. Carr, E. Fox, A. Frimodig, T. Mason, and J. Vasques. 2013. The role of science in supporting marine protected area network planning and design in California. Ocean & Coastal Management 74:45-56. http://dx.doi.org/10.1016/j.ocecoaman.2012.08.021

Sayce, K., C. Shuman, D. Connor, A. Reisewitz, E. Pope, M. Miller-Henson, E. Poncelet, D. Monié, and B. Owens. 2013. Beyond traditional stakeholder engagement: public participation roles in California’s statewide marine protected area planning process. Ocean & Coastal Management 74:57-66. http://dx.doi.org/10.1016/j.ocecoaman.2012.06.012

Schusler, T. M., D. J. Decker, and M. J. Pfeffer. 2003. Social learning for collaborative natural resource management. Society & Natural Resources 16:309-326. http://dx.doi.org/10.1080/08941920390178874

Sheppard, E. 1995. GIS and society: towards a research agenda. Cartography and Geographic Information Systems 22:5-16.

Star, S. L., and J. R. Griesemer. 1989. Institutional ecology, ‘translations’ and boundary objects: amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39. Social Studies of Science 19:387-420. http://dx.doi.org/10.1177/030631289019003001

Stern, M. J. 2008. The power of trust: toward a theory of local opposition to neighboring protected areas. Society & Natural Resources 21:859-875. http://dx.doi.org/10.1080/08941920801973763

Strauss, A., and J. M. Corbin. 1998. Basics of qualitative research: techniques and procedures for developing grounded theory. SAGE, New York, New York, USA.

U.S. Department of the Interior. 2006. Geospatial modernization blueprint. Project charter 2-10-2006. U.S. DOI, Washington, D.C., USA. [online] URL: http://www.fgdc.gov/fgdc-news/initiatives/geospatial-modernization-blueprint/documents/Geo_spatial_Blueprint_Charter%20finalv.2.pdf

U.S. Geological Survey (USGS). [date unknown]. Data acquisition methods. USGS, Reston, Virginia, USA. [online] URL: https://www2.usgs.gov/datamanagement/acquire/methods.php

Vinke-de Kruijf, J., H. Bressers, and D. C. M. Augustijn. 2014. How social learning influences further collaboration: experiences from an international collaborative water project. Ecology and Society 19(2):61. http://dx.doi.org/10.5751/ES-06540-190261

Voinov, A., and F. Bousquet. 2010. Modelling with stakeholders. Environmental Modelling & Software 25:1268-1281. http://dx.doi.org/10.1016/j.envsoft.2010.03.007

White, D. D., A. Wutich, K. L. Larson, P. Gober, T. Lant, and C. Senneville. 2010. Credibility, salience, and legitimacy of boundary objects: water managers’ assessment of a simulation model in an immersive decision theater. Science and Public Policy 37:219-232. http://dx.doi.org/10.3152/030234210X497726

Wood, D., and J. Fels. 1992. The power of maps. Guilford, New York, New York, USA.

Wright, D. J., S. L. Duncan, and D. Lach. 2009. Social power and GIS technology: a review and assessment of approaches for natural resource management. Annals of the Association of American Geographers 99:254-272. http://dx.doi.org/10.1080/00045600802686299

Wynne, B. 1992. Misunderstood misunderstanding: social identities and public uptake of science. Public Understanding of Science 1:281-304. http://dx.doi.org/10.1088/0963-6625/1/3/004

Address of Correspondent:
Amanda E. Cravens
559 Nathan Abbott Way
Stanford, California 94305
acravens@stanford.edu