The following is the established format for referencing this article:
Raymond, C. M., and J. Cleary. 2013. A tool and process that facilitate community capacity building and social learning for natural resource management. Ecology and Society 18(1): 25.
http://dx.doi.org/10.5751/ES-05238-180125
Research, part of a special feature on Applying Landscape Science to Natural Resource Management

A Tool and Process that Facilitate Community Capacity Building and Social Learning for Natural Resource Management

1Institute for Land, Water and Society, Charles Sturt University, 2Enviroconnect Pty Ltd, 3Centre for Rural Health and Community Development, University of South Australia

ABSTRACT

This study presents a self-assessment tool and process that facilitate community capacity building and social learning for natural resource management. The tool and process provide opportunities for rural landholders and project teams both to self-assess their capacity to plan and deliver natural resource management (NRM) programs and to reflect on their capacities relative to other organizations and institutions that operate in their region. We first outline the tool and process and then present a critical review of the pilot in the South Australian Arid Lands NRM region, South Australia. Results indicate that participants representing local, organizational, and institutional tiers of government were able to arrive at a group consensus position on the strength, importance, and confidence of a variety of capacities for NRM categorized broadly as human, social, physical, and financial. During the process, participants learned much about their current capacity, as well as their capacity needs. Broad conclusions are discussed with reference to the iterative process for assessing and reflecting on community capacity.
Key words: adaptive capacity; co-management; community capacity; environmental management; participatory action research

INTRODUCTION

Unprecedented changes to global ecological systems have led to calls for action-oriented approaches to support the social and ecological sustainability of a rapidly changing planet (Chapin et al. 2010). Action-oriented approaches need to build individual capacities to make personal changes (Fazey et al. 2007) and provide for effective social interaction, including interactive problem solving, conflict resolution, and shared learning (Fazey et al. 2010). A variety of models exist for supporting these outputs, including participatory action research (Fals-Borda 2001), adaptive co-management (Armitage et al. 2009, Armitage et al. 2011), trans-disciplinary research (Enengel et al. 2012, Tress et al. 2006) and community-based natural resource management (Kellert et al. 2000, Blaikie 2006, Robinson 2008). Such models require a high degree of interaction and collaboration among actors operating at different scales of management and a commitment to the generation and sharing of knowledge.

Both assessment and critical reflection are also essential parts of processes to support learning and adaptive management. Biggs et al. (2011) note that assessment generally means to “evaluate or estimate the nature, quality, ability, extent, or significance of” (p. 2), whereas reflection is viewed as “a calm, lengthy, intent consideration” (p. 2). Reflection is needed to achieve transformative learning (Mezirow 1995), which is analogous to “double-loop learning” where individuals reflect on the assumptions that underpin their actions (see Reed et al. 2010). Although social learning requires a level of social interaction, such interaction does not always lead to reflection on underlying values and conceptions of management. Evely et al. (2011) evaluated multiple social learning projects and found that they did not explicitly encourage reflection and deeper evaluation of underlying values and assumptions. Raymond et al. (2010) found that knowledge integration projects undertaken in the United Kingdom, the Solomon Islands, and Australia supported different levels of reflection and learning. Learning was influenced by the methods employed by the research team, as well as the team’s epistemological beliefs.

Similarly, we have not witnessed a balance of both assessment and reflection in processes to build community or adaptive capacity for natural resource management (NRM). Community capacity can be defined as “the combined influence of a community’s commitment, resources, and skills that can be deployed to build on community strengths and address community problems and opportunities” (The Aspen Institute 1996:17). In contrast, adaptive capacity is best described as a dynamic social process and is concerned with how well a community exists with, or responds to, change in its circumstances. Such change may be related to social upheaval, climatic impacts, economic shocks, or development processes. Adaptation can be proactive or reactive, and sometimes it takes the form of an unintentional process (Adger 2006).

Most tools to measure community capacity for NRM have used either primary data collected using telephone or mail-based surveys or secondary data to compare or contrast state-wide or regional community capacity (Thomson and Pepperdine 2003, Fenton 2004, Fenton and Rickert 2008). Similarly, most assessments of adaptive capacity use secondary data from national accounts to compare countries, with a focus on system responses (Adger and Vincent 2005, Brooks et al. 2005, Adger 2006, Smit and Wandel 2006, Eriksen and Kelly 2007), or social vulnerability (Nelson et al. 2007, Sheng et al. 2008). These tools rely on top-down assessments and do not provide avenues for bottom-up reflection by those who are involved in the assessment process.

Other researchers have recognized the importance of engaging local and regional communities in group discussions about community and adaptive capacity. Cavaye (2005) developed a community capacity assessment instrument to triangulate community capacity data collected through individual discussion, focus group discussion, and scaled responses. Cheers et al. (2005) developed an electronic community capacity assessment tool and process that linked capacity strength responses to assessment participant comments and observer comments. Brown et al. (2010) developed a process that enabled local NRM officers to self-assess their adaptive capacity at the local scale. Members of each focus group were well informed and able to make judgments about the capacity of the people they were representing in the community and were themselves long-term members of the community. However, we are concerned that these tools did not allow for facilitated self-measurement and discussion of community capacity or adaptive capacity at multiple scales of management. Learning needs to be enhanced at a range of individual and institutional scales (Fazey et al. 2005, Armitage et al. 2008, Pahl-Wostl 2009, Reed et al. 2010). Knowledge synthesis should occur across vertical (local, regional, national) and horizontal (local organization to local organization) scales (Berkes et al. 2003). In NRM, participatory approaches should recognize the multiple contexts inherent in decision making (Pero 2005, Lynam et al. 2007). Local institutions are best informed about the local level, whereas state institutions have a suite of tools and techniques relevant to the regional and national scales.

In this study, we present a tool and participatory action research (PAR) process that support a systematic self-assessment and reflection of the perceptions of community capacity necessary for planning and delivering NRM programs across multiple scales of management. The tool and process enable groups representing local, organizational, and institutional tiers of NRM governance to self-assess the strength and importance of multiple capacities necessary to plan and implement NRM programs, as well as their level of confidence in their responses. The capacity strength and importance scores can be used to prioritize investment in NRM programs, particularly in areas of low capacity. The process of discussing the relative strength and importance of different indicators within a group environment facilitates personal reflection on the social and human capitals necessary for NRM action, as well as single-loop and double-loop learning about best practice across multiple tiers of NRM governance. The tool was developed and piloted by Raymond et al. (2006) in the South Australian Arid Lands Natural Resource Management region, South Australia. We first provide a short review of participatory action research and its relevance to community capacity assessment and social learning. We then present the social indicators used to measure community capacity and the PAR process that facilitated social learning at multiple scales of management, both in terms of refining the capacity self-assessment tool and refining the regional NRM governance systems. We propose that community capacity building can be more effectively structured within a participatory framework that supports a coordinated process of measurement, social learning, and adaptation to build shared understanding across multiple capacity tiers (individual, organizational, and institutional). This participatory and multi-dimensional approach supports communication between NRM stakeholders for the development of community capacity.

THE USE OF PARTICIPATORY ACTION RESEARCH TO SUPPORT COMMUNITY CAPACITY ASSESSMENT AND SOCIAL LEARNING

Participatory action research is one approach that enables both assessment and reflection during the research process. It aims to integrate experience, action, and reflection to produce knowledge and action that is directly useful, and in the process, to effect consciousness raising (learning) that creates empowerment (Freire 1970, Reason 1994, Fals-Borda 2001). It seeks to enable those engaged in the PAR process to identify a problematic social situation, “bubbling concern,” or existing phenomenon, understand it, and then take some action to rectify the problem or change the situation. Participatory action research is a useful methodological approach to research where differing knowledge systems and associated worldviews (e.g., science and local NRM knowledge) are a feature because it emphasizes central participation in the research by people who are knowledgeable about the research topic from multiple perspectives, affected by it, and who may wish to use the research to effect change. It enables collaborative forms of inquiry as a means for gaining knowledge and applying it (Reason 1993, Kidd and Kral 2005).

Participatory action research approaches are constructivist, dialogical, and proactive, attempting to centralize participant and researcher values (Kidd and Kral 2005). Participatory action research differs significantly from some other research approaches in that it is not extractive, i.e., researchers are not “experts” who study their subjects and then go away to write their papers, but rather are co-participants—experiencing a problem situation or phenomenon in order to better understand it and to assist in changing it (Baum et al. 2006). Participatory action research as a process is very much about cycling through research, action, and reflection. In itself, it is an iterative process, which suits situations where iteration (in the form of participant research, action, and reflection) is required.

ITERATIVE PROCESS FOR ASSESSING AND REFLECTING ON COMMUNITY CAPACITY

Over a 12-month period, the project team followed an iterative PAR process for assessing and reflecting on community capacity (Fig. 1). In this study, we discuss each of the nine steps with reference to a pilot project undertaken in the South Australian Arid Lands (SAAL), and the paper is structured around these steps.

Participatory action research approaches include a reflective critical examination of actor practices that build knowledge (or meta-knowledge) about the knowledge generated within the social inquiry. Three characteristics are often used to distinguish between participatory research and other, more conventional forms of research: (1) the ownership of research projects is shared; (2) the analysis of social issues occurs at the community level and is community based; and (3) research projects have an orientation toward community action (Kemmis and McTaggart 2007:273). In the case of the NRM Capacity Assessment project, all three characteristics were important considerations. At the outset, the SAAL NRM Community Board was deeply involved in developing the activities that would be undertaken to develop the tool and the processes that would be followed in its development and testing. Several meetings occurred with the Board prior to the construction of any indicators, for example, and the Board was very clear about how they wanted sub-regional NRM groups engaged. At this point, Board members also articulated that they recognized the need to examine and potentially build community capacity within the SAAL NRM Region, but that they were unclear about what this really meant. The research team provided information about what they understood the terms “capital” and “capacity development and building” to mean, which Board members then discussed in the context of their own perceived needs in relation to the needs of the SAAL NRM Region. In that sense, the research team first took on the practical role of “facilitators” (in bringing external knowledge resources to the group) as described by Carr and Kemmis (1983). It is also important to note that both the researchers and community participants took up different roles at different times throughout the project. For example, in this early stage of the project, the Board was clear that they had no useful starting point from which to begin developing capacity indicators, so the research team was tasked with working on these in the first instance and bringing them back to the Board for discussion. The research team was additionally empowered by the community participants to take on a “technical” role (following Carr and Kemmis 1983). However, over time, this “technical” role was expanded to include community participants as they gained knowledge, reflected upon it, and became confident to apply it. This manifested, for example, in the community taking greater ownership of the development of capacity indicators, determining which were applicable and usable, and deciding which were impractical or irrelevant.

There are, of course, challenges associated with using PAR approaches, including criticisms that PAR processes can be problematic in the sharing of power and that PAR approaches cannot be apolitical. PAR practitioners counter that the difference between more traditional forms of research and PAR is that, in other forms of research, the politics may often be submerged or undeclared in “spurious guises of objectivity” (McTaggart 1997:7), e.g., discipline-specific traditions and cultures that privilege particular forms of knowing and the ways in which it is sought and acquired (Du Bois 1983). In contrast, PAR approaches aim to promote transparency, to make political positions known, and sometimes to actively work to change them, as seen in the work of Freire and Fals-Borda. Balancing the local with the external is about more than valuing different ways of knowing (Chakravarti 2006). For example, researchers might reify indigenous knowledge (Kalb 2006) and see and speak of it as something discrete and decontextualized, rather than something that is inherently bound and relational. Participatory approaches are about context and foregrounding local conditions and knowledge (Pain 2004).

In the case of the SAAL NRM Capacity Assessment project, political positions were openly discussed. For example, the position of funding bodies was established, and the community needs articulated by the Board and the intentions of the research team were made overt. Where any of these were problematic, they were renegotiated. An integral part of this process was establishing a project steering committee composed of representatives of the project participants. This group established clear and collective goals that required considerable time and several meetings to develop. A critical element of this process was the development of trust relationships that created the goodwill necessary to proceed and that also enabled the inevitable difficult conversations around differing needs and understandings to occur.

IDENTIFY COMMUNITY CAPACITY INDICATORS AND SCORING RUBRICS


Background

Multiple forms of capital have been conceptualized and empirically measured in the social science literature. Minkler et al. (2008) present a framework for measuring community capacity in the environmental management context (building on Downing and Hudson 2001). According to their framework, capacity for environmental management is shaped by: (1) shared concerns (shared understanding of environmental issues); (2) community identity; (3) participation; (4) inclusion; (5) leadership; (6) access to information; (7) skills and resources (financial, human, and social); and (8) political influence. Rural livelihoods analysis has also been used to analyze community capacity for NRM in both developing (Ellis and Freeman 2005) and developed nations, including Australia (Nelson et al. 2005, 2010a, 2010b). The livelihoods framework consists of five capitals: natural, human, social, physical, and financial (see Table 1 for definitions).

A variety of economic and biophysical assessment tools exist for measuring natural capital (Costanza and Daly 1992, Costanza et al. 1997, Wackernagel et al. 1999), which have since been adopted by government agencies to measure the state and condition of natural resources in rural South Australia. To avoid duplication of effort, the project steering committee in partnership with interview participants decided to refine the framework and remove this capital from the capacity self-assessment tool. We also acknowledge that a variety of other capitals exist, such as political (Booth and Richard 1998, Lake and Huckfeldt 1998), cultural (Dimaggio and Mohr 1985, Lareau and Weininger 2003, Vryonides 2007, Patterson 2008), and spiritual capitals (Verter 2003). However, rural landholders involved in the development and piloting of the assessment tool believed that human, social, physical, and financial capitals were most pertinent to the delivery of NRM in South Australia at multiple community tiers. Allowing participants to choose the capitals is consistent with the PAR approach.

Each capital presented in Table 1 was measured using a series of indicators. Table 2 presents the specific indicators of capacity included in the original version of the self-assessment tool, together with key sources that provided the theoretical basis for their inclusion.

IDENTIFY KEY STAKEHOLDERS

The South Australian NRM Act 2004 has brought together the management of soils, water, and pest plants and animals in one piece of legislation. Previously, these were managed under three separate Acts: the Animal and Plant Control (Agricultural Protection and Other Purposes) Act 1986; Soil Conservation and Land Care Act 1989; and Water Resources Act 1997. Under the new Act, regions are represented by community boards, which have been appointed by the SA Government. Within each region, sub-regional groups have also been established, as a means to deliver NRM outcomes at the regional scale. Community capacity assessments were conducted within five sub-regions in the SA Arid Lands. The SAAL NRM Board, in partnership with the project team, identified community members to be involved in the capacity assessment within each of these sub-regions.

We acknowledge that the concept of community can be problematic in that a failure to appreciate the heterogeneity of a “community” can result in planners engaging privileged groups. This research drew upon the framework developed by Harrington et al. (2008) to identify key community types. According to that framework, these types may include: (1) communities of place (e.g., residents in the countryside); (2) communities of interest (e.g., Landcare groups); (3) communities of practice (e.g., regional NRM planning staff); and (4) communities of identity (e.g., indigenous leaders).

From this starting point, the project team sought participation from a mix of sectors, which, where possible, included:
We used snowball sampling, or chain referral, to identify individuals within each of these communities. Snowball sampling involved identifying informed people within each of the six sub-regions and seeking further nominations through these people. This technique was well suited to the project because it ensured that participants were connected to the local geographical community and could reasonably represent the views of the particular sector of that community involved in NRM issues. It also enabled quick identification of those individuals who had an interest in NRM issues across the SAAL NRM region, a vast region covering 839,000 km2 with a small population of approximately 11,000 people (South Australian Arid Lands NRM Board 2010).

PILOT CAPACITY INDICATORS AND RATING SCALES IN A WORKSHOP ENVIRONMENT

A pilot assessment workshop was facilitated by the project team with the SAAL NRM Board members. The workshop commenced with refreshments and informal interaction. Following this, the facilitators outlined the capacity self-assessment tool framework and protocols and checked understanding of the workshop process. To build familiarity with the capacity self-assessment tool and process, workshop discussion commenced with the collective indicators related to engagement and shared values and beliefs. Generally, much discussion occurred during this phase of the process, and it was important not to rush participants because this familiarization was critical to ensuring responses reflected a broad range of concerns. Thereafter, participants responded to capacity indicators at each community tier. The process took 3.5–4 hours in total with each group, depending upon the number of participants and level of discussion around each capacity indicator. Facilitated discussion ensured that a range of views was expressed. In some instances, participants needed to be encouraged to contribute, but the creation of a comfortable, informal, and safe environment through clear facilitation promoted strong discussion.

Each indicator of capacity (Table 2) was presented to participants as a statement on a Microsoft Access® form projected onto a screen. To facilitate discussion and reflection, participants responded as a group, giving their perceptions of the strength of each capacity indicator in their community, the perceived importance of each capacity indicator to overall NRM capacity, and the confidence with which they could respond to each indicator. Responses were automatically saved in a confidential data file for later analysis. The definitions in Table 3 guided the measurement of capacity strength, importance, and confidence.
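As a minimal sketch, the record captured for each group response might look like the following. The field names and example values are hypothetical, and the actual tool stores its responses as records in a Microsoft Access database.

```python
# Hypothetical sketch of the record captured for each group response; field
# names and values are illustrative only (the actual tool stores responses
# in a Microsoft Access database).
from dataclasses import dataclass

@dataclass
class IndicatorResponse:
    sub_region: str     # e.g., "Sub-region A"
    tier: str           # "individual", "organizational", or "institutional"
    indicator: str      # the statement projected on screen
    strength: int       # group consensus on capacity strength (1-4)
    importance: int     # perceived importance to overall NRM capacity
    confidence: int     # confidence in the strength response
    comments: str = ""  # notes recorded during the facilitated discussion

response = IndicatorResponse(
    sub_region="Sub-region A",
    tier="institutional",
    indicator="The NRM Board has the networks and relationships to deliver its NRM program",
    strength=3,
    importance=4,
    confidence=3,
    comments="Discussion noted the need for better access to public meetings.",
)
print(response)
```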

The capacity assessment project was about examining perceptions of capacity: whether or not a capacity is objectively present matters less than the realities of assessment participants. For example, there were marked differences between what one group representing the organizational tier thought about the financial resources available to the Board (institutional tier) and what the Board knew to be the case. In this instance, the Board clearly recognized that they needed to make their financial position (with regard to available funding) more explicit to this particular organization. It is this examination of perceptions that makes the NRM Capacity Assessment tool especially useful.

Each indicator of capacity had sub-measures that assisted participants to respond to the indicator under analysis. Participants responded to the sub-measures on a scale where “1 = Needs Strengthening”, “2 = Basic Capacity”, “3 = Moderate Capacity,” and “4 = Strong Capacity”. For the purpose of simplicity, these sub-measures are not included in the body of this paper.

Natural resource management programs in Australia have been designed to engage with individuals and groups with different levels of agency, which have previously been defined as the individual, organizational, and institutional levels or tiers of decision making (see Thomson and Pepperdine 2003). For the purposes of this project, the tiers were defined in the following way:
Two levels of reflection occurred during each workshop. Firstly, participants reflected on the capacities themselves in relation to each tier considered. Through facilitated discussion prompted by the tool indicators and sub-measures, local issues related to perceptions of capacity could be addressed. This enabled often nebulous concepts, e.g., elements of human and social capital, to be approached through structured consideration, which in turn clarified both the concepts themselves and participants’ perceptions of them as they related to their NRM community. Reflection was aided by the process itself, because forcing a collective response required negotiated discussion, which led to shared knowledge, collective understanding, and finally to an outcome of consensus. Facilitated discussion ensured that a range of views was expressed. In some instances, participants needed to be encouraged to contribute, but the creation of a comfortable, informal, and safe environment through clear facilitation promoted strong discussion and ensured “quiet voices” were heard. We saw this as a critical component of the facilitation process: skilled facilitation ensured that all voices had the opportunity to contribute to the discussion and debate. In one instance, for example, during consideration of the “Networks and Relationships” capacities, there was considerable disagreement between some participants about the perceived transparency of some networking relationships within the broader region. One participant felt that the SAAL NRM Board had strong capacity to work cooperatively with the organizations and institutions it works with, whereas another participant felt that the Board had only weak to moderate capacity in this area, citing examples where the Board did not work effectively with representatives from its partner agencies. Further questioning among participants uncovered additional information from which collective understanding was established, consensus reached, and a group response made.

Secondly, as part of the participatory process associated with the development of the tool itself, participants reflected on the capacity indicators and sub-measures, commenting on their “user friendliness” and effectiveness in illuminating the capacity under consideration. This part of the workshop provided high levels of insight and input to the iterative development process. In one example, a participant simply said “This indicator doesn’t make sense—I don’t understand what it is trying to find out.” Following discussion in this instance and in others, participants were often able to suggest an alternative indicator or sub-measure that resonated with their local knowledge of their region and sub-region.

The two levels of reflection inherent in the pilot and iterative development of the tool are one of the key characteristics that differentiate this tool and process from other assessment tools (cf. Fenton 2004, Fenton and Rickert 2008). The reflective process and its outcomes, i.e., new knowledge generation leading to changed behaviors and outputs, clearly illustrate the transformative nature of Mezirow’s concept of learning (Mezirow 1995) and later conceptualizations of double-loop learning (Reed et al. 2010).

MODIFY INDICATORS BASED UPON THE REFLECTION THAT OCCURRED AT PILOT WORKSHOP

It was important within the PAR and social learning frameworks that underpin the study to ensure participant involvement throughout all elements of the development process (see Fig. 1). As well as the participation and reflection inherent in each of the workshops, a project steering committee composed of stakeholder representatives from the five community sectors outlined above met regularly to review and reflect on the developmental process. The committee had oversight of project progress and stakeholder participation along with resource allocation. Additionally, the committee reflected on and discussed proposed modifications suggested in assessment workshops (e.g., rewording; changes to indicators; changes to the number of indicators). Review of committee notes and minutes reveal some of these reflections, e.g., this excerpt highlights considerations on changing the capacity indicator scale:

Would like to change this to a four-point scale, e.g., 1 = clear need for increased capacity; 2 = basic level of capacity in place; 3 = moderate level of capacity in place; 4 = high level of capacity in place. The reasoning for this is that it was very difficult for participants to come to a decision around some statements. It was clearly evident that while they didn’t think they were particularly strong in some areas, they were reluctant to indicate that they were “weak,” and the project team thinks that a four-point indicator scale would alleviate this reluctance and provide a more accurate indication of participant views.

Workshop participants also refined the number of scale items used to measure each capacity. For example, assessment participants noted some conceptual interactions between the sub-measures relating to quality of engagement and quality of networks and relationships. Consequently, the number of capacity indicators was reduced from 80 to 61, and some indicators were reworded to be more applicable to regional communities. This process of reflection and review at multiple levels again demonstrated the existence and value of social learning. The concerns expressed by representatives of the SAAL NRM Board were presented to the steering committee. The committee learned from the presentation that a different type of numerical rating scale was required to measure capacity strength at a regional level.

To ensure that the assessment tool was both comprehensive and workable, we limited the number of indicators to 61 and ensured that all indicators could be addressed within a 4-hour workshop period, divided in half by a meal break. Workshop participant feedback after the process revealed that the assessment tool was workable. Nonetheless, the project team acknowledged after the pilot that there is some potential to reduce the number of indicators in the tool from 61 to approximately 50. This potential reduction reflects some perceived overlap in statement wording rather than participant fatigue.

FACILITATE THE SELF-ASSESSMENT OF CAPACITY ACROSS A WIDER SAMPLE USING THE TOOL

We facilitated the self-assessment of capacity across five NRM sub-regions in the SAAL using the process outlined in the pilot indicator section. These self-assessments used the refined rating scale and 61 statements for measuring community capacity rather than 80 statements.

In addition to the capacity self-assessment, we invited reflection on the self-assessment process. The following enlightening quotes emerged in response to the process. The process:

GENERATE A REPORT OF PARTICIPANT AND RESEARCH TEAM ASSESSMENTS AND REFLECTIONS

The tool enables the reporting of capacity strength and importance across institutional, organizational, and individual tiers. To maintain confidentiality, we report only mock data in this section. To obtain a regional report of perceptions of capacity strength and importance across report tiers, the capacity self-assessment tool performs two main arithmetic calculations for each report tier. It calculates the mean capacity strength, capacity importance, and confidence scores across the n indicators within a community group. It then calculates the total mean capacity strength, capacity importance, and confidence scores across the community groups that form part of the same community tier (e.g., three groups of land managers may undertake the assessment separately, and the aggregated results of the three assessments form the individual tier response); a minimal sketch of this two-stage aggregation is given after the example indicators below. Each capacity is quantified using two to three indicators, each presented at the institutional, organizational, and individual tiers. For example:
  1. “The NRM Board has the networks and relationships to deliver its NRM program” (institutional tier);
  2. “The organizations in this community have the networks and relationships to deliver their NRM programs” (organizational tier);
  3. “The people in this community have the networks and relationships to achieve their NRM objectives” (individual tier).
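The following is a minimal sketch of the two-stage aggregation, assuming responses are coded numerically; the column names, groups, and scores are illustrative only and are not drawn from the actual data file.

```python
# Sketch of the two-stage aggregation: indicator means per community group,
# then tier-level means across groups (mock data; column names are assumed).
import pandas as pd

responses = pd.DataFrame({
    "tier":       ["individual"] * 6,
    "group":      ["Land managers A"] * 3 + ["Land managers B"] * 3,
    "capacity":   ["Networks and relationships"] * 6,
    "strength":   [2, 3, 2, 3, 4, 3],   # consensus scores on the n indicators
    "importance": [4, 3, 4, 4, 4, 3],
    "confidence": [3, 3, 2, 3, 4, 3],
})

# Step 1: mean strength, importance, and confidence across the n indicators
# answered by each community group.
group_means = (responses
               .groupby(["tier", "group", "capacity"])[["strength", "importance", "confidence"]]
               .mean())

# Step 2: aggregate those group means across all groups that form part of the
# same community tier (e.g., several land-manager groups form the individual tier).
tier_means = group_means.groupby(["tier", "capacity"]).mean()
print(tier_means)
```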
Histograms show the capacity strength score for each community tier undergoing assessment (Fig. 2). The capacity strength agreement scale was recoded into the following hierarchy:
The mock data in Fig. 2 show that the institutional tier perceived its own leadership capacity as strong, whereas the individual tier perceived the institutional tier’s leadership capacity as weak. Similar differences in perceived capacity strength are evident across governance, strategic direction, and human resources capacities. Histograms can also be generated at the sub-regional (e.g., NRM group) level. Sub-regional assessment is important for more precise identification of capacity strengths and weaknesses.
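For illustration, a chart in the style of Fig. 2 could be produced along the following lines; the capacity names, scores, and recoded category labels below are mock values and assumptions, not the study data.

```python
# Sketch of a Fig. 2 style chart: perceived capacity strength by community tier
# (mock values; the recoded category labels on the y-axis are assumed).
import matplotlib.pyplot as plt
import pandas as pd

strength = pd.DataFrame(
    {
        "institutional":  [3.6, 3.1, 2.9],
        "organizational": [2.8, 2.4, 3.0],
        "individual":     [1.7, 2.2, 2.6],
    },
    index=["Leadership", "Governance", "Strategic direction"],
)

scale_labels = {1: "Needs strengthening", 2: "Basic", 3: "Moderate", 4: "Strong"}

ax = strength.plot(kind="bar", rot=0)
ax.set_ylabel("Mean perceived capacity strength")
ax.set_yticks(list(scale_labels))
ax.set_yticklabels(list(scale_labels.values()))
ax.set_title("Perceived capacity strength by community tier (mock data)")
plt.tight_layout()
plt.show()
```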

The capacity self-assessment tool also generates a form showing assessment participants’ comments, which can be exported into Microsoft Word® for further analysis. Sociologists can then link derived comments with the quantitative results for more integrated assessment of capacity strengths and weaknesses. Table 3 shows that the xy NRM Board perceives the institutional tier’s engagement capacity to be moderately strong and extremely important to its overall community capacity. The Board was also confident in responding to the engagement indicators.

The participant comments provide further insight into the capacity strength response (Table 3). The xy Board suggests there are areas for improvement in the number of public meetings and workshops, community access to public meetings and workshops, and the ability to hold meetings via alternative technologies. Similar tables can be generated for the individual tier’s perception of the institutional tier’s capacities.

The relationship between capacity strength, importance, and confidence is shown as a matrix (Fig. 3). Capacity importance is on the x-axis, capacity strength on the y-axis, and capacity confidence is denoted by the color shade. Action codes have been assigned to each matrix cell according to the 1:1 relationship between capacity strength and importance. Colors in the matrix highlight the perceptions of the people representing that tier (see key). Blue represents the perceptions of the institutional tier, red the organizational tier, and green the individual tier. A light color shade represents low confidence (a recode of not confident and some confidence), whereas a dark color shade represents high confidence (a recode of confident and extremely confident).

The matrix shows a priority rating scale of very low priority, low priority, medium priority, high priority, and very high priority. Assignment of matrix square priorities is based on the assumption that a capacity is a very high priority to build when it is perceived as weak and very important, but a very low priority to build when it is perceived as strong and not important. Figure 3 shows that the institutional tier perceives its governance capacity to be medium priority for building (high confidence). The individual tier also perceives it to be medium priority, but it is less confident in its response. The organizational tier perceives the institutional tier’s governance capacity to be very high priority for building.
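The priority assignment can be sketched as follows. The mapping from strength and importance scores to the five priority labels is an assumption for illustration; the tool itself encodes these as action codes on the matrix cells.

```python
# Sketch of the strength-importance priority logic behind Fig. 3. The mapping
# below is an assumed illustration: weak but important capacities are the
# highest priority to build; strong but unimportant ones are the lowest.
def build_priority(strength: int, importance: int) -> str:
    diff = importance - strength            # both scored on the 1-4 scales
    if diff >= 2:
        return "very high priority"
    if diff == 1:
        return "high priority"
    if diff == 0:
        return "medium priority"
    if diff == -1:
        return "low priority"
    return "very low priority"

print(build_priority(strength=1, importance=4))   # weak and very important
print(build_priority(strength=4, importance=1))   # strong and not important
```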

TRANSLATE CAPACITY FINDINGS INTO PRACTICE

Assist Participants to Prepare Strategies for Addressing Capacity Gaps and Differences in Perceptions of Capacity

We prepared strategies and actions for addressing capacity gaps and differences in perceptions of capacity for the SAAL NRM Board. The specific nature of these strategies is confidential, but they are centered on:
  1. Developing a clearly articulated community engagement strategy;
  2. Timing of consultation of local communities on NRM policy and plans;
  3. Hiring knowledge brokers and local trusted advisors to develop NRM programs that are relevant at the local scale;
  4. Reviewing the number of public meetings and the level of access to them;
  5. Targeting the promotion of on-ground works funding to specific market segments;
  6. Increasing the number of public meetings and workshops, and improving community access to them;
  7. Strengthening monitoring and evaluation programs;
  8. Supporting landholders in the writing of funding applications;
  9. Strengthening partnerships between non-Aboriginal landholders and Aboriginal people;
  10. Incorporating existing knowledge and information (for example, old soil board planning information) into the new regional plan.

Monitor and Evaluate Strategies after an Agreed Period

Monitoring and evaluation of strategies are important to determine whether they have been effective in addressing capacity gaps. If community capacity changes, the capacity assessment tool and process (Fig. 1) may need to be re-run after an agreed period. In 2009, the SAAL NRM Board expressed interest in rerunning the tool and process in their region; however, a change in institutional arrangements has impeded this project to date.

DISCUSSION AND CONCLUSIONS

The aim of this study was to present a tool and process that support a systematic self-assessment and reflection of the perceptions of community capacity necessary for planning and delivering NRM programs across multiple scales of management. The tool allows NRM communities operating at institutional, organizational, and individual tiers of management to self-assess their capacity to plan and implement NRM programs. Unlike recent capacity assessment processes (e.g., Brown et al. 2010), our process (Fig. 1) both supports the self-assessment of capacity and enables participants to identify, develop, and modify capacity indicators relevant to their local needs in the pre-assessment phase. This is critical, particularly in rural and remote regions, where one-size-fits-all approaches to sustainable development (Carson and Cleary 2010, Daniell et al. 2010) have served to discount the value of local knowledge about local social, economic, and environmental conditions (Hogan et al. 2012). The post-assessment phase enables participants to prepare strategies for addressing capacity gaps and for monitoring and evaluating these strategies over time. This holistic process enables both assessment and critical reflection, which are essential for learning and adaptive management (Biggs et al. 2011, Evely et al. 2011). It does not pretend to be an objective, external assessment, but rather a reflexive and iterative process for considering human and social capitals that are specifically relevant to the management of natural resources in a particular place at a given time.

It is this approach to integrating different knowledge types and local realities into NRM decision making that makes the tool an important contribution to melding top-down and bottom-up planning processes and supporting the subsequent social learning that ensues. For example, the capacity responses from the NRM Board and NRM groups (representing the institutional tier) are viewed in unison with the capacity responses of community groups representing the organizational tier and land managers representing the individual tier. This kind of assessment and reflection supports double-loop learning as described by Reed et al. (2010) in that the deliberation on capacity indicators forced workshop participants to reflect on their attitudes toward different capacity strengths and their level of importance.

The tool and process also have a monitoring and evaluation function. They provide a systematic way of identifying those capacities that may need to be strengthened within a NRM group region. This enables the development of project briefs in consultation with local community groups to address capacity gaps. It also provides a method to track changes in perceived community capacity over time, which can inform the success of NRM programs at state or regional scales. The capacity assessment could be repeated at regular intervals, for example, on a 5-year basis. Some capacities may have strengthened, whereas others may have declined, enabling more targeted delivery of NRM programs. Again, this is critical in rural and especially remote regions. In remote regions, which tend to have smaller populations, even seemingly small demographic changes can have significant impact on the human and social capital available, so being able to monitor and respond to population mobility, even micro-mobility, is important (Carson et al. 2011).

Within the participatory framework in which the study was situated, it is important to also reflect on our own experiences as “researcher-facilitators.” We discovered that individuals have different understandings of capacity, depending upon their level of involvement in formalized government processes. Those with more formalized knowledge tended to perceive community capacity from a “quantitative, service-provider” perspective of “how strong is the capacity” and what programs could be implemented to address capacity gaps, whereas local actors tended to focus on the qualitative benefits afforded by the tool and process, including the social learning and capacity building that resulted from bringing diverse stakeholders together in order to reach a consensus position on capacity strength and importance. Both perspectives have an important role in transdisciplinary planning, and it is this that we believe differentiates this tool from others that have been developed for the purpose of assessing or examining capacity. In terms of the participatory process, we are able to see the benefits of centralizing researcher and participant values, e.g., the importance to participants that their local knowledge of their social, economic, and environmental situations was recognized. In the current Australian political climate, where recognition of the conditions of “localism” is a highly favored ideal (see, e.g., Crean 2011) in relation to the sustainable development of regional Australia, tools that can work at the local or micro scale are becoming more relevant.

The capacity assessment tool had some obvious limitations. Firstly, some of the capacity indicators and sub-measures were very similarly worded, creating the perception of measurement overlap. We suggest reducing the number of indicator sub-measures to balance time of data collection with monitoring and evaluation objectives. Secondly, although we argue that one of the strengths of the tool and process is their adaptability for local conditions, this flexibility may also add to the cost of each iteration for each different setting. Economies of scale may not be possible in any attempt at broad-scale application. However, the value of specific information that enables the tailoring of programs and the targeting of resources more effectively is likely to outweigh these higher implementation costs. Thirdly, there is a risk that casual observers of the tool and process might not recognize that they are assessing perceptions of capacity, rather than any objective measure of capacity. The risk here is that if multiple NRM regions were to use the tool and any subsequent capacity reports generated from each region were to be compared by an external agency, incorrect conclusions and inferences may be drawn from such comparisons. It is important to recognize therefore, that the tool and process are specifically designed for endogenous rather than exogenous application. It is as much about building capacity as assessing it. To achieve more objective measures of capacity that are comparable across regions will require the application of different processes and tools. Finally, although this tool and process were developed in a NRM context, we believe there is also scope to link the existing indicators on community capacity for NRM to adaptive capacity related to climate change, community development, or regional development. Work on this process in the context of discerning and building adaptive capacity has begun, and one of the authors is actively engaged in a project that is determining indicators for the social and economic well-being of rural communities.

Future Directions

Globally, there have been a number of advancements in the nature and structure of capacity frameworks. Frameworks have moved from an assessment of secondary census data to discursive processes that enable rural landholders, among other local actors, to self-assess their capacity. Comparatively few tools enable self-assessment across multiple community tiers, including institutional, organizational, and individual. This tool addressed this gap. However, assessment frameworks still need substantial work to enable a comprehensive and reliable assessment of capacity across groups. Firstly, capacity frameworks need to move beyond capacity strength and importance assessment to capacity program identification. Our tool has matrices to show the relationship between capacity strength and capacity importance; however, it does not indicate the types of programs that could be implemented to increase the strength of capacities deemed important to the community. One option is to develop a generic, but comprehensive list of potential strategies to address capacity gaps.

More work needs to be directed to the development of coherent sub-scales of community capacity. Although a number of tools are able to measure self-reports of overall social, natural, or built capital, there are few tools available for assessing the different elements of each capacity in a way that is both valid and reliable. Our work presents an important first step in this direction; however, piloting revealed some conceptual overlaps that need to be addressed. For example, assessment participants noted some conceptual interactions between the sub-measures relating to quality of engagement and quality of networks and relationships. Both refer to trust, transparency, and inclusiveness. Removing such overlap will improve the internal reliability of capacity assessment findings.
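One conventional way to examine the internal reliability of such a sub-scale is Cronbach’s alpha. The sketch below uses mock scores and is offered only as an illustration of how overlap between sub-measures could be checked; no such calculation is reported in this study.

```python
# Sketch: Cronbach's alpha as one way to examine the internal reliability of a
# capacity sub-scale (mock data; not a calculation reported in this study).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents-by-items matrix of scores on the same scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Mock consensus scores from eight groups on four sub-measures of one capacity.
scores = np.array([
    [3, 3, 2, 3],
    [2, 2, 2, 3],
    [4, 3, 4, 4],
    [1, 2, 1, 2],
    [3, 4, 3, 3],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```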

The organizational tier in NRM needs to be reconceptualized. Post-process discussions with regional community members revealed that our definition of organization (i.e., “non-statutory groups which have a common NRM Interest and meet regularly, such as Landcare groups”) may be too narrow. It could be widened to include those statutory bodies whose statutory authority is vested in legislation other than the Natural Resources Management Act (2004), but which nonetheless have a role to play in NRM, for example, the Pastoral Board and local government.

We are not aware of any self-assessment tools that examine the effect of the assessment facilitator on capacity responses. Facilitator verbal and non-verbal communication may influence how assessment participants respond to the capacity indicators included in the NRM tool. Future studies could examine the capacity responses collected through a facilitated and non-facilitated assessment process to identify any facilitator bias.


ACKNOWLEDGMENTS

We would like to thank Dr Karen Cosgrove and representatives from the South Australian Arid Lands NRM Board and the then Department of Water, Land and Biodiversity Conservation and Primary Industries and Resources (SA) for their valuable insights into the development of the community capacity assessment tool and process. We specifically would like to thank Greg Cock, Russell Flavel, and John Gavin for their wisdom and guidance.

LITERATURE CITED

Adger, W. N. 2006. Vulnerability. Global Environmental Change 16:268–281.

Adger, W. N., and K. Vincent. 2005. Uncertainty in adaptive capacity. Comptes Rendus Geoscience 337:399–410. http://dx.doi.org/10.1016/j.crte.2004.11.004

Aitken, L. 2001. Social and community dimensions of natural resource management. Consortium for Integrated Resource Management, Brisbane, Australia.

Armitage, D., F. Berkes, A. Dale, E. Kocho-Schellenberg, and E. Patton. 2011. Co-management and the co-production of knowledge: learning to adapt in Canada’s Arctic. Global Environmental Change 21:995–1004. http://dx.doi.org/10.1016/j.gloenvcha.2011.04.006

Armitage, D., M. Marschke, and R. Plummer. 2008. Adaptive co-management and the paradox of learning. Global Environmental Change 18:86–98. http://dx.doi.org/10.1016/j.gloenvcha.2007.07.002

Armitage, D. R., R. Plummer, F. Berkes, R. I. Arthur, A. T. Charles, I. J. Davidson-Hunt, A. P. Diduck, N. C. Doubleday, D. S. Johnson, M. Marschke, P. McConney, E. W. Pinkerton, and E. K. Wollenberg. 2009. Adaptive co-management for social–ecological complexity. Frontiers in Ecology and the Environment 7:95–102. http://dx.doi.org/10.1890/070089

Aslin, H. J., and V. A. Brown. 2004. Towards whole of community engagement: a practical toolkit. Murray-Darling Basin Commission, Canberra, Australia. [online] URL: http://adl.brs.gov.au/brsShop/data/PC12804.pdf

Baum, F., C. McDougall, and D. Smith. 2006. Continuing professional education, glossary: participatory action research. Journal of Epidemiological Community Health 60:854–857. http://dx.doi.org/10.1136/jech.2004.028662

Berkes, F., J. Colding, and C. Folke. 2003. Navigating social–ecological systems: building resilience for complexity and change. Cambridge University Press, Cambridge, UK. http://dx.doi.org/10.1017/CBO9780511541957

Biggs, H., C. Breen, R. Slotow, S. Freitag, and M. Hockings. 2011. How assessment and reflection relate to more effective learning in adaptive management. Koedoe 53(2): 1001. http://dx.doi.org/10.4102/koedoe.v53i2.1001 [online] URL: http://www.koedoe.co.za/index.php/koedoe/article/view/1001/1251

Black, A., and P. Hughes. 2001. The identification and analysis of indicators of community strength and outcomes. Department of Family and Community Services, Canberra, Australia.

Blaikie, P. 2006. Is small really beautiful? Community-based natural resource management in Malawi and Botswana. World Development 34:1942–1957. http://dx.doi.org/10.1016/j.worlddev.2005.11.023

Booth, J. A., and P. B. Richard. 1998. Civil society, political capital, and democratization in Central America. Journal of Politics 60:780–800. http://dx.doi.org/10.2307/2647648

Brooks, N., W. N. Adger, and P. M. Kelly. 2005. The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation. Global Environmental Change–Human and Policy Dimensions 15:151–163. http://dx.doi.org/10.1016/j.gloenvcha.2004.12.006

Brown, P. R., R. Nelson, B. Jacobs, P. Kokic, J. Tracey, M. Ahmed, and P. DeVoil. 2010. Enabling natural resource managers to self-assess their adaptive capacity. Agricultural Systems 103:562–568. http://dx.doi.org/10.1016/j.agsy.2010.06.004

Campbell, A. 2006. The Australian Natural Resource Management Knowledge System. Land and Water Australia, Canberra, Australia.

Carr, W., and S. Kemmis. 1983. Becoming critical: knowing through action research. Deakin University Press, Victoria, Australia.

Carson, D., and J. Cleary. 2010. Virtual realities: how remote dwelling populations become more remote over time despite technological improvements. Sustainability 2(5):1282–1296. http://dx.doi.org/10.3390/su2051282

Carson, D., P. Ensign, R. O. Rasmussen, and A. Taylor. 2011. Perspectives on demography at the edge. Pages 3–21 in D. Carson, R. O. Rasmussen, P. Ensign, L. Huskey, and A. Taylor, editors. Demography at the edge. Remote human populations in developed nations. Ashgate, London, UK.

Cavaye, J. 2005. Capacity assessment methodology for NRM regional arrangements: a guide to using the capacity assessment tool. Cavaye Community Development, Toowoomba, Australia.

Chakravarti, D. 2006. Voices unheard: the psychology of consumption in poverty and development. Journal of Consumer Psychology 16(4):363–76. http://dx.doi.org/10.1207/s15327663jcp1604_8

Chapin, F. S., S. R. Carpenter, G. P. Kofinas, C. Folke, N. Abel, W. C. Clark, P. Olsson, D. M. S. Smith, B. Walker, O. R. Young, F. Berkes, R. Biggs, J. M. Grove, R. L. Naylor, E. Pinkerton, W. Steffen, and F. J. Swanson. 2010. Ecosystem stewardship: sustainability strategies for a rapidly changing planet. Trends in Ecology and Evolution 25:241–249. http://dx.doi.org/10.1016/j.tree.2009.10.008

Cheers, B., M. Kruger, and H. Trigg. 2005. Community capacity audit technical report. University of South Australia, Whyalla, Australia.

Costanza, R., and H. E. Daly. 1992. Natural capital and sustainable development. Conservation Biology 6:37–46. http://dx.doi.org/10.1046/j.1523-1739.1992.610037.x

Costanza, R., R. d’Arge, R. de Groot, S. Farber, M. Grasso, B. Hannon, K. Limburg, S. Naeem, R. V. O’Neill, J. Paruelo, R. G. Raskin, P. Sutton, and M. van den Belt. 1997. The value of the world’s ecosystem services and natural capital. Nature 387:253–260. http://dx.doi.org/10.1038/387253a0

Crean, S. 2011. Driving regional economic development through localism. Address to Regional Development Australia National Forum, Canberra. [online] URL: www.rdamidnorthcoast.org.au/content/driving-regional-economic-development-through-localism-hon-simon-crean

Daniell, K., A. Hogan, and J. Cleary. 2010. Breaking down the “one-size-fits-all” approach to rural and regional policy in the face of climate change: enhancing policy initiatives through multi-level governance. Climate Change Adaptation and Governance Workshop, 16–19 November 2010, Sydney, Australia. NCCARF Social, Economic and Institutional Dimensions Network. University of New South Wales, Sydney, Australia.

Dimaggio, P., and J. Mohr. 1985. Cultural capital, educational-attainment, and marital selection. American Journal of Sociology 90:1231–1261. http://dx.doi.org/10.1086/228209

Downing, M., and M. Hudson. 2001. Community capacity building. Paper presented at the WM’01 Conference, 25 February–1 March 2001, Tucson, Arizona, USA.

Du Bois, B. 1983. Passionate scholarship: notes on values, knowing and method in feminist social science. Pages 105–116 in G. Bowles and R. D. Klein, editors. Theories of women’s studies. Routledge, Boston, Massachusetts, USA.

Ellis, F., and H. Freeman. 2005. Conceptual framework and overview of themes. Pages 3–15 in F. Ellis and H. Freeman, editors. Rural livelihoods and poverty reduction policies. Routledge, London, UK and New York, New York, USA.

Enengel, B., A. Muhar, M. Penker, B. Freyer, S. Drlik, and R. Ritter. 2012. Co-production of knowledge in transdisciplinary doctoral theses on landscape development—an analysis of actor roles and knowledge types in different research phases. Landscape and Urban Planning 105:106–117. http://dx.doi.org/10.1016/j.landurbplan.2011.12.004

Eriksen, S. H., and P. M. Kelly. 2007. Developing credible vulnerability indicators for climate adaptation policy assessment. Mitigation and Adaptation Strategies for Global Change 12:495–524. http://dx.doi.org/10.1007/s11027-006-3460-6

Evely, A. C., M. Pinard, M. S. Reed, and I. Fazey. 2011. High levels of participation in conservation projects enhance learning. Conservation Letters 4:116–126. http://dx.doi.org/10.1111/j.1755-263X.2010.00152.x

Fals-Borda, O. 2001. Participatory (action) research in social theory: origins and challenges. Pages 27–37 in P. Reason and H. Bradbury, editors. Handbook of action research. Sage, Thousand Oaks, California, USA.

Fazey, I., J. A. Fazey, and D. M. A. Fazey. 2005. Learning more effectively from experience. Ecology and Society 10(2): 4. [online] URL: http://www.ecologyandsociety.org/vol10/iss2/art4/

Fazey, I., J. A. Fazey, J. Fischer, K. Sherren, J. Warren, R. F. Noss, and S. R. Dovers. 2007. Adaptive capacity and learning to learn as leverage for social-ecological resilience. Frontiers in Ecology and the Environment 5:375–380. http://dx.doi.org/10.1890/1540-9295(2007)5[375:ACALTL]2.0.CO;2

Fazey, I., J. G. P. Gamarra, J. Fischer, M. S. Reed, L. C. Stringer, and M. Christie. 2010. Adaptation strategies for reducing vulnerability to future environmental change. Frontiers in Ecology and the Environment 8:414–422. http://dx.doi.org/10.1890/080215

Fenton, M. 2004. Socio-economic indicators for NRM (project A1.1): indicators of capacity, performance and change in regional NRM bodies. Natural Heritage Trust, Canberra, Australia.

Fenton, M., and A. Rickert. 2008. A national baseline of the social and institutional foundations of NRM programs. National Land and Water Resources Audit, Canberra, ACT, Australia.

Freire, P. 1970. Pedagogy of the oppressed. Herder and Herder, New York, New York, USA.

Harrington, C., A. Curtis, and R. Black. 2008. Locating communities in natural resource management. Journal of Environmental Policy and Planning 10:199–215. http://dx.doi.org/10.1080/15239080801928469

Hogan, A., J. Cleary, S. Lockie, K. Daniell, and M. Hickman. 2012. Localism and the socio-economic viability of country Australia. In Scoping a vision for the future of rural and regional Australia: collection of papers presented at the Third Sustaining Rural Communities Conference: Empowering Regional People, 19–20 April 2012, Narrabri, New South Wales, Australia. ANU ePress, Canberra, Australia.

Intergovernmental Panel on Climate Change (IPCC). 2001. Climate change 2001: impacts, adaptation and vulnerability. IPCC, Geneva, Switzerland. http://dx.doi.org/10.1017/CBO9780511546013

Kalb, D. 2006. The uses of local knowledge. Pages 579–659 in R. E. Goodin and C. Tilly, editors. The Oxford handbook of contextual political analysis. Oxford University Press, Oxford, UK. http://dx.doi.org/10.1093/oxfordhb/9780199270439.003.0031

Kellert, S. R., J. N. Mehta, S. A. Ebbin, and L. L. Lichtenfeld. 2000. Community natural resource management: promise, rhetoric, and reality. Society and Natural Resources 13:705–715. http://dx.doi.org/10.1080/089419200750035575

Kemmis, S., and R. McTaggart. 2007. Participatory action research: communicative action and the public sphere. Pages 271–330 in N. Denzin and Y. Lincoln, editors. Strategies of qualitative enquiry. Sage, Thousand Oaks, California, USA.

Kidd, S., and M. Kral. 2005. Practicing participatory action research. Journal of Counseling Psychology 52(2):187–195. http://dx.doi.org/10.1037/0022-0167.52.2.187

Lake, R. L., and R. Huckfeldt. 1998. Social capital, social networks, and political participation. Political Psychology 19:567–584. http://dx.doi.org/10.1111/0162-895X.00118

Lareau, A., and E. B. Weininger. 2003. Cultural capital in educational research: a critical assessment. Theory and Society 32:567–606. http://dx.doi.org/10.1023/B:RYSO.0000004951.04408.b0

Lynam, T., W. de Jong, D. Sheil, T. Kusumanto, and K. Evans. 2007. A review of tools for incorporating community knowledge, preferences, and values into decision making in natural resources management. Ecology and Society 12(1): 5. [online] URL: http://www.ecologyandsociety.org/vol12/iss1/art5/

McTaggart, R. 1997. Participatory action research: international contexts and consequences. State University of New York Press, Albany, New York, USA.

Mezirow, J. 1995. Transformation theory of adult learning. Pages 39–70 in M. R. Welton, editor. In defense of the lifeworld: critical perspectives on adult learning. State University of New York Press, Albany, New York, USA.

Minkler, M., V. Breckwich Vásquez, M. Tajik, and D. Petersen. 2008. Promoting environmental justice through community-based participatory research: the role of community and partnership capacity. Health Education and Behavior 35(1):119–137. http://dx.doi.org/10.1177/1090198106287692

Nelson, R., P. R. Brown, T. Darbas, P. Kokic, and K. Cody. 2007. The potential to map the adaptive capacity of Australian land managers for NRM policy using ABS data. CSIRO, Australian Bureau of Agricultural and Resource Economics, prepared for the National Land and Water Resources Audit, Canberra, Australia.

Nelson, R., P. Kokic, S. Crimp, P. Martin, H. Meinke, S. M. Howden, P. de Voil, and U. Nidumolu. 2010a. The vulnerability of Australian rural communities to climate variability and change: Part II - Integrating impacts with adaptive capacity. Environmental Science and Policy 13:18–27. http://dx.doi.org/10.1016/j.envsci.2009.09.007

Nelson, R., P. Kokic, S. Crimp, H. Meinke, and S. M. Howden. 2010b. The vulnerability of Australian rural communities to climate variability and change: Part I - Conceptualising and measuring vulnerability. Environmental Science and Policy 13:8–17. http://dx.doi.org/10.1016/j.envsci.2009.09.006

Nelson, R., P. Kokic, L. Elliston, and J. King. 2005. Structural adjustment: a vulnerability index for Australian broadacre agriculture. Australian Commodities 12:171–179.

Pahl-Wostl, C. 2009. A conceptual framework for analysing adaptive capacity and multi-level learning processes in resource governance regimes. Global Environmental Change 19:354–365. http://dx.doi.org/10.1016/j.gloenvcha.2009.06.001

Pain, R. 2004. Social geography: participatory research. Progress in Human Geography 28:652–663. http://dx.doi.org/10.1191/0309132504ph511pr

Patterson, C. 2008. Cultural capital and place: Coles Bay and the Freycinet Peninsula, Tasmania. Geographical Research 46:350–360. http://dx.doi.org/10.1111/j.1745-5871.2008.00528.x

Pepperdine, S. 2001. Social indicators of rural community sustainability: an example from the Woady Yaloak Catchment. Pages 125–130 in M. F. Rogers and Y. M. J. Collins, editors. The future of Australia’s country towns. Centre for Sustainable Regional Communities, La Trobe University, Bendigo, Australia.

Pero, L. 2005. From governance rhetoric to practical reality: making community-based natural resource management decision-making work. Griffith Journal of the Environment 1:1–30.

Primary Industries and Resources South Australia (PIRSA). 2004. Primary Industries and Resources South Australia capabilities dictionary. PIRSA, Adelaide, Australia.

Raymond, C. M., J. Cleary, and K. Cosgrove. 2006. A community capacity assessment tool and process for natural resource management. DWLBC Report 2006/35, Government of South Australia, through Department of Water, Land and Biodiversity Conservation, Adelaide, Australia.

Raymond, C. M., I. Fazey, M. S. Reed, L. C. Stringer, G. M. Robinson, and A. C. Evely. 2010. Integrating local and scientific knowledge for environmental management. Journal of Environmental Management 91:1766–1777. http://dx.doi.org/10.1016/j.jenvman.2010.03.023

Reason, P. 1993. Sitting between appreciation and disappointment: a critique of the special edition of Human Relations on action research. Human Relations 46(10):1253–1270. http://dx.doi.org/10.1177/001872679304601007

Reason, P. 1994. Human inquiry as discipline and practice. Pages 40–56 in P. Reason, editor. Participation in human inquiry. Sage, Thousand Oaks, California, USA.

Reed, M., A. Evely, G. Cundill, I. Fazey, J. Glass, A. Laing, J. Newig, B. Parrish, C. Prell, C. Raymond, and L. Stringer. 2010. What is social learning? Ecology and Society 15(4): r1. [online] URL: http://www.ecologyandsociety.org/vol15/iss4/resp1/

Robinson, G. 2008. Participation and stewardship: indicators of sustainability in two Canadian “environmental” programmes. Pages 561–581 in G. M. Robinson, editor. Sustainable rural systems: sustainable agriculture and rural communities. Ashgate, Basingstoke, UK and Burlington, Vermont, USA.

Ross, H. 1999. Social R & D for sustainable natural resource management in rural Australia: issues for LWRRDC. Social, Economic, Legal, Policy and Institutional R & D for Natural Resource Management. Land and Water Resources Research and Development Corporation, Canberra, Australia.

Sheng, E., K. Nossal, S. Zhao, P. Kokic, and R. Nelson. 2008. Exploring the feasibility of an adaptive capacity index using ABS data. ABARE and CSIRO Report for the National Land and Water Resources Audit, Canberra, Australia.

Smit, B., and J. Wandel. 2006. Adaptation, adaptive capacity and vulnerability. Global Environmental Change-Human and Policy Dimensions 16:282–292. http://dx.doi.org/10.1016/j.gloenvcha.2006.03.008

South Australian Arid Lands Natural Resources Management Board (SAALNRM Board). 2010. Regional Natural Resources Management Plan for the SA Arid Lands Natural Resources Management Region. SAALNRM Board, Adelaide, Australia.

Taylor, B., S. Lockie, A. Dale, R. Bischof, G. Lawrence, M. Fenton, and S. Coakes. 2000. Capacity of farmers and other land managers to implement change. National Land and Water Resources Audit and Natural Heritage Trust, Canberra, Australia.

The Aspen Institute. 1996. Measuring community capacity building: a workbook in progress for rural communities. The Aspen Institute, Washington, D.C., USA. [online] URL: http://www.aspeninstitute.org/sites/default/files/content/docs/csg/Measuring_Community_Capactiy_Building.pdf

Thomson, D., and S. Pepperdine. 2003. Assessing community capacity for riparian restoration. Land and Water Australia, Canberra, Australia.

Tress, B., G. Tress, and G. Fry. 2006. Defining concepts and the process of knowledge production in integrative research. Pages 13–26 in B. Tress, G. Tress, G. Fry, and P. Opdam, editors. From landscape research to landscape planning. Springer, Amsterdam, The Netherlands.

United Nations. 2007. Indicators of sustainable development: guidelines and methodologies. Third edition. United Nations, New York, New York, USA.

Verter, B. 2003. Spiritual capital: theorizing religion with Bourdieu against Bourdieu. Sociological Theory 21:150–174.

Vryonides, M. 2007. Social and cultural capital in educational research: issues of operationalisation and measurement. British Educational Research Journal 33:867–885. http://dx.doi.org/10.1080/01411920701657009

Wackernagel, M., L. Onisto, P. Bello, A. C. Linares, I. S. L. Falfan, J. M. Garcia, A. I. S. Guerrero, and C. S. Guerrero. 1999. National natural capital accounting with the ecological footprint concept. Ecological Economics 29:375–390. http://dx.doi.org/10.1016/S0921-8009(98)90063-5

Webb, T., and A. Curtis. 2002. Mapping regional capacity. Bureau of Rural Sciences, Canberra, Australia.

Address of Correspondent:
Christopher M. Raymond
PO Box 190
STIRLING, SA
Australia
5152
chris.raymond@enviroconnect.com.au