Ecology and Society
The following is the established format for referencing this article:
Payne, P. R., W. H. Kaye-Blake, A. Kelsey, M. Brown, and M. T. Niles. 2021. Measuring rural community resilience: case studies in New Zealand and Vermont, USA. Ecology and Society 26(1):2.
https://doi.org/10.5751/ES-12026-260102
Research

Measuring rural community resilience: case studies in New Zealand and Vermont, USA

1Farm Systems and Environment Team, AgResearch, Ruakura Research Centre, 2New Zealand Institute of Economic Research, 3Center for Rural Studies, University of Vermont, 4Farm Systems and Environment Team, AgResearch, Grasslands, 5Food Systems Program, Department of Nutrition and Food Sciences, Gund Institute for Environment, University of Vermont

ABSTRACT

To date, methods for assessing community resilience have focused predominantly on disaster recovery. Those that do focus on broader social-ecological and psychological contexts tend to be qualitative and have not been validated at the community scale. This situation reveals a need for quantitative measurement tools for assessing community resilience to slow-moving change such as rural depopulation or climate change. Our research provides a proof of concept across two diverse contexts, New Zealand and Vermont, USA, that community resilience can be quantified and broken down into dimensions of resilience. Using mixed methods, we assessed how eight communities across two countries perceive resilience and compared their perceptions with indicators of resilience in the form of official statistics. Vermonters generally perceived their communities as more resilient than did New Zealanders, and reported different dimensions of resilience as drivers of overall perceptions of resilience. Although institutional resilience was a driver of overall resilience in both countries, social and cultural dimensions of resilience were also drivers in New Zealand, whereas economic and environmental dimensions were drivers of overall resilience in Vermont. Resilience indicators were found to be weakly related or unrelated to community perceptions of resilience. This result suggests that the proposed method for measuring resilience can be used across contexts, but that there is not one type of resilience that is the key to higher levels of overall resilience. It also suggests that the two proxy measures of resilience, i.e., community perceptions and indicators, do not provide a consistent picture of resilience, raising the question of which is a more accurate measure.
Key words: community; framework; indicators; resilience; rural; thresholds

INTRODUCTION

Over the past decade, there has been a growing number of tools designed for community resilience assessment, which attempt to measure community resilience in a consistent and structured way (Sharifi 2016). Across disciplines, however, definitions, methods, and approaches vary, resulting in a lack of consensus about acceptable (let alone best practice) methods for measuring community resilience (Robinson and Carson 2016, Sharifi 2016, Chuang et al. 2018). Furthermore, it is argued that those tools that do exist do not adequately reflect the dynamic, multifaceted, nested nature of psychological and social-ecological systems (Chuang et al. 2018).

There are currently several common ways to measure community resilience (Chuang et al. 2018). First, there is the use of quantitative indicators as metrics for resilience at a given place-based scale (Wilson 2012). For example, at the town level, statistics such as population change, median income, and access to healthcare services can be aggregated and used as a metric for resilience. This method has the benefit of allowing comparisons of a place over time; however, it is not very useful for considering resilience thresholds, interactions across scales, and components within scales (Cutter et al. 2014, Chuang et al. 2018). There is also a question as to how accurate indicators are as proxy measures for the constructs they are trying to capture, when there is inherent subjectivity in which indicators are selected and how each is weighted (Wilson 2012, Cutter et al. 2014, Payne et al. 2019). Nevertheless, indicators remain a critical tool for establishing a base picture of resilience to reduce complexity for decision-making and guide prioritization of resources (Wilson 2012).

Second, participatory methods have been used to extract narratives from key stakeholders, which are then used to describe system changes (e.g., Sendzimir et al. 2008). Although this method provides a richer, fuller picture than the first method, it does not allow the quantification of community resilience or comparison over time. It also relies on subjective meaning-making rather than a process that can be extrapolated to different contexts.

Finally, some attempts at measurement have gone beyond the “place” scale to consider how nonlinear aspects of a system affect connected systems. For example, they examine how policy decisions or market changes can affect farms, and therefore, their communities’ resilience (e.g., Adger et al. 2009). These approaches acknowledge complexity but are difficult to apply across contexts to provide a consistent measure because they tend to involve situation-specific analyses. As Cutter (2016:743) aptly describes, “the devil is always in the details... there is no panacea or one size fits all tool to measure resilience, due to the range of actions, environments, purposes and disciplines involved.”

In the international development literature, both quantitative and qualitative approaches to resilience are commonly considered (Barrett and Constas 2014, Constas et al. 2014) across multiple scales and interactions (Constas et al. 2014). However, the international development field also struggles with how to measure, monitor, and evaluate resilience, especially regarding interventions that are intended to alleviate poverty and other stressors (Béné et al. 2015). Such challenges are particularly important in light of the continuing emergence of social-ecological stressors and shocks.

Our research acknowledges a gap in the literature: the underdevelopment of quantitative processes for measuring community resilience, particularly in contexts outside of natural disaster and recovery, and across multiple types of resilience (Chuang et al. 2018). Even within the disaster and recovery context, there is no broad model of resilience that has been tested empirically at the community level (Cutter et al. 2008). This gap includes the absence of a method for identifying resilience thresholds or “tipping points” and of a means for testing whether proxy measures of resilience are accurate representations. Here, we seek to provide proof of concept, in two different countries, that community resilience can be measured in a quantitative way, assisted by a qualitative information gathering process.

Defining key concepts

To measure the resilience of a community, it is critical to define the term community. Community is a highly contested term in the literature (Mulligan et al. 2016, Sharifi 2016); for the purposes of this research, we adopt a pragmatic definition that considers both spatial aspects and social interaction. Here, community is defined as the social system interactions that occur within a defined location (Cutter et al. 2008, Wilson 2010). The spatial component of this definition is critical because it allows the use of data on resilience within a space and provides a boundary (Wilson 2010, Robinson and Carson 2016). It is acknowledged, however, that this boundary is fluid and may change over time (Mulligan et al. 2016). Social interaction is also critical for a community; communities are embedded, operate at multiple scales, and a single person may be a member of multiple communities (Mulligan et al. 2016). In this way, if a person resides outside of a small town but uses the town for amenities, facilities, and socializing, that person is included in the definition of the town’s community.

It is also critical to define resilience because definitions vary both within and among disciplines (Sharifi 2016). Here, we define resilience as “the ability of groups or communities to cope with external stresses and disturbances, as a result of social, political, and environmental change” (Adger 2000:347). Our focus is not on resilience in the context of natural disasters and recovery but on slow-moving change such as rural depopulation, climate change, and policy changes (Wilson 2012). That is, the focus is on the psychological strand of community resilience as opposed to the social-ecological. Because of the lack of literature in this particular area, however, we also consider some literature from the disaster and recovery space.

As with community, resilience is not static but can change; it is not isolated at one scale but is nested (Darnhofer 2014, Robinson and Carson 2016, Constas et al. 2014). Resilience is also multifaceted and includes a range of dimensions that have been variously termed as factors, capitals, resources, and strengths in the literature (Jordan and Javernick-Will 2012, Matarrita-Cascante et al. 2017). These dimensions can be described at a general level, such as social, economic, institutional, and cultural dimensions of resilience, or alternatively, they can be described in more detail, such as social capital, economic development, and community competence (Norris et al. 2008). The problem with being specific about these dimensions or components of resilience is that there is contention among models and schools of thought as to which dimensions are correct. Specificity is also a barrier to the model’s use in different contexts such as between the disaster-and-recovery context and slow-moving change that affects community resilience.

Finally, there are different approaches to maintaining resilience, such as “bouncing back” or “bouncing forward” (Robinson and Carson 2016). In the international development literature, resilience is frequently considered as absorptive, adaptive, or transformative capacity (Constas et al. 2014). However, given our focus in two high-income countries, we use the concepts of bouncing back and bouncing forward. Bouncing back means that a community attempts to return to its previous state before the disturbance occurred. For community members, bouncing back can refer to elements of self-identity and continuity with the past. In contrast, bouncing forward means that a community attempts to adapt to the change and shift into a new state after the disturbance (Matarrita-Cascante et al. 2017). This notion does not assume that bouncing forward is better than bouncing back, but rather that there are alternative approaches to maintaining resilience in the face of a shock to the system (Wilson 2012).

Why measure resilience?

There are manifold reasons for measuring resilience, which vary according to context. First, resilience measurement can be used to benchmark a community, either against itself over time or against other communities that might be experiencing similar issues. Benchmarking can assist with tracking a community’s resilience trajectory over time and with measuring the effect of activities designed to improve resilience (Cutter 2016, Matarrita-Cascante et al. 2017, Sharifi 2016). Taking a “stocktake” of resilience can also help decision-makers to prioritize areas for spending when resources are limited (Steiner and Markantoni 2014). Measuring a community’s resilience and feeding back the results in an accessible way can also empower community members to take action to improve their own resilience, such that measuring resilience can, in itself, be an intervention to improve resilience (Sharifi 2016). There is debate in the literature about the extent to which resilience should be measured, if at all. We acknowledge this debate, especially the concern that reducing resilience to a single indicator or metric ignores the complexity of resilience within systems and communities (Quinlan et al. 2016). Our approach was designed to consider explicitly the range of different types of resilience and to understand the extent to which indicators of these resilience components compare with community perspectives. As a result, we believe that our work adds to the body of research on assessing the complexity of resilience and the capacity of metrics and indicators to capture adequately these multiple systems and states of resilience.

This research

Our research provides proof of concept, across two diverse yet comparable countries, that community resilience can be quantified. We adopted a mixed-methods approach that acknowledges and explores the uniqueness of each community while also providing a quantification of each community’s resilience (as advocated by Wilson 2012). That is, we acknowledge the value of both emic and etic perspectives, assuming that there is value in the use of indicators, as well as in understanding the complex nature of the community narrative that can be told from within a community (Robinson and Carson 2016). We apply the resilience framework of Fielke et al. (2017), which builds upon previous research, including the community capitals framework (Emery et al. 2006), to explore concepts of resilience (Fig. 1). The resilience framework examines five individual dimensions of resilience, i.e., social, cultural, environmental, institutional, and economic, as well as external factors or drivers affecting communities (Table 1). The resilience framework (Fig. 1) assumes three important things about community resilience, which we test further in this research: (1) resilience is quantifiable, both at the individual dimension level and overall; (2) overall resilience is a combination of the multiple dimensions of resilience; and (3) resilience dimensions have thresholds. Although we do not explore the concept of thresholds or minimum necessary levels of resilience for each dimension explicitly, it is an important concept for future work.

METHODS

Aims

We explore the conceptualization and measurement of community resilience as evidenced by community members, as well as its relationship to official statistics (hereafter “indicators”) for a given town. This work supports and expands on a proof of concept provided by Payne et al. (2019) in which research in New Zealand found that it is possible to measure community resilience directly, and doing so highlights important considerations about the use of indicators for resilience. Through a comparative study, we further explore this initial finding and a number of other key questions, including:

  1. How do community members perceive the resilience of their community across multiple dimensions and overall?
  2. Do community members’ ratings on dimensions of resilience correlate with their overall ratings of resilience?
  3. Do community members’ ratings on dimensions of resilience correlate with indicators of resilience (official statistics)?
  4. Do community members’ ratings of overall resilience correlate with indicators of resilience (official statistics)?

Locations

We performed a comparative study between four towns in New Zealand and four towns in Vermont, USA. These regions were chosen for comparison given that both have significant rural populations and rely on rural industries for gross domestic product (GDP) and livelihoods. Both countries are therefore experiencing similar pressures in their rural regions: the necessity to keep them viable because they play a critical role socially and economically, but a lack of funding to sustain the infrastructure and facilities to service remote areas (University of Vermont Extension and Vermont Housing and Conservation Board 2018, Brown et al. 2019).

In New Zealand, the urban vs. rural distinction can be considered a continuum (Statistics New Zealand 2004), but 72% of the population lives in the main cities and another 14.6% resides in other urban areas (Cochrane and Maré 2017). Less than 2% of land is in urban use (Horizons Regional Council 2013, 2014, New Zealand Ministry for the Environment 2015, Waikato Regional Council 2015, Statistics New Zealand 2020), whereas 45% of the total land area was in production agriculture in 2016 (New Zealand Ministry for the Environment and Statistics New Zealand 2018). The rural economy is important for New Zealand: the food and primary sector generates approximately 10.6% of the country’s GDP and 54% of its total exports (New Zealand Ministry for Primary Industries 2017).

In Vermont, 61.1% of the population lives in rural areas, making Vermont the second most rural state in the United States (United States Census Bureau 2010). A total of 21.2% of the Vermont landscape is agricultural, and 76% is forested (Farmland Information Center 2017, Morin et al. 2017). Agriculture, forestry, and fishing contribute < 2% of the state’s GDP, although this figure does not account for subsequent processing of primary products through food manufacturing (Altendorfer et al. 2010) or for tourism that is generated because of agriculture, forestry, and the rural landscape. Given the significant population in rural areas as well as the importance of rural towns for supporting agricultural and forested lands and economies, we focus on understanding how communities in rural regions perceive resilience in their communities.

Four small towns were chosen in each location (Fig. 2). For New Zealand, two towns were located in the Waikato region and two in the Manawatu region of the North Island. For Vermont, one town was selected from each of four counties: Windham County, Chittenden County, Orange County, and Orleans County. The towns were selected according to a range of criteria. First, small towns in each location were identified according to the criterion that they have a population between 4500 and 10,000 people (Q. Howard, unpublished report, https://www.planning.org.nz/Attachment?Action=Download&Attachment_id=3160). Second, all towns that did not have town-level data available (indicators such as population change, median household income, and unemployment rate) were removed. These statistics were critical for the methodology and for allowing comparison across the four towns in New Zealand and the four towns in Vermont. Finally, towns were selected to ensure geographic distribution (inclusion of several different regions) and variability in the indicator data.
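As an illustration of this selection logic only (not the authors’ actual workflow), a minimal sketch in R could filter candidate towns on the population criterion and on the availability of town-level indicator data; the data frame and column names below are illustrative assumptions.

    # Hypothetical candidate list; towns, regions, and columns are placeholders
    library(dplyr)
    candidate_towns <- data.frame(
      town                = c("Town A", "Town B", "Town C", "Town D"),
      region              = c("Waikato", "Manawatu", "Windham", "Orange"),
      population          = c(5200, 12400, 4800, 7100),
      has_town_level_data = c(TRUE, TRUE, FALSE, TRUE)
    )
    shortlist <- candidate_towns %>%
      filter(population >= 4500, population <= 10000) %>%  # small-town size criterion
      filter(has_town_level_data)                          # town-level indicators available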

Indicator data

Resilience indicators were compiled for each town. These indicators were official statistics, typically collected through government sources. For New Zealand, statistics were gathered from the 2013 New Zealand Census of Population and Dwellings through Statistics New Zealand at the Territorial Authority, District, and Regional Council levels (Statistics New Zealand 2013a,b). Data were also gathered from regional council websites and reports and the Ministry for the Environment (Horizons Regional Council 2013, 2014, New Zealand Ministry for the Environment 2015, Waikato Regional Council 2015). Data for Vermont were primarily obtained from the U.S. Census Bureau’s American Community Survey 5-Year Data (2009–2017) at the town and county levels (United States Census Bureau 2017). Data were also gathered from the Vermont Secretary of State and from the Wildlands and Woodlands 2017 Report produced by Harvard Forest (Foster et al. 2017, Vermont Secretary of State 2018). The indicator data were collected to align with the five dimensions of resilience proposed in the framework, i.e., social, cultural, institutional, environmental, and economic, at the lowest spatial level possible for the towns (Table 2).

Participant recruitment and compensation

Workshop participants in New Zealand were recruited through two primary means: by using “community champions” and by advertising targeted at the general community. Champions were sought out through existing contacts known to the researchers or by contacting local agencies embedded in the community, such as community housing agencies. Champions were provided a brief of the research and asked to assist with circulating the brief to their contacts, in addition to providing recommendations for how to recruit within the given community. The workshops were then advertised more broadly via radio, newspaper, social media, and paper flyers in shop windows, and by asking local organizations to circulate the brief. At least two advertising methods were used in each location.

Participants in Vermont were recruited through Front Porch Forum, a Vermont listserv with individual community forums, and through e-mails and announcements to town clerks and elected officials. In both locations, recruitment was targeted at community members who either lived in the town or lived outside the town but used the town’s facilities and amenities and socialized in the town (Recker 2009). Participants were all > 18 years of age, and criteria for Vermont residents included having lived in the town for at least one year prior to the research being undertaken.

Participants in New Zealand were provided with morning or afternoon tea and a $20 fuel or supermarket voucher to thank them for their time. Participants in Vermont were provided with dinner and a $50 voucher for their time; they were required to travel during the middle of winter, sometimes during heavy snowfall, to attend. The difference in compensation rates between New Zealand and Vermont participants was also guided by recommendations from the associated ethics committees, which reviewed each application and the amount of compensation.

Ethics

A separate ethical approval process was undertaken for each location. In New Zealand, an application was made to the AgResearch Human Ethics Committee, which reviewed the conditions of participation and approved the research. In Vermont, Institutional Review Board approval was sought through the University of Vermont prior to recruiting subjects or beginning research (study approval 00000057). All participants were informed that participation was entirely voluntary and that their data would remain confidential, with all identifiable descriptors removed from individual responses.

Workshop methods

Workshops lasted between 2 and 2.5 h each and consisted of a series of activities, both individual and in groups, to explore concepts of resilience (Fig. 3). The workshops began with an explanation of what the work was trying to achieve, followed by several basic icebreaker activities, including introducing someone else at the table. Participants were also asked to write down one word to describe their town, and these ideas were discussed briefly with the group by the facilitators. Participants were then provided with an overview explanation of the resilience framework (Fig. 1) and what each resilience dimension included.

Next, participants were asked to identify the positives (“What aspects of [town] make you happy to live here?”) and the negatives (“What issues need addressing in [town]?”) of their town. This activity was completed with half of the room at a time, with facilitators milling around and pinning ideas up on the board. All suggestions were then themed by the facilitators (e.g., into issues relating to transport, housing quality, or health services) and presented to the group. This step served to familiarize the group with the ideas provided by their community and to provide participants with an opportunity to add any ideas they felt had been missed.

The subsequent activity involved a community quiz. Participants were divided into groups of three to six and answered questions about their towns that were based on the indicator data (Table 2). The purpose of this activity was to familiarize community members with indicator data about their town and to promote discussion about the data, including underestimation or overestimation of statistics.

The final activity involved individual participants rating their community’s resilience on the five resilience dimensions and overall. Ratings were made on a scale of 0 to 10, where 0 represented “not at all resilient”, and 10 represented “very resilient”. Participants were also asked to provide demographic details, including gender, age, ethnicity, number of years spent in the town, occupation, and organizational affiliation (if any). Group discussion occurred throughout the workshops. For the Vermont workshops, the discussion was audio recorded and transcribed for analysis (not analyzed here).

We used a community engagement process for several reasons. First, the purpose of the study was to assess how communities’ own perceptions of resilience compared with the chosen resilience indicators. Second, there are many benefits of community engagement, including improving community members’ understanding of resilience and creating a platform for knowledge and experience sharing (Pfefferbaum et al. 2015).

Limitations of data

Three key issues were identified for the indicator data collected for our study. The first issue was variability in the frequency at which the indicator data were collected. Some data are from 2013, the most recent New Zealand census, whereas other data are from 2017. The Vermont data, obtained from the American Community Survey, are “5-year data”. According to the U.S. Census Bureau, “The 5-year estimates from the ACS [American Community Survey] are ‘period’ estimates that represent data collected over a period of time. The primary advantage of using multiyear estimates is the increased statistical reliability of the data for less populated areas and small population subgroups.” However, this definition means that the indicators do not provide a consistent measure of the towns at any single point in time (as in Payne et al. 2019). Second, there were differences in the scales at which the indicator data were collected, with some New Zealand data collected at the region level, and some Vermont data collected at the state level (in particular, the environmental indicator data). This difference means that some indicator data could not be used to compare towns within the same area (for example, the environmental data for Vermont cannot be used as an indicator at the town level). Finally, there were some issues with consistency and comparability of the indicator data across the two locations. Where possible, consistent indicators were used; however, they were not always available, and those that were available were sometimes collected at a different scale (e.g., town vs. region) or for a different time period. Although not ideal, these differences do not affect the ability of our research to answer the core questions, which involved analysis within each location (in particular, assessment of whether indicator data matched community members’ ratings of resilience).

Data sorting and analysis

We compared data from two sources: community resilience ratings collected through the workshops, and official statistics used as indicators for resilience. For the community resilience ratings, six scores were collected per participant: one score on each of the five resilience dimensions from the framework (social, economic, cultural, institutional, and environmental) and a single rating of the overall resilience of the town. The question design presented the participant with a linear scale for each response rating, with the whole numbers 0 to 10 written beneath a line and endpoint hash marks. The participant was asked to make a mark on the line that reflected their opinion of resilience. These community resilience ratings were collated in Excel.

Analyses were conducted using the statistical software R (R Core Team 2017) with multiple packages, including readxl, readr, plyr, dplyr, stringr, reshape2, tidyr, RColorBrewer, ggplot2, stargazer (Hlavac 2018), and sandwich. We first checked for differences in ratings across several demographic categories, using t-tests to compare mean community resilience ratings across gender (male vs. female), age (younger vs. older than 65 years of age), and length of tenure in the community (shorter vs. longer than 10 years). Second, we estimated linear regression models using overall resilience ratings as the dependent variable and the resilience dimension ratings as the independent variables. The models were ordinary least squares (OLS) regressions estimated using the linear model (“lm”) function in base R. The equation was: Overall rating = ∑d βd (Dimension rating)d, where d indexes the five dimensions of resilience. This analysis investigated the relationship between perceptions of overall resilience and the individual dimensions and, in particular, sought to identify which dimensions had greater effects on the overall perception of community resilience (e.g., whether social resilience was driving overall resilience ratings). In addition, regressions were estimated using the sum of the resilience ratings as an independent variable, as a test of the idea that the dimensions are compensatory, i.e., that a greater score on one type of resilience can make up for a lesser score on another such that overall resilience is similar.
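For concreteness, the following is a minimal sketch of these first two steps in R, assuming a data frame of the collated ratings with one row per participant; the file name and column names are illustrative assumptions, not the authors’ actual code.

    library(readxl)

    # Collated workshop ratings: one row per participant (file and column names assumed)
    ratings <- read_excel("community_resilience_ratings.xlsx")

    # 1. Demographic comparison, e.g., social ratings by age group,
    #    where age_over_65 is an assumed two-level factor
    t.test(social ~ age_over_65, data = ratings)

    # 2. OLS regression of overall resilience on the five dimension ratings
    overall_model <- lm(overall ~ social + economic + cultural + institutional + environmental,
                        data = ratings)
    summary(overall_model)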

The next part of the analysis investigated the links between indicators (official statistics) and community resilience ratings. We used OLS linear regressions to estimate the effect of individual indicators on the related resilience dimension. The models were: Dimension rating = ∑i βi (Dimension indicator)i, where i indexes the indicators relevant to the resilience dimension. The social dimension ratings were modeled against population change, the proportion of the population that is of working age, the proportion with secondary education, the proportion with tertiary education, the proportion with access to a telephone, and the proportion with access to the internet. The economic dimension ratings were tested against distance to a city, unemployment rate, median income, and indices of industrial and occupational diversity. Cultural ratings were modeled as functions of religious participation and the proportion of the population born overseas. Institutional ratings were modeled against distance to a city and voter turnout. Indicators for the social, economic, cultural, and institutional dimensions were modeled; environmental resilience indicators were not modeled because they were oriented at the regional or state scale rather than the town scale. We report the models that had significant parameter estimates, plus a few nonsignificant models included for comparison with the overall resilience models; the remaining estimated relationships were statistically nonsignificant. After the individual ratings were modeled, the effects of individual indicators on the ratings for overall resilience were modeled.
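A sketch of one such indicator model, assuming each participant’s ratings have been joined with the indicator values for their town (the data frame and variable names below are placeholders, not the study’s actual indicator set):

    # Social dimension ratings modeled against assumed town-level indicators (OLS)
    social_model <- lm(social ~ population_change + working_age_prop +
                         secondary_ed_prop + tertiary_ed_prop +
                         phone_access_prop + internet_access_prop,
                       data = ratings_with_indicators)
    summary(social_model)

    # The same form is then repeated with overall ratings as the dependent variable
    overall_indicator_model <- lm(overall ~ population_change + tertiary_ed_prop,
                                  data = ratings_with_indicators)
    summary(overall_indicator_model)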

RESULTS

Across the two countries, a total of 96 community members participated in the research: 51 people in New Zealand, and 45 people in Vermont. Workshop numbers varied from 8 to 15 people in New Zealand, and 7 to 14 people in Vermont. Participants at the workshops represented a diverse array of ages and had spent a varied amount of time in their communities (Table 3). On average, New Zealand participants had spent 31.6 years in their community, whereas Vermont participants had spent 20.8 years in their community.

Community perceptions of resilience (resilience dimensions and overall)

The first research question was: How do community members perceive the resilience of their community across multiple dimensions and overall? We explored how communities perceived their town’s resilience across the five dimensions of resilience and overall. In all categories, Vermonters generally perceived their communities as more resilient, both overall and on individual dimensions, than did New Zealanders (Fig. 4, Table 4). In New Zealand, the overall resilience score was 5.97 (range 5.38–6.39), whereas in Vermont, it was 6.59 (range 5.42–7.00), on a scale of 0 to 10. Both New Zealand and Vermont communities perceived economic resilience to be the lowest among the five resilience dimensions (Vermont mean = 4.81, range 3.35–5.64; New Zealand mean = 4.22, range 3.38–4.86). Conversely, in New Zealand, cultural resilience was deemed to be the highest among all resilience components (mean = 6.18, range 5.75–7.04), whereas in Vermont, environmental resilience was deemed to be highest (mean = 6.91, range 6.00–7.71).

The community resilience ratings were also analyzed for differences across the demographic categories. Sex as a binary variable (male, female) did not show any significant relationship with resilience ratings. However, both age and length of time in the town (tenure) as binary variables were significantly related to community ratings (Tables 5 and 6). The social resilience ratings were related to both demographic variables: older people and people who had lived in the towns for longer rated the town’s social resilience as significantly higher. In addition, the older residents rated overall resilience higher, and those with longer tenure rated economic resilience higher. The other dimension ratings had the same pattern: older people and those with longer tenure gave higher ratings; however, these differences were not statistically significant.

Relationship of dimensions of resilience to overall resilience

The second research question was: Do community members’ ratings on the dimensions of resilience correlate with their overall ratings of resilience? We investigated how community members’ perceptions of dimensions of resilience influenced perceptions of overall ratings of resilience, with the expectation that they would be significantly related. The combined model with data from both countries (N = 95) found that community resilience ratings on all dimensions contributed significantly to overall perceptions of resilience (Table 6). The dimensions with the highest effects on perceptions of overall resilience were social and institutional dimensions. The economic dimension had the smallest effect of the five dimensions.

The same model was estimated for each country individually, and differences were evident between Vermont and New Zealand. In Vermont, community members’ ratings on the economic, institutional, and environmental dimensions of resilience were significantly related to community members’ overall ratings of resilience. The parameter for cultural resilience was nearly as large as the parameter for economic resilience, but large variation in community members’ ratings meant that the parameter was not significant. In New Zealand, community members’ ratings on the social, cultural, and institutional dimensions of resilience were significantly related to their overall ratings of resilience. The estimated parameter for the economic dimension was essentially zero, and the environmental parameter was small and statistically nonsignificant. The modeling results suggest that community members’ ratings of the multiple dimensions of resilience do affect their overall ratings of resilience, and that there are differences in the relationship across the dimensions and between the two countries (Table 7).

Another question with regard to the dimensions was the extent to which they could compensate for each other. A participant might have perceived that a community was strong on one dimension and weaker on another, with the stronger dimension making up for, or compensating for, the weaker one. One way to test for compensation was to compare a model that treated the dimensions separately with one that combined them into a single score, with the overall ratings as the dependent variable. The fit of both models was essentially the same, and all parameters were statistically significant across both models (Table 8). The results suggest that both ways of interpreting the data are correct: each dimension contributed to overall resilience, some a bit more and some a bit less, but they also collectively contributed to resilience in a compensatory way.
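A sketch of this comparison in R, continuing the hypothetical ratings data frame used above: fit one model with the five dimensions entered separately and one with their sum, and compare the fit statistics (similar fit is consistent with compensatory dimensions).

    separate_model <- lm(overall ~ social + economic + cultural + institutional + environmental,
                         data = ratings)
    summed_model   <- lm(overall ~ I(social + economic + cultural + institutional + environmental),
                         data = ratings)

    # Similar adjusted R-squared values across the two specifications would
    # suggest that dimensions can compensate for one another
    c(separate = summary(separate_model)$adj.r.squared,
      summed   = summary(summed_model)$adj.r.squared)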

The third research question concerned the relationships between perceptions of resilience (the ratings) and possible indicators of resilience: Do community members’ ratings on the dimensions of resilience correlate with indicators of resilience (official statistics)? Regression models were estimated to test several relationships, where each dimension of resilience was paired with the indicators hypothesized to be associated with it. These models found few relationships between indicators and participants’ perceptions of resilience for each dimension (Table 9). Under the social resilience dimension, a few indicators were significantly related to social resilience ratings: population change, the proportion of the population with tertiary education, and telephone access. However, none of the linear models explained a large part of the variation in the ratings. An interesting result was that telephone access was negatively related to social ratings, although, again, the model explains only approximately 10% of the variation in ratings. Two other models are included for comparison to the models for overall ratings, but are not significant in themselves (Table 9). The nonsignificant results are also informative: indicators that have been suggested or used as proxy measures for resilience and related constructs (e.g., unemployment rate) were not significantly or reliably related to participants’ perceptions of community resilience.

The final research question moved from the individual dimensions of resilience to community members’ overall perceptions of resilience and asked: Do community members’ ratings of overall resilience correlate with indicators of resilience (official statistics)? There was some consistency with the dimensional models, with the same pattern of size and significance for the social and overall ratings (Table 10). Population change and tertiary education were predictive of both social ratings and overall resilience ratings. Other indicators were significant for one but not the other. Secondary education was not a significant predictor of social ratings, but was significant for the overall ratings. In contrast, telephone access was related to social ratings but not to the overall ratings. The proportion of the population born overseas was significantly related to the overall ratings, but not to the cultural ratings. Importantly, all of the model fit statistics were low. Finally, it is important to acknowledge the relationships that were nonsignificant. No indicators predicted either economic or institutional ratings: median income and unemployment did not predict participants’ economic ratings, and voter turnout did not predict institutional ratings.

DISCUSSION

At a broad scale, our findings suggest that the resilience framework used here (Fielke et al. 2017) can be applied across countries. This result answers the critique that existing broad resilience conceptual frameworks may not work across different settings (Robinson and Carson 2016). Second, our findings suggest that there is not one (or multiple) dimension of resilience that drives communities’ perceptions of overall resilience, either within a location (New Zealand vs. Vermont) or between locations. Community members of each town report different strengths and different drivers of overall resilience. Third, there appear to be two proxy measures of resilience: community perceptions and indicators in the form of official statistics. It is not clear which is a better measure for a community, and these two measures are not consistently or strongly related to one another. In the remaining discussion, we examine the results for each research question.

Community perceptions of resilience

Vermont participants rated their resilience as higher than New Zealand residents, both across individual resilience dimensions and overall. In New Zealand, cultural resilience was rated strongest, whereas in Vermont, environmental resilience was rated strongest. This result is interesting in that different communities perceive themselves as being more or less resilient in different areas, but also in that there is not a consistent relationship between the dimensions that are viewed most positively and the dimensions that actually drive overall resilience ratings. That is, New Zealand participants’ overall resilience ratings were driven by the social, cultural, and institutional dimensions of resilience, whereas Vermont participants’ ratings were driven by the economic, institutional, and environmental dimensions. This result demonstrates that there are different drivers in different locations (i.e., there is no single key dimension of resilience that can improve overall perceptions of resilience) and that community members do not simply put on their “rose-tinted glasses” and see the strongest resilience dimensions as driving overall resilience. Similarly, community members do not see the weakening of one type of resilience as the determinant of overall resilience. This result may be because the dimension of resilience that has been rated as lower (e.g., economic) has not fallen below a critical threshold (Wilson 2012). Regardless, this finding certainly suggests complexity in how community members conceptualize their town’s resilience as an interplay between context-specific variables.

One important theory in the literature rests on the premise that social resilience (variously termed social capital or social learning) is a critical factor underpinning community resilience (Wilson 2012). This school of thought sees social resilience as a key determinant of adaptive capacity: the ability of communities to learn, develop new resilience trajectories, and ultimately take control of their own resilience pathway (Chaskin 2008). When defined in the literature, this notion of social resilience or social capital extends to include organizational structures, social control, access to opportunities and institutions, and structures that connect the community (Chaskin 2008, Davidson 2010, Wilson 2012). Essentially, these definitions encompass critical components of the institutional dimension of resilience outlined in the resilience framework. Given this overlap, it is perhaps unsurprising that institutional resilience drives overall resilience ratings in both locations; nevertheless, it remains unexplained why the other drivers differ between the two local contexts.

The fact that communities in both locations rate their economic resilience as lowest may be explained by both countries being highly reliant on the primary industries, both for GDP and for providing the livelihoods of rural community members. This result may reflect overdependence on a “monocultural economic base” (Bardhan 2006, Cutter et al. 2010, Wilson 2012). Even for those operating multifunctional farm systems, pressures around environmental sustainability, regulatory and policy changes, and consumer demands are affecting the viability of existing farming practices (Brown et al. 2019; M. T. Niles, unpublished report, https://docs.wixstatic.com/ugd/64f510_876da5aced994329a359ecc5b4247577.pdf), which may be contributing to perceptions of reduced economic resilience in these rural communities.

After considering which individual resilience dimensions drive overall ratings of resilience, a second critical concept we aimed to examine was how the resilience dimensions related to overall resilience. That is, are they five separate constructs that each contribute in a linear way toward perceptions of overall resilience, or do they collectively contribute to resilience in a compensatory way in which more of one can make up for less of another? The literature suggests a high degree of interdependence between resilience types, whereby changes in one dimension of resilience affect changes in another (Kinzig et al. 2006, Wilson 2012). We found that each of the resilience dimensions does in fact provide information about overall resilience, but simultaneously, if scores on a particular resilience dimension are low, overall resilience ratings rely on the other dimensions of resilience to determine the overall level of resilience. Essentially, this result suggests that if a community does not perceive strengths in one type of resilience, it falls back on another type. This is somewhat contrary to Wilson (2012:34), who suggests that “a small change in one of the capitals [dimensions] can propel a community towards strengthened or weakened resilience”. Instead, it appears that scores on the other resilience dimensions may dampen the effect of high or low resilience on a particular dimension.

Correlation between community perceptions and official statistics

The fact that there is no statistically significant relationship between community members’ ratings of resilience and most of the selected indicators of resilience (and that the relationships that do exist are weak) may be explained in several ways. It could result from the small number of community members who participated in the research: the sample may not be large enough to provide sufficient statistical power or representative data, and therefore, responses are not strongly or (in most cases) significantly related to indicator data. This concern is reasonable, given that the sample represents a very small proportion of each community.

The absence of a strong and consistent relationship between community members’ perceptions of resilience and indicators may also be partially explained by the fact that “factors defining resilience come clustered together” (Wilson 2012:33), whereby higher scores on economic indicators can affect social resilience or lower scores on environmental resilience might affect institutional resilience. This result would suggest a nonlinear relationship between indicators and resilience, or resilience overall (Barrett and Constas 2014), which our analysis may not have detected. Thus, it may be more productive for communities to create an “index of resilience” from a composite of indicators (e.g., see Cutter et al. 2010) and to assess whether it is significantly related to communities’ perceptions of their resilience.
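As an illustration of what such a composite index might look like (a simple, unweighted sketch with placeholder indicator names, not the Cutter et al. 2010 procedure), each indicator could be standardized, reversed where a higher value implies lower resilience, and then averaged:

    library(dplyr)

    # town_indicators is an assumed data frame of one row per town
    composite <- town_indicators %>%
      mutate(
        z_income       = as.numeric(scale(median_income)),
        z_turnout      = as.numeric(scale(voter_turnout)),
        z_unemployment = -as.numeric(scale(unemployment_rate))  # reversed: higher unemployment, lower resilience
      ) %>%
      mutate(resilience_index = (z_income + z_turnout + z_unemployment) / 3)

The resulting index could then be tested against community members’ perceptions in the same regression framework described in the Methods.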

Despite these alternative explanations for our findings, it is also worth considering that the data are genuinely indicative of community resilience and that our analysis methods would have detected a relationship had one existed. This idea raises a critical question: Which of these metrics (community perceptions vs. official statistics) is a better measure of a community’s resilience? One way of examining this question is to consider that a community’s resilience is determined by how the community responds to a shock or change (Wilson 2012, Robinson and Carson 2016). This idea means that, ultimately, it is about people and their ability to draw on social capital resources. Thus, it is almost irrelevant what indicators say about a community; what matters is how communities respond in real life, which is likely to be more closely related to their perceptions of their community than to selected indicators. This concept is much more difficult to assess outside of the context of disaster resilience, however, because in a disaster, indicators may be more linearly related to the ability to recover. For example, the proportion of community members who have access to a phone is likely to be strongly related to giving people advance warning of a disaster and coordinating rescue and recovery. In contrast, it is difficult to determine the relationship between access to a phone and the ability of a town to recover from phenomena such as climate change or rural depopulation.

Nevertheless, there are many strengths to using an indicator-based approach, particularly when those indicators are constructed into a composite resilience index using a structured process, as has been done in the disaster resilience space (Cutter et al. 2010). This approach allows the calculation of an index for different scales and across locations, allows measurement over time, and, as such, provides a baseline measure. This function is a strength for use within the public policy sector and for deciding where to allocate resources using an objective approach. It would be much more difficult (if not impossible) to collect regular, representative, and widespread perceptions of community resilience.

Wilson (2012) negotiates a “middle ground”, recommending that all quantitative assessments of community resilience (such as indicators) be used with at least one other measure. This may be a useful approach: calculating baseline resilience index scores for communities within a given location and then “ground-truthing” them with supplementary data collected from communities themselves. As Cutter et al. (2010) put it, indicators may be the first “‘broad brush’ of the patterns of disaster resilience”, and this situation may also be the case with resilience to slow-moving change.

For communities, and for the researchers who may work with communities to study resilience, our work suggests several important practical takeaway messages. First, the best method may be to define resilience to include multiple components and then measure those multiple components. There was no single dimension of resilience that drove an overall perception of community resilience. Our results further suggest that when one aspect of resilience is lower, higher resilience in other areas may make up for that in an overall resilience perception. Thus, focusing on only one dimension of resilience could result in an inaccurate picture of overall community perceptions of resilience. Second, indicators of community resilience based on government data may not accurately reflect on-the-ground perceptions from community members. Although indicators can be useful to provide a general comparison on some metrics across communities, they do not appear to correlate well with what community members think of their own communities. The scale of these indicators is also critical to consider, especially because many indicators are not measured at local community scales, which may further impede the capacity of indicators to reflect community perceptions. Finally, our results suggest that researchers need “to get their hands dirty”. That is, if they want to understand community resilience, they need to talk with the people in that community because they may not obtain an accurate picture from indicators on paper. An on-the-ground approach also provides context for quantitative metrics, allowing community members to provide insight into their answers and enrich quantitative data with qualitative data.

Limitations

A key limitation of this research was the number of local community members who participated. The level of participation was determined by the chosen format for data collection (i.e., workshops, which prohibited having > 15 participants) and by the number of community members who would attend. Participant numbers therefore affect the representativeness of the data collected, which is a common issue in resilience measurement that incorporates community engagement (Wilson 2012). We aimed to maximize the representativeness of the sample by ensuring that recruitment was targeted at a diverse range of audiences (ages, occupations, and ethnic groups). Participant numbers also affected the statistical power, which may explain why some findings were nonsignificant or weaker than expected. Another limitation is the range of data available for our quantitative analysis at a community level. Although we found some indicators that were localized to a specific town, others were regional. We acknowledge that this mixture of scales presents challenges for interpreting our results, especially the regression analyses, because an individual town may not match the regional-level data, so we caution against interpreting regional indicators at a local level. However, the lack of data available at the community level further highlights the need to improve data collection to facilitate more fine-scale analyses at community or even subcommunity levels.

Areas for future research

Future research may consider the development of a method to scale up the process of measuring resilience using community perceptions, as demonstrated here. A limitation of the current research is that workshops require substantial preparation, are costly, and depend on achieving engagement from a wide-ranging and representative group of community members. Further testing with a larger sample would permit a more powerful analysis of the relationship between community members’ ratings and proposed resilience indicators.

It would also be useful to consider testing which measure of resilience, i.e., community members’ ratings vs. indicators, is a more accurate predictor of actual community resilience over time. As Wilson (2012) explains, the only accurate measure of resilience is made over time. We know that these two data sources provide different results, but ultimately, which is a more accurate proxy?

ACKNOWLEDGMENTS

We thank the many community members and stakeholders in the study areas who participated or who assisted with recruitment of community members.

DATA AVAILABILITY

The data that support the findings of this study are available on request from the corresponding author (MTN). The data are not publicly available because they contain information that could compromise the privacy of the research participants and the sensitivity of the town geographic locations. Human subjects approval for this work in New Zealand was obtained from the social research ethical review process at AgResearch Limited, which is within the guidelines of the Code of Ethics developed by the New Zealand Association of Social Science Researchers. The human subjects approval for this work in Vermont was obtained from the Institutional Review Board at the University of Vermont.

LITERATURE CITED

Adger, W. N. 2000. Social and ecological resilience: Are they related? Progress in Human Geography 24(3):347-364. https://doi.org/10.1191/030913200701540465

Adger, W. N., H. Eakin, and A. Winkels. 2009. Nested and teleconnected vulnerabilities to environmental change. Frontiers in Ecology and the Environment 7(3):150-157. https://doi.org/10.1890/070148

Altendorfer, I., D. Holland, A. Isaacson, and A. Gierzynski. 2010. Economic impact of agriculture in Vermont. Vermont Legislative Research Service, University of Vermont, Burlington, Vermont, USA. [online] URL: https://www.uvm.edu/~vlrs/Agriculture/agric%20econ%20impact.pdf

Bardhan, P. 2006. Globalization and rural poverty. World Development 34(8):1393-1404. https://doi.org/10.1016/j.worlddev.2005.10.010

Barrett, C. B., and M. A. Constas. 2014. Toward a theory of resilience for international development applications. Proceedings of the National Academy of Sciences 111(40):14625-14630. https://doi.org/10.1073/pnas.1320880111

Béné, C., T. Frankenberger, and S. Nelson. 2015. Design, monitoring and evaluation of resilience interventions: conceptual and empirical considerations. IDS Working Paper 459. Institute of Development Studies, London, UK. [online] URL: https://www.ids.ac.uk/publications/design-monitoring-and-evaluation-of-resilience-interventions-conceptual-and-empirical-considerations/

Brown, M., B. Kaye-Blake, and P. Payne, editors. 2019. Heartland strong: how rural New Zealand can change and thrive. Massey University Press, Auckland, New Zealand.

Chaskin, R. J. 2008. Resilience, community, and resilient communities: conditioning contexts and collective action. Child Care in Practice 14(1):65-74. https://doi.org/10.1080/13575270701733724

Chuang, W. C., A. Garmestani, T. N. Eason, T. L. Spanbauer, H. B. Fried-Peterson, C. P. Roberts, S. M. Sundstrom, J. L. Burnett, D. G. Angeler, B. C. Chaffin, L. Gunderson, D. Twidwell, and C. R. Allen. 2018. Enhancing quantitative approaches for assessing community resilience. Journal of Environmental Management 213:353-362. https://doi.org/10.1016/j.jenvman.2018.01.083

Cochrane, W., and D. Maré. 2017. Urban influence and population change in New Zealand. Policy Quarterly 13(Supplementary):61-71. https://doi.org/10.26686/pq.v13i0.4556

Constas, M. A., T. R. Frankenberger, and J. Hoddinott. 2014. Resilience measurement principles: toward an agenda for measurement design. Technical Series 1. Food Security Information Network, Food and Agriculture Organization, and World Food Programme, Rome, Italy. [online] URL: https://www.fsnnetwork.org/resource/resilience-measurement-principles-toward-agenda-measurement-design

Cutter, S. L., L. Barnes, M. Berry, C. Burton, E. Evans, E. Tate, and J. Webb. 2008. A place-based model for understanding community resilience to natural disasters. Global Environmental Change 18(4):598-606. https://doi.org/10.1016/j.gloenvcha.2008.07.013

Cutter, S. L., C. G. Burton, and C. T. Emrich. 2010. Disaster resilience indicators for benchmarking baseline conditions. Journal of Homeland Security and Emergency Management 7(1):51. https://doi.org/10.2202/1547-7355.1732

Cutter, S. L., K. D. Ash, and C. T. Emrich. 2014. The geographies of community disaster resilience. Global Environmental Change 29:65-77. https://doi.org/10.1016/j.gloenvcha.2014.08.005

Cutter, S. L. 2016. The landscape of disaster resilience indicators in the USA. Natural Hazards 80:741-758. https://doi.org/10.1007/s11069-015-1993-2

Darnhofer, I. 2014. Resilience and why it matters for farm management. European Review of Agricultural Economics 41(3):461-484. https://doi.org/10.1093/erae/jbu012

Davidson, D. J. 2010. The applicability of the concept of resilience to social systems: some sources of optimism and nagging doubts. Society and Natural Resources 23(12):1135-1149. https://doi.org/10.1080/08941921003652940

Emery, M., S. Fey, and C. Flora. 2006. Using community capitals to develop assets for positive community change. CD Practice 13:1-19. [online] URL: http://srdc.msstate.edu/fop/levelthree/trainarc/socialcapital/communitycapitalstodevelopassets-emeryfeyflora2006.pdf

Farmland Information Center. 2017. Vermont data and statistics: census of agriculture. United States Department of Agriculture and American Farmland Trust, Washington, D.C., USA. [online] URL: https://www.farmlandinfo.org/statistics/vermont

Fielke, S., W. Kaye-Blake, W. Smith, and R. Vibart. 2017. Operationalising rural community resilience: framing indicators for measurement. Report for Resilient Rural Communities Programme. AgResearch, Hamilton, New Zealand.

Foster, D., K. F. Lambert, D. Kittredge, B. Donahue, C. Hart, W. Labich, S. Meyer, J. Thompson, M. Buchanan, J. Levitt, R. Perschel, K. Ross, G. Elkins, C. Daigle, B. Hall, E. Faison, A. D’Amato, R. Forman, P. Del Tredici, L. Irland, B. Colburn, D. Orwig, J. Aber, A. Berger, C. Driscoll, W. Keeton, R. Lilieholm, N. Pederson, A. Ellison, M. Hunter, and T. Fahey. 2017. Wildlands and woodlands, farmlands and communities: broadening the vision for New England. Harvard Forest, Harvard University, Petersham, Massachusetts, USA. [online] URL: https://www.wildlandsandwoodlands.org/sites/default/files/Wildlands%20and%20Woodlands%202017%20Report.pdf

Hlavac, M. 2018. Stargazer: well-formatted regression and summary statistics tables. R package version 5.2.2. [online] URL: https://CRAN.R-project.org/package=stargazer

Horizons Regional Council. 2013. 2013 state of environment. Horizons Regional Council, Palmerston North, New Zealand. [online] URL: https://www.horizons.govt.nz/CMSPages/GetFile.aspx?guid=725c8a67-ff40-4962-b728-62430f38e82c&disposition=attachment

Horizons Regional Council. 2014. One plan: the consolidated regional policy statement, regional plan and regional coastal plan for the Manawatu-Wanganui region. Horizons Regional Council, Palmerston North, New Zealand. [online] URL: https://www.horizons.govt.nz/CMSPages/GetFile.aspx?guid=ad4efdf3-9447-45a3-93ca-951136c7f3b3

Jordan, E., and A. Javernick-Will. 2012. Measuring community resilience and recovery: a content analysis of indicators. Pages 2190-2199 in H. Cai, A. Kandil, M. Hastak, and P. S. Dunston, editors. Construction research congress 2012: construction challenges in a flat world. American Society of Civil Engineers, Reston, Virginia, USA. https://doi.org/10.1061/9780784412329.220

Kinzig, A. P., P. Ryan, M. Etienne, H. Allison, T. Elmqvist, and B. H. Walker. 2006. Resilience and regime shifts: assessing cascading effects. Ecology and Society 11(1):20. https://doi.org/10.5751/ES-01678-110120

Matarrita-Cascante, D., B. Trejos, H. Qin, D. Joo, and S. Debner. 2017. Conceptualizing community resilience: revisiting conceptual distinctions. Community Development 48(1):105-123. https://doi.org/10.1080/15575330.2016.1248458

Morin, R. S., G. M. Domke, B. F. Walters, and S. Wilmot. 2017. Forests of Vermont, 2016. Resource Update FS-119. United States Department of Agriculture Forest Service, Northern Research Station, Newtown Square, Pennsylvania, USA. [online] URL: https://www.fs.fed.us/nrs/pubs/ru/ru_fs119.pdf

Mulligan, M., W. Steele, L. Rickards, and H. Fünfgeld. 2016. Keywords in planning: What do we mean by ‘community resilience’? International Planning Studies 21(4):348-361. https://doi.org/10.1080/13563475.2016.1155974

New Zealand Ministry for Primary Industries. 2017. Briefing for incoming Ministers. Ministry for Primary Industries, Wellington, New Zealand. [online] URL: https://www.beehive.govt.nz/sites/default/files/2017-12/Ministry%20for%20Primary%20Industries.pdf

New Zealand Ministry for the Environment. 2015. Estimated long-term soil erosion: average volume of soil erosion, by region, 2012. New Zealand Ministry for the Environment, Wellington, New Zealand. [online] URL: https://data.mfe.govt.nz/table/52483-estimated-long-term-soil-erosion-average-volume-of-soil-erosion-by-region-2012/history/

New Zealand Ministry for the Environment and Statistics New Zealand. 2018. New Zealand’s environmental reporting series: our land 2018. Ministry for the Environment and Statistics New Zealand, Wellington, New Zealand. [online] URL: https://www.mfe.govt.nz/sites/default/files/media/RMA/Our-land-201-final.pdf

Norris, F. H., S. P. Stevens, B. Pfefferbaum, K. F. Wyche, and R. L. Pfefferbaum. 2008. Community resilience as a metaphor, theory, set of capacities, and strategy for disaster readiness. American Journal of Community Psychology 41:127-150. https://doi.org/10.1007/s10464-007-9156-6

Payne, P. R., W. H. Kaye-Blake, K. A. Stirrat, R. A. Ellison, M. J. Smith, and M. Brown. 2019. Identifying resilience dimensions and thresholds: evidence from four rural communities in New Zealand. Resilience 7(2):149-171. https://doi.org/10.1080/21693293.2018.1545339

Pfefferbaum, B., R. L. Pfefferbaum, and R. L. Van Horn. 2014. Community resilience interventions: participatory, assessment-based, action-oriented processes. American Behavioral Scientist 59(2):238-253. https://doi.org/10.1177/0002764214550298

Quinlan, A. E., M. Berbés-Blázquez, L. J. Haider, and G. D. Peterson. 2016. Measuring and assessing resilience: broadening understanding through multiple disciplinary perspectives. Journal of Applied Ecology 53(3):677-687. https://doi.org/10.1111/1365-2664.12550

R Core Team. 2017. R: a language and environment for statistical computing. Version 3.3.3. R Foundation for Statistical Computing, Vienna, Austria.

Recker, N. L. 2009. Resilience in small towns: an analysis of economic shocks, social capital, and quality of life. Dissertation. Iowa State University, Ames, Iowa, USA. https://doi.org/10.31274/etd-180810-1675

Robinson, G. M., and D. A. Carson. 2016. Resilient communities: transitions, pathways and resourcefulness. Geographical Journal 182(2):114-122. https://doi.org/10.1111/geoj.12144

Sendzimir, J., P. Magnuszewski, Z. Flachner, P. Balogh, G. Molnar, A. Sarvari, and Z. Nagy. 2007. Assessing the resilience of a river management regime: informal learning in a shadow network in the Tisza River basin. Ecology and Society 13(1):11. [online] URL: http://www.ecologyandsociety.org/vol13/iss1/art11/

Sharifi, A. 2016. A critical review of selected tools for assessing community resilience. Ecological Indicators 69:629-647. https://doi.org/10.1016/j.ecolind.2016.05.023

Statistics New Zealand. 2004. New Zealand: an urban/rural profile. Statistics New Zealand, Wellington, New Zealand. [online] URL: http://infoshare.stats.govt.nz/~/media/Statistics/browse-categories/maps-and-geography/geographic-areas/urban-rural-profile/maps/nz-urban-rural-profile-report.pdf

Statistics New Zealand. 2013a. 2013 census meshblock data set. Statistics New Zealand, Wellington, New Zealand.

Statistics New Zealand. 2013b. 2013 census quickstats about a place. Statistics New Zealand, Wellington, New Zealand. [online] URL: http://infoshare.stats.govt.nz/Census/2013-census/profile-and-summary-reports/quickstats-about-a-place.aspx#14181&gsc.tab=0

Statistics New Zealand. 2020. Urban rural 2020 (generalised). Statistics New Zealand, Wellington, New Zealand. [online] URL: https://datafinder.stats.govt.nz/layer/104269-urban-rural-2020-generalised/webservices/

Steiner, A., and M. Markantoni. 2014. Unpacking community resilience through capacity for change. Community Development Journal 49(3):407-425. https://doi.org/10.1093/cdj/bst042

United States Census Bureau. 2010. American factfinder: urban and rural total population Vermont State. United States Census Bureau, Washington, D.C., USA. [online] URL: https://data.census.gov/cedsci/table?q=urban%20rural%20population&g=0400000US50.050000&tid=DECENNIALSF12010.H2&hidePreview=false

United States Census Bureau. 2017. U.S. Census Bureau’s American community survey 5-year data (2009–2017), at the town and county level. United States Census Bureau, Washington, D.C., USA. [online] URL: https://www.census.gov/acs/www/data/data-tables-and-tools/subject-tables/

University of Vermont Extension and Vermont Housing and Conservation Board. 2018. A 2018 exploration of the future of Vermont agriculture: ideas to seed a conversation and a call to action. University of Vermont Extension, Burlington, Vermont, USA. [online] URL: https://www.uvm.edu/sites/default/files/media/Future-of-VT-Ag-Report-2018-Final_5.pdf

Vermont Secretary of State. 2018. 2018 general election results (official): 2018 GE voter turnout (excel). Vermont Secretary of State, Burlington, Vermont, USA. [online] URL: https://sos.vermont.gov/media/mzrbaxb5/2018-ge-voterturnout.xlsx

Waikato Regional Council. 2015. Waikato progress indicators: report cards: indigenous vegetation. Waikato Regional Council, Hamilton, New Zealand. [online] URL: https://www.waikatoregion.govt.nz/community/waikato-progress-indicators-tupuranga-waikato/indigenous-vegetation/

Wilson, G. A. 2010. Multifunctional ‘quality’ and rural community resilience. Transactions of the Institute of British Geographers 35(3):364-381. [online] URL: https://www.jstor.org/stable/40890993

Wilson, G. A. 2012. Community resilience and environmental transitions. Routledge, London, UK. https://doi.org/10.4324/9780203144916

Address of Correspondent:
Meredith T. Niles
109 Carrigan Drive
350 Carrigan Wing
University of Vermont
Burlington, VT 05405 USA
mtniles@uvm.edu