Ecology and Society
The following is the established format for referencing this article:
Moon, K., T. D. Brewer, S. R. Januchowski-Hartley, V. M. Adams, and D. A. Blackman. 2016. A guideline to improve qualitative social science publishing in ecology and conservation journals. Ecology and Society 21(3):17.
http://dx.doi.org/10.5751/ES-08663-210317
Insight

A guideline to improve qualitative social science publishing in ecology and conservation journals

1Institute for Applied Ecology, University of Canberra, 2School of Business, University of New South Wales, 3Northern Institute, Charles Darwin University, Darwin, Northern Territory, Australia, 4Australian Institute of Marine Science, Arafura Timor Research Facility, Brinkin, Northern Territory, Australia, 5Laboratoire Evolution et Diversité Biologique, Université Paul Sabatier, Toulouse, France, 6University of Queensland, School of Biological Sciences, Brisbane, Queensland, Australia

ABSTRACT

A rise in qualitative social science manuscripts published in ecology and conservation journals speaks to the growing awareness of the importance of the human dimension in maintaining and improving Earth’s ecosystems. Given the rise in the quantity of qualitative social science research published in ecology and conservation journals, it is worthwhile quantifying the extent to which this research is meeting established criteria for research design, conduct, and interpretation. Through a comprehensive review of this literature, we aimed to gather and assess data on the nature and extent of information on research design presented in published qualitative research articles, which could be used to judge research quality. Our review was based on 146 studies from across nine ecology and conservation journals. We reviewed and summarized elements of quality that could be used by reviewers and readers to evaluate qualitative research (dependability, credibility, confirmability, and transferability); assessed the prevalence of these elements in research published in ecology and conservation journals; and explored the implications of sound qualitative research reporting for applying research findings. We found that dependability and credibility were reasonably well reported, albeit poorly developed in relation to critical aspects of qualitative social science such as methodology and triangulation, including reflexivity. Confirmability was, on average, inadequately accounted for, particularly with respect to researchers’ ontology, epistemology, or philosophical perspective and their choice of methodology. Transferability was often poorly developed in terms of triangulation methods and the suitability of the sample for answering the research question/s. Based on these findings, we provide a guideline that may be used to evaluate qualitative research presented in ecology and conservation journals, to help secure the role of qualitative research and its application to decision making.
Key words: case study; confirmability; credibility; dependability; methods; transferability

INTRODUCTION

Researchers are becoming increasingly aware of the importance of integrating social and ecological sciences to improve the ongoing, long-term sustainable use of natural resources. Integration of these two disciplines has resulted in a rise in both quantitative and qualitative social science research being published in ecology and conservation journals (Cox 2015; Fig. A1.1). The growth of qualitative social science research (herein referred to as qualitative research) can be attributed to an increasing recognition of its value in seeking to define and understand complexity rather than to reduce it (Creswell 2009); expanding the range of research questions that can be asked (Prokopy 2011); providing an in-depth understanding of phenomena, including the role of politics and power relations (Belsky 2004, Creswell 2009); understanding human behavior in support of decision making and policy processes (Cowling 2014); illuminating the social, historical, cultural, political, and economic factors that affect action (Sayre 2004); developing and testing social theories and hypotheses (Bryman 2008); and revealing the importance of contextual understandings and individual experiences (Creswell 2009, Moon and Blackman 2014).

Qualitative research is defined by the philosophical nature of the inquiry, that is, the ontologies, epistemologies, and methodologies that researchers adopt during the design of their research projects, and the associated assumptions they make when collecting, analyzing, and interpreting their data (Khagram et al. 2010). Authors must provide sufficient information on these aspects of their research design to enable readers to determine its quality, namely, the dependability, credibility, confirmability, and transferability of the research.

An increasing number of authors, however, are voicing concerns about the quality of social research being published, including qualitative research. Assessing the quality of social research can be difficult or impossible when authors do not provide sufficient information on their research design and data interpretation (St. John et al. 2014). When the quality of research cannot be established, the application of the findings will not necessarily lead to anticipated outcomes (Fox et al. 2006). This consequence is particularly concerning when research has potential application in real-world social-ecological systems, where poor quality research can lead to irreversible social or ecological damage, or loss of species (Cifdaloz et al. 2010). In environmental management and economics (Naidoo et al. 2006) and quantitative environmental social science (Cox 2015), concerns have resulted in reviews and recommendations to improve the quality of published research. Given the rise in qualitative research being published in ecology and conservation journals (Fig. A1.1), we considered it timely to conduct a review and develop a guideline for qualitative research in this field.

Thus, the objectives of this study were the following: (1) to review and summarize four elements of quality that can enable reviewers and end-users to evaluate qualitative research (dependability, credibility, confirmability, and transferability); (2) to assess the prevalence of these elements in research published in the ecology and conservation literature; and (3) to provide a guideline to improve qualitative research reporting to increase its quality and usefulness for application in social-ecological systems.

ELEMENTS OF QUALITY IN QUALITATIVE RESEARCH

In conducting this review, we adopted the position of Hammersley (2007), who explains that assessing the quality of both qualitative and quantitative research is largely a matter of implicit judgment, guided by methodological principles. Formulations of criteria to assess quality arise from the process of judgment, which is applied selectively on the basis of the objectives of the research and the knowledge claims that are made (Hammersley 2007). The use of criteria to judge qualitative research, he argues, is not designed to produce a universal set of rules for assessment, but rather to consider how they could relate to each other in different contexts, and how they are used in the development of research practice. He cautions that criteria are not a substitute for sound judgment, in research or practice, but suggests their use is to establish agreement about judgments of good and poor quality research.

We use Guba’s (1981) elements of quality criteria for naturalistic inquiry, which are commonly applied in the social sciences to assess the trustworthiness and transparency of qualitative research. They correspond closely with criteria commonly used to assess quantitative research: dependability with reliability, credibility with internal validity, confirmability with objectivity, and transferability with external validity and generalizability (Lincoln and Guba 1985, Sandelowski 1986, Cutcliffe and McKenna 1999, Streubert 2007). As such, they will hold a level of familiarity for natural scientists, who often engage with, and even review, social research. Therefore, reviewing research against these criteria can assist both natural and social scientists in understanding exactly how the research was conducted and how the new knowledge was generated (Hammersley 2007).

We stress that the criteria we used in our review are not intended as a definitive list for assessing the quality of qualitative research, but as a starting point and flexible guide for scientists and practitioners to engage with and judge qualitative research (Hammersley 2007). Further, we did not judge the manuscripts in our review as good or bad against these criteria. Rather, we examined whether the manuscripts provided adequate information in relation to each criterion to enable a reader to make an informed judgment about the quality of the research, in terms of what was done, why it was done, and why it was appropriate for the specific context of the research. Below, we relate the elements of quality to policy and practice as a way to demonstrate their applied relevance, noting that theoretical and methodological relevance is also important.

Dependability

How can one determine whether the findings of an inquiry would be consistently repeated if the inquiry were replicated with the same (or similar) subjects (respondents) in the same (or similar) context? (Guba 1981:80).

Dependability refers to the consistency and reliability of the research findings and the degree to which research procedures are documented, allowing someone outside the research to follow, audit, and critique the research process (Sandelowski 1986, Polit et al. 2006, Streubert 2007). As a quality measure, dependability is particularly relevant to ecological and conservation science applications that are in the early stages of testing findings in multiple contexts to increase the confidence in the evidence (Adams et al. 2014). Detailed coverage of the methodology and methods employed allows the reader to assess the extent to which appropriate research practices have been followed (Shenton 2004). Researchers should document research design and implementation, including the methodology and methods, the details of data collection (e.g., field notes, memos, the researcher’s reflexivity journal), and reflective appraisal of the project (Shenton 2004, Polit et al. 2006, Streubert 2007). Reflexivity, i.e., a self-assessment of subjectivity, can reduce bias (when appropriate to do so) and increase dependability by increasing transparency of the research process (Guba 1981, Malterud 2001, D’Cruz et al. 2007, Tong et al. 2007).

Credibility

How can one establish confidence in the “truth” of the findings of a particular inquiry for the subjects (respondents) with which and the context in which the inquiry was carried out? (Guba 1981:79).

Credibility refers to the degree to which the research represents the actual meanings of the research participants, or the “truth value” (Lincoln and Guba 1985). The credibility of research findings that are used to make policy recommendations is particularly important for ecosystem management; assessing the extent to which the reader believes the recommendations are credible has implications for the expected success of implementation. When evaluating qualitative research, credibility stems from the intended research purposes, and credible research decisions are those that are consistent with the researchers’ purpose (Patton 2002), requiring researchers and practitioners to think critically and contextually when judging methodological decision making. Credibility can be demonstrated through strategies such as data and method triangulation (use of multiple sources of data and/or methods; Padgett 2008); peer debriefing (sharing questions about the research process and/or findings with peers who provide an additional perspective on analysis and interpretation); and member checking (returning findings to participants to determine if the findings reflect their experiences; Creswell and Miller 2000, Padgett 2008). Both credibility and dependability relate to all aspects of the research design, including the focus of the research, the context, participant selection, data collection, and the amount of data collected, all of which influence how accurately the research question/s can be answered (Graneheim and Lundman 2004).

Confirmability

How can one establish the degree to which the findings of an inquiry are a function solely of the subjects (respondents) and conditions of the inquiry and not of the biases, motivations, interests, perspectives and so on of the inquirer? (Guba 1981:80).

To achieve confirmability, researchers must demonstrate that the results are clearly linked to the conclusions in a way that can be followed and, as a process, replicated. Its relevance to application is similar to that of credibility: confirmability has particular implications for studies that provide policy recommendations. In qualitative research, the philosophical and epistemological position of the research will be determined by both the problem and the predisposition of the researcher, for example, in terms of their way of categorizing “truth” (Moon and Blackman 2014). Thus, the researcher needs to report on the steps taken both to manage and reflect on the effects of their philosophical or experiential preferences and, where necessary, i.e., depending on the ontological and epistemological position of the research, to ensure the results are based on the experiences and preferences of the research participants (subjects, respondents) rather than those of the researcher. Miles and Huberman (1994) highlight that reporting on researcher predisposition, beliefs, and assumptions, i.e., ontology and epistemology, is a major criterion of confirmability and should be clearly reported in qualitative research. Such reflexivity does not necessarily demonstrate a removal of bias, but does help explain how the researcher’s position can manifest in the research findings while still yielding useful insights. By providing a detailed methodological description, the researcher enables the reader to determine confirmability, showing how the data, and the constructs and theories emerging from them, can be accepted (Shenton 2004).

Transferability

How can one determine the degree to which the findings of a particular inquiry may have applicability in other contexts or with other subjects (respondents)? (Guba 1981:79-80).

Transferability, a type of external validity, refers to the degree to which the phenomenon or findings described in one study are applicable or useful to theory, practice, and future research (Lincoln and Guba 1985), that is, the transferability of the research findings to other contexts. Transferability can be critical to the application of research findings because policy and management can rely on data, conclusions, and recommendations from a single or small number of research projects, often drawing on evidence from a range of contexts that can differ from the one in which applications will be made. Thus, it is crucial that researchers clearly state the extent to which findings may or may not be relevant to other contexts. From a positivist perspective (see Methods below), transferability concerns relate to the extent to which the results of a particular research program can be extrapolated, with confidence, to a wider population (Shenton 2004). Qualitative research studies, however, are not typically generalizable according to quantitative standards, because qualitative research findings often relate to a single or small number of environments or individuals (Maxwell 1992, Flyvbjerg 2006). Consequently, the number of research participants in qualitative research is often smaller than in quantitative studies, and the exhaustive nature of each case becomes more important than the number of participants (Polkinghorne 2005). Often, it is not possible, or desirable, to demonstrate that findings or conclusions from qualitative research are applicable to other situations or populations (Shenton 2004, Drury et al. 2011). Instead, the purpose can be to identify, and begin to explain, phenomena where a lack of clarity prevents them from being, as yet, clearly defined. Such phenomena will often appear anomalous, requiring research that enables researchers to generate hypotheses about them or to understand the multiple perspectives that define them (Jones 1995, Denzin and Lincoln 2011). For example, a case may be chosen to show a problem with currently accepted norms (e.g., Flyvbjerg 2006); in this case, the transferability comes from developing new conceptualizations of the phenomenon where at least one example of difference exists. The methodology and analysis of the qualitative research must show why the research can be clearly related (transferred) to the original theory.

METHODS

Journal search

We conducted an exploratory search on Web of Science to identify relevant ecology and conservation journals publishing qualitative social research. We used the search criteria “social” and “qualitative,” and “conservation” or “environment,” and then further refined the returned articles by science category to include only “environmental studies,” “environmental sciences,” “ecology,” “biodiversity conservation,” and “biology.” Our search returned articles in 28 journals, which we then reviewed for scope (i.e., whether the scope of each journal included social research) and impact factor (i.e., whether the journal had sufficient impact, judged as having an impact factor greater than two). Our final list contained 11 journals that clearly stated in their scope that they accepted social science research and that had an impact factor greater than two (see Table A1.1).
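
For illustration, the two screening conditions can be expressed as a simple filter. The sketch below (Python, with hypothetical journal records) is not our workflow, which was performed manually against journal scope statements and reported impact factors; it merely makes the inclusion logic explicit.

    # Hypothetical journal records; scope flags were judged manually in the
    # actual review, not derived computationally.
    candidate_journals = [
        {"name": "Journal A", "impact_factor": 3.1, "scope_includes_social": True},
        {"name": "Journal B", "impact_factor": 1.4, "scope_includes_social": True},
        {"name": "Journal C", "impact_factor": 2.6, "scope_includes_social": False},
    ]

    # Retain journals that both accept social science research in their scope
    # and have an impact factor greater than two.
    selected = [j for j in candidate_journals
                if j["scope_includes_social"] and j["impact_factor"] > 2]

    print([j["name"] for j in selected])  # ['Journal A']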

Manuscript search

Through our exploratory search of journals we found a marked increase in publications with the terms “social” and “qualitative” over the last five years (see Fig. A1.1). Consequently, we restricted our review to articles published in the selected journals between January 2009 and September 2014, providing access to the greatest number of potentially relevant research manuscripts. We excluded early view manuscripts to ensure consistent selection of manuscripts across journals, because not all journals offer this feature. See Appendix 1 for full details of the manuscript search, including search terms, exclusion criteria, and how we avoided representation bias.

Development of criteria to assess quality

The criteria (Table 1) were developed through two scoping phases. First, one author reviewed 25 manuscripts from two journals, following the manuscript selection detailed above. Second, all 11 journals were randomly assigned to four of the authors of this manuscript, who completed a review of three relevant manuscripts from each journal, using the criteria established during the first scoping phase. Based on discussions of this second phase, we modified and added to the original set of criteria, including introducing criteria derived from a thorough search of the literature, to ensure that we had a comprehensive set of criteria by which to assess each manuscript. Once both scoping phases were complete, a final worksheet, including definitions, was created for recording the data relating to the criteria (Table 1). On the basis of the scoping exercise, we chose to include research methodologies, data collection methods, and triangulation methods in our review. We excluded data analysis methods, e.g., coding, primarily because of the lack of detail provided in manuscripts, which made it difficult to record these methods in a manner amenable to our review approach. For example, it was not uncommon to read simply “data were analyzed,” or “We used NVivo to analyze the data.” Cells in the worksheet were left blank if the author/s of a manuscript did not provide details corresponding with the criteria. Each manuscript included in our review was assessed against this final list of criteria. Where a manuscript contained detail on the research position (i.e., epistemology), methodology, or methods that was not captured by the final set of criteria, we recorded it as free-form text (see Table A1.3).
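
To make the recording convention concrete, a worksheet row might be structured as in the sketch below (Python; the field names are hypothetical simplifications of the criteria in Table 1, not the actual worksheet). Empty strings stand in for the blank cells used when a manuscript did not report a criterion, and a free-form field captures detail outside the criteria list.

    import csv

    # Hypothetical, simplified worksheet; field names are illustrative
    # stand-ins for the criteria in Table 1.
    FIELDS = ["manuscript_id", "philosophical_position", "methodology",
              "data_collection_methods", "triangulation", "free_form_notes"]

    rows = [
        {"manuscript_id": "MS-001",
         "philosophical_position": "",   # left blank: not reported
         "methodology": "case study",
         "data_collection_methods": "semistructured interviews",
         "triangulation": "",            # left blank: not reported
         "free_form_notes": "positions outside the criteria list recorded here"},
    ]

    # Write the worksheet so blank cells remain visible in the compiled data.
    with open("review_worksheet.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)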

A note on philosophical positions

A researcher’s philosophical position, i.e., their ontology, epistemology, and philosophical perspective, reveals their beliefs about what they can acquire knowledge about and how they set out to understand reality, and thus frames their research (Moon and Blackman 2014). By stating their philosophy, researchers enable readers to understand the position from which the research was undertaken and, thus, the suitability of the study intent, methodology, methods, and data interpretation. For example, did the researcher consider themselves detached from their subject/s or not? Was the researcher coming from an emancipatory position, acting as an advocate, or applying a Marxist lens to the research? (Fossey et al. 2002, Evely et al. 2008).

Stating the underlying philosophy of the research can, however, open the door to criticism and disagreement about legitimate approaches to inquiry, potentially limiting the scope of qualitative research published in ecology and conservation journals (Campbell 2005, Brosius 2006, Castree et al. 2014). To illustrate, the prevailing philosophical paradigm in ecology and conservation science has been (post)positivism, which is suitable for understanding the natural world and assumes that “reality” can become known through quantitative empirical observation (Evely et al. 2008). By contrast, interpretivism, a philosophical paradigm often found in qualitative research (Newman and Benz 1998, Khagram et al. 2010), assumes that multiple realities can exist as a function of cultural, historical, and contextual human interpretation (Moon and Blackman 2014). Different philosophies create different assumptions. Generally speaking, positivists study the natural world, while interpretivists study the human world, although we note that qualitative social research can be positivist. Each approach requires a “different logic of research procedure, one that reflects the distinctiveness of humans as against the natural order” (Bryman 2008:15). Disagreement can arise over research procedures because “reliability and validity are tools of an essentially positivist nature,” and so unconscious or assumed expectations about how research is conducted can become a barrier to accommodating a plurality of philosophical positions (Watling 1995, cited in Simco and Warin 1997:670, Campbell 2005). For example, “the replicability of a qualitative study [e.g., positivist philosophy] cannot be formulated as a problem of reliability, and the accuracy of a qualitative interpretation [e.g., interpretivist philosophy] cannot be compared to the explanatory power of a statistical model” (Stenius et al. 2008:84).

To classify the philosophical position of the studies reviewed, we used the ontologies, epistemologies, and philosophical paradigms presented in the guide by Moon and Blackman (2014), which they identified as the most relevant to ecology and conservation. As with all of our criteria, we recorded additional philosophical positions found in studies that were not included in our list, as free-form text. Full results are presented in Appendix 1.

A note on methodology

From our scoping phases, we identified five broad qualitative research methodologies (McCaslin and Scott 2003, Creswell 2009): ethnography, phenomenology, grounded theory, narrative, and case study. To assist in interpreting our results and discussion, we provide a brief definition and example of each here. Ethnographic research explores cultural groups in their natural setting over a period of time, i.e., the study of a culture (e.g., remote Spanish farmer adaptation to rural economic and societal change, Paniagua 2013). Phenomenological research seeks to identify the participant-described essence of human experience of a phenomenon, i.e., the study of a shared lived experience (e.g., ecological and socio-cultural meanings of a constructed landscape in Uzbekistan, Oberkircher et al. 2011). Grounded theory derives a theory of a process, interaction, or action that is grounded in the views and experiences of the participants, i.e., a theory of a phenomenon (e.g., how social context influences mapping regimes and contributes to different conceptions of area classes and the consequent effects on map consistency and comparability, Straume 2014). Narrative research involves studying one or more individuals (or organizations) and their stories, i.e., the study of how people create meaning (of a lived experience) as a narrative (e.g., assumptions of nature and the human-nature relationship represented in International Union for Conservation of Nature documents and whether they match the reality of a complex and globalizing world, Beumer and Martens 2013). Case studies explore in depth a program, event, process, or activity of one or more people, and are typically bound by selected variables, i.e., the study of an event (e.g., perceptions of conservation corruption and noncompliance in regional Madagascar, Gore et al. 2013). Although these five methodologies are not definitive (see, for example, Ragin and Becker 1992), the articles surveyed did map into these broad categories.

Data analysis

All data collected from the manuscript reviews were compiled and spot-checked for accuracy and consistency among reviewers. Both descriptive and inferential statistics (independent-samples t-tests using IBM SPSS Statistics 21) are presented across a range of variables relating to the types of qualitative research designs published in ecology and conservation journals and the level of detail describing the research context and methods. Data on participant numbers were log10-transformed to normalize the distribution prior to inferential analysis.
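
The analysis was run in SPSS; for readers without SPSS, a minimal open-source equivalent of the transform-and-test step might look like the following (Python with NumPy and SciPy, on illustrative participant counts, not the study data).

    import numpy as np
    from scipy import stats

    # Illustrative participant counts (not the study data): qualitative-only
    # studies vs. mixed-methods studies.
    qual_only = np.array([12, 25, 30, 18, 45, 60, 22])
    mixed = np.array([40, 95, 120, 150, 80, 110])

    # Log10-transform to normalize the skewed distributions, then run an
    # independent-samples t-test, mirroring the SPSS procedure described above.
    t, p = stats.ttest_ind(np.log10(qual_only), np.log10(mixed))
    print(f"t = {t:.3f}, p = {p:.3f}")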

RESULTS

A total of 335 manuscripts were reviewed. From this set, 116 were relevant. The remaining 219 manuscripts were not included for a number of reasons (see Table A1.2). Of the 116 manuscripts, 28 had multiple qualitative research phases, or multiple groups of participants. We classed such incidences as unique studies, resulting in a total analyzed sample of 146 studies. Many studies did not report information for every criterion, so the sample size underlying any single descriptive or inferential statistic will not necessarily reflect the entire sample of 146. No study presented information relating to all the criteria against which it was assessed.

One hundred and forty-two of the studies stated one or more knowledge gaps and the study intent or aims; the remaining four presented ambiguous text (Fig. 1A). Only 46 studies stated their philosophical position, i.e., ontology, epistemology, philosophical perspective (see Moon and Blackman 2014). Of these, 26 studies had an ambiguous position statement, while 20 studies had a clearly stated position (Fig. 1B). Of the clearly stated positions, the majority was constructivism (5); the remaining were advocacy/participatory (1), interpretivism (2), phenomenology (1), and positivism (1). Three of the constructivism studies were reported in a single manuscript.

Seventy-three studies did not have a defined methodology. Where a methodology was stated, the predominant methodology was case study research (59). Although some researchers consider case study research to be a methodology (Creswell 2009, Yin 2009), others do not, and argue that it is simply a choice of what is to be studied (Stake 2005). We consider case study research to be a methodology when the selection of a case or cases assists researchers to provide a rationale for their research design, for example, when a researcher needs to define the boundary of a “case,” justify why he or she is interested in studying a particular case or cases, and/or make choices about what types of data they will need to collect to answer their research question. Less commonly stated methodologies were ethnography (7), grounded theory (5), narrative research (4), and phenomenological analysis (1). Four studies stated two research methodologies. Seventy-one studies did not define the boundaries of their study (Yin 2009). For those that did explain the boundaries, 30 studies had three distinct boundaries, 34 had two boundaries, and 11 had one boundary. The most commonly stated boundary was geographical (46), followed by organizational (32), social (30), goal/objective (24), ecological (11), industry (9), culture (8), and time (8). Studies were conducted across a total of 59 nations and a range of scales and systems. The most frequently cited nation was the United States of America, while Europe was the most frequently cited region. The predominant socio-political level was regional/state and most of the research explored terrestrial systems (Table 2).

Twenty-six studies did not state a recruitment (or sampling) strategy. Of the 120 studies that did state a recruitment strategy, 60 stated a single strategy, 52 stated two strategies, and eight stated three strategies. The dominant recruitment strategies were purposive (63) and snowball sampling (30; Table 3).

One hundred and twenty-eight studies stated the number of participants involved in the research (Table A1.4). When separated according to data type, mean participant numbers were significantly higher for mixed-methods studies (mean = 103.1) than for qualitative-only studies (mean = 38.2; t = 3.136; p = 0.004; Table A1.4). Seventeen studies that involved working groups, group discussions, meetings, or focus groups had a mean of 45 participants (min = 14; max = 100; SD = 26.29). An additional seven studies did not include the number of group participants, but instead stated the total number of focus groups.

One hundred and thirteen studies described the population in clear terms, while nine studies did not, and 24 studies provided an ambiguous description. Twenty-two studies stated a response rate. Of those, the mean response rate was 72% (min = 8%; max = 100%; SD = 24%). Fifteen studies assessed the representativeness of the participants relative to the population, although we note that this information is not always necessary to present. Sixty-nine studies did not assess the representativeness of the participants (again, we note this information is not always relevant), and a further 62 studies provided ambiguous detail. Nineteen studies stated a population number, with a mean of 424 (min = 8; max = 3032; SD = 824). Only the studies that stated a discrete population number are included in this estimate to avoid uncertainty, i.e., ranges are not included.

Seventy-three studies (50%) did not discuss the suitability of their recruitment strategy relative to the study intent (Fig. 1C). One hundred and twenty-two studies stated a single data collection method, 11 stated two methods, eight stated three methods, and two stated four methods; three studies did not state a method. Ninety studies employed semistructured interviews as a method of data collection (Table 4).

The predominant data type collected was qualitative (100 studies); mixed data, i.e., qualitative and quantitative, was collected in 31 of the studies. For 15 of the studies, the data type was ambiguous. Eighty-five studies used open questions, 26 studies used both open and closed questions, and 35 studies were too ambiguous to be assigned a question type. One hundred and eight studies did not mention the use of specific data gathering tools, e.g., field notes, memos, or transcripts, while 26 did, and nine provided ambiguous statements (Fig. 1D).

One hundred and four studies explained the processes they had followed to ensure their research procedures were appropriate and replicable or that the data were reliable. The most prevalent form of validity employed was convergent validity (38); in addition to the main data collection method, authors also employed document analysis (17), participant observation (10), field tours (4), informal conversations (3), multiple interviewers (2), and cross-evaluation (2). Nine studies reported multiple methods. One study reported face validity, while others detailed scoping (29), pilot testing (13), and pretesting (3) methods. Other methods included prolonged engagement (5), member checking (5), and peer debriefing (1).

Eighty-two studies did not comment on whether the data derived from their study were transferable to other contexts (theoretically or empirically generalizable), while 18 stated that the data were transferable, 10 stated that they were not, and 36 provided ambiguous statements (Fig. 1E). One hundred and thirty-nine studies did not include a reflexive assessment of subjectivity (Fig. 1F).

DISCUSSION

Overall, our review of the elements of quality published in qualitative ecology and conservation research indicates that dependability and credibility were reasonably well reported in relation to describing participation and methods used, but poorly developed in relation to methodology and triangulation, including reflexivity (Table 1). Confirmability was, on average, poorly accounted for in social science published in ecology and conservation journals; problem statements tended to be clear and well defined, but the researchers’ philosophical position, i.e., ontology, epistemology, philosophical perspective, was notably absent; explicit mention of methodology was missing from half of the reviewed studies; and details on triangulation were limited (Fig. 1). Transferability was well documented across descriptions of participants but poorly developed in terms of triangulation methods and the suitability of the participants for answering the research question/s. Our findings point to opportunities for improving how qualitative research is reported in ecology and conservation journals to increase the ability of reviewers, readers, and end-users of the research to judge its quality and apply new knowledge.

We discuss these findings in the context of existing recommendations for authors, reviewers, and end-users of qualitative research to provide a discipline-specific guideline for qualitative research. We also offer some commentary on the implications of our findings for the quality of social research more broadly. We anticipate that our findings and proposed guideline, in the form of questions to ask when developing and reviewing qualitative research (Table 5), will be useful in three ways. First, researchers wanting to publish qualitative research relevant to social-ecological systems and conservation will be able to ensure they provide sufficient information for editors to judge the quality of their research. Second, editorial teams will know what else to ask for if they need further information to assess the quality of manuscripts during the review process. Third, publication of a broader spectrum of epistemologies and research methodologies could be supported, informing ecological conservation management and policy more effectively. We begin our discussion with some reflections on the dependability and credibility criteria, and then focus on the confirmability and transferability of research, the two criteria that were least reported on.

Dependability and credibility

The most commonly reported criteria corresponded with elements of dependability and credibility, e.g., recruitment, number of participants, population description, data collection methods, and triangulation (Table 1). These elements reflect notions of replicability and “truth,” which are rooted within a positivist tradition (Winter 2000, Golafshani 2003). The predominant occurrence of these criteria in ecology and conservation journals could reflect preferences for reporting on the “validity” of qualitative research, potentially indicating a bias toward more quantitative notions of quality. In quantitative research, validity is associated with “universal laws, evidence, objectivity, truth, actuality, deduction, reason, fact and mathematical data” (Winter 2000:7). A bias toward quantitative notions of quality in ecology and conservation journals could explain the predominance of certain types of research methodologies, e.g., case study, and could result in misplaced application of research strategies, e.g., extrapolation, and misrepresented aims and methods (Fazey et al. 2006, Drury et al. 2011). Dependability and credibility criteria are necessary to report on in both quantitative and qualitative research, but for qualitative research, the criteria of confirmability and transferability are also essential for judging quality.

Confirmability

Arguably, stating one’s philosophical position is the most important requirement of social research: it defines the relationship between the researcher and their subject/s. Yet the majority of studies did not detail the philosophical position of the research. Knowing the position of the researcher is essential in confirming the extent to which research findings are intended to be a function of the subjects or of the researcher themselves (Guba 1981). It is only possible to confirm the research approach and the interpretation of the findings when the researcher has clearly stated their philosophical position.

To illustrate, social research conducted in ecology and conservation sciences is often philosophically oriented toward advocacy (Roebuck and Phifer 1999), or more broadly, critical theory (Moon and Blackman 2014). This position is often underpinned by a normative agenda, where the researchers seek to change some element/s of the system toward some ideal system or model. When researchers do not make their agenda clear, e.g., a feminist might want to change a patriarchal culture within a logging community (Moon and Blackman 2014), it becomes impossible to assess and confirm the credibility, dependability, and transferability of their research approach (Roebuck and Phifer 1999, Horton et al. 2016).

We offer that authors are not stating their philosophical position for at least one of three main reasons. First, they might not expect that stating their position is necessary because the ontological, epistemological, and philosophical elements of social research are not often published in ecology and conservation journals, perhaps indicating little expectation or desire for this content (St. John et al. 2014). Second, because of the multidisciplinary nature of many of the journals, authors might be concerned that their manuscript will be unfairly critiqued during the peer-review process by researchers who hold different philosophical positions, e.g., a critical realist ecologist reviewing relativist social science research, and who do not agree with, or possibly understand, alternative positions (Fox et al. 2006). Third, they might not be aware of the implications of their position because they have not trained sufficiently in the social sciences to understand the philosophical principles and theoretical assumptions of the discipline, and thus their research (Drury et al. 2011, St. John et al. 2014). The consequence is a potential weakening of the methodological rigor of the discipline and a bias in the content of published social science research. These claims could also apply to the inadequate detail presented for data analysis methods.

As with the philosophical position, the majority of studies did not define their methodology. Although most of the reported methodologies were case studies, we found that 73 studies did not provide a full description of their methodology and methods. Details of the research methodology allow readers to understand the alternatives the researcher had and the reasons for the choices they made with regard to methodology; the source and legitimacy of the data used and how the data were employed to develop the findings; and the research process itself, should others wish to replicate the methods in a different context (Maxwell 1992, Mays and Pope 1995, Devers and Frankel 2001).

Of potential concern, and maybe a reason for the lack of studies detailing their methodology, is the possibility that some researchers are not clear on the difference between methodology and methods. Methods are well understood in ecology; they represent the detailed techniques, protocols, and procedures used to collect and analyze data. Methodologies are, perhaps, more relevant to social sciences where they provide a rationale explaining the choice of methods and analysis and why they are best suited to answering the research question/s (Crotty 1998, McCaslin and Scott 2003, Creswell 2009, Denzin and Lincoln 2011). Methodologies typically detail what form of reality is being assumed, such as objective or socially constructed; what knowledge outcome the authors are seeking, including causes, understanding, emancipation, or deconstruction; whether the research is experimental or naturalistic, e.g., phenomenological, anthropological, or ethnographic; and whether the researcher is detached from or immersed within the research setting (Firestone 1987, Moon and Blackman 2014). Such details are often not necessary to explain in the natural sciences because of the subject matter, i.e., the natural world. Detailing the rationale for undertaking social science, however, provides the reader with a basis to evaluate whether the form of data collected was most appropriate, i.e., qualitative or quantitative or both, and the relative objectivity or subjectivity of the research (Newman and Benz 1998; Table 5). If researchers are not clear on the difference between methodology and methods, it suggests that an increasing number of natural scientists could be engaging in social science without adequate training (Drury et al. 2011, St. John et al. 2014).

To illustrate potential confusion over methodology and methods, the vast majority of studies we reviewed stated that they used a case study, yet many did not provide appropriate detail to determine whether this was their methodology, or simply an example of “what” was being studied. It appeared that in many instances, the case was used according to Stake’s (2005) definition of a case study: not as a methodology, but as a focus on, or interest in, a particular case or cases. If researchers adopt this position, then they must explain their actual methodology. If, on the other hand, the case study is the methodology (as explained by Creswell 2009, Yin 2009), then the authors need to distinguish how the research represents a “real-life phenomenon that has some concrete manifestation” (Yin 2014:34). Authors can explain the manifestation by stating the case study boundaries, which could include criteria such as spatial or temporal, procedural, social groupings, or organizational choices (Table 5). Providing this information will assist with clarifying who is “inside” and who is “outside” the case, and where the case begins and ends (Yin 2014). This information can also assist with determining why the case served as an appropriate example: was it intrinsic, i.e., generating an understanding of a particular case; instrumental, i.e., providing insight into a topic or enabling generalizations; or collective, i.e., investigating a phenomenon, population, or condition (Stake 2005)? Adequately explaining case study boundaries could also allow for effective comparisons to be made (Baxter and Jack 2008, Yin 2014). We encourage authors to provide as much information as possible when detailing their boundaries, beyond the simple boundaries, e.g., ecological, social, that we identified and reviewed, including how the case relates to the methodological and epistemological positions of the researchers, as well as to theoretical constructs or empirical units, and whether it emerges from the data collection process or is defined prior to data collection (see Ragin and Becker 1992 for additional details). We note that the simplistic boundaries that we derived from the studies and used in the review are a limitation of our review of case study methodologies. When researchers start to explain their methodology, it will become clear that publishing a broad range of social research methodologies increases the capacity of the discipline to understand the multidimensionality of the social world at the range of scales, and within the range of contexts, in which social phenomena manifest.

We encourage authors to report on their ontology, epistemology, and philosophical positions, as well as to detail their methodology, to increase readers’ ability to judge the quality of their social research. We encourage journal editors to support authors in providing these details, possibly outlining expectations in author guidelines; to ensure sufficient space is available to report on these elements of research design in the main manuscript, potentially with a broader discussion in appendices or supplementary materials where necessary; and to shepherd manuscripts that provide these details through the review process. Shepherding would include thoughtful consideration of reviewer critiques of manuscripts that do not employ common methodological or philosophical approaches.

Transferability

Authors typically provided details on transferability, including an outline of how they identified and chose their participants. These details can assist with judging the transferability of the research findings, or the relevance of the research context. Some authors argue that the reader ultimately undertakes the assessment of whether the findings are transferable to another context (Krefting 1991, Graneheim and Lundman 2004). Authors can assist in this process by providing the anticipated range and limitations for the application of the findings (Malterud 2001).

The dominant strategies identified in our review, purposive and snowball sampling, and the less common theoretical sampling (Glaser and Strauss 1967), do not seek to generate a representative or typical sample or collection of participants. Instead, they aim to maximize the diversity of important and relevant information collected. Authors should explain how their recruitment strategy enabled them to maximize the diversity of information, e.g., by asking interview respondents to recommend a participant who holds very different views to their own (Guba 1981; Table 5).

Authors were less likely to provide response rates, the total number of people in the population, and sample representativeness of the population, although we note that this information does not necessarily need to be reported on for qualitative research. It is usually helpful to know, however, how the participants provided the requisite, relevant data for the research question/s being studied.

To increase transferability and overcome concerns related to situational uniqueness and noncomparability, recruitment strategies can be combined with thick descriptive data (Guba 1981). Transferability can also be strengthened by providing a clear and detailed description of the context and culture, the selection and characteristics of participants, and data collection and analysis (Graneheim and Lundman 2004).

Authors typically presented the data collection methods, the data type, and question type, although the level of detail provided was variable across the assessment criteria, potentially limiting readers’ judgments of research quality (Graneheim and Lundman 2004). Of concern for meaningful interpretation of social research was the lack of information on sample suitability, how the findings could be applied, i.e., theoretically or empirically, and how authors validated their data, limiting both the confirmability and transferability of findings. By providing sufficient detail to demonstrate the trustworthiness and rigor of their data collection and analysis methods, authors can increase the transparency of their research (Table 5).

CONCLUSION

Qualitative research has an ever-increasing role to play in understanding the relationships between people and the natural world. By using the criteria set out by Guba (1981) to review published qualitative studies in ecology and conservation journals, our review suggests that much of the published qualitative literature lacks sufficient detail for a comprehensive assessment of the quality and transferability of the research. These conclusions have implications both for the advancement of quality social science research in ecology and conservation journals as well as for the application of research to on-ground conservation and environmental management programs. In particular, quality social research can be compromised when, as a research community, attention is not paid to its philosophical foundations and theoretical assumptions. Failing to pay attention to these aspects of the discipline can undermine the essence of inquiry, thereby limiting the efficacy of application.

The guideline we present can be used to draw attention to the considerations and decisions that must be made when conducting qualitative social research, and to assess the quality of the research. For researchers who are not trained in the social sciences, we hope that these questions will prompt them to consider working with a trained social scientist to increase the quality of their research, or to seek the appropriate level of training to conduct social research independently (see also St. John et al. 2014 for recommendations). For researchers who become frustrated that their qualitative research is misunderstood or unfairly judged, we hope that the guideline will assist them in providing sufficient detail so that either social or natural scientists can judge its quality to allow a fair review. We hope that reviews and guidelines, like this one, will contribute to the ongoing improvement of qualitative social science to increase our understanding of social-ecological systems and overcome social-ecological problems.

ACKNOWLEDGMENTS

We would like to thank D. Marsh for reviewing this article.

LITERATURE CITED

Adams, V. M., E. T. Game, and M. Bode. 2014. Synthesis and review: delivering on conservation promises: the challenges of managing and measuring conservation outcomes. Environmental Research Letters 9:085002. http://dx.doi.org/10.1088/1748-9326/9/8/085002

Baxter, P., and S. Jack. 2008. Qualitative case study methodology: study design and implementation for novice researchers. Qualitative Report 13:544-559.

Belsky, J. 2004. Contributions of qualitative research to understanding the politics of community ecotourism. Pages 273-291 in J. Phillimore and L. Goodson, editors. Qualitative research in tourism: ontologies, epistemologies and methodologies. Routledge, New York, New York, USA.

Beumer, C., and P. Martens. 2013. IUCN and perspectives on biodiversity conservation in a changing world. Biodiversity and Conservation 22:3105-3120. http://dx.doi.org/10.1007/s10531-013-0573-6

Brosius, J. P. 2006. Common ground between anthropology and conservation biology. Conservation Biology 20:683-685. http://dx.doi.org/10.1111/j.1523-1739.2006.00463.x

Bryman, A. 2008. Social research methods. Oxford University Press, Oxford, UK.

Campbell, L. M. 2005. Overcoming obstacles to interdisciplinary research. Conservation Biology 19:574-577. http://dx.doi.org/10.1111/j.1523-1739.2005.00058.x

Castree, N., W. M. Adams, J. Barry, D. Brockington, B. Büscher, E. Corbera, D. Demeritt, R. Duffy, U. Felt, K. Neves, P. Newell, L. Pellizzoni, K. Rigby, P. Robbins, L. Robin, D. B. Rose, A. Ross, D. Schlosberg, S. Sörlin, P. West, M. Whitehead, and B. Wynne. 2014. Changing the intellectual climate. Nature Climate Change 4:763-768. http://dx.doi.org/10.1038/nclimate2339

Cifdaloz, O., A. Regmi, J. M. Anderies, and A. A. Rodriguez. 2010. Robustness, vulnerability, and adaptive capacity in small-scale social-ecological systems: the Pumpa Irrigation System in Nepal. Ecology and Society 15(3):39. [online] URL: http://www.ecologyandsociety.org/vol15/iss3/art39/

Cowling, R. M. 2014. Let’s get serious about human behavior and conservation. Conservation Letters 7:147-148. http://dx.doi.org/10.1111/conl.12106

Cox, M. 2015. A basic guide for empirical environmental social science. Ecology and Society 20(1):63. http://dx.doi.org/10.5751/ES-07400-200163

Creswell, J. W. 2009. Research design: qualitative, quantitative, and mixed methods approaches. SAGE, Los Angeles, California, USA.

Creswell, J. W., and D. L. Miller. 2000. Determining validity in qualitative inquiry. Theory Into Practice 39:124-130. http://dx.doi.org/10.1207/s15430421tip3903_2

Crotty, M. 1998. The foundations of social research: meaning and perspectives in the research process. SAGE, London, UK.

Cutcliffe, J. R., and H. P. McKenna. 1999. Establishing the credibility of qualitative research findings: the plot thickens. Journal of Advanced Nursing 30:374-380. http://dx.doi.org/10.1046/j.1365-2648.1999.01090.x

D’Cruz, H., P. Gillingham, and S. Melendez. 2007. Reflexivity, its meanings and relevance for social work: a critical review of the literature. British Journal of Social Work 37:73-90. http://dx.doi.org/10.1093/bjsw/bcl001

Denzin, N. K., and Y. S. Lincoln, editors. 2011. The SAGE handbook of qualitative research. SAGE, Thousand Oaks, California, USA.

Devers, K. J., and R. M. Frankel. 2001. Getting qualitative research published. Education for Health (Abingdon, England) 14:109-117.

Drury, R., K. Homewood, and S. Randall. 2011. Less is more: the potential of qualitative approaches in conservation research. Animal Conservation 14:18-24. http://dx.doi.org/10.1111/j.1469-1795.2010.00375.x

Evely, A. C., I. Fazey, M. Pinard, and X. Lambin. 2008. The influence of philosophical perspectives in integrative research: a conservation case study in the Cairngorms National Park. Ecology and Society 13(2):52. [online] URL: http://www.ecologyandsociety.org/vol13/iss2/art52/

Fazey, I., J. A. Fazey, J. G. Salisbury, D. B. Lindenmayer, and S. Dovers. 2006. The nature and role of experiential knowledge for environmental conservation. Environmental Conservation 33:1-10. http://dx.doi.org/10.1017/S037689290600275X

Firestone, W. A. 1987. Meaning in method: the rhetoric of quantitative and qualitative research. Educational Researcher 16:16-21. http://dx.doi.org/10.3102/0013189X016007016

Flyvbjerg, B. 2006. Five misunderstandings about case-study research. Qualitative Inquiry 12:219-245. http://dx.doi.org/10.1177/1077800405284363

Fossey, E., C. Harvey, F. McDermott, and L. Davidson. 2002. Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry 36:717-732. http://dx.doi.org/10.1046/j.1440-1614.2002.01100.x

Fox, H. E., C. Christian, J. C. Nordby, O. R. W. Pergams, G. D. Peterson, and C. R. Pyke. 2006. Perceived barriers to integrating social science and conservation. Conservation Biology 20:1817-1820. http://dx.doi.org/10.1111/j.1523-1739.2006.00598.x

Glaser, B. G., and A. L. Strauss. 1967. The discovery of grounded theory. Aldine, Chicago, Illinois, USA.

Golafshani, N. 2003. Understanding reliability and validity in qualitative research. Qualitative Report 8:597-607.

Gore, M. L., J. Ratsimbazafy, and M. L. Lute. 2013. Rethinking corruption in conservation crime: insights from Madagascar. Conservation Letters 6:430-438. http://dx.doi.org/10.1111/conl.12032

Graneheim, U. H., and B. Lundman. 2004. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Education Today 24:105-112. http://dx.doi.org/10.1016/j.nedt.2003.10.001

Guba, E. G. 1981. Criteria for assessing the trustworthiness of naturalistic inquiries. Educational Technology Research and Development 29:75-91.

Hammersley, M. 2007. The issue of quality in qualitative research. International Journal of Research & Method in Education 30:287-305. http://dx.doi.org/10.1080/17437270701614782

Horton, C. C., T. R. Peterson, P. Banerjee, and M. J. Peterson. 2016. Credibility and advocacy in conservation science. Conservation Biology 30:23-32. http://dx.doi.org/10.1111/cobi.12558

Jones, R. 1995. Why do qualitative research? British Medical Journal 311:2. http://dx.doi.org/10.1136/bmj.311.6996.2

Khagram, S., K. A. Nicholas, D. M. Bever, J. Warren, E. H. Richards, K. Oleson, J. Kitzes, R. Katz, R. Hwang, R. Goldman, J. Funk, and K. A. Brauman. 2010. Thinking about knowing: conceptual foundations for interdisciplinary environmental research. Environmental Conservation 37:388-397. http://dx.doi.org/10.1017/S0376892910000809

Krefting, L. 1991. Rigor in qualitative research: the assessment of trustworthiness. American Journal of Occupational Therapy 45:214-222. http://dx.doi.org/10.5014/ajot.45.3.214

Lincoln, Y. S., and E. G. Guba. 1985. Naturalistic inquiry. SAGE, Newbury Park, California, USA.

Malterud, K. 2001. Qualitative research: standards, challenges, and guidelines. Lancet 358:483-488. http://dx.doi.org/10.1016/S0140-6736(01)05627-6

Maxwell, J. 1992. Understanding and validity in qualitative research. Harvard Educational Review 62:279-301. http://dx.doi.org/10.17763/haer.62.3.8323320856251826

Mays, N., and C. Pope. 1995. Qualitative research: rigour and qualitative research. British Medical Journal 311:109-112. http://dx.doi.org/10.1136/bmj.311.6997.109

McCaslin, M. L., and K. W. Scott. 2003. The five-question method for framing a qualitative research study. Qualitative Report 8:447-461.

Miles, M. B., and A. M. Huberman. 1994. Qualitative data analysis: an expanded sourcebook. SAGE, Thousand Oaks, California, USA.

Moon, K., and D. Blackman. 2014. A guide to understanding social science research for natural scientists. Conservation Biology 28:1167-1177. http://dx.doi.org/10.1111/cobi.12326

Naidoo, R., A. Balmford, P. J. Ferraro, S. Polasky, T. H. Ricketts, and M. Rouget. 2006. Integrating economic costs into conservation planning. Trends in Ecology & Evolution 21:681-687. http://dx.doi.org/10.1016/j.tree.2006.10.003

Newman, I., and C. R. Benz. 1998. Qualitative-quantitative research methodology: exploring the interactive continuum. Southern Illinois University Press, Carbondale, Illinois, USA.

Oberkircher, L., M. Shanafield, B. Ismailova, and L. Saito. 2011. Ecosystem and social construction: an interdisciplinary case study of the Shurkul Lake landscape in Khorezm, Uzbekistan. Ecology and Society 16(4):20. http://dx.doi.org/10.5751/ES-04511-160420

Padgett, D. K. 2008. Qualitative methods in social work research. SAGE, Thousand Oaks, California, USA.

Paniagua, A. 2013. Farmers in remote rural areas: the worth of permanence in the place. Land Use Policy 35:1-7. http://dx.doi.org/10.1016/j.landusepol.2013.04.017

Patton, M. Q. 2002. Qualitative research & evaluation methods. SAGE, Thousand Oaks, California, USA.

Polit, D. F., C. T. Beck, and B. P. Hungler. 2006. Essentials of nursing research: methods, appraisal, and utilization. Lippincott, New York, New York, USA.

Polkinghorne, D. E. 2005. Language and meaning: data collection in qualitative research. Journal of Counseling Psychology 52:137-145. http://dx.doi.org/10.1037/0022-0167.52.2.137

Prokopy, L. S. 2011. Agricultural human dimensions research: the role of qualitative research methods. Journal of Soil and Water Conservation 66:9A-12A. http://dx.doi.org/10.2489/jswc.66.1.9a

Ragin, C. C., and H. S. Becker, editors. 1992. What is a case? Exploring the foundations of social inquiry. Cambridge University Press, Cambridge, UK.

Roebuck, P., and P. Phifer. 1999. The persistence of positivism in conservation biology. Conservation Biology 13:444-446. http://dx.doi.org/10.1046/j.1523-1739.1999.013002444.x

Sandelowski, M. 1986. The problem of rigor in qualitative research. Advances in Nursing Science 8:27-37. http://dx.doi.org/10.1097/00012272-198604000-00005

Sayre, N. F. 2004. Viewpoint: the need for qualitative research to understand ranch management. Journal of Range Management 57:668-674. http://dx.doi.org/10.2111/1551-5028(2004)057[0668:VTNFQR]2.0.CO;2

Shenton, A. K. 2004. Strategies for ensuring trustworthiness in qualitative research projects. Education for Information 22:63-75.

Simco, N., and J. Warin. 1997. Validity in image-based research: an elaborated illustration of the issues. British Educational Research Journal 23:661-672. http://dx.doi.org/10.1080/0141192970230508

Stake, R. E. 2005. Qualitative case studies. Pages 433-466 in N. K. Denzin and Y. S. Lincoln, editors. The SAGE handbook of qualitative research. Third edition. SAGE, Thousand Oaks, California, USA.

Stenius, K., K. Mäkelä, M. Miovsky, and R. Gabrhelik. 2008. How to write publishable qualitative research. Pages 82-97 in T. F. Babor, K. Stenius, S. Savva, and J. O’Reilly, editors. Publishing addiction science: a guide for the perplexed. International Society of Addiction Journal Editors, London, UK.

St. John, F. A. V., A. M. Keane, J. P. G. Jones, and E. J. Milner-Gulland. 2014. FORUM: Robust study design is as important on the social as it is on the ecological side of applied ecological research. Journal of Applied Ecology 51:1479-1485. http://dx.doi.org/10.1111/1365-2664.12352

Straume, K. 2014. The social construction of a land cover map and its implications for Geographical Information Systems (GIS) as a management tool. Land Use Policy 39:44-53. http://dx.doi.org/10.1016/j.landusepol.2014.03.007

Streubert, H. J. 2007. Designing data generation and management strategies. Pages 33-56 in H. J. Streubert and D. R. Carpenter, editors. Qualitative research in nursing: advancing the humanistic imperative. Third edition. Lippincott Williams & Wilkins, Philadelphia, Pennsylvania, USA.

Tong, A., P. Sainsbury, and J. Craig. 2007. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care 19:349-357. http://dx.doi.org/10.1093/intqhc/mzm042

Winter, G. 2000. A comparative discussion of the notion of ‘validity’ in qualitative and quantitative research. Qualitative Report 4:1-14.

Yin, R. K. 2009. Case study research: design and methods. Fourth edition. SAGE, Thousand Oaks, California, USA.

Yin, R. K. 2014. Case study research: design and methods. Fifth edition. SAGE, Thousand Oaks, California, USA.

Address of Correspondent:
Katie Moon
Institute for Applied Ecology
University of Canberra, Bruce
ACT 2601 Australia
katieamoon@gmail.com