The following is the established format for referencing this article:
Triste, L., F. Marchand, L. Debruyne, M. Meul, and L. Lauwers. 2014. Reflection on the development process of a sustainability assessment tool: learning from a Flemish case. Ecology and Society 19(3): 47.
http://dx.doi.org/10.5751/ES-06789-190347
Research, part of a special feature on Multicriteria Assessment of Food System Sustainability

Reflection on the development process of a sustainability assessment tool: learning from a Flemish case

Affiliations: 1 Institute for Agricultural and Fisheries Research (ILVO); 2 University College Ghent; 3 Department of Agricultural Economics, Ghent University

ABSTRACT

Adoption of sustainability assessment tools in agricultural practice is often disappointing. One of the critical success factors for adoption is the tool development process. Because scientific attention to these development processes is rather limited and insights about them are scarce, we aimed to foster scientific debate on this topic. We did so by reflecting on the development process of a Flemish sustainability assessment tool, MOTIFS. MOTIFS was developed with the aim of becoming widely adopted by farmers and farm advisors, but this result was not achieved. Our reflection process revealed success factors favoring and barriers hindering tool adoption. These were grouped into three clusters of lessons learned for sound tool development: (1) institutional embeddedness, (2) ownership, and (3) tool functions. This clustering allowed us to formulate actions for researchers on the following aspects: (1) learning from stakeholders and end users, (2) providing coaching for appropriate tool use, and (3) structuring development of different tool types and exploring spin-offs from existing tools. We hope these normative results encourage other researchers to contribute to a debate on understanding tool development.
Key words: reflection; stakeholder participation; sustainability assessment tool; tool development process

INTRODUCTION

Over the past decades, many sustainability assessment tools have been developed for agriculture to assist stakeholders in identifying and evaluating sustainable development. Sustainability assessment is viewed as a significant aid in the transition toward sustainable development (Pope et al. 2004). Assessment tools have been developed for various levels of the food production system, i.e., farm, regional, national, or global (Bockstaller et al. 2009, Binder et al. 2010). Because of the complexity of food production systems, different types of tools have been developed (de Ridder et al. 2007, Binder et al. 2010, Van Passel and Meul 2012), ranging from indicator sets (e.g., Girardin et al. 2000, Rigby et al. 2001, Zahm et al. 2008, Grenz et al. 2011) to simulation models (e.g., Cerf et al. 2012, Van Meensel et al. 2012). These tools make complex sustainability issues more tangible and therefore support decision-making at the previously mentioned levels (e.g., Halberg et al. 2005, Castoldi and Bechini 2010, Van Passel and Meul 2012). However, concerns arise about translating this potential into actual use by the intended users (McCown 2001, Woodward et al. 2008, Díez and McIntosh 2009, De Mey et al. 2011, Cerf et al. 2012).

One critical success factor is the research and development process that ultimately leads to the tool (Weaver and Rotmans 2006, Reed 2008, Bell et al. 2012, Pülzl et al. 2012). Many of these processes are iterative learning processes with nonlinear links between goal definition, design, and implementation (Woodward et al. 2008). Insufficient involvement of stakeholders and end users during the development process can lead to failure in the practical use of the tools (Woodward et al. 2008, Cerf et al. 2012, Prost et al. 2012, Van Meensel et al. 2012). This is especially applicable to sustainability, a context-bounded concept that needs to be interpreted and implemented by a range of stakeholders (Weaver and Rotmans 2006). Despite recommendations about best practices in the literature, insight is still lacking into how participants engage in participatory processes and how this affects the outcomes of those processes, e.g., the tools themselves (Bell et al. 2012). A possible reason is the lack of scientific attention to the design methodology of tools (Prost et al. 2012).

Scientific identification of factors that hinder tool adoption in actual practice is challenging. Difficulties stem not only from the complex concept of sustainability but also from the lack of literature about the development process of sustainability assessment tools. Because of the multiple facets inherent to the development process, many process factors can be expected to influence tool adoption. Structured reflection on the process is therefore necessary to increase insight into the determining factors. Blackstock et al. (2007) suggested performing such a reflection with a team of evaluators that includes both people who were involved in the development process and people who were not. The former contribute insider knowledge about the process; the latter mitigate the risk of the reflection becoming interpretive and self-referential.

Our aim is to reflect on the development process of a sustainability assessment tool that was not adopted for practical use to the desired extent. We chose MOTIFS, the “Monitoring tool for integrated farm sustainability” (Meul et al. 2008). The development process of MOTIFS is soundly documented in the literature and well known by three authors of this paper, who were involved in the development process. Earlier tool evaluations by Meul et al. (2009), Campens et al. (2010), and De Mey et al. (2011) already suggested improvement strategies for MOTIFS and its implementation. However, these did not result in the intended, general adoption by farmers or farm advisors. The goal of this reflection is to foster a scientific debate on the development and implementation of sustainability assessment tools by identifying characteristics that either stimulate (i.e., success factors) or hinder (i.e., barriers) general adoption by the intended end users. These insights should contribute to our learning for future tool development.

We briefly introduce the MOTIFS case and discuss our methodological approach for the reflection process. We then examine the results of this reflection process in light of barriers and success factors that influenced the general adoption of MOTIFS. Finally, we discuss lessons learned from these results and present our conclusions.

METHODOLOGY: REFLECTION ON A FLEMISH CASE

The MOTIFS case

MOTIFS is an indicator-based sustainability assessment tool that presents a visual aggregation of indicator scores in a radar graph (Meul et al. 2008). It covers 10 sustainability themes related to ecological, economic, and social aspects, and is an example of a scientifically sound indicator-based tool developed for general use by farmers or farm advisors. Despite the participatory tool development process, which involved a wide range of stakeholders, adoption of the tool was disappointing. To learn from this outcome, we reflected on the process from tool design to implementation.
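
To make the tool's central visual principle concrete, a minimal Python sketch of a MOTIFS-style radar graph follows. The theme names and scores are hypothetical, invented purely for illustration; they are not the actual MOTIFS themes, indicator weights, or aggregation rules (see Meul et al. 2008 for those).

# Illustrative only: a radar (spider) plot of hypothetical indicator
# scores for ten MOTIFS-style sustainability themes. Theme names and
# scores are invented; they do not reproduce the actual MOTIFS tool.
import numpy as np
import matplotlib.pyplot as plt

themes = ["Productivity", "Profitability", "Risk", "Soil", "Water",
          "Biodiversity", "Energy", "Animal welfare", "Labor", "Social capital"]
scores = [72, 55, 60, 80, 45, 66, 58, 70, 50, 63]  # hypothetical scores, 0-100

# Compute one angle per theme and close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(themes), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(themes, fontsize=8)
ax.set_ylim(0, 100)
ax.set_title("Hypothetical farm sustainability profile (illustrative)")
plt.tight_layout()
plt.show()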

Framework of reflection

We followed a reflective approach to take full advantage of our inside information related to this process. To avoid the pitfalls of being interpretive and self-referential, we built our reflection process according to Blackstock et al. (2007). Their framework is designed for evaluating participatory research and builds on three bodies of literature, which concern participatory research, sustainability science, and evaluation of partnership processes (Fig. 1). Central to the approach is an evaluation process emerging from four aspects that delineate the object of evaluation: bounding, focus, timing, and purpose. The evaluation process itself concerns selection of evaluation criteria, choice of a methodology to gather and analyze data, and selection of the evaluation staff.

Bounding, focus, timing, purpose

Bounding serves to clearly delineate the object of the evaluation, which makes it easier to keep the evaluation process on track. The object of our reflection is the MOTIFS tool development process from the early beginning (visioning phase) to the different attempts at implementation. The focus of an evaluation can be either strategic, investigating whether the project achieved its objectives, or operational, investigating time, costs, or quality of the activities (Blackstock et al. 2007). Our research objective, i.e., elucidating characteristics of the MOTIFS development process that influenced the tool's adoption, makes our focus strategic. Our reflection took place two years after the MOTIFS process was put on hold, making it an ex post evaluation. Blackstock et al. (2007) mentioned four purposes for evaluation: proving, controlling, learning, and improving. We wish to contribute to the insights in the literature about tool development processes and to our learning about tool development, for subsequent improvement of existing tools and the design of better ones. Therefore, the central purpose of this evaluation can be described as learning and improving.

Evaluation criteria

Evaluating a process requires evaluation criteria selected with reference to the type of evaluation employed and the objectives for which the evaluation is being carried out (Blackstock et al. 2007). Blackstock et al. (2007) stressed the importance of choosing the criteria carefully, as there are often no acceptable, valid, and reliable quantitative measures for the variables of interest. Given this lack of predefined variables, we preferred to leave room for criteria emerging from our data. Therefore, we limited our a priori selection of evaluation criteria to what we call main fields of criteria. This prevented us from being overly restrictive and overlooking important criteria. Our main fields of criteria are based on Burgess and Chilvers (2006) and Blackstock et al. (2007), who have emphasized the connection between the outcome of a process and its underlying context, research design, and decision situation. As a result, we selected the following main fields of criteria: (1) the political, social, cultural, and environmental context in which the process took place, (2) the decision situation comprising the starting points for the research design (e.g., objectives set or principles adhered to during the process), and (3) the research design, relating to the process setup from tool design to implementation.

Methods, data sources, and analysis

We used a qualitative research approach (Denzin and Lincoln 2000, Creswell 2003) that combines data from scientific literature, reports, and interviews (Fig. 2). The scientific papers and research reports concerning MOTIFS (Mathijs et al. 2004, Mulier et al. 2004, Nevens et al. 2008, Meul et al. 2008, 2009, Campens et al. 2010, De Mey et al. 2011) were analyzed to identify the initial objectives set for the tool, and to reconstruct the course of the process from design to implementation. In addition, we carried out in-depth interviews with people involved in the MOTIFS process to gain more specific details about the selected fields of evaluation criteria that did not emerge from existing MOTIFS publications. We interviewed 12 researchers involved in different stages of the process and one person involved only in MOTIFS' practical implementation. Farmers and farm advisors were not included because their experiences were already described and analyzed in Meul et al. (2009) and De Mey et al. (2011). Each interview was recorded and transcribed.

To guarantee that the selected fields of evaluation criteria were addressed during the interviews, we used an interview guide (Marshall and Rossman 2006). To gain insight into the context of the MOTIFS process, we asked respondents why and how the project started, what their and society's expectations were, and how they thought farmers' practices could become more sustainable. Questions about the decision situation gathered information about the rationale behind the development of MOTIFS, the respondents' definition of sustainability, their thoughts about the usefulness of tools to increase agricultural sustainability, and what these tools should look like. To gain insight into the research design of the MOTIFS process, we asked about the setup of the research design, the stakeholders involved during the process, the respondents' opinions about and experiences with the research setup, the barriers and success factors they perceived during the process, and what they learned from it for future projects.

We analyzed the interview transcripts in NVivo 9 (QSR International 2010), which enabled us to structure, label, and classify the qualitative data. We used the method of coding described by Strauss and Corbin (1998), based on researchers’ expertise (Rogge et al. 2011, Kerselaers et al. 2013). First, the data were classified into phenomena, transcript fragments representing discrete incidents, ideas, events, or acts that were mentioned by the respondents and were relevant to our research. Each phenomenon was labeled to enable grouping of similar phenomena under a common heading. Each phenomenon mentioned by two or more respondents was defined as a concept. Concepts were classified into categories and linked to the different fields of evaluation criteria. For example, the quotes “Farmers were underrepresented. Seldom a farmer sat around the table” and “Stakeholders were mainly experts” were identified as phenomena. Both phenomena were defined as the concept “Experts dominated advisory boards during tool development.” This concept was classified into the category “Stakeholder involvement and roles” and was linked to the evaluation criterion “Research design.” For further details on this method, see De Mey et al. (2011).
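
As an aside, the promotion rule described above (a labeled phenomenon becomes a concept when mentioned by at least two respondents, and concepts are then classified into categories linked to the fields of evaluation criteria) can be expressed as a small data-processing sketch. The Python below is illustrative only: the respondent labels and the concept-to-category lookup are hypothetical, and it does not reproduce the authors' actual NVivo workflow.

# A minimal sketch of the coding logic: group labeled transcript
# fragments (phenomena), promote labels mentioned by >= 2 respondents
# to concepts, and link each concept to a category and criterion field.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Phenomenon:
    respondent: str  # who mentioned the fragment (hypothetical IDs)
    quote: str       # transcript fragment
    label: str       # common heading assigned during coding

phenomena = [
    Phenomenon("R1", "Farmers were underrepresented. Seldom a farmer sat around the table",
               "Experts dominated advisory boards during tool development"),
    Phenomenon("R2", "Stakeholders were mainly experts",
               "Experts dominated advisory boards during tool development"),
    Phenomenon("R3", "We never wrote down why decisions were taken",
               "Lack of documentation of decision-making"),
]

# Linking concepts to categories and criteria is a coder judgment;
# represented here as a simple (hypothetical) lookup table.
concept_to_category = {
    "Experts dominated advisory boards during tool development":
        ("Stakeholder involvement and roles", "Research design"),
}

# Group respondents per label; keep labels mentioned by two or more respondents.
respondents_by_label = defaultdict(set)
for p in phenomena:
    respondents_by_label[p.label].add(p.respondent)

concepts = [label for label, resp in respondents_by_label.items() if len(resp) >= 2]
for concept in concepts:
    category, criterion = concept_to_category.get(concept, ("unclassified", "unclassified"))
    print(f"Concept: {concept}\n  Category: {category}\n  Criterion field: {criterion}")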

Research validity and evaluation staff

In qualitative research we need triangulation to ensure objectivity throughout the data collection and analysis (Strauss and Corbin 1998, Patton 2002). In our case, we applied the following triangulation techniques:
The authors of this reflection on the MOTIFS process include two people who were not involved and three who were involved in the process (two of whom were also interviewed). The interviews were conducted by the authors who were not involved in the process. Coding of the interview transcripts was performed by three of the authors, one of whom was involved in the process. These authors thoroughly discussed the results of the data analysis and their translation into lessons learned.

RESULTS

Table 1 presents an overview of the detected barriers and success factors according to the predefined main fields of evaluation criteria. They are grouped into three clusters of lessons learned, which are addressed in the Discussion. Often, similar lessons can be learned from different barriers and success factors.

Context

In 2001, two leading Flemish academics and a chief of the policy staff published a vision text called “Future vision on sustainable agriculture in Flanders” (Mathijs et al. 2004). At that time, common knowledge about agricultural sustainability was rather limited in Flanders. Therefore, the Government of Flanders decided to found Stedula (2002–2006), the Flemish Policy Research Centre for Sustainable Agriculture, one of the first Flemish initiatives to enhance sustainability in agriculture. The mission of this interuniversity research group was to outline the relevant topics for a sustainable agricultural sector, establish objectives and achievable goals, and develop an appropriate indicator set. This indicator set was intended to enable the government to monitor and evaluate the state of sustainability in agriculture and the efficiency of policies and measures (Nevens et al. 2008).

According to the respondents, society’s knowledge about agricultural sustainability changed. Insufficient communication between Stedula researchers and the wide range of stakeholders directly and indirectly involved in Flemish agriculture, in combination with changing insights of both parties, resulted in divergent expectations of the researchers and stakeholders concerning the outcome of the Stedula research. For example, while Stedula researchers recognized that making sustainability concrete was far more complex than initially expected (e.g., unequal knowledge about the three pillars of sustainability resulted in the absence of good social indicators), they felt that stakeholders still expected a solution to all problems in agriculture within a single tool. We considered these divergent expectations as a barrier to the successful design and implementation of MOTIFS.

The broad mission set for the Stedula research team was another barrier resulting from the originally limited knowledge about sustainable agriculture. As mentioned by the respondents, this probably led to the formulation of very diverse project objectives. When asked about the tool's purposes, respondents described a range of applications. Their answers varied from measuring sustainability on farms, to measuring agricultural sustainability in Flanders, to creating a tool to encourage and motivate farmers to increase on-farm sustainability. This resulted in diverse potential functions and end users for MOTIFS. Successive implementation projects (Schoonhoven 2008, Meul et al. 2009, 2012, De Mey et al. 2011) show an evolution in MOTIFS' function from a monitoring tool to a communication tool, and finally a decision support tool. Fig. 3 illustrates this evolution, giving pros and cons for each function.

The confusion about the tool’s objectives was further complicated by changes in the research team. The most abrupt change occurred in 2007 when the Stedula project transferred to the Institute for Agricultural and Fisheries Research. Within the research team this resulted in both discontinuity of knowledge and changing visions and interpretations of, for example, the tool’s purpose. The limited experience of new researchers with the tool and the lack of documentation about previous decision-making made it difficult for them to agree on the tool’s purpose. The research team coordinators also changed more than once. This was probably one reason for the lack of necessary decision-making about the objectives for tool development.

Decision situation

Stedula’s research started from a vision and a definition of principles about sustainability. Together with the previously mentioned Stedula objectives, they form the decision situation of the development process that influenced the research design.

The principle of equally and simultaneously incorporating the economic, ecological, and social sustainability dimensions (Nevens et al. 2008) resulted in the development of a holistic tool. According to some respondents, the major success factor of such a tool is its ability to raise awareness about the holistic concept of sustainability. Other respondents questioned the value of a holistic tool for farm practice. They argued that some sustainability issues are not suitable for monitoring at the farm level, and that simultaneous implementation of the three dimensions in one tool for management and monitoring purposes is seldom performed in practice. The holistic principle guided the variety of expertise in the transdisciplinary research team, and created a success factor for knowledge building. The research team consisted of agronomists, veterinarians, economists, anthropologists, and geographers. However, the composition of the research team was not proportionate to the three sustainability dimensions: social scientists (1 anthropologist, 2 social geographers) and economists (2) were less represented than scientists with an environmental background (16 agronomists, 1 veterinarian). This imbalance was a barrier that may have contributed to the failed development of social indicators.

The objective of Stedula to provide a scientifically sound interpretation of sustainability probably resulted in a focus on the development of quantitative indicators and the choice to develop a monitoring tool (instead of, for example, guidelines to enhance social sustainability on a farm). According to the respondents, this decision was spurred by the government’s preference for quantifying progress and the prevalence of natural scientists on the team. However, the social scientists questioned this approach because they believed some social issues require a qualitative approach. During the development process, the research team encountered difficulties in the quantitative expression of some sustainability themes. This resulted in an incomplete indicator set, which undermined the tool’s holistic monitoring purpose and led part of the research team to change its vision of the tool’s purpose from monitoring to decision support tool.

The definition of sustainability as a dynamic concept resulted in the aim to develop MOTIFS as a dynamic tool; i.e., a tool that requires continuous adaptation to new data and changing contexts. During the MOTIFS process, the changing context and insights were insufficiently translated into the tool (see also Context). Closer involvement of a broad stakeholder group throughout the process could have increased the researchers’ awareness of a changing context (and stakeholders’ needs).

Research design

In 2005, Stedula produced an update of the original vision text of Mathijs et al. (2004) on agricultural sustainability in Flanders, entitled “On tomorrow's grounds” (Nevens et al. 2008). They used a multistakeholder, transdisciplinary approach involving a wide range of stakeholders directly and indirectly engaged in Flemish agriculture (farmers and representatives of farmers' organizations, scientists, government representatives, suppliers, education, nongovernmental organizations, countryside, consumers, distribution, and food processing [Nevens et al. 2008]). The researchers considered MOTIFS the strategy for realizing this vision's objectives by monitoring farm progress toward integrated sustainability in terms of economic, ecological, and social aspects. However, because tool development had already started in 2002, three years before this broadly supported vision was developed, stakeholders' input was lacking at the very beginning of the MOTIFS development process. This may have created a barrier because researchers missed information about the stakeholders' support for the development of an indicator-based tool for Flemish agriculture, and about any potential demands of the intended end users (farmers or farm advisors) for such a tool.

Experts and intended end users (often representatives of farmers’ organizations) were involved during tool development for indicator selection, design, and validation by participating in advisory boards (see Meul et al. 2008, 2009 for more information). However, experts frequently dominated the advisory boards. The researchers perceived this inequity as a barrier that contributed to difficulties in balancing precision, efficiency, transparency, and user friendliness of an indicator. In addition, some researchers experienced difficulties in managing different stakeholder groups during tool development because of the stakeholders’ varying backgrounds and ways of thinking and communicating.

During the implementation phase of MOTIFS' development, farmers and farm advisors were involved to a greater extent. During the first implementation and validation (Schoonhoven 2008, Meul et al. 2009), farmers indicated they were reluctant to use the tool because they perceived it as not user-friendly: they felt it was time consuming, complicated, unable to deliver concrete farm advice, and sometimes difficult to interpret. However, they appreciated the use of the tool in discussion groups in the presence of an expert. Because of the tool's advanced state, adjustments to the tool itself were not considered an option. As Mulier et al. (2004) and Meul et al. (2009) show, this resulted in a shift in the tool's intended end users and implementation settings (Fig. 3). At the start of the process, the researchers wanted to design a tool to be used by farmers independently. Given the insights above, however, MOTIFS shifted toward a tool to be used by farm advisors to enable farmer group discussions, preferably attended by an expert (De Mey et al. 2011, Meul et al. 2012). Earlier involvement of farmers in the development process could have facilitated adjustments to the tool itself, and not only to the targeted end users and implementation settings.

Farm advisors also experienced difficulties using the tool. Respondents identified the following main barriers to adoption by farm advisors: (1) the necessary data were not always readily available, particularly for ecological and social themes, (2) underlying indicator calculations were complex and not always transparent; for example, assumptions made were not always clear, (3) MOTIFS’ design was not applicable to the whole range of farming systems in Flanders, and (4) guidance for farm advisors to adopt the tool in their practices was lacking.

DISCUSSION

The barriers and success factors of the MOTIFS development process described in the Results were translated into lessons learned (Table 1). Fig. 4 shows the relationships between these lessons. We grouped them into three clusters: (C1) institutional embeddedness, (C2) ownership, and (C3) tool functions.

Cluster C1 addresses the researchers’ role in the development process. The complex sustainability concept requires a diverse research team favoring knowledge creation. The MOTIFS case shows that success depends on shared process visions and objectives of all members of the team. Objectives should be safeguarded during the process by setting aside time for frequent reflection and decision-making, which is affirmed by Neef and Neubert (2011). Barriers in the MOTIFS case indicate that knowledge must be accumulated and promoted within the research team by motivating and documenting decisions. This should also help tackle issues arising from changing team compositions. A team coordinator plays an important role in encouraging decision-making, evaluation, and vision alignment within the team. We refer to these researchers’ roles in the process as an institutional embeddedness enclosing an adaptive learning process.

Cluster C2 refers to the need to create stakeholders' ownership in the process. Farmers must recognize and accept their responsibility in achieving a more sustainable agricultural practice; a tool on its own cannot guarantee sustainable development in agriculture. Creating opportunities for active stakeholder involvement can stimulate this sense of ownership (Voinov and Bousquet 2010, Prost et al. 2012). Various authors (van de Kerkhof 2006, Bohunovsky and Jäger 2008, De Kraker et al. 2009, Friend et al. 2009) have recognized the advantages of stakeholder involvement: (1) increasing awareness and acceptance of perceived problems and of the measures required to solve them, (2) better decision-making that accounts for diversity in viewpoints, (local) knowledge, and information about problems and solutions, (3) increasing support for the assessment outcomes, and (4) learning. The MOTIFS case study illustrates that organizing and managing good stakeholder involvement is a big challenge (Reed 2008, Neef and Neubert 2011). A well thought-out stakeholder selection for the different process stages, taking the intended end users into account, is critical (Weaver and Rotmans 2006). Stakeholder involvement and the creation of ownership require frequent communication between researchers and stakeholders, and evaluation by both. This also argues for facilitating knowledge exchange among stakeholders through the presence of competent facilitators (Reed 2008, Cuéllar-Padilla and Calle-Collado 2011).

Cluster C3 involves lessons with respect to tool functions. The MOTIFS process elicited a multiplicity of tool functions, such as monitoring, communication, and decision support. These functions all require different specifications concerning implementation settings and end users (Bockstaller et al. 2009, Cerf et al. 2012, Prost et al. 2012). Even within one group of end users, needs can differ. For example, as their business evolves toward greater sustainability, farmers need tools with different functions: first a communication tool to raise their awareness, and only later a decision support or monitoring tool. The MOTIFS case revealed difficulties in combining all these functions into one tool, which resulted in shortcomings when applying functions separately (de Ridder et al. 2007, Schader et al. 2012, Marchand et al. 2014). Therefore, it is important to maintain the link with the intended tool use during the development process (Cerf et al. 2012). Another tool characteristic shown in the MOTIFS case is the importance of the tool's implementation setting; for example, the use in discussion groups. The diversity of tool functions, end users, and user settings underpins the importance of defining clear objectives for the tool (Reed 2008).

By processing the data into lessons learned, we observed two types of relationships between the clusters. First, an analogy between C2 and C3 was detected and identified as “richness in diversity” (Fig. 4). Diversity concerns both stakeholders and end users (C2), e.g., farmers, farm advisors, government, or food processors, and tools (C3), emanating from the different possible tool functions and/or the various intended end users. Respondents mentioned the importance of this richness in diversity several times. A second kind of relationship is indicated as actions for researchers (A1 to A3 in Fig. 4). In fact, the lessons learned pose three main challenges for researchers, regarding the clusters ownership (A1) and tool functions (A3) and the link between them (A2).

The first type of action (A1) is “acknowledge mutual learning.” Researchers and experts do not monopolize knowledge; they can and should learn from stakeholders and end users. We can underpin this action with Reed (2008), who advocates the institutionalization of participation in organizational cultures. This supports tool design that fits the intended purpose and end users. Cerf et al. (2012) showed significant differences between designers' and users' interpretations of a problem, and experienced so-called “debriefing sessions” between researchers and end users as learning environments for researchers. They stressed the importance of involving end users early in the process. Berthet et al. (2012) denounced the lack of reflexivity in participatory design methods and highlighted the need to develop reflexive frameworks to analyze and compare them. Grin et al. (2004) described this as reflexive design, or “a process of judgment in which assumptions, knowledge claims, distinctions, roles and identities, normally taken for granted, must be critically scrutinized.” Likewise, Langvad (2012) echoed the call for more case studies seeking a deeper understanding of why processes unfold the way they do.

The second action type (A2) involves “coaching for appropriate tool use” to account for the diversity of stakeholders and tool functions. Several authors (Niemeijer and de Groot 2008, Bockstaller et al. 2009, Gasparatos and Scolobig 2012) note the lack, in the sustainability assessment literature, of guidelines or criteria on how to choose between tools. Future studies on how to link existing sustainability assessment tools to end users for a specific purpose are therefore of paramount importance.

The third action type (A3) originates from the different possible tool functions. It entails structuring the development of different tool types and exploring spin-offs from existing tools. For example, by using existing tools in different contexts, new tool concepts can be devised (Cerf et al. 2012). In this context, Marchand et al. (2014) proposed a complementary use of tools and the development of flexible tools for varying farming situations. However, the scientific basis for linking tools across disciplines and scales is still weak (Ewert et al. 2009).

Now, besides these revelations on lessons learned and actions to take, what is unique in this paper? We distinguish three main issues: what we investigated, the way we investigated it, and the results it yielded. First, our approach to analyzing the development and implementation process of MOTIFS to explain its lack of adoption for practical use has, to our knowledge, scarcely been performed until now. Prost et al. (2012) argued that the agricultural research community is not concerned with the effects of the design methodology on the tools’ suitability and potential applications. However, we show that an evaluation of the development process can deliver valuable insights for future tool development. Second, the Blackstock et al. (2007) framework proved to be an appropriate method for such a process evaluation, although to our knowledge it has scarcely been used. This framework helped delineate the goals of our reflection process and the way to perform it. It helped structure the abundance of information on MOTIFS. Further, the holistic approach recognizing the connection between context, research design, and decision situation (Burgess and Chilvers 2006) revealed the barriers and success factors within the development process. Third, our reflexive research resulted in lessons that could be verified by literature, but additionally revealed relationships between these lessons, and thus uncovered actions for researchers. These normative results make us eager to discuss similar or contradictory experiences with others.

CONCLUSION

Rigorous self-reflexive research on the MOTIFS development process enabled us to identify characteristics that influenced its adoption by farmers and farm advisors. We not only found various factors of failure and success that could be confirmed by similar findings in the literature, but also developed a holistic picture that arranges these elements as lessons learned. The basic structure of this arrangement consists of three clusters of lessons learned. The first cluster, “institutional embeddedness,” refers to the researchers' roles. Crucial here are a diverse team and the clear guiding role of a coordinator. Further, the development process must be supported by sufficient, well-documented reflection and decision moments. The second cluster, “ownership,” addresses the stakeholders' roles and responsibility, which can be strengthened through the well thought-out and active involvement of stakeholders and end users. This requires frequent communication and a suitable facilitation process. The third cluster, “tool functions,” reveals an extensive variety of tools depending on the intended function and end users, calling for clear objectives during tool development and a well-considered (social) setting for tool use.

Our results show that reflection on tool development processes can deliver valuable insights for future tool development and implementation. Additionally, they evoke three types of actions for researchers and future research. Researchers should (1) learn from stakeholders and end users, (2) provide coaching for appropriate tool use, and (3) structure the development of different tool types and explore spin-offs from existing tools. We hope our normative results prompt other researchers to analyze their tool development processes and disseminate their knowledge, to feed a debate on understanding this topic. Furthermore, our proposed actions for researchers can inspire future research; for example, exploring the link between existing sustainability assessment tools, end users, and purposes, or examining the learning relationship between researchers, stakeholders, and end users.

LITERATURE CITED

Bell, S., S. Morse, and R. A. Shah. 2012. Understanding stakeholder participation in research as part of sustainable development. Journal of Environmental Management 101:13–22. http://dx.doi.org/10.1016/j.jenvman.2012.02.004

Berthet, E., C. Barnaud, N. Girard, and J. Labatut. 2012. Toward a reflexive framework to compare collective design methods for farming system innovation. In Proceedings of the 10th IFSA symposium—Producing and reproducing farming systems: new modes of organization for sustainable food systems of tomorrow. 1–4 July, Aarhus, Denmark. [online] URL: http://www.ifsa2012.dk/downloads/WS2_3/Berthet_Labatut_Girard.pdf

Binder, C. R., G. Feola, and J. K. Steinberger. 2010. Considering the normative, systemic and procedural dimensions in indicator-based sustainability assessments in agriculture. Environmental Impact Assessment Review 30:71–81. http://dx.doi.org/10.1016/j.eiar.2009.06.002

Blackstock, K. L., G. J. Kelly, and B. L. Horsey. 2007. Developing and applying a framework to evaluate participatory research for sustainability. Ecological Economics 60:726–742. http://dx.doi.org/10.1016/j.ecolecon.2006.05.014

Bockstaller, C., L. Guichard, O. Keichinger, P. Girardin, M-B. Galan, and G. Gaillard. 2009. Comparison of methods to assess the sustainability of agricultural systems. A review. Agronomy for Sustainable Development 29:223–235. http://dx.doi.org/10.1051/agro:2008058

Bohunovsky, L., and J. Jäger. 2008. Stakeholder integration and social learning in integrated sustainability assessment. 2008 Berlin Conference on the Human Dimensions of Global Environmental Change, International Conference of the Social-Ecological Research Programme. [online] URL: http://userpage.fu-berlin.de/ffu/akumwelt/bc2008/download.htm

Burgess, J., and J. Chilvers. 2006. Upping the ante: a conceptual framework for designing and evaluating participatory technology assessments. Science and Public Policy 33:713–728.

Campens, V., K. De Mey, J. D’hooghe, and F. Marchand. 2010. Melkveecafé: samen grenzen verleggen [Dairy café: pushing boundaries together]. Mededeling ILVO nr. 74. ILVO, Merelbeke, Belgium.

Castoldi, N., and L. Bechini. 2010. Integrated sustainability assessment of cropping systems with agro-ecological and economic indicators in northern Italy. European Journal of Agronomy 32:59–72. http://dx.doi.org/10.1016/j.eja.2009.02.003

Cerf, M., M. Jeuffroy, L. Prost, and J. Meynard. 2012. Participatory design of agricultural decision support tools: taking account of the use situations. Agronomy for Sustainable Development 32:899–910. http://dx.doi.org/10.1007/s13593-012-0091-z

Creswell, J. W. 2003. Research design. Qualitative, quantitative and mixed methods approach. Second edition. Sage Publications, Thousand Oaks, California, USA.

Cuéllar-Padilla, M., and A. Calle-Collado. 2011. Can we find solutions with people? Participatory action research with small organic producers in Andalusia. Journal of Rural Studies 27:372–383. http://dx.doi.org/10.1016/j.jrurstud.2011.08.004

De Kraker, J., C. Kroeze, and P. Kirschner. 2009. Models as social learning tools in participatory integrated assessment. Pages 498–499 in M. van Ittersum, J. Wolf, and G. van Laar, editors. Proceedings of AgSAP Conference 2009, 10–12 March 2009, Egmond aan Zee, The Netherlands. Wageningen University and Research Centre, Wageningen, The Netherlands.

De Mey, K., K. D’Haene, F. Marchand, M. Meul, and L. Lauwers. 2011. Learning through stakeholder involvement in the implementation of MOTIFS: an integrated assessment model for sustainable farming in Flanders. International Journal of Agricultural Sustainability 9:1–14.

de Ridder, W., J. Turnpenny, M. Nilsson, and A. Von Raggamby. 2007. A framework for tool selection and use in integrated assessment for sustainable development. Journal of Environmental Assessment Policy and Management 9:423–441. http://dx.doi.org/10.1142/S1464333207002883

Denzin, N. K., and Y. S. Lincoln, editors. 2000. Handbook of qualitative research. Second edition. Sage publications, Thousand Oaks, California, USA.

Díez, E., and B. McIntosh. 2009. A review of the factors which influence the use and usefulness of information systems. Environmental Modelling & Software 24:588–602. http://dx.doi.org/10.1016/j.envsoft.2008.10.009

Ewert, F., M. van Ittersum, I. Bezlepkina, O. Therond, E. Andersen, H. Belhouchette, C. Bockstaller, F. Brouwer, T. Heckelei, S. Janssen, R. Knapen, M. Kuiper, K. Louhichi, J. Olsson, N. Turpin, J. Wery. J. Wien, and J. Wolf. 2009. A methodology for enhanced flexibility of integrated assessment in agriculture. Environmental Science & Policy 12:546–561. http://dx.doi.org/10.1016/j.envsci.2009.02.005

Friend, M. A., A. M. Dunn, and J. Jennings. 2009. Lessons learnt about effectively applying participatory action research: a case study from the New South Wales dairy industry. Animal Production Science 49:1007–1014. http://dx.doi.org/10.1071/EA08168

Gasparatos, A., and A. Scolobig. 2012. Choosing the most appropriate sustainability assessment tool. Ecological Economics 80:1–7. http://dx.doi.org/10.1016/j.ecolecon.2012.05.005

Girardin, P., C. Bockstaller, and H. Van der Werf. 2000. Assessment of potential impacts of agricultural practices on the environment: the AGRO*ECO method. Environmental Impact Assessment Review 20:227–239. http://dx.doi.org/10.1016/S0195-9255(99)00036-0

Grenz, J., M. Schoch, A. Stämpfli, and C. Thalmann. 2011. RISE 2.0 Field Manual. Swiss College of Agriculture, Zollikofen, Switzerland.

Grin, J., F. Felix, B. Bos, and S. Spoelstra. 2004. Practices for reflexive design: lessons from a Dutch programme on sustainable agriculture. International Journal of Foresight and Innovation Policy 1:126–149.

Halberg, N., H. van der Werf, C. Basset-mens, R. Dalgaard, and I. de Boer. 2005. Environmental assessment tools for the evaluation and improvement of European livestock production systems. Livestock Production Science 96:33–50. http://dx.doi.org/10.1016/j.livprodsci.2005.05.013

Kerselaers, E., E. Rogge, E. Vanempten, L. Lauwers, and G. Van Huylenbroeck. 2013. Changing land use in the countryside: stakeholders’ perception of the ongoing rural planning processes in Flanders. Land Use Policy 32:197–206. http://dx.doi.org/10.1016/j.landusepol.2012.10.016

Langvad, A. 2012. Multi-stakeholder land-water management. How are sustainable measures constructed in practice? In Proceedings of the 10th IFSA symposium—Producing and reproducing farming systems: new modes of organization for sustainable food systems of tomorrow. 1–4 July, Aarhus, Denmark. [online] URL: http://www.ifsa2012.dk/downloads/WS2_3/Langvad.pdf

Marshall, C., and G. B. Rossman. 2006. Designing qualitative research. Fourth edition. Sage Publications, Thousand Oaks, California, USA.

Marchand, F., L. Debruyne, L. Triste, C. Gerrard, S. Padel, and L. Lauwers. 2014. Key characteristics for tool choice in indicator-based sustainability assessment at farm level. Ecology and Society 19(3): 46. http://dx.doi.org/10.5751/ES-06876-190346

Mathijs, E., D. Reheul, and J. Relaes. 2004. Toekomstvisie duurzame landbouw in Vlaanderen [Future vision on sustainable agriculture in Flanders]. [online] URL: http://lv.vlaanderen.be/nlapps/docs/default.asp?id=136

McCown, R. L. 2001. Learning to bridge the gap between science-based decision support and the practice of farming: evolution in paradigms of model-based research and intervention from design to dialogue. Australian Journal of Agricultural Research 52:549–571. http://dx.doi.org/10.1071/AR00119

Meul, M., F. Nevens, and D. Reheul. 2009. Validating sustainability indicators: focus on ecological aspects of Flemish dairy farms. Ecological Indicators 9:284–295. http://dx.doi.org/10.1016/j.ecolind.2008.05.007

Meul, M., S. Van Passel, D. Fremaut, and G. Haesaert. 2012. Higher sustainability performance of intensive grazing versus zero-grazing dairy systems. Agronomy for Sustainable Development 32:629–638. http://dx.doi.org/10.1007/s13593-011-0074-5

Meul, M., S. Van Passel, F. Nevens, J. Dessein, E. Rogge, A. Mulier, and A. Van Hauwermeiren. 2008. MOTIFS: a monitoring tool for integrated farm sustainability. Agronomy for Sustainable Development 28:321–332. http://dx.doi.org/10.1051/agro:2008001

Mortelmans, D. 2007. Handboek kwalitatieve onderzoeksmethoden [Handbook of qualitative research methods]. Acco, Leuven, Belgium.

Mulier, A., F. Nevens, D. Reheul, and E. Mathijs. 2004. Ontwikkeling van een beoordelingssysteem voor de duurzaamheid van de Vlaamse land- en tuinbouw op bedrijfsniveau [Development of an assessment system for the sustainability of Flemish agriculture and horticulture at farm level]. Publicatie 9. Steunpunt Duurzame Landbouw, Merelbeke, Belgium.

Neef, A., and D. Neubert. 2011. Stakeholder participation in agricultural research projects: a conceptual framework for reflection and decision-making. Agriculture and Human Values 28:179–194. http://dx.doi.org/10.1007/s10460-010-9272-z

Nevens, F., J. Dessein, M. Meul, E. Rogge, I. Verbruggen, A. Mulier, S. Van Passel, J. Lepoutre, and M. Hongenaert. 2008. ‘On tomorrow’s grounds’, Flemish agriculture in 2030: a case of participatory translation of sustainability principles into a vision for the future. Journal of Cleaner Production 16:1062–1070. http://dx.doi.org/10.1016/j.jclepro.2007.06.007

Niemeijer D., and R. S. de Groot. 2008. A conceptual framework for selecting environmental indicator sets. Ecological Indicators 8:14–25. http://dx.doi.org/10.1016/j.ecolind.2006.11.012

Patton, M. 2002. Qualitative research and evaluation methods. Sage Publications, Thousand Oaks, California, USA.

Pope, J., D. Annandale, and A. Morrison-Saunders. 2004. Conceptualising sustainability assessment. Environmental Impact Assessment Review 24:595–616. http://dx.doi.org/10.1016/j.eiar.2004.03.001

Prost, L., M. Cerf, and M. Jeuffroy. 2012. Lack of consideration for end-users during the design of agronomic models. A review. Agronomy for Sustainable Development 32:581–594. http://dx.doi.org/10.1007/s13593-011-0059-4

Pülzl, H., I. Prokofieva, S. Berg, E. Rametsteiner, F. Aggestam, and B. Wolfslehner. 2012. Indicator development in sustainability impact assessment: balancing theory and practice. European Journal of Forest Research 131:35–46. http://dx.doi.org/10.1007/s10342-011-0547-8

QSR International. 2010. Nvivo 9: Getting started. QSR International, Melbourne, Australia.

Reed, M. S. 2008. Stakeholder participation for environmental management: a literature review. Biological Conservation 141:2417–2431. http://dx.doi.org/10.1016/j.biocon.2008.07.014

Rigby, D., P. Woodhouse, T. Young, and M. Burton. 2001. Constructing a farm level indicator of sustainable agricultural practice. Ecological Economics 39:463–478. http://dx.doi.org/10.1016/S0921-8009(01)00245-2

Rogge, E., J. Dessein, and H. Gulinck. 2011. Stakeholders perception of attitudes towards major landscape changes held by the public: the case of greenhouse clusters in Flanders. Land Use Policy 28:334–342. http://dx.doi.org/10.1016/j.landusepol.2010.06.014

Schader, C., M. S. Meier, J. Grenz, and M. Stolze. 2012. The trade-off between scope and precision in sustainability assessment of food systems. In Proceedings of the 10th IFSA symposium—Producing and reproducing farming systems: new modes of organization for sustainable food systems of tomorrow. 1–4 July, Aarhus, Denmark. [online] URL: http://www.ifsa2012.dk/downloads/WS6_1/Schader.pdf

Schoonhoven, D. 2008. Sterk met melk! Een handleiding om duurzaam melk te produceren [Strong with milk! A manual for sustainable milk production]. Plaatselijke Groep Leader+ Meetjesland, Eeklo, Belgium. [online] URL: http://www.meetjesland.be/Sterk_met_melk/brochure%20Sterk%20met%20Melk%20DEF.pdf

Strauss, A., and J. Corbin. 1998. Basics of qualitative research: grounded theory procedures and techniques. Sage Publications, Thousand Oaks, California, USA.

van de Kerkhof, M. F. 2006. A dialogue approach to enhance learning for sustainability—a Dutch experiment with two participatory methods in the field of climate change. Integrated Assessment Journal Bridging Sciences & Policy 6:7–34.

Van Meensel, J., L. Lauwers, I. Kempen, J. Dessein, and G. Van Huylenbroeck. 2012. Effect of a participatory approach on the successful development of agricultural decision support systems: the case Pigs2win. Decision Support Systems 54:164–172. http://dx.doi.org/10.1016/j.dss.2012.05.002

Van Passel, S., and M. Meul. 2012. Multilevel and multi-user sustainability assessment of farming systems. Environmental Impact Assessment Review 32:170–180. http://dx.doi.org/10.1016/j.eiar.2011.08.005

Voinov, A., and F. Bousquet. 2010. Modelling with stakeholders. Environmental Modelling & Software 25:1268–1281. http://dx.doi.org/10.1016/j.envsoft.2010.03.007

Weaver, P. M., and J. Rotmans. 2006. Integrated sustainability assessment: what is it, why do it and how? International Journal of Innovation and Sustainable Development 1:284–303.

Woodward, S. J. R., A. J. Romera, W. B. Beskow, and S. J. Lovatt. 2008. Better simulation modelling to support farming system innovation: review and synthesis. New Zealand Journal of Agricultural Research 51:235–252. http://dx.doi.org/10.1080/00288230809510452

Zahm, F., P. Viaux, L. Vilain, P. Girardin, and C. Mouchet. 2008. Assessing farm sustainability with the IDEA method: from the concept of agriculture sustainability to case studies on farms. Sustainable Development 16:271–281. http://dx.doi.org/10.1002/sd.380

Address of Correspondent:
Laure Triste
Burgemeester van Gansberghelaan 115
9820 Merelbeke, Belgium
Laure.triste@ugent.be