Research and policy processes in many fields, such as sustainability and health, are increasingly relying on transdisciplinary cooperation among a multitude of governmental, nongovernmental, and private actors from local to global levels. In the absence of hierarchical chains of command, multistakeholder governance may accommodate conflicting or diverse interests and facilitate collective action, but its effectiveness depends on its capacity to integrate systems, transformation, and target knowledge. Approaches to foster such governance are nascent and quickly evolving, and methodological standards to facilitate comparison and learning from best practice are needed. However, there is currently no evaluation approach that (i) comprehensively assesses the capacity for knowledge integration in multistakeholder governance, (ii) draws on the best available knowledge that is being developed in various fields, and (iii) combines a systematic and transferable methodological design with pragmatic feasibility.
We brought together 20 experts from institutions in nine countries, all working on evaluation approaches for collaborative science–policy initiatives. In a synthesis process that included a 2-day workshop and follow-up work among a core group of participants, we developed a tool for evaluating knowledge integration capacity in multistakeholder governance (EVOLvINC). Its 23 indicators incorporate previously defined criteria and components of transdisciplinary evaluations into a single, comprehensive framework that operationalizes the capacity for integrating systems, target, and transformation knowledge during an initiative’s (a) design and planning processes at the policy formulation stage, (b) organization and working processes at the implementation stage, and (c) sharing and learning processes at the evaluation stage of the policy cycle. EVOLvINC (i) is implemented through a questionnaire, (ii) builds on established indicators where possible, (iii) offers a consistent and transparent semiquantitative scoring and aggregation algorithm, and (iv) uses spider diagrams to visualize results. The tool builds on experience and expertise from both the northern and southern hemispheres and was empirically validated with seven science–policy initiatives in six African and Asian countries.
As a generalized framework, EVOLvINC thus enables a structured reflection on the capacity of multistakeholder governance processes to foster knowledge integration. Its emphasis on dialog and exploration allows adaptation to contextual specificities, highlights relative strengths and weaknesses, and suggests avenues for shaping multistakeholder governance toward mutual learning, capacity building, and strengthened networks. The validation suggests that the adaptive capacity of multistakeholder governance could be best enhanced by considering systems characteristics at the policy formulation stage and fostering adaptive and generic learning at the evaluation stage of the policy cycle.
Research and policy processes in many fields are increasingly relying on transdisciplinary cooperation among academic, governmental, intergovernmental, nongovernmental, and private actors from local to global levels. Examples are United Nations policies on environment and development, such as Agenda 21, the Framework Convention on Climate Change, and the Sustainable Development Goals (United Nations 1992a, b, 2016), but also national initiatives such as energy transition programs (World Energy Council 2014). Similar approaches are called for by the World Health Organization (WHO), and the concept of One Health emerged to integrate human, animal, and environmental health (WHO 1978, Woods and Bresalier 2014, One Health Commission 2018). In the absence of hierarchical chains of command, such collaborations are regulated by multistakeholder governance, a continuous process through which conflicting or diverse interests are accommodated and cooperative action is taken. Multistakeholder governance may include formal institutions and regimes to enforce compliance, as well as informal arrangements that people and institutions either have agreed to or perceive to be in their interest (Burger and Mayer 2003, Haufler 2003, Fidler 2010). Thus, challenges relating to the governance of the science–policy interface are common to various fields, such as sustainability and One Health, and are reflected in converging approaches (Assmuth and Lyytimäki 2015).
To improve multistakeholder governance, program management and policy development require synthesis or knowledge integration. Knowledge integration is defined as the combination of specific bodies of knowledge to form a more complete view of a system, and as an understanding of how different concepts relate to each other and interact in specific contexts (Encyclopedia Britannica 1998, Liu et al. 2008). Rather than settling on a consensus, knowledge integration builds a common framework for understanding the links between the knowledge of others and one’s own. As thought styles or paradigms consist of interactions of social and cognitive perspectives and interests, knowledge integration is closely linked to social transformations. It has even been characterized as ‘controlled confrontation,’ which can be revealing and productive, but needs careful attention and management to avoid a breakdown of stakeholder relations (Hollaender et al. 2008, Pohl et al. 2008). Thus, knowledge integration cannot solely focus on descriptive systems knowledge relating to the state of the system under investigation. To facilitate collaboration and collective action, normative target knowledge relating to the aims and objectives of actors and stakeholders, i.e., desired future system states, needs to be addressed. Prescriptive transformation knowledge relating to procedural insights on how to efficiently transform a current system toward a future one is also crucial (Pohl and Hirsch Hadorn 2007). Intercultural research has further highlighted the relevance of social processes and contextual factors for successful knowledge integration (Bohensky and Maru 2011, Berger-González et al. 2016, Hitziger et al. 2017).
Responding to this challenge, transdisciplinary approaches embrace synthesis or integration of knowledge as a key for facilitating the governance of effective collaborations beyond disciplinary, sectoral, and societal boundaries (Pohl and Hirsch Hadorn 2007, Scholz 2011, Bergmann et al. 2012, Seidl et al. 2013). In the sustainability sciences, adaptive governance was proposed to embed transdisciplinarity in the structures and processes of decision making of multiple actors, networks, organizations, and institutions. It aims at directing governance toward (i) learning to live with change and uncertainty, (ii) combining different types of knowledge, (iii) fostering self-organization, and (iv) nurturing resilience (Folke et al. 2005, Chaffin et al. 2014). In health, knowledge integration was recently introduced and listed as one of the core challenges of 21st century epidemiology (Lee and Brumme 2013, Assmuth and Lyytimäki 2015, Körner et al. 2016, Lebov et al. 2017). Since 2015, the Network for the Evaluation of One Health (NEOH) has engaged with ca. 230 scientists and practitioners from economics, health, environmental, social, and political science in 25 countries to develop a framework for evaluating transdisciplinary initiatives in One Health, which was applied in several European countries (Rüegg et al. 2018b). This laid the ground for a more generic conceptualization of how knowledge integration contributes to effective multistakeholder governance (Hitziger et al. 2018).
These transdisciplinary approaches share many characteristics, yet they are nascent and quickly evolving. Various evaluation frameworks are being developed to enable methodological standards that facilitate comparison and learning from best practice (Jahn and Keil 2015, Hoffmann et al. 2017a, Rüegg et al. 2018a, b), but generally accepted methodologies are mostly lacking, and terminology and conceptualization are highly diverse. Most of these frameworks recognize different phases of collaboration and different actor groups. Jahn and Keil (2015) distinguish preparation, monitoring, and evaluation, which are addressed in different sets of questions to different actor groups. Based on distinctions of different methods and processes for knowledge integration (Rossini 1979, Bergmann et al. 2012, Enengel et al. 2012), Hoffmann et al. (2017a, b) assessed different types of generated knowledge and actor involvement in the different stages of the synthesis process. Yet, in scientific sources, understanding dominates over action-oriented perspectives (Woods and Bresalier 2014, Wolf 2015, Lysaght et al. 2017, Queenan et al. 2017), whereas implementation agencies stress the need for collaborative action over a knowledge-oriented perspective (World Bank 2010, USAID 2018). This is reflected in frameworks that conceptualize a divide between acting and implementing in policy, and understanding and reflecting in science (Prowse et al. 2009, Keune et al. 2013, Assmuth and Lyytimäki 2015).
Thus, there currently is no evaluation approach that (i) comprehensively assesses the capacity for knowledge integration in multistakeholder collaborations, (ii) draws on the best available knowledge that is being developed in various fields, and (iii) combines a systematic and transferable methodological design with pragmatic feasibility. This paper presents EVOLvINC to evaluate the knowledge integration capacity in multistakeholder governance. It is based on an interactive synthesis process with experts on research and policy evaluation from several countries and various disciplinary backgrounds, and rigorous field validation. We discuss how EVOLvINC builds on areas of consensus and how it addresses common challenges.
We used synthesis moderation to elicit and discuss principles, criteria, and indicators, as well as inherent assumptions, benefits, and challenges of existing evaluation approaches. Although it treats conflicts of perspective as a driver of creative and self-reflective group processes, synthesis moderation is primarily based on a shared interest in building mutual understanding and in finding common ground (Janis 1972, Scholz and Tietje 2002, Baron 2008). Organized by the University of Zurich, a preparation phase to liaise with relevant institutions started in winter 2016. A 2-day workshop took place in June 2017 at ETH Zurich’s Transdisciplinarity Lab. It brought together 20 experts from institutions in nine countries, all working on evaluation approaches for collaborative science–policy initiatives (Table 1). Although the participating institutions were from the northern hemisphere, several among them focus on international development, and various participants have strong personal backgrounds in the developing world. The working group thus represented different perspectives, backgrounds, and paradigms from research and practice, but all with deep pertinent expertise.
We used a three-tiered workshop structure, starting with short presentations by each participating institution to introduce the evaluation methods they pursue and the challenges they encounter. To compare the approaches, short question-and-answer sessions were held after every second presentation. Core assumptions, criteria, and indicators were conceptualized. The workshop proceeded iteratively between individual brainstorming sessions to elicit additional concepts, creative small-group discussions on tentative ways to structure and systematize the wealth of relevant input, and moderated plenary sessions for clustering and/or regrouping the concepts. Throughout these sessions, the workshop benefited from a flexible set-up that allowed quick rearrangement according to methodological requirements. Plenty of wall space and bulletin boards were used for grouping and displaying concepts on self-adhesive cards. Two moderators split the moderation time between them to keep the focus on goals, methods, and timeline, manage transitions between sessions, integrate all participants in group discussions, provoke new thoughts, suggest clarifications, and break potential deadlocks. Based on the workshop outcomes and additional literature research, a core group of participants structured and systematized an evaluation approach that takes into account the lessons learned.
The tool consists of a semistructured questionnaire, in which each indicator is operationalized as a question with an associated four-level Likert scale. All indicators rely on questions and scales from the literature where possible, including several that were developed in the context of the developing world. Where no such questions and scales were available, they were developed with a view to using a small number of different scales consistently, and were extensively discussed within peer networks. In some exceptions, tick-box lists of items were used (e.g., applied methods for knowledge integration in the “bridging knowledges” criterion). In these cases, the number of items that are applicable to an initiative is translated into a fourfold score that is consistent with the Likert scales. Responses are scored between zero (not conducive to knowledge integration) and one (highly conducive to knowledge integration). The median of the indicator scores yields each criterion score, and the median of the criterion scores yields each aspect score. The relative influence of individual responses is balanced because similar numbers of criteria and indicators are used in each aspect. This aggregation summarizes overall responses and can be readily displayed, for example in spider diagrams. The questionnaire is employed in semistructured interviews between an external evaluator and initiative representatives. It requires documenting both the qualitative discussion of how an initiative implements a specific indicator, and the scale level that best approximates this response. Determining how specific indicators enable or hamper knowledge integration in an ongoing initiative is thus a discursive process between the initiative representatives and the evaluator.
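The scoring and aggregation steps above can be sketched in a few lines of code. This is a minimal illustration, not the tool itself: the indicator and criterion names are hypothetical, and the rule mapping tick-box counts onto the four levels is our assumption for consistency with the four-level Likert scales; the questionnaire and the preprogrammed Excel® sheet in the supplementary materials remain authoritative.

```python
from statistics import median

def score_tickbox(n_applicable: int, n_items: int) -> float:
    """Translate a tick-box count into one of four levels on [0, 1],
    consistent with the four-level Likert scales (illustrative rule:
    snap the fraction of applicable items to the nearest level)."""
    fraction = n_applicable / n_items
    return min((0.0, 1/3, 2/3, 1.0), key=lambda level: abs(level - fraction))

def aggregate(aspects: dict) -> dict:
    """Aggregate indicator scores (0 = not conducive, 1 = highly conducive):
    the median of indicator scores gives each criterion score, and the
    median of criterion scores gives each aspect score."""
    result = {}
    for aspect, criteria in aspects.items():
        criterion_scores = [median(indicators) for indicators in criteria.values()]
        result[aspect] = median(criterion_scores)
    return result

# Hypothetical responses for two aspects (all names and values illustrative)
responses = {
    "thinking": {
        "problem_framing": [1.0, 2/3, 2/3],
        "systems_thinking": [1/3, 2/3],
    },
    "planning": {
        "actor_selection": [1.0, 1.0],
        "reflexivity": [0.0, 1/3, 1/3],
    },
}
print(aggregate(responses))
```

Because the medians are taken per criterion before the aspect-level median, a criterion with many indicators carries no more weight than one with few, which reflects the balancing of individual responses described above.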
EVOLvINC was validated with seven science–policy initiatives in Armenia, the Republic of Chad (two initiatives), Congo, India, Kenya, and Tanzania. These formative evaluations entailed a three-step discursive process aimed at enabling mutual learning between the initiative and the evaluator. In the first step, background information on the tool and its conceptual approach, indicators, and scoring scales was provided and discussed. In the second step, the tool was applied. Each question was discussed with a view to eliciting how the initiative implements the specific indicator, whether the scale is applicable, and which level best reflects the initiative’s approach. The qualitative answer was recorded, and the indicator level was agreed between the evaluator and the initiative. In the third step, the evaluator aggregated the scores and provided graphical and textual feedback about the initiative’s knowledge integration capacities at all stages of the policy cycle, as well as any specific weaknesses and opportunities that had been elicited throughout the interview. This feedback was again discussed, and a short semistructured interview was held to assess strengths, weaknesses, and lessons learned from the entire three-step application of EVOLvINC. In all seven initiatives, this process was conducted with initiative representatives. Two evaluations entailed additional interviews with participants from different hierarchy levels and specializations, as well as seminars and workshops to discuss the approach and outcomes.
Figure 1 presents the EVOLvINC approach. We start discussing it from the center. In line with previous research, EVOLvINC describes integration between the knowledge of different disciplines, sectors, and cultures as the main aspect that distinguishes multistakeholder initiatives from disciplinary or sectoral approaches (Shiroyama et al. 2012, Seidl et al. 2013, Jahn and Keil 2015). Whether collaborations are conducive to knowledge integration depends on specific capacities at the level of participating individuals, the overall initiative, and its context. Such capacities can be developed and supported by specific methods and processes (Schuttenberg and Guth 2015, van Kerkhoff and Lebel 2015, Hitziger et al. 2018). Although the variability of objectives between different initiatives is a challenge to comparative outcome or impact evaluations (Jahn and Keil 2015, Rüegg et al. 2018b), knowledge integration capacities, as preconditions of successful multistakeholder governance, are independent of an initiative’s specific objectives.
The arrows from the interior to the exterior cycle symbolize the benefits of knowledge integration capacity for each stage of a multistakeholder policy cycle. Integration of target knowledge at the policy formulation stage resolves trade-offs and sets a common vision and a common direction across disciplines, institutions, and sectors. At the implementation stage, integration of transformation knowledge develops and strengthens networks for collective action. At the evaluation stage, integration of systems knowledge enables systemic monitoring to transform observations into narratives and to understand how situations emerge and might unfold in the future. If adequately managed toward integrating the various actors’ and stakeholders’ perspectives and contributions, the policy cycle constitutes an adaptive cycle: a single loop in an iterative learning process in which agendas are collaboratively developed, implemented, assessed, and improved, and all three forms of knowledge are integrated (Hitziger et al. 2018). This contrasts with authors who describe action and understanding as separate processes, taking place within a binary distinction of science and society as clearly distinguishable actor groups, which are merely linked for a defined period of collaboration and integration (Bergmann et al. 2012, Jahn et al. 2012, Lang et al. 2012).
The causal attribution of observed changes to initiative impacts is challenging because research–policy initiatives are contextualized and subject to complex dynamics (Kelley 1973, Rogers 2014, USAID 2018). Nevertheless, adaptive policy cycles enable learning processes that go beyond narrowly framed disciplinary silos, empower participating actors, strengthen trust and networks for collective action, and facilitate concrete improvements of the addressed situations (Chaffin et al. 2014, Belcher et al. 2016).
Comprehensively operationalizing knowledge integration capacities throughout the policy cycle involves multiple aspects, which are often subtle to assess and can interact in complex ways. To address this challenge, we propose a semiquantitative approach that uses a structured set of criteria and indicators for an in-depth discussion between initiatives and evaluators. For the purpose of integrating and visualizing the data, these discussions include searching for consensus on Likert-scale scores. This method accommodates complexity with feasible effort, which would be hampered in a more analytic approach in which variables are defined, operationalized, and objectively measured (see Walter et al. 2007 as an example of such a method). At the same time, a semiquantitative method also enables comparative, detailed, and systematic assessments, which would be lost in an entirely holistic, qualitative, subjective, and unguided reflection process (see Mitev and Venters 2009 as an example of such a method).
The questionnaire commences with a section of purely qualitative questions without any scales. This section is intended to frame the interview, to introduce relevant concepts, and to build some common understanding of the initiative. In its subsequent, semiquantitative sections, EVOLvINC distinguishes six aspects of knowledge integration capacity (Rüegg et al. 2016, 2018b). As displayed in Fig. 1 (bold concepts that directly link to the stages of the policy cycle), holistic and reflective thinking and participative planning relate to integrating target knowledge in agenda setting and policy formulation. Systemic organization and working relate to integrating transformation knowledge in policy implementation. Sharing and learning aspects operationalize the integration of systems knowledge at the evaluation stage of the policy cycle. Each of these six aspects is defined more precisely by 3–5 criteria (linked to main aspects in Fig. 1), and each criterion is operationalized through several indicators, which are formulated as questions (Tables 2–4). All Likert scales are defined such that the evaluator, in discourse with the initiative’s leadership or participants, can understand how the specific indicator was implemented in the initiative. Although each scale addresses a relevant aspect of successful knowledge integration, the discourse might well recognize specific circumstances that render particular scales inapplicable in a certain context, or that provide valid reasons why higher ratings do not indicate higher degrees of knowledge integration. Observed examples of such circumstances are the exclusion of certain stakeholders, as their motivations or behavior would not be conducive to the initiative’s objectives, or their involvement would lead to internal conflicts or power distortions. Privacy, ethical considerations, or intellectual property requirements might justify limits to data sharing.
Stakeholder capacity might warrant limits to employed formalism and analysis. Such cases might justify disregarding a question, adapting a scale, or providing additional explanations. To account for the complexity and diversity of transdisciplinary initiatives and for the trade-offs that are inherent in designing knowledge integration efforts in various contexts, the tool does not prescribe benchmarks. The complete questionnaire and a preprogrammed Excel® sheet for aggregating responses are supplied with the supplementary materials (Appendix 1, 2).
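The spider diagrams used for displaying aggregated results reduce to simple geometry: each aspect sits on an equally spaced spoke, and its score on [0, 1] is the radius along that spoke. As a sketch of that mapping (aspect names and values are illustrative only; any plotting library can render the resulting polygon):

```python
import math

def spider_vertices(scores: dict) -> list:
    """Map aspect scores (on [0, 1]) to (x, y) vertices of a closed
    spider-diagram polygon: one equally spaced spoke per aspect,
    with the score as the radius along that spoke."""
    labels = list(scores)
    n = len(labels)
    points = []
    for i, label in enumerate(labels):
        angle = 2 * math.pi * i / n  # spokes start at 3 o'clock, counterclockwise
        r = scores[label]
        points.append((r * math.cos(angle), r * math.sin(angle)))
    points.append(points[0])  # repeat the first vertex to close the polygon
    return points

# Hypothetical aspect scores for one initiative (values illustrative)
profile = {"thinking": 0.6, "planning": 0.8, "organization": 0.5,
           "working": 0.7, "sharing": 0.4, "learning": 0.5}
vertices = spider_vertices(profile)
```

Because all six aspects share the same [0, 1] scale, the shape of the polygon directly exposes relative strengths and weaknesses across the stages of the policy cycle.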
EVOLvINC thus enables a structured reflection process that directs attention to the capacity of the initiative to foster knowledge integration. It allows comparison between aspects, criteria, and indicators, with the aim of detecting potential strengths and weaknesses, rather than quantitatively rating specific aspects or initiatives. Most criteria, except the ones relating to learning aspects (Table 4), can be assessed in different phases of a transdisciplinary initiative. Prospectively, the framework supports structuring and designing policies or programs toward a high capacity for fostering knowledge integration. Formatively, it aids in assessing an ongoing process, detecting weaknesses, and suggesting avenues for improvement. Retrospectively, it enables an assessment of the strengths and weaknesses of a concluded process.
Table 2 describes the criteria and indicators to assess the thinking and planning aspects of an initiative, i.e., the conceptualization of objectives and strategies and setting up of a platform from which they can be implemented.
Formulating policies on complex issues requires building consensus on policy objectives. Stakeholders and actors will have divergent experiences that translate into equally divergent preferences and expectations. To support finding common ground, this integration of target knowledges requires in-depth understanding of rationales and a mediation of potentially conflicting values, assumptions, and expectations (Baron 2008, Scholz 2011, Aenishaenslin et al. 2013, Hitziger et al. 2018). To evaluate whether the setting is conducive to such integration, EVOLvINC probes for an inclusive design process. It assesses the deliberation spent on defining objectives and a theory of change, and the dialog and negotiation invested in reaching out to the perspectives of relevant stakeholders and actors. An assessment of the current disposition of the system is a precondition for defining the problem to be addressed and for discerning its drivers and causal processes. This requires the integration of systems knowledge from various fields (Meadows 2008, Scholz 2011, Rüegg et al. 2018b). Therefore, the framework probes whether the problem was defined comprehensively and whether core concepts of systems thinking (time delays and feedback loops) were considered. Finally, transformation strategies are assessed for their potential leverage, i.e., how a policy problem is translated into research or development objectives, whether the objectives address dimensions that are relevant to the problem, and which levels of the causal network leading to the problem are addressed (Meadows 1999, Rüegg et al. 2018b).
The assessment of the planning aspect encompasses four criteria. Implementing change in settings that lack hierarchical chains of command requires coordination among multiple decision-making actors (Hemmati et al. 2002). Therefore, EVOLvINC probes for the processes of selecting relevant actors and stakeholders and securing their commitment. Reflexivity is a key requirement for effective multistakeholder collaboration, for adaptive governance, and for knowledge integration (Hirsch Hadorn et al. 2007, Stockholm Resilience Centre 2012, Popa et al. 2015, Berger-González et al. 2016). Therefore, we probe for processes that enable reflection and adaptation at different time scales. Finally, as in the thinking aspect, transformation strategies are assessed for their potential leverage (Rüegg et al. 2018b).
In conjunction, these indicators assess whether the formulation stage of the initiative is able to account for all available sources of knowledge and to establish a platform that is able to balance trade-offs and to determine common vision and direction.
Table 3 summarizes the criteria and indicators to assess the organization and working aspects of the initiative, i.e., operationalizing a platform and executing its strategies. We distinguish between internal team members, who are under the guidance of the initiative’s leadership, and external stakeholders and actors, who contribute according to their relevance to the problem and their self-perceived interest.
Team members, stakeholders, and actors will have different perspectives on societal or natural processes, which translate into envisioning different mechanisms for implementing change (so-called theories of change). Many will bring in-depth implementation-related experience and act as intermediaries or partners in translating initiative policies into action. To assess the integration of this transformation knowledge, we probe for the internal structure of one or several teams, for interteam relations, and for the processes of assigning team and individual objectives (Rüegg et al. 2018b). To assess the intensity of stakeholder and actor relations, we probe the frequency and regularity of their involvement, using Arnstein’s (1969) fourfold citizen participation ladder: (i) unilateral information provision by the initiative to stakeholders, (ii) bilateral information flow in consultation processes, (iii) collaboration in joint task executions that retain decision making within the initiative’s core leadership, and (iv) empowerment, in which decision making is shared with external actors and stakeholders, for example through joint leadership. Crucially, the collaboration of internal teams and external collaborators requires bridging, particularly so in very diverse, conflictive, or intercultural settings (Hollaender et al. 2008, Bohensky and Maru 2011, Berger-González et al. 2016). We probe for the use of diverse integration methods. They include opportunities for unstructured exchange, facilitation of structured dialog, mediation, joint task execution to enable changes of perspective, bridge persons, and the use of specialized modeling tools and boundary objects (Scholz and Tietje 2002, Bergmann et al. 2012, Hitziger et al. 2017). Furthermore, we probe for integration processes between various participating groups, with a focus on who is involved in the efforts to bridge and integrate knowledge (Rossini 1979, Hoffmann et al. 2017a, b).
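The fourfold participation ladder maps naturally onto the four-level scoring used elsewhere in the tool. A minimal sketch follows; the numeric mapping onto [0, 1] is our assumption for consistency with the Likert levels, not a prescription from Arnstein (1969) or the questionnaire itself:

```python
# Arnstein's (1969) fourfold citizen participation ladder as used in
# EVOLvINC's intensity-of-involvement indicator; the scores are an
# illustrative assumption consistent with the four-level Likert scales.
PARTICIPATION_LADDER = {
    "information": 0.0,    # unilateral information provision to stakeholders
    "consultation": 1/3,   # bilateral information flow
    "collaboration": 2/3,  # joint tasks, decisions stay with core leadership
    "empowerment": 1.0,    # shared decision making, e.g., joint leadership
}

def involvement_score(level: str) -> float:
    """Return the Likert-consistent score for a participation level."""
    return PARTICIPATION_LADDER[level]
```

Such an indicator score would then feed into the median-based aggregation from indicators to criteria and aspects.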
Specific skills are required to coordinate and facilitate the implementation of multistakeholder efforts. The importance of power relations is stressed in recent characterizations of transdisciplinarity (Lawrence 2015, Berger-González et al. 2016). Therefore, EVOLvINC probes the balance of influence from academic or professional, gender or socioeconomic, and ethnic, cultural, or religious backgrounds. Coordinating multistakeholder efforts requires particular leadership qualities (Yukl 2012, Nancarrow et al. 2013, Schuttenberg and Guth 2015). We thus probe for the adequacy of the management structure, with a focus on nonhierarchical leadership skills, orientation toward human relations and change, openness, and flexibility. Finally, we probe for conflict resolution processes that clarify perspectives and develop team relations (Lang et al. 2012, Schuttenberg and Guth 2015, Berger-González et al. 2016).
Collectively, these criteria assess the ability to organize an initiative and to strengthen networks for collective action, which are required to cope with the particular challenges of multistakeholder governance.
Table 4 displays the criteria and indicators to assess the sharing and learning aspects that form the basis for integrating systems knowledge. This entails the exchange of information between collaborators and opportunities for learning at individual, team, and organizational levels.
In larger, more institutionalized initiatives, information flows are more compartmentalized, and specific mechanisms to ensure the exchange of information become more important (Chokshi et al. 2006, Tenopir et al. 2011). Therefore, EVOLvINC probes whether data-sharing agreements are in place and whether they are well resourced and used. Data sharing is a sensitive topic in science, but also in many policy areas, which can create a trade-off with the aim of exchanging information to enhance mutual learning and knowledge integration (Rüegg et al. 2018b). Therefore, we probe for procedures to ensure the quality of data and to balance the safety and accessibility of data, and for the range of collaborators with access to data, methods, and results. Finally, we probe for mechanisms to ensure and safeguard long-term institutional memory that renders the gathered knowledge fruitful beyond the limits of the initiative.
The criteria and indicators to assess the learning and capacity-building effects of the collaboration build on a twofold distinction. On the one hand, all learning starts at an individual level, but will not generate impact if it fails to translate into organizational learning. To enable the translation of individual experience into organizational knowledge and procedures, team learning plays an important intermediate role by questioning, discussing, aggregating, validating, and disseminating individual experiences. On the other hand, we draw on the distinction of basic (knowledge acquisition), adaptive (or single-loop), and generative (or double-loop) learning. The first denotes the reception and understanding of information without putting it into practice. The second denotes using new information to improve procedures, competences, and technologies. The third describes learning that leads to challenging and revising fundamental assumptions, beliefs, norms, or paradigms (Levitt and March 1988, Argyris 1999, Rüegg et al. 2018b). Finally, we probe for the role of contextual factors in learning. The direct environment of an initiative consists of the collaborators, actors, and stakeholders, and other institutions with whom it interacts. The general environment is constituted by less directly influencing factors, such as cultural, economic, or societal characteristics (Santa 2014, 2015).
Taken together, these criteria assess the process of transforming observations into narratives of how situations emerge and might evolve in the future. They form the basis of the next iteration of the policy cycle—a phase of revising previous policies in the light of new experiences and defining a strategy for adaptation and improvement.
EVOLvINC synthesizes previously defined key principles in transdisciplinary evaluation (Table 5). It addresses the variability of goals, criteria, and indicators at the policy formulation stage by assessing the integration of target knowledge from diverse perspectives and beliefs (Klein 2008, Belcher et al. 2016). The management of iterative, social, and cognitive integration processes permeates the entire approach. Transparency, defined as processes to disclose sources and involve actors and stakeholders (Sarkki et al. 2015), is assessed in various indicators relating to stakeholder involvement and the sharing of information. Effectiveness and impact are operationalized through a broad conceptualization of learning that includes the creation of knowledge, its practical application, and the building of social capacity, even though content-specific impacts are excluded from this process evaluation framework. Credibility refers to the (perceived) quality, validity, and scientific adequacy of the exchanged knowledge and includes the credibility of both the knowledge production processes and the knowledge holders (Sarkki et al. 2015). It is addressed by focusing on capacities and structured processes for knowledge integration that engage relevant and competent actors and stakeholders and trigger reflective, transparent, and inclusive sharing and learning processes. Legitimacy is understood as (perceived) transparency, fairness, and balance in including other stakeholders and diverging values, beliefs, and interests (Sarkki et al. 2015). As one of the key ambitions of multistakeholder governance, it is addressed in multiple indicators that relate to collaborative processes at all stages of the policy cycle. Finally, relevance is understood as the responsiveness of the initiative to policy and societal needs (Sarkki et al. 2015).
EVOLvINC addresses relevance by transcending the science–policy divide and conceptualizing the entire science–policy process as a joint effort in adaptive multistakeholder governance of societally relevant issues.
The pilot evaluations led to revisions of the tool, questionnaire, indicators, and scales, but demonstrated the reliability and validity of the approach in intercultural contexts. A main trade-off was found between focusing the evaluation process on a small number of initiative leaders who have sufficient insight to comprehensively address all sections of the questionnaire, and considering a wider range of perspectives from different initiative participants, who might have only partial insight or may not be familiar with the level of conceptual and abstract understanding required of interviewees.
Every initiative appreciated the discursive reflection on frequently subtle aspects of multistakeholder governance, on their capacity to foster knowledge integration, and on the derived recommendations for improvement. Each evaluation triggered important reflections that the initiatives intend to apply in the future. Several criteria and indicators were singled out as particularly helpful, relevant, or thought provoking by at least one initiative, and lessons learned were derived from all phases of the evaluation process, from the conceptual background, and from each of the six assessed aspects. Examples of such insights include the need for systemic analyses of the researched problem and of initiative impacts, for additional activities to enhance an initiative’s leverage potential and its match to its context, for stronger emphasis on knowledge integration methods and processes in research designs, for enhanced emphasis on the formal and informal processes of involving stakeholders and exchanging knowledge with them, for conflict resolution and leadership skills, and for mechanisms to foster adaptive and generative learning among participants of the initiative.
Scores were highest at the policy implementation stage (organization and working), intermediate at the policy formulation stage (thinking and planning), and lowest at the policy evaluation stage (sharing and learning). This finding suggests an emphasis on the integration of transformation knowledge in implementation-related aspects of the policy cycle. As adaptive multistakeholder governance requires integrating different forms of knowledge at each stage of the policy cycle (Hitziger et al. 2018), this finding also highlights challenges, in particular the consideration of systems characteristics at the policy formulation stage, and adaptive and generative learning at the evaluation stage of the policy cycle. Further work is required to deepen these findings, but our research suggests that the adaptive capacity of multistakeholder governance could best be enhanced by focusing on these indicators and criteria.
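The stage-level comparison above rests on the tool's semiquantitative aggregation: indicator scores are averaged within each stage of the policy cycle, and the stage means are compared, e.g., as radii of a spider diagram. The sketch below illustrates the general idea only; the 0–3 ordinal scale, the grouping into four indicators per stage, and all scores are invented for illustration and do not reproduce EVOLvINC's actual scoring algorithm or data.

```python
# Hypothetical illustration of semiquantitative score aggregation by
# policy-cycle stage. All numbers are invented; an assumed 0-3 ordinal
# scale is used per indicator.

def stage_means(scores_by_stage):
    """Average the indicator scores within each policy-cycle stage."""
    return {stage: sum(vals) / len(vals) for stage, vals in scores_by_stage.items()}

# Invented indicator scores for the three stages of the policy cycle
scores = {
    "formulation (thinking and planning)": [2, 1, 2, 2],
    "implementation (organization and working)": [3, 2, 3, 2],
    "evaluation (sharing and learning)": [1, 1, 2, 1],
}

means = stage_means(scores)

# Rank the stages by mean score; the resulting profile could be plotted
# as a spider (radar) diagram with one axis per stage or criterion.
for stage, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{stage}: {m:.2f}")
```

With these invented scores, the implementation stage scores highest and the evaluation stage lowest, mirroring the pattern reported above.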
Evaluating complex multistakeholder initiatives at the science–policy interface is a challenge for which there are currently no frameworks that are accepted within or across disciplinary communities. We propose EVOLvINC as a tool to navigate this complexity. EVOLvINC conceptualizes the capacity for integration of target, transformation, and systems knowledge as a key requirement for multistakeholder policy formulation, implementation, and evaluation. It uses previously defined criteria and components of transdisciplinary evaluation approaches. It bridges the divides between understanding and action, science and policy, disciplinary and sectoral backgrounds, and between multiple actors and stakeholders. Thus, it provides a single framework for an adaptive learning process in which agendas are iteratively developed, implemented, assessed, and improved. EVOLvINC offers a comprehensive, semiquantitative approach to assess (i) the ability to account for all available sources of knowledge and to establish a platform that balances trade-offs and determines common vision and direction at the policy formulation stage, (ii) the ability to efficiently organize and strengthen networks for collective action at the implementation stage, and (iii) the ability to transform complex observations into narratives of how situations emerge and might evolve in the future. As a generalized framework, EVOLvINC enables a structured reflection process between evaluators and initiative leadership to monitor and enhance knowledge integration capacity. This emphasis on dialog and exploration allows an adaptation to contextual specificities and comparison between aspects, criteria, and indicators, with the aim to shape multistakeholder governance toward mutual learning, capacity building, and strengthened networks. 
Although conceptualized in Europe, EVOLvINC builds on expertise and experience from both the northern and southern hemispheres and was validated with seven formative evaluations in six African and Asian countries. All initiatives valued EVOLvINC as insightful, appreciated the process of reflecting on frequently subtle aspects of knowledge integration, and derived lessons for their future activities. The validation suggests that the adaptive capacity of multistakeholder governance could best be enhanced by considering systems characteristics at the policy formulation stage and by fostering adaptive and generative learning at the evaluation stage of the policy cycle.
This article is based upon work from COST Action TD1404 (Network for Evaluation of One Health, NEOH), supported by COST (European Cooperation in Science and Technology), and by the State Secretariat for Education, Research and Innovation via the Swiss National Science Foundation (grant IZCNZ0-174587). The transdisciplinary workshop was hosted by ETH Zurich's Transdisciplinarity Lab and cofunded by the University of Zurich's Veterinary Epidemiology Group.
Aenishaenslin, C., V. Hongoh, H. D. Cisse, A. G. Hoen, K. Samoura, P. Michel, J.-P. Waaub, and D. Belanger. 2013. Multi-criteria decision analysis as an innovative approach to managing zoonoses: results from a study on Lyme disease in Canada. BMC Public Health 13: 897. https://doi.org/10.1186/1471-2458-13-897
Argyris, C. 1999. On organizational learning. Second edition. Blackwell Business Publishing, Malden, Massachusetts, USA; Oxford, UK.
Arnstein, S. R. 1969. A ladder of citizen participation. Journal of the American Institute of Planners 35:216–224. https://doi.org/10.1080/01944366908977225
Assmuth, T., and J. Lyytimäki. 2015. Co-constructing inclusive knowledge within converging fields: environmental governance and health care. Environmental Science and Policy 51:338–350. https://doi.org/10.1016/j.envsci.2014.12.022
Baron, J. 2008. Thinking and deciding. Fourth edition. Cambridge University Press, New York, New York, USA. https://doi.org/10.1017/CBO9780511840265
Belcher, B. M., K. E. Rasmussen, M. R. Kemshaw, and D. A. Zornes. 2016. Defining and assessing research quality in a transdisciplinary context. Research Evaluation 25(1):1–17. https://doi.org/10.1093/reseval/rvv025
Berger-González, M., M. Stauffacher, J. Zinsstag, P. Edwards, and P. Krütli. 2016. Intercultural research on cancer healing systems between biomedicine and the Maya of Guatemala: a transdisciplinary approach to induce reciprocal reflexivity in a multi-epistemological setting. Qualitative Health Research 26(1):77–91. https://doi.org/10.1177/1049732315617478
Bergmann, M., T. Jahn, T. Knobloch, W. Krohn, C. Pohl, and E. Schramm. 2012. Methods for transdisciplinary research: a primer for practice. Campus Verlag, Frankfurt, Germany; New York, New York, USA.
Bohensky, E. L., and Y. Maru. 2011. Indigenous knowledge, science, and resilience: what have we learned from a decade of international literature on “integration”? Ecology and Society 16(4): 6. https://doi.org/10.5751/ES-04342-160406
Burger, D., and C. Mayer. 2003. Making sustainable development a reality: the role of social and ecological standards. Deutsche Gesellschaft für Technische Zusammenarbeit (GTZ), Eschborn, Germany.
Chaffin, B. C., H. Gosnell, and B. A. Cosens. 2014. A decade of adaptive governance scholarship: synthesis and future directions. Ecology and Society 19(3): 56. https://doi.org/10.5751/ES-06824-190356
Chokshi, D., M. Parker, and D. Kwiatkowski. 2006. Data sharing and intellectual property in a genomic epidemiology network: policies for large-scale research collaboration. Bulletin of the World Health Organization 84:382–387. https://doi.org/10.2471/BLT.06.029843
Encyclopedia Britannica. 1998. Synthesis. [online] URL: https://www.britannica.com/topic/synthesis-philosophy.
Enengel, B., A. Muhar, M. Penker, B. Freyer, S. Drlik, and F. Ritter. 2012. Co-production of knowledge in transdisciplinary doctoral theses on landscape development—an analysis of actor roles, and knowledge types in different research phases. Landscape and Urban Planning 105:106–117. https://doi.org/10.1016/j.landurbplan.2011.12.004
Fidler, D. 2010. The challenges of global health governance. Council on Foreign Relations, New York, New York, USA.
Folke, C., T. Hahn, P. Olsson, and J. Norberg. 2005. Adaptive governance of social–ecological systems. Annual Review of Environment and Resources 30(1):441–473. https://doi.org/10.1146/annurev.energy.30.050504.144511
Haufler, V. 2003. New forms of governance: certification regimes as social regulations of the global market. Pages 237–248 in E. Meidinger, C. Elliot, and G. Oesten, editors. Social and political dimensions of forest certification. Sieber, Kaltenengers, Germany.
Hemmati, M., F. D. Enayti, and J. McHarry. 2002. Multistakeholder processes for governance and sustainability. Earthscan, London, UK.
Hirsch Hadorn, G., H. Hoffmann-Riem, S. Biber-Klemm, W. Grossenbacher-Mansuy, D. Joye, C. Pohl, U. Wiesmann, and E. Zemp. 2007. Handbook of transdisciplinary research. Springer, Dordrecht, The Netherlands. https://doi.org/10.1007/978-1-4020-6699-3
Hitziger, M., M. Berger-González, E. Gharzouzi, D. Ochaíta Santizo, R. Solis Miranda, A. Aguilar Ferro, A. Vides-Porras, M. Heinrich, P. Edwards, and P. Krütli. 2017. Patient-centered boundary mechanisms to foster intercultural partnerships in health care: a case study in Guatemala. Journal of Ethnobiology and Ethnomedicine 13(1): 44. https://doi.org/10.1186/s13002-017-0170-y
Hitziger, M., R. Esposito, M. Canali, M. Aragrande, B. Häsler, and S. Rüegg. 2018. Knowledge integration in One Health policy formulation, implementation and evaluation. Bulletin of the World Health Organization 96(3):211–218. https://doi.org/10.2471/BLT.17.202705
Hoffmann, S., C. Pohl, and J. Hering. 2017a. Exploring transdisciplinary integration within a large research program: empirical lessons from four thematic synthesis processes. Research Policy 46(3):678–692. https://doi.org/10.1016/j.respol.2017.01.004
Hoffmann, S., C. Pohl, and J. Hering. 2017b. Methods and procedures of transdisciplinary knowledge integration. Ecology and Society 22(1): 27.
Hollaender, K., M. C. Loibl, and A. Wilts. 2008. Management. Pages 385–398 in G. Hirsch Hadorn, editor. Handbook of transdisciplinary research. Springer, Dordrecht, The Netherlands; London, UK. https://doi.org/10.1007/978-1-4020-6699-3_25
Jahn, T., M. Bergmann, and F. Keil. 2012. Transdisciplinarity: between mainstreaming and marginalization. Ecological Economics 79:1–10. https://doi.org/10.1016/j.ecolecon.2012.04.017
Jahn, T., and F. Keil. 2015. An actor-specific guideline for quality assurance in transdisciplinary research. Futures 65:195–208. https://doi.org/10.1016/j.futures.2014.10.015
Janis, I. L. 1972. Victims of groupthink: a psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin, Boston, Massachusetts, USA. https://doi.org/10.1163/2468-1733_shafr_sim010150024
Kelley, H. 1973. The processes of causal attribution. American Psychologist 28(2):107–128. https://doi.org/10.1037/h0034225
Keune, H., C. Kretsch, G. de Blust, M. Gilbert, L. Flandroy, K. van den Berge, V. Versteirt, T. Hartig, L. de Keersmaecker, H. Eggermont, D. Brosens, J. Dessein, S. Vanwambeke, A. H. Prieur-Richard, H. Wittmer, A. van Herzele, C. Linard, P. Martens, E. Mathijs, I. Simoens, P. van Damme, F. Volckaert, P. Heyman, and T. Bauler. 2013. Science–policy challenges for biodiversity, public health and urbanization: examples from Belgium. Environmental Research Letters 8(2): 025015. https://doi.org/10.1088/1748-9326/8/2/025015
Klein, J. T. 2008. Evaluation of interdisciplinary and transdisciplinary research. A literature review. American Journal of Preventive Medicine 35:116–123. https://doi.org/10.1016/j.amepre.2008.05.010
Körner, M., C. Lippenberger, S. Becker, L. Reichler, C. Müller, and L. Zimmermann. 2016. Knowledge integration, teamwork and performance in health care. Journal of Health Organization and Management 30(2):227–243. https://doi.org/10.1108/JHOM-12-2014-0217
Lang, D. J., A. Wiek, M. Bergmann, M. Stauffacher, P. Martens, P. Moll, M. Swilling, and C. J. Thomas. 2012. Transdisciplinary research in sustainability science: practice, principles, and challenges. Sustainability Science 7:25–43. https://doi.org/10.1007/s11625-011-0149-x
Lawrence, R. J. 2015. Advances in transdisciplinarity: epistemologies, methodologies and processes. Futures 65:1–9. https://doi.org/10.1016/j.futures.2014.11.007
Lebov, J., K. Grieger, D. Womack, D. Zaccaro, N. Whitehead, B. Kowalcyk, and P. MacDonald. 2017. A framework for One Health research. One Health 3:44–50. https://doi.org/10.1016/j.onehlt.2017.03.004
Lee, K., and Z. L. Brumme. 2013. Operationalizing the One Health approach: the global governance challenges. Health Policy and Planning 28(7):778–785. https://doi.org/10.1093/heapol/czs127
Levitt, B., and J. March. 1988. Organizational learning. Annual Review of Sociology 14:319–340. https://doi.org/10.1146/annurev.so.14.080188.001535
Liu, O. L., H.-S. Lee, C. Hofstetter, and M. Linn. 2008. Assessing knowledge integration in science: construct, measures, and evidence. Educational Assessment 13(1):33–55. https://doi.org/10.1080/10627190801968224
Lysaght, T., B. Capps, M. Bailey, D. Bickford, R. Coker, Z. Lederman, S. Watson, and P. A. Tambyah. 2017. Justice is the missing link in one health: results of a mixed methods study in an urban city state. PLoS ONE 12(1):1–11. https://doi.org/10.1371/journal.pone.0170967
Meadows, D. 1999. Leverage points: places to intervene in a system. Sustainability Institute, Hartland, Vermont, USA.
Meadows, D. H. 2008. Thinking in systems: a primer. D. Wright, editor. Chelsea Green Publishing Company, White River Junction, Vermont, USA.
Mitev, N., and W. Venters. 2009. Reflexive evaluation of an academic–industry research collaboration: can mode 2 management research be achieved? Journal of Management Studies 46(5):733–754. https://doi.org/10.1111/j.1467-6486.2009.00846.x
Nancarrow, S., A. Booth, S. Ariss, T. Smith, P. Enderby, and A. Roots. 2013. Ten principles of good interdisciplinary team work. Human Resources for Health 11: 19. https://doi.org/10.1186/1478-4491-11-19
One Health Commission. 2018. World health through collaboration. [online] URL: https://www.onehealthcommission.org/en/why_one_health/what_is_one_health/
Pohl, C., and G. Hirsch Hadorn. 2007. Principles for designing transdisciplinary research. Oekom Verlag, Munich, Germany.
Pohl, C., L. van Kerkhoff, G. Hirsch Hadorn, and G. Bammer. 2008. Integration. Pages 411–424 in G. Hirsch-Hadorn, H. Hoffmann-Riem, S. Biber-Klemm, W. Grossenbacher-Mansuy, D. Joye, C. Pohl, U. Wiesmann, and E. Zemp, editors. Handbook of transdisciplinary research. Springer Science and Business Media, Berlin/Heidelberg, Germany. https://doi.org/10.1007/978-1-4020-6699-3_27
Popa, F., M. Guillermin, and T. Dedeurwaerdere. 2015. A pragmatist approach to transdisciplinarity in sustainability research: from complex systems theory to reflexive science. Futures 65:45–56. https://doi.org/10.1016/j.futures.2014.02.002
Prowse, S. J., N. Perkins, B. Hon, and H. Field. 2009. Strategies for enhancing Australia’s capacity to respond to emerging infectious diseases. Veterinaria italiana 45(1):67–78.
Queenan, K., J. Garnier, L. Rosenbaum Nielsen, S. Buttigieg, D. de Meneghi, M. Holmberg, J. Zinsstag, S. Rüegg, B. Häsler, and R. Kock. 2017. Roadmap to a One Health agenda 2030. CAB Reviews: Perspectives in Agriculture, Veterinary Science, Nutrition and Natural Resources 12: 014. https://doi.org/10.1079/pavsnnr201712014
Rogers, P. 2014. Overview: strategies for causal attribution. UNICEF Office of Research, Florence, Italy. [online] URL: https://www.unicef-irc.org/publications/pdf/brief_6_overview_strategies_causal_attribution_eng.pdf
Rossini, F. J. 1979. Frameworks for integrating interdisciplinary research. Research Policy 8(1):70–79. https://doi.org/10.1016/0048-7333(79)90030-1
Rüegg, S. R., B. Häsler, and J. Zinsstag, editors. 2018a. Integrated approaches to health: a handbook for the evaluation of One Health. Wageningen Academic Publishers, Wageningen, The Netherlands. [online] URL: https://www.wageningenacademic.com/doi/pdf/10.3920/978-90-8686-875-9
Rüegg, S., B. J. McMahon, B. Häsler, R. Esposito, L. Rosenbaum Nielsen, C. Ifejika Speranza, T. Ehlinger, M. Peyre, M. Aragrande, J. Zinsstag, P. Davies, A. Mihalca, J. Rushton, L. Carmo, D. De Meneghi, M. Canali, M.-E. Filippitzi, F. Goutard, V. Ilieski, D. Milicevic, H. O’Shea, M. Radeski, and A. Lindberg. 2016. A blueprint to evaluate One Health. Frontiers in Public Health 5: 20. https://doi.org/10.3389/fpubh.2017.00020
Rüegg, S. R., L. Rosenbaum Nielsen, S. Buttigieg, M. Santa, M. Aragrande, M. Canali, T. Ehlinger, I. Chantziaras, E. Boriani, M. Radeski, M. Bruce, K. Queenan, and B. Häsler. 2018b. A systems approach to evaluate One Health initiatives. Frontiers in Veterinary Science 5: 23. https://doi.org/10.3389/fvets.2018.00023
Santa, M. 2014. Framework for multivariate continuous transformation towards learning organization. Pantheon-Sorbonne University, Paris, France.
Santa, M. 2015. Learning organisation review—a “good” theory perspective. The Learning Organization 22(5):242–270. https://doi.org/10.1108/TLO-12-2014-0067
Sarkki, S., R. Tinch, J. Niemelä, U. Heink, K. Waylen, J. Timaeus, J. Young, A. Watt, C. Neßhöver, and S. van den Hove. 2015. Adding “iterativity” to the credibility, relevance, legitimacy: a novel scheme to highlight dynamic aspects of science–policy interfaces. Environmental Science and Policy 54:505–512. https://doi.org/10.1016/j.envsci.2015.02.016
Scholz, R. W. 2011. Environmental literacy in science and society: from knowledge to decisions. Cambridge University Press, Cambridge, UK; New York, New York, USA. https://doi.org/10.1017/CBO9780511921520
Scholz, R. W., and O. Tietje. 2002. Embedded case study methods: integrating quantitative and qualitative knowledge. Sage Publications, Thousand Oaks, California, USA.
Schuttenberg, H. Z., and H. K. Guth. 2015. Seeking our shared wisdom: a framework for understanding knowledge coproduction and coproductive capacities. Ecology and Society 20(1): 15. https://doi.org/10.5751/ES-07038-200115
Schwarz, G. 2010. Konfliktmanagement—Konflikte erkennen, analysieren, lösen. Gabler/GWV Fachverlage, Wiesbaden, Germany. https://doi.org/10.1007/978-3-8349-9251-2_12
Seidl, R., F. S. Brand, M. Stauffacher, P. Krütli, Q. B. Le, A. Spörri, G. Meylan, C. Moser, M. Berger-González, and R. W. Scholz. 2013. Science with society in the anthropocene. Ambio 42(1):5–12. https://doi.org/10.1007/s13280-012-0363-5
Shiroyama, H., M. Yarime, R. W. Scholz, and A. Ulrich. 2012. Governance for sustainability: knowledge integration and multi-actor dimensions in risk management. Sustainability Science 7:45–55. https://doi.org/10.1007/s11625-011-0155-z
Simon, F. 2012. Einführung in die Systemtheorie des Konflikts. Carl-Auer Systeme Verlag, Heidelberg, Germany.
Stockholm Resilience Centre. 2012. Adaptive governance: governance of social–ecological systems in an increasingly uncertain world needs to be collaborative, flexible and learning-based. Research Insights 3:1–3.
Tenopir, C., S. Allard, K. Douglass, A. Aydinoglou, L. Wu, E. Read, M. Manoff, and M. Frame. 2011. Data sharing by scientists: practices and perceptions. PLoS ONE 6(6): e21101. https://doi.org/10.1371/journal.pone.0021101
United Nations. 1992a. Agenda 21. [online] URL: https://sustainabledevelopment.un.org/content/documents/Agenda21.pdf
United Nations. 1992b. UN framework convention on climate change. [online] URL: http://unfccc.int/files/essential_background/background_publications_htmlpdf/application/pdf/conveng.pdf
United Nations. 2016. Sustainable Development Goals. [online] URL: http://www.un.org/sustainabledevelopment/sustainable-development-goals/
USAID. 2018. Evaluating global development alliances: an analysis of USAID’s public–private partnerships for development. A synopsis. USAID, Washington, D.C., USA. [online] URL: https://www.usaid.gov/sites/default/files/documents/1880/GDA_Evaluation_Synopsis.pdf
van Kerkhoff, L. E., and L. Lebel. 2015. Coproductive capacities: rethinking science–governance relations in a diverse world. Ecology and Society 20(1): 14. https://doi.org/10.5751/ES-07188-200114
Walter, A. I., S. Helgenberger, A. Wiek, and R. W. Scholz. 2007. Measuring societal effects of transdisciplinary research projects: design and application of an evaluation method. Evaluation and Program Planning 30(4):325–338. https://doi.org/10.1016/j.evalprogplan.2007.08.002
Wolf, M. 2015. Is there really such a thing as “one health”? Thinking about a more than human world from the perspective of cultural anthropology. Social Science and Medicine 129:5–11. https://doi.org/10.1016/j.socscimed.2014.06.018
Woods, A., and M. Bresalier. 2014. One health, many histories. The Veterinary Record 174(26):650–654. https://doi.org/10.1136/vr.g3678
World Bank. 2010. People, pathogens and our planet, Volume 1: towards a One Health approach for controlling zoonotic diseases. [online] URL: http://siteresources.worldbank.org/INTARD/Resources/PPP_Web.pdf
World Energy Council. 2014. Global energy transitions: a comparative analysis of key countries and implications for the international energy debate. [online] URL: https://www.atkearney.com/documents/10192/5293225/Global+Energy+Transitions.pdf/220e6818-3a0a-4baa-af32-8bfbb64f4a6b
World Health Organization (WHO). 1978. Report of the International Conference on Primary Health Care. World Health Organization, Alma-Ata, USSR. [online] URL: http://www.who.int/publications/almaata_declaration_en.pdf
Yukl, G. 2012. Effective leadership behavior: what we know and what questions need more attention. Academy of Management Perspectives 26(4):66–85. https://doi.org/10.5465/amp.2012.0088