

Copyright © 2006 by the author(s). Published here under license by The Resilience Alliance.

The following is the established format for referencing this article:
Janssen, M. A., and E. Ostrom. 2006. Empirically based, agent-based models. Ecology and Society 11(2): 37. [online] URL: http://www.ecologyandsociety.org/vol11/iss2/art37/


Guest Editorial, part of Special Feature on Empirical based agent-based modeling

Empirically Based, Agent-Based Models

Marco A. Janssen 1 and Elinor Ostrom 2


1Arizona State University, 2Indiana University and Arizona State University



ABSTRACT


There is an increasing drive to combine agent-based models with empirical methods. An overview is provided of the various empirical methods that are used for different kinds of questions. Four categories of empirical approaches are identified in which agent-based models have been empirically tested: case studies, stylized facts, role-playing games, and laboratory experiments. We discuss how these different types of empirical studies can be combined. The various ways empirical techniques are used illustrate the main challenges of contemporary social sciences: (1) how to develop models that are generalizable and still applicable in specific cases, and (2) how to scale up the processes of interactions of a few agents to interactions among many agents.


Key words: Agent-based models; empirical applications; social science methods



INTRODUCTION

In recent years, agent-based modeling (ABM) has frequently been considered a promising quantitative methodology for social science research (see Janssen 2002, Parker et al. 2003, Tesfatsion and Judd 2006). Agent-based modeling is the computational study of social systems as evolving systems of autonomous interacting agents. The technical methodology of computational models of multiple interacting agents was initially developed in the 1940s, when John von Neumann began working on cellular automata (von Neumann 1966). A cellular automaton is a set of cells, each of which can be in one of multiple predefined states, such as forest or farmland. Changes in the state of a cell depend on the prior states of the cell itself and of its neighboring cells. Cellular automata became more popular in light of a creative application by John Conway, named the “Game of Life” (Gardner 1970), which illustrated how following simple rules of local interaction could lead to the emergence of complex global patterns.
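Conway’s local rule is compact enough to sketch directly. The minimal Python illustration below (the coordinate-set representation is just one convenient encoding, not part of the original formulation) applies the standard Game of Life rules: a live cell survives with two or three live neighbors, and a dead cell becomes live with exactly three.

```python
from collections import Counter

def step(live):
    """One update of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # → True
```

Running the “blinker” pattern shows, in miniature, the kind of emergent regularity the text describes: a stable oscillation arises from purely local rules.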

In contrast to cellular automata, ABMs enable a researcher to examine the heterogeneity of agents beyond their specific location and history. A pioneering contribution is the work of economist Thomas Schelling (1971, 1978) who developed an early ABM by moving pennies and dimes on a chessboard according to certain simple rules. The surprising result of his model was that, although each agent tolerated neighbors who were different (being a penny or a dime), the population ended up in segregated groups. Political scientist Robert Axelrod (1984) made a major contribution with his repeated Prisoner’s Dilemma (PD) tournaments. Axelrod invited scholars from all over the world to submit strategies that would be programmed to play repeated PD games against other submitted strategies. The winner in two successive experiments (submitted by Anatol Rapoport) was the simple rule, Tit-for-Tat. Players in a two-person repeated PD who followed this strategy would start with cooperation. In subsequent rounds, one player would then copy the action of the other player during the previous round. Thus, if both players continued to cooperate in any one round, both would continue to cooperate in the next round until one defected, leading to a defection by the other. After the tournament, using an agent-based simulation, Axelrod showed why Tit-for-Tat strategies can evolve as the dominant strategy starting from various distributions of initial strategy populations.
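The Tit-for-Tat rule described above can be stated in a few lines of code. This sketch uses the standard payoff values from Axelrod’s tournament (mutual cooperation 3 each, mutual defection 1 each, temptation 5, sucker 0); the function names and round count are ours.

```python
# Repeated Prisoner's Dilemma with the standard Axelrod payoffs.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate in the first round, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []          # each player's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(b)             # A remembers B's move, and vice versa
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # sustained cooperation: (30, 30)
print(play(tit_for_tat, always_defect))   # TFT loses only the first round: (9, 14)
```

The second run illustrates why the strategy is robust: against a defector, Tit-for-Tat is exploited only once before retaliating.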

The initial contributions of ABMs were thus theoretical and abstract. They showed how simple rules of interaction could explain macro-level phenomena such as spatial patterns and levels of cooperation. During the last 20 years, the number of publications on simulations of populations of interacting agents who play games and exchange information has exploded. Although most models have been inspired by observation of real biological and social systems, many of them have not been rigorously tested using empirical data. In fact, most ABM efforts do not go beyond a “proof of concept.”

More recently, an increasing number of scholars are starting to confront their models with empirical observation in more rigorous ways. This was the topic of a workshop held at Indiana University in June 2005. This special feature is compiled from papers discussed and presented at that workshop. In this editorial, we discuss a number of the challenges that arise when ABMs are confronted with empirical data, as well as some of the methods that scholars are using. The rest of this special feature presents various types of empirical applications of ABMs.

Many reasons exist for the current development of empirically based ABM. First, because of the large number of theoretical models developed, there is more confidence that ABM is a valid technical methodology that can provide novel insights to scientific inquiry. Second, more relevant data have become available in recent years. For example, scientists can now use large volumes of high-quality data on stock-market transactions, email exchanges on the Internet, consumer purchases, and satellite imagery providing accurate over-time records of land-cover change. Furthermore, the increasing use of laboratory experiments in the social sciences has called into question some of the initial, simple models of human interactions in social-dilemma situations, which study conflicts between benefits derived by the individual and those derived by the group. Agent-based models provide a tool to examine the theoretical consequences of more complex assumptions.

Empirical information, both qualitative and quantitative, can be used in a variety of ways. It can be used as input data to a model or as a means to falsify and test a model. When it is used as an input, the focus might be to study a particular situation, i.e., the situation from which the data are derived. When it is used to test the model, the model might aim for some generalizable arguments that can be tested against new empirical cases. We cover both these uses in our discussion.

In the rest of this editorial, we first discuss some general challenges of empirical research in the social sciences. Then, we discuss in more detail the challenges that empirical data present for ABM. We present a framework that helps distinguish four different approaches to using empirical observations in combination with ABM. In the last section, we discuss the papers in this special feature in line with this framework.


GENERAL CHALLENGES IN DOING EMPIRICAL RESEARCH IN THE SOCIAL SCIENCES

Social science in general faces a number of challenges in doing empirical investigations. In physics, experiments to study phase changes in the behavior of H2O can be carried out anywhere in the world at any time. Such experiments can be done with relative ease, at least on Earth, and results will show that phase changes always occur around 0°C and 100°C. Experiments using human subjects, however, have more limited possibilities because of physical, economic, cultural, political, and ethical considerations, and because of the importance of contextual factors. For example, psychologist Stanley Milgram’s experiments in the early 1960s are now considered unethical and would not be approved by current institutional review boards. In his experiments, a subject playing the role of “teacher” punished a “student” with electric shocks whenever the “student” gave a wrong answer to a question (see http://en.wikipedia.org/wiki/Milgram_experiment). The experimenter, a stern-looking person in a white coat, assumed full responsibility when worried “teacher” subjects asked about the harmful consequences of these actions. The “student” was played by a confederate of the experimenter, and the electric shocks were not real. The reason for these experiments was a genuine puzzle. Could cruel war crimes committed during World War II have been the result of a normal person following orders issued by someone viewed as having the authority to issue such commands? Although these questions are important, social scientists have come to realize that we should not “re-create” such morally reprehensible situations in our experimental studies.

An important factor for social scientists is that their subjects are reflexive—in contrast to cells, molecules, and atoms (Searle 1995). When developing a new medication, for example, the researcher has to ask whether the results obtained are due to the new medication or to the patients’ belief that the medication will cure them. To control for this possibility, medical research on the effectiveness of new drugs randomly assigns a placebo to one-half of the patients. Even in decision-making experiments, a similar type of problem exists. How do we know that the subject provides honest responses during an experiment and not the answers the subject thinks the experimenter expects? For experiments about individual norms of fairness and reciprocity, research has shown that a significant difference exists in the results obtained when the subjects are assured that the experimenter cannot link their identity with their decisions in the experiment (so-called double-blind experiments), and experiments where this is not guaranteed (Hoffman et al. 1994, Cox 2004).

Many social science experiments are performed using undergraduate students attending universities in the United States or Western Europe. Critics of experiments using human subjects ask: “How representative are such groups?” Experiments conducted over 1 or 2 hours with relatively young subjects cannot credibly be offered as a source of strong data about long-term processes, about specific cultural patterns, or about the behavior of much older subjects. Recent experiments conducted with villagers living in remote regions of developing countries provide more confidence in the findings obtained in social dilemma experiments.

Many scholars are now examining whether similar patterns of behavior occur across different cultures. Juan-Camilo Cardenas (2000) has, for example, replicated the core findings of extensive common-pool resource experiments conducted in the United States (Ostrom et al. 1994) with villagers living in remote regions of Colombia (see also Cardenas et al. 2000). Because the Colombian villagers knew one another, in contrast to the anonymous conditions of the U.S. experiments, further information about relations among small groups could also be studied.

Recently, Brandts et al. (2004) conducted the same experimental social dilemma (in this case, a linear, voluntary contribution, public goods game) in Japan, the Netherlands, Spain, and the United States. They found only minor differences in the levels of cooperation of subjects in all four countries. Earlier, Cameron (1999) undertook a very interesting study with ultimatum games. In an ultimatum experiment, player A gets x and is asked to give player B a share y of it. When player B accepts y, player A receives x-y. When player B does not accept y, neither player gets anything. Cameron conducted ultimatum experiments in Indonesia and was able to offer payoffs that amounted to the equivalent of 3 months’ wages. In this extremely tempting situation, she still found that 56% of the Proposers allocated between 40% and 50% of this very substantial sum to the Responder—a pattern very similar to that found among subjects in experiments conducted in the United States and Europe, where the sums offered amounted to around 2 to 3 hours of work at an accepted hourly rate.
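The payoff structure of the ultimatum game just described is simple enough to write down directly; the function below is an illustrative sketch (the name and interface are ours, not taken from any of the cited experiments).

```python
def ultimatum_payoffs(x, offer, accepted):
    """Payoffs in the ultimatum game: player A splits a pie of size x,
    offering `offer` to player B; if B rejects, neither player gets anything."""
    return (x - offer, offer) if accepted else (0, 0)

# A pie of 100 with a 40% offer, inside the modal range Cameron observed:
print(ultimatum_payoffs(100, 40, accepted=True))   # → (60, 40)
print(ultimatum_payoffs(100, 40, accepted=False))  # → (0, 0)
```

The second case is what makes the observed behavior puzzling for the narrow self-interest model: a Responder who rejects a positive offer forgoes money to punish an unfair split.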

Henrich et al. (2004) have published an important study conducted in 15 small-scale societies around the world that also confirms central findings from public good experiments in the United States and Europe. This study also adds specific information about how cultural variables affect the complexity of understanding human behavior. Including more contextual variables in the experiments leads to more diversity in the responses. We do not know exactly how context affects decisions. Thus, doing controlled experiments with human subjects has limitations due to the nature of the subjects in any particular experiment. Nevertheless, human subject experiments are very useful for testing hypotheses resulting from theories and for generating data for new theoretical developments, and we need to recognize that all research methods have both advantages and disadvantages.

It is now relatively well established, as a result of experimental research on social dilemmas, that the narrow model of “economic man” focused primarily on monetary returns—which has been the primary model of human behavior adopted by many social scientists—is not a good foundation for explaining behavior outside of open competitive situations. Scholars should no longer presume that individuals seek only short-term, material benefits for themselves in either experimental or field settings outside of competitive situations (including markets as well as elections and other competitive political situations). On the other hand, social scientists must not assume that all individuals seek benefits for others, contribute to collective benefits, and thus are always “good guys.” Individuals are capable of learning to trust others and of following norms of reciprocity, but in every culture there exist some individuals who are well modeled by Homo economicus (Ostrom 1998, 2005).

Individuals who want to achieve collective objectives over time must find a wide variety of institutional mechanisms that enable them to create fair rules of contribution and distribution and ways of monitoring people’s contributions without squelching cooperation by overmonitoring. Without these mechanisms, a few individuals can begin to grab benefits. Then, levels of trust and cooperation plummet rapidly. Modeling these two- or three-level dilemmas, however, using formal analytical models has proved to be extremely difficult (Greif and Laitin 2004). Thus, the findings about the complexity of human choice revealed in extensive experimental research are core motivating factors leading scholars to use ABMs more extensively than before.

Another major research method used extensively by social scientists is the case method. With case studies, we refer to observational studies of social systems in their (ecological) context. Case studies offer an opportunity to examine the internal logic posited by a theorist. A good case study will trace the causal processes observed in situ and determine whether they are consistent with a specific theory or challenge it (Coppedge 1999, Campbell 1975). Case studies are particularly well suited for testing theories that predict that some event or process will never occur. They were used extensively after Hardin (1968) published “The Tragedy of the Commons,” in which he envisioned users of a commons being trapped by the incentives to overharvest and unable to extricate themselves from the tragedy. Finding one case in which the users themselves self-organized to manage their own resource was a challenge to Hardin’s prediction. Scholars were able to offer multiple challenges given the large number of cases in which users of a commons themselves devised rules for governing their resource (Baland and Platteau 1996, McCay and Acheson 1987, National Research Council 1986, Ostrom 1990).

Case studies frequently focus on a specific spatial and temporal scale, varying from small settlements in the past, to regional land-use changes. Many different methods are used to observe the case, including archaeological methods, remote sensing, surveys, censuses, interviews, ethnographic observation, etc. The various ways the system is measured may lead to some challenges when comparing cases with somewhat different observation procedures.

The recently developed method of “analytical narratives” has demonstrated the feasibility of bridging and blending formal theory with in-depth case studies (Bates et al. 1998). For some theories, particularly where variations in underlying biophysical or social conditions are important factors affecting behavior, case-study methods are not appropriate, given the challenge of finding enough case studies with sufficiently substantial variation to test such theories.

One of the major problems of relying strictly on case studies is that case authors tend to identify different causal factors as being important for explaining the processes and outcomes observed. With respect to self-organization by the users of a commons, for example, Agrawal (2001) identified more than 30 variables that had been identified as important causal variables in the analyses of diverse case studies. Most case-study authors do not deliberately attempt to measure a large number of variables beyond the particular theoretical framework they are using to organize data collection in the field. Undertaking meta-analyses of a large number of case studies does enable scholars to study a much larger N than any of the original case studies, but the meta-analysis cannot compare the importance of variables that are not even mentioned in many case studies that might be included in a larger analysis. Poteete and Ostrom (2005) provide a relatively comprehensive analysis of the challenges and opportunities of conducting rigorous meta-analysis and an overview of such studies conducted during the last decade.

In addition to experimental and case-study research, many social scientists rely heavily on large N surveys to try to understand how various individual-level factors such as age, gender, political party, country of residence and region thereof, and participation in various social groupings affect their reported attitudes and behavior. Census data, which are collected on a regular basis (usually every 10 years) in many countries, constitute another source of large N data. The advantage of census data is that they include a very large number of respondents and cover questions related to distribution of wealth and patterns of urbanization. The disadvantages are that many of the poorest members of a society are not included, and many interesting questions about social behavior cannot be addressed in a census because the data are collected by a government agency. With great effort, a substantial investment of time, and sufficient financial support, scholars can undertake large N studies without being entirely dependent on data collected by government agencies. Such studies examine processes of social and ecological change over time using remote-sensing data combined with extensive fieldwork in an effort to understand environmental change (Moran and Ostrom 2005).


IMPORTANT DIMENSIONS OF MAJOR EMPIRICAL RESEARCH METHODS

Below we discuss several approaches and empirical techniques that scholars use within the ABM community. Three dimensions of social systems are important to consider when evaluating these empirical techniques: number of subjects, cognitive processes, and dynamics.

Number of Subjects

It is necessary to investigate whether the behavior of one person acting alone is qualitatively different from the behavior of multiple persons. When one person is used for an interview or for an individual human subject experiment, the researcher can concentrate on the actions of that particular person. When more than one person is involved, an important, yet difficult to measure, element of the system is communication via body language, sign language, or spoken language. When hundreds of subjects are involved, controlled experiments can rarely be performed. Thus, surveys and other less precise measurement techniques must be used.

Cognitive Processes

Although some behavioral scientists now use magnetic resonance imaging (MRI) images of brain activity (Rilling et al. 2002), even this technique does not let us directly observe the reasoning of our subjects. We can only observe people's actions. However, by performing carefully designed experiments, we can start eliminating theories that may explain the observations. Testing possible cognitive theories is a time-consuming process. Most empirical techniques do not derive very precise information about underlying cognitive processes.

Dynamics

A fundamental problem in our investigations is how to derive observations of a social system over time. Over-time statistics can be gleaned from census data or other statistics at a relatively aggregated level, with time gaps between a month and 10 years. Occasionally, it is feasible to design a large N study of communities where repeat visits are planned at regular intervals, but it is extremely difficult to obtain funding for such studies (see Gibson et al. 2000). Statistics can also be derived from human subjects in repeated experiments over very short time frames, where the total experiment takes a maximum of several hours (not long enough to address many dynamic questions). Because of new technology, we can also derive high-quality information about purchasing behavior of consumers in specific supermarkets, or exchanges on financial markets. Nevertheless, only rarely can sufficient detail be derived about the dynamics of a system of interest to the researcher. During an interview, the subject can be asked about different time periods, but memory loss makes survey research an unreliable source of data except for very salient events, or for events occurring within the last 6 months.

Each of the approaches used to obtain information for testing social science theories has advantages and disadvantages. Some methods are useful for testing precise hypotheses on reasoning and decision making (individual laboratory experiments). Some provide detailed information about the context for a particular process (case studies). Some derive a lot of information on the individual motivations in general (survey research). Other methods derive data from many subjects, but each data point only provides a limited amount of information about underlying cognitive processes (census data and large financial market data). Methods also differ in their ability to measure dynamic processes such as learning and cultural evolution.

Figure 1 depicts many of the methods used by social scientists along several dimensions.



EMPIRICAL APPROACHES IN AGENT-BASED MODELING

We can distinguish the different approaches used in ABM by the methods used to derive empirical information on social and social–ecological systems. Although the approaches discussed below apply to other quantitative methods as well, it is important to recognize the distinctive characteristics of ABM. With ABM, the researcher explicitly describes the decision processes of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents. Developing such models requires gaining information about how agents make their decisions, how they forecast future developments, and how they remember the past. What do they believe or ignore? How do agents exchange information? And, does the structure of agent interactions (trade, kin, organization) affect the macro-level phenomena?

Traditionally, we evaluate models, especially statistical models, on their goodness of fit, or maximum likelihood. We never know for sure whether a model describes the empirical world. We can, however, compare different models and test which gives a better description of the real world. What do we mean, however, by a better model? There is a difficult trade-off between fitting the data and keeping the model simple. The more complicated the model (the more parameters, equations, etc., needed to describe it), the better it fits the data compared with a simpler model, but it might be too specific to a particular data set. Pitt et al. (2002) propose using maximum likelihood estimation, but including a penalty for the complexity of the model beyond the number of parameters. The best model balances the goodness of fit and the ability to generalize.
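The proposal of Pitt et al. goes beyond simply counting parameters, but the flavor of this trade-off can be illustrated with a classical information criterion such as AIC, which penalizes log-likelihood by the number of free parameters. The numbers below are hypothetical.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: lower is better.
    Trades off fit (log-likelihood) against complexity (parameter count)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical comparison: the complex model fits slightly better,
# but not by enough to pay for its four extra parameters.
simple_model  = aic(log_likelihood=-120.0, n_params=2)  # 244.0
complex_model = aic(log_likelihood=-118.5, n_params=6)  # 249.0
print(min(simple_model, complex_model) == simple_model)  # → True
```

Here the better-fitting complex model loses the comparison: its gain in likelihood does not justify the extra parameters, which is exactly the generalizability concern raised in the text.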

Given the empirical problems with data collection, and the explicit inclusion of cognitive, institutional, and social processes in ABMs, achieving good statistical performance is not sufficient. In some cases, no data even exist to perform a statistical analysis. Other criteria that can be used are:
  • Is the model plausible given our understanding of the processes?
  • Can we understand why the model is doing so well?
  • Did we derive a better understanding of our empirical observations?
  • Does the behavior of the models coincide with the understanding of the relevant stakeholders about the system?

Depending on the type of information that is available and the questions asked by the researchers, we can distinguish several approaches. However, we still face the trade-off between generalizability and context. This is somewhat similar to the problem of model selection. Some studies focus on generalization of the results; others try to apply the model to a specific case. We also face the trade-off between a few subjects and a large number of subjects. With a few subjects, we can focus more closely on cognitive processes and derive high-quality data about individual decisions and circumstances. With thousands of subjects, empirical information on the decision-making processes of subjects is often not available, as in the model of firm sizes discussed in Axtell (1999). On the other hand, when there are many subjects, we can distinguish among types of subjects (see Janssen and Ahn 2006).

Using these trade-offs, we distinguish four approaches for using empirical information to help confirm patterns observed in ABM (Fig. 2). When a large number of high-quality observations do exist, one can derive statistical distributions and other stylized facts from the empirical data. These stylized facts are often the starting point for modeling. What are the simple rules that generate these stylized facts? A popular example of such a stylized fact is the power law distribution, as observed in many systems, such as city size, firm size, number of links in networks such as websites and sexual contacts, etc. (Axtell 2001, Barabási and Albert 1999, Liljeros et al. 2001).
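One well-known generative mechanism for such power laws is the preferential attachment model of Barabási and Albert (1999): new nodes link to existing nodes with probability proportional to their current degree. The sketch below uses one link per new node, a simplification of the general model, and exploits the fact that picking uniformly from a list in which node i appears degree(i) times is a degree-proportional draw.

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network where each new node attaches one link to an
    existing node chosen with probability proportional to its degree."""
    random.seed(seed)
    degree = [1, 1]        # start from two linked nodes
    targets = [0, 1]       # node i appears degree[i] times in this list
    for new in range(2, n_nodes):
        old = random.choice(targets)   # degree-proportional choice
        degree[old] += 1
        degree.append(1)
        targets += [old, new]
    return degree

deg = preferential_attachment(10_000)
# Heavy tail: a few hubs accumulate far more links than the average node.
print(max(deg), sum(deg) / len(deg))  # hub degree far above the mean of ~2
```

The resulting degree distribution has the stylized-fact signature discussed in the text: the mean degree stays near 2 while the best-connected node acquires orders of magnitude more links.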

Using relatively uncomplicated models of the decisions of simple reactive agents, scholars can investigate the modeled conditions under which they can derive statistics similar to the observed stylized facts. This approach is especially common in finance (e.g., LeBaron 2001), but also appears in economics (Axtell 1999) and political science (Cederman 2002). It is also the approach used by physicists who apply their methodologies to social systems, as in econophysics (Stanley et al. 1999, Bouchaud 2001).

Another approach focused on the development and testing of generalizable models is the use of laboratory experiments to test computational models. Laboratory experiments provide a highly abstract, controlled environment in which social scientists test very precise hypotheses. The data from these experiments are used to compare alternative models of human decision making. A large set of experiments focuses on individual reasoning, e.g., recognition tasks, learning, and memory. Experiments with more than one person are of more interest to agent-based modelers. Therefore, computational social scientists use experiments on markets or social dilemmas together with ABM. For a recent overview of economic approaches, we refer to Camerer (2003) and Duffy (2006).

Controlled laboratory experiments are of limited use when studying the context in which particular subjects make their decisions. An alternative approach that enables the researcher to include better ways of representing context involves role-play and companion modeling. The researcher develops a game based on the situation in a particular community, and the subjects play the roles they normally play. The information from the role-playing game is used to develop ABMs (Bousquet et al. 2002, Barreteau et al. 2003, Étienne 2003), and the results are evaluated by the stakeholders, i.e., the players themselves. They can debate whether the model represents how they played the game, as well as how the game is different from reality. Gurung et al. (2006), in this special feature, use role-playing games to study irrigation issues in Bhutan.

The fourth approach is case-study analysis. Based on different types of information about a specific system, ABMs can be developed. This hybrid method is a common approach in land-use change modeling, agricultural economics, and electricity markets (Bower and Bunn 2000, Balmann et al. 2002, Berger 2001, Evans and Kelley 2004, Brown et al. 2005). In such studies, the researcher has multiple sources, but incomplete information. Information from remote sensing, surveys, census data, field observations, etc., is used to develop the different components of the system. Often, the goal is to understand the interactions between the different components of the system, and to use the model to explore different policy scenarios. A number of papers in this issue discuss various sources of observation for developing models, such as ethnography (Huigen et al. 2006), surveys (Brown and Robinson 2006), and agricultural census data (Happe et al. 2006). Furthermore, different approaches can be used to parameterize the model. Does the parameterized model lead to statistics on the initial population that are consistent with the data (Berger and Schreinemachers 2006)? Do the researchers explore what might happen with different parameter values (Brown and Robinson 2006)? Does the researcher focus on quantitative data (Happe et al. 2006) or also include ethnographic observations (Huigen et al. 2006)?
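The basic shape of such case-study models is similar across domains: initialize agents from empirical data, let each agent apply a decision rule every period, and observe the macro-level pattern that emerges. The skeleton below is purely schematic; the agent attributes, decision rule, and parameter values are hypothetical stand-ins, not drawn from any of the cited studies.

```python
import random

class Household:
    """Hypothetical household agent. In a real case study, risk aversion
    and land endowment would come from surveys or census data."""
    def __init__(self, risk_aversion, hectares):
        self.risk_aversion = risk_aversion
        self.hectares = hectares
        self.use = 'forest'

    def decide(self, crop_price):
        # Stylized rule: convert to cropland when expected returns,
        # discounted by risk aversion, exceed a fixed threshold.
        expected = crop_price * self.hectares * (1 - self.risk_aversion)
        self.use = 'cropland' if expected > 1.0 else 'forest'

def run(agents, prices):
    """One model run: agents re-decide each period; we record the
    macro-level outcome (fraction of land in cropland)."""
    shares = []
    for p in prices:
        for a in agents:
            a.decide(p)
        crop = sum(a.hectares for a in agents if a.use == 'cropland')
        shares.append(crop / sum(a.hectares for a in agents))
    return shares

random.seed(1)
agents = [Household(random.uniform(0, 1), random.uniform(0.5, 2))
          for _ in range(100)]
shares = run(agents, prices=[0.5, 1.0, 2.0])
print(shares)  # the cropland share grows as the crop price rises
```

Even this toy version shows the structure the papers in this issue share: micro-level heterogeneity (here, randomized attributes standing in for survey data) feeds a decision rule, and the quantity compared against observations is the aggregate land-use pattern.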

The first two approaches are methods that focus on generalizability; the latter two approaches focus more on the “fitting” of a special case. Each approach has unique characteristics and usefulness. Therefore, it is no surprise that some scholars combine different approaches. We discuss a number of these combinations:
  • Within the field of finance, some scholars are using laboratory experiments to test particular assumptions about behavioral models that they are using to explain patterns of outcomes. For example, Hommes (2001) views financial markets as an evolutionary system of various competing trading strategies of boundedly rational agents. Beliefs about which trading strategies derive the best returns coevolve over time with the prices of the stocks. Stylized facts such as volatility, clustering, fat tails, and long memory can be explained by these ABMs. Furthermore, laboratory experiments on expectation formation are consistent with the assumption of the ABM.
  • Companion modeling is used in several methods (role-playing games often being used) to derive iteratively one or several models with stakeholders (Bousquet et al. 2002, Barreteau et al. 2003). Also, a number of role-playing games can be played in different communities to derive a model of a “typical” community. Castella et al. (2005) performed a number of role-playing games concerning deforestation in Vietnam, and used the information from them to develop models of land-use changes on a larger scale than the individual village.
  • Another combination of methods is the use of laboratory experiments to test specific hypotheses for ABMs of case studies. Evans et al. (2006) perform laboratory experiments on land use and externalities. The insights from these laboratory experiments are used to inform the modeling of household decision making in Monroe County, Indiana.
  • Barreteau et al. (2001) developed a role-playing game based on an irrigation system in Senegal. They have played this game many times all over the world with various audiences. They find consistent behavior among participants, although the Senegalese irrigators achieve significantly higher efficiency than other players.
  • The first author of this paper leads a project that combines ABMs with field and laboratory experiments (http://www.public.asu.edu/~majansse/dor/nsfhsd.htm). The laboratory experiments are more abstract and controlled than the field experiments. Nevertheless, through careful design, similar experiments will be performed in both settings so that the insights and resulting models can be compared.
  • Evolutionary economics focuses on innovation processes in economic systems (Nelson and Winter 1982). During the last 25 years, many abstract models have been developed that show plausible processes of innovation by firms and the diffusion of innovation across economic sectors. In recent years, some evolutionary economists have used stylized models of specific cases, e.g., the computer industry, to test their models and to derive a systematic narrative of the plausible processes of innovation in specific economic sectors (Malerba et al. 2001).
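The kind of evolutionary market Hommes (2001) describes, in which strategy fractions coevolve with prices, can be sketched minimally as follows. This is our own illustrative reconstruction under simplified assumptions, not his model: two forecasting rules (a mean-reverting "fundamentalist" rule and an extrapolating "trend follower" rule) compete, and agents shift toward the rule with the better recent forecasting performance.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0     # intensity of choice: how strongly agents favor the better rule
R = 1.1        # gross risk-free return, discounting forecasts at market clearing
n_steps = 2000

prices = [0.0, 0.0]    # log-price deviations from the fundamental value
profits = np.zeros(2)  # decayed forecasting performance of each rule

for _ in range(n_steps):
    p1, p0 = prices[-1], prices[-2]
    forecasts = np.array([
        0.5 * p1,        # fundamentalists expect reversion to the fundamental
        p1 + (p1 - p0),  # trend followers extrapolate the last price change
    ])
    # Discrete-choice (logit) switching: strategy fractions coevolve with
    # past performance, as in the evolutionary view described above.
    weights = np.exp(beta * profits)
    frac = weights / weights.sum()
    # Market clearing: price follows the population-weighted forecast.
    new_p = frac @ forecasts / R + rng.normal(0.0, 0.1)
    # Update performance from squared one-step-ahead forecast errors.
    profits = 0.9 * profits - (forecasts - new_p) ** 2
    prices.append(new_p)

returns = np.diff(prices)
```

In models of this family, raising the switching intensity produces alternation between quiet mean-reverting episodes and bursts of trend-driven volatility, which is how such ABMs can generate the stylized facts (volatility clustering, fat tails) mentioned above.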


DISCUSSION

A diverse set of approaches has been used during the last few years to test and develop ABMs of social and social–ecological systems. All these approaches have their pros and cons, and we have to develop mechanisms to learn systematically from the findings of each approach and to incorporate these insights into ABM efforts in other domains.

It is encouraging to see the rapid development of applications of ABMs. However, we also face new methodological challenges in the rigorous empirical testing of these models. Effort is needed to develop methods that enable us to select, from among alternatives, those ABMs that fit the data and are generalizable. New types of laboratory and field experiments need to be performed to understand crucial components of ABMs, such as social interactions and the diffusion of knowledge and information. What are meaningful stylized facts with which to test ABMs of social systems? The various ways empirical techniques are used illustrate the two main challenges: how to develop models that are generalizable and still applicable in specific cases, and how to scale up the processes of interaction among a few agents to interactions among many agents.

This special feature brings together a rich set of examples of developing and testing ABMs of social and social–ecological systems. These diverse approaches may lead to more rigorous standards on how to develop ABMs with empirical data. The use of commonly agreed standards of practice enhances comparison of models and their analysis, and may increase the acceptance of ABM methodology in the broader domain of social science. We hope that this special feature will contribute to this development.







ACKNOWLEDGMENTS

The authors gratefully acknowledge the support of our research by the Center for the Study of Institutions, Population, and Environmental Change, Indiana University, through National Science Foundation grants SES0083511 and SES0232072. The authors also thank the participants of the Summer Institute on agent-based modeling and natural resources, 15 May–1 June 2005, in Bloomington, Indiana, and the participants of the workshop on empirically based, agent-based models, 2–4 June 2005, Bloomington, Indiana, for the valuable feedback they gave on earlier versions of this manuscript.




LITERATURE CITED

Agrawal, A. 2001. Common property institutions and sustainable governance of resources. World Development 29(10):1623–1648.

Axelrod, R. 1984. Evolution of cooperation. Basic Books, New York, New York, USA.

Axtell, R. L. 1999. The emergence of firms in a population of agents: local increasing returns, unstable Nash equilibria, and power law size distributions. CSED working paper 3, Brookings Institution, Washington, D.C., USA. [online] URL: http://www.brookings.org/es/dynamics/papers/firms/firms.htm.

Axtell, R. L. 2001. Zipf distribution of U.S. firm sizes. Science 293:1818–1820.

Baland, J.-M., and J.-P. Platteau. 1996. Halting degradation of natural resources: is there a role for rural communities? Clarendon Press, Oxford, UK.

Balmann, A., K. Happe, K. Kellermann, and A. Kleingarn. 2002. Adjustment costs of agri-environmental policy switchings: an agent-based analysis of the German region Hohenlohe. Pages 127–157 in M. A. Janssen, editor. Complexity and ecosystem management: the theory and practice of multi-agent systems. Edward Elgar Publishing, Cheltenham, UK.

Barabási, A.-L., and R. Albert. 1999. Emergence of scaling in random networks. Science 286:509–512.

Barreteau, O., F. Bousquet, and J. M. Attonaty. 2001. Role-playing games for opening the black box of multi-agent systems: method and lessons of its application to Senegal River Valley irrigated systems. JASSS–The Journal of Artificial Societies and Social Simulation 4(2):5. [online] URL: http://www.soc.surrey.ac.uk/JASSS/4/2/5.html.

Barreteau, O., C. Le Page, and P. D’Aquino. 2003. Role-playing games, models and negotiation processes. JASSS–The Journal of Artificial Societies and Social Simulation 6(2):10. [online] URL: http://jasss.soc.surrey.ac.uk/6/2/10.html.

Bates, R., A. Greif, M. Levi, J.-L. Rosenthal, and B. Weingast. 1998. Analytical narratives. Princeton University Press, Princeton, New Jersey, USA.

Berger, T. 2001. Agent-based spatial models applied to agriculture: a simulation tool for technology diffusion, resource use changes and policy analysis. Agricultural Economics 25(2/3):245–260.

Berger, T., and P. Schreinemachers. 2006. Empirical parameterization of multi-agent models. Ecology and Society, this issue.

Bouchaud, J. P. 2001. Power-laws in economy and finance: some ideas from physics. Quantitative Finance 1:105–112.

Bousquet, F., O. Barreteau, P. d’Aquino, M. Étienne, S. Boissau, S. Aubert, C. Le Page, D. Babin, and J.-C. Castella. 2002. Multi-agent systems and role games: collective learning processes for ecosystem management. Pages 248–285 in M. A. Janssen, editor. Complexity and ecosystem management: the theory and practice of multi-agent systems. Edward Elgar Publishing, Cheltenham, UK.

Bower, J., and D. W. Bunn. 2000. Model-based comparisons of pool and bilateral markets for electricity. Energy Journal 21(3):1–29.

Brandts, J., T. Saijo, and A. Schram. 2004. How universal is behavior? A four country comparison of spite and cooperation in voluntary contribution mechanisms. Public Choice 119(3–4):381–424.

Brown, D. G., S. E. Page, R. Riolo, M. Zellner, and W. Rand. 2005. Path dependence and the validation of agent-based spatial models of land use. International Journal of Geographical Information Science 19(2):153–174.

Brown, D., and D. Robinson. 2006. Effects of heterogeneity in residential preferences on an agent-based model of urban sprawl. Ecology and Society 11(1):46. [online] URL: http://www.ecologyandsociety.org/vol11/iss1/art46/.

Camerer, C. F. 2003. Behavioral game theory. Princeton University Press, Princeton, New Jersey, USA.

Cameron, L. A. 1999. Raising the stakes in the ultimatum game: experimental evidence from Indonesia. Economic Inquiry 37(1):47–59.

Campbell, D. T. 1975. ‘Degrees of freedom’ and the case study. Comparative Political Studies 8(2):178–193.

Cardenas, J.-C. 2000. How do groups solve local commons dilemmas? Lessons from experimental economics in the field. Environment, Development and Sustainability 2:305–322.

Cardenas, J.-C., J. Stranlund, and C. Willis. 2000. Local environmental control and institutional crowding-out. World Development 28(10):1719–1733.

Castella, J. C., T. N. Trung, and S. Boissau. 2005. Participatory simulation of land-use changes in the northern mountains of Vietnam: the combined use of an agent-based model, a role-playing game, and a geographic information system. Ecology and Society 10(1):27. [online] URL: http://www.ecologyandsociety.org/vol10/iss1/art27/.

Cederman, L. K. 2002. Modeling the size of wars: from billiard balls to sandpiles. American Political Science Review 97(1):19–59.

Coppedge, M. 1999. Thickening thin concepts and theories: combining large N and small in comparative politics. Comparative Politics 31(4):465–476.

Cox, J. C. 2004. How to identify trust and reciprocity. Games and Economic Behavior 46:260–281.

Duffy, J. 2006. Agent-based models and human subject experiments. Pages 949–1011 in L. Tesfatsion and K. L. Judd, editors. Handbook of computational economics: agent-based computational economics, vol. 2. Elsevier, Oxford, UK.

Étienne, M. 2003. SYLVOPAST: a multiple target role-playing game to assess negotiation processes in sylvopastoral management planning. JASSS–The Journal of Artificial Societies and Social Simulation 6(2):5. [online] URL: http://jasss.soc.surrey.ac.uk/6/2/5.html.

Evans, T. P., and H. Kelley. 2004. Multi-scale analysis of a household level agent-based model of landcover change. Journal of Environmental Management 72(1–2):57–72.

Evans, T. P., W. Sun, and H. Kelley. 2006. Spatially explicit experiments for the exploration of land use decision-making dynamics. International Journal of Geographic Information Science, in press.

Gardner, M. 1970. The fantastic combinations of John Conway’s new solitaire game ‘life’. Scientific American 223:120–123.

Gibson, C., M. McKean, and E. Ostrom, editors. 2000. People and forests: communities, institutions, and governance. MIT Press, Cambridge, Massachusetts, USA.

Greif, A., and D. D. Laitin. 2004. A theory of endogenous institutional change. American Political Science Review 98(4):633–652.

Gurung, T., F. Bousquet, and G. Trébuil. 2006. Companion modelling, conflict resolution and institutional building: sharing irrigation water in Lingmuteychu watershed, Bhutan. Ecology and Society, this issue.

Happe, K., K. Kellermann, and A. Balmann. 2006. Agent-based modeling of regional structural change: an application to EU agricultural policy reform. Ecology and Society 11(1):49. [online] URL: http://www.ecologyandsociety.org/vol11/iss1/art49/.

Hardin, G. 1968. The tragedy of the commons. Science 162:1243–1248.

Henrich, J., R. Boyd, S. Bowles, C. Camerer, E. Fehr, and H. Gintis, editors. 2004. Foundations of human sociality: economic experiments and ethnographic evidence from fifteen small-scale societies. Oxford University Press, Oxford, UK.

Hoffman, E., K. McCabe, K. Shachat, and V. Smith. 1994. Preferences, property-rights, and anonymity in bargaining games. Games and Economic Behavior 7(3):346–380.

Hommes, C. H. 2001. Financial markets as complex adaptive evolutionary systems. Quantitative Finance 1:149–167.

Huigen, M., K. Overmars, and W. De Groot. 2006. Multi-actor modeling of settling decisions and behavior in San Mariano watershed, the Philippines; a first application with the MameLuke framework. Ecology and Society, this issue.

Janssen, M. A., editor. 2002. Complexity and ecosystem management: the theory and practice of multi-agent systems. Edward Elgar Publishing, Cheltenham, UK.

Janssen, M. A., and T. K. Ahn. 2006. Learning, signaling and social preferences in public good games. Ecology and Society, this issue.

LeBaron, B. 2001. Stochastic volatility as a simple generator of power laws and long memory. Quantitative Finance 1:621–631.

Liljeros, F., C. R. Edling, L. A. N. Amaral, H. E. Stanley, and Y. Aberg. 2001. The web of human sexual contacts. Nature 411:907–908.

McCay, B. J., and J. Acheson. 1987. The question of the commons: the culture and ecology of communal resources. University of Arizona Press, Tucson, Arizona, USA.

Malerba, F., R. Nelson, L. Orsenigo, and S. Winter. 2001. History-friendly models: an overview of the case of the computer industry. JASSS–The Journal of Artificial Societies and Social Simulation 4(3):6. [online] URL: http://www.soc.surrey.ac.uk/JASSS/4/3/6.html.

Moran, E. F., and E. Ostrom. 2005. Seeing the forest and the trees: human–environment interactions in forest ecosystems. MIT Press, Cambridge, Massachusetts, USA.

National Research Council. 1986. Proceedings of the conference on common property resource management. National Academy Press, Washington, D.C., USA.

Nelson, R. R., and S. G. Winter. 1982. An evolutionary theory of economic change. Belknap Press, Cambridge, Massachusetts, USA.

Ostrom, E. 1990. Governing the commons: the evolution of institutions for collective action. Cambridge University Press, New York, New York, USA.

Ostrom, E. 1998. A behavioral approach to the rational choice theory of collective action. American Political Science Review 92(1):1–22.

Ostrom, E. 2005. Understanding institutional diversity. Princeton University Press, Princeton, New Jersey, USA.

Ostrom, E., R. Gardner, and J. Walker. 1994. Rules, games, and common-pool resources. University of Michigan Press, Ann Arbor, Michigan, USA.

Parker, D. C., S. M. Manson, M. A. Janssen, M. J. Hoffman, and P. Deadman. 2003. Multi-agent systems for the simulation of land-use and land-cover change: a review. Annals of the Association of American Geographers 93(2):316–340.

Pitt, M. A., I. J. Myung, and S. Zhang. 2002. Toward a method of selecting among computational models of cognition. Psychological Review 109(3):472–491.

Poteete, A., and E. Ostrom. 2005. Bridging the qualitative–quantitative divide: strategies for building large-N databases based on qualitative research. Paper presented at the American Political Science Association Meetings, 1–4 September 2005, Washington, D.C., USA.

Rilling, J. K., D. A. Gutman, T. R. Zeh, G. Pagnoni, G. S. Berns, and C. D. Kilts. 2002. A neural basis for social cooperation. Neuron 35(2):395–405.

Schelling, T. C. 1971. Dynamic models of segregation. Journal of Mathematical Sociology 1:143–186.

Schelling, T. C. 1978. Micromotives and macrobehavior. W. W. Norton, New York, New York, USA.

Searle, J. R. 1995. The construction of social reality. The Free Press, New York, New York, USA.

Stanley, H. E., L. A. N. Amaral, D. Canning, P. Gopikrishnan, Y. Lee, and Y. Liu. 1999. Econophysics: can physicists contribute to the science of economics? Physica A 269(1):156–169.

Tesfatsion, L., and K. L. Judd. 2006. Handbook of computational economics: agent-based computational economics, vol. 2. Elsevier, Oxford, UK.

von Neumann, J. 1966. Theory of self-reproducing automata. University of Illinois Press, Urbana, Illinois, USA.


Address of Correspondent:
Marco A. Janssen
School of Human Evolution and Social Change
Arizona State University
Box 872402
Tempe, Arizona 85287-2402
USA
Marco.Janssen@asu.edu
