Ecology and Society
The following is the established format for referencing this article:
Bollen, J., S. R. Carpenter, J. Lubchenco, and M. Scheffer. 2019. Rethinking resource allocation in science. Ecology and Society 24(3):29.
https://doi.org/10.5751/ES-11005-240329
Invited Manuscripts

Rethinking resource allocation in science

Johan Bollen 1, Stephen R. Carpenter 2, Jane Lubchenco 3 and Marten Scheffer 4,5
1School of Informatics, Computing, and Engineering, Indiana University, 2Center for Limnology, University of Wisconsin-Madison, 3Department of Integrative Biology, Oregon State University, 4Department of Environmental Sciences, Wageningen University, The Netherlands, 5South American Institute for Resilience and Sustainability Studies, SARAS, Uruguay

ABSTRACT

Many funding agencies rely on grant proposal peer review to allocate scientific funding: researchers compete for funding by submitting proposals that are reviewed and ranked by committees of their peers, and only a fraction of applicants are awarded the requested funds. This system has a long and venerable tradition, but it is increasingly struggling to handle the growing number of applications, suffers from high administrative overhead, may be unreliable in separating successful from unsuccessful projects, and may be biased against innovative ideas, young researchers, and female scientists. We have proposed redesigning funding systems according to a few simple principles: funding people instead of projects, and involving as many scientists as possible in funding decisions. These principles underpin a proposal for a novel funding system in which every scientist periodically receives an equal, unconditional amount of funding but must anonymously donate a given fraction of everything he or she receives to other scientists of his or her choice. Over time, this simple process leads to a funding distribution that reflects the collective judgment of the entire scientific community, fosters young scientists, and reduces overhead. In spite of its simplicity, however, certain implementation challenges must be addressed, such as deciding who participates in the funding system, how to control for conflicts of interest and bias, and how to manage its application. Funding agencies will play a pivotal role in the development and management of this system.
Key words: computational science; peer review; science funding; science policy; self-organization

INTRODUCTION

U.S. funding agencies alone distribute a yearly total of roughly US$65 billion, largely through the process of proposal peer review: scientists compete for project funding by submitting grant proposals, which are evaluated by selected panels of peer reviewers. Similar funding systems are in place in most advanced democracies. However, in spite of its venerable history, proposal peer review is struggling to cope with the growing mismatch between the demand for and the supply of research funding.

A COSTLY SYSTEM

The most conspicuous problem with the current system is the cost of the time spent writing, processing, and reviewing project proposals. For instance, European researchers are estimated to have collectively spent about €1.4 billion worth of time on unsuccessful applications to the Horizon 2020 program, a sizable proportion of the €5.5 billion distributed. Australian researchers, meanwhile, are estimated to collectively spend three centuries a year writing, submitting, and reviewing project proposals (Herbert et al. 2013). Of course, this time is not entirely lost. Writing and reviewing proposals helps to articulate one’s vision, but is that worth the extraordinary amount of time spent? Does the system allocate science funding in the most effective manner, so that society receives the greatest possible return on investment? Intuitively, one would think so, but the capacity of peer review to single out the most productive proposals is in fact surprisingly low. For instance, an analysis of 102,740 grants funded by the National Institutes of Health found almost no relationship between review scores and the resulting scientific output (Fang et al. 2016).

So, should we simply skip the proposal submission and review machinery? We could, for example, give all tenured researchers an equal share of the available funding. An analysis of Natural Sciences and Engineering Research Council of Canada (NSERC) statistics shows that preparing a grant application costs approximately Can$40,000, more than it would cost to simply give every qualified investigator a direct baseline discovery grant of Can$30,000 (Gordon and Poulin 2009). On the other hand, not all scientific work is equal: some scientists conduct research that is more promising, and some efforts inherently require greater resources. Awarding the same amount of baseline funding to every researcher is therefore not an optimal strategy. Furthermore, an equal distribution may not meet societal or programmatic needs.

Could we redesign our funding system in a way that reduces both the excessive costs and the low accuracy, while ensuring that the fundamental needs of society are met? We suggest a redesign based on two simple starting points:

  1. Fund people rather than projects, eliminating the need to write and review proposals.
  2. Involve the entire scientific community, rather than small review panels, in deciding how funding is distributed.

To see how a system based on these two principles may work for fund allocation, consider the following two-step procedure:

  1. Every participating scientist receives an equal portion of all available funding as his or her base starting budget.
  2. Each participant anonymously donates a fixed percentage (say 50%) of his or her funding to other, nonaffiliated scientists.

This is repeated each funding round.

It is important to note that each scientist must distribute a percentage of everything he or she received in the previous round, i.e., the base funding plus whatever he or she received from other scientists. For example, suppose that a scientist receives the base amount of US$50,000 plus US$150,000 from other researchers. The total received is US$200,000, of which 50%, i.e., US$100,000, must be donated to other scientists; the scientist retains US$100,000. Because every scientist participates, funding circulates through the community, converging over time to funding levels that all scientists have collectively, yet independently, determined (Fig. 1, right), unlike the current proposal-based system in which small committees of reviewers recommend which projects (or whom) to fund (Fig. 1, left). Importantly, scientists who receive more funding than others also become the more significant funders in the system. This self-organized weighting resembles the mathematical technique of power iteration used to converge on stationary probability distributions of web page relevance (Bollen et al. 2014, Bollen 2018).
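To make the arithmetic concrete, the following Python sketch computes one funding round for three scientists. Only the 50% donation fraction and the US$50,000 base/US$150,000 received figures come from the example above; the donation weights and the other scientists’ figures are hypothetical.

```
import numpy as np

# One SOFA funding round for three hypothetical scientists.
base = np.full(3, 50_000.0)                      # equal base budgets (US$)
received = np.array([150_000.0, 0.0, 50_000.0])  # donated by peers last round
f = 0.5                                          # mandatory donation fraction

# W[i, j]: share of scientist i's donation that goes to scientist j.
# Rows sum to 1; the diagonal is 0 because self-donation is not allowed.
W = np.array([
    [0.0, 0.6, 0.4],
    [1.0, 0.0, 0.0],
    [0.5, 0.5, 0.0],
])

total = base + received        # everything received this round
donated = f * total            # must be passed on to others
kept = total - donated         # budget available for research
next_received = donated @ W    # what each scientist gets from peers next round

print(kept)            # [100000.  25000.  50000.] -- scientist 0 keeps US$100,000
print(next_received)   # [ 50000.  85000.  40000.]
```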

To speed up the process, the initial manual funding selections can be carried forward algorithmically until a convergence criterion is reached. Another option is a two-phase donation process: a first round is followed by publication of the resulting funding numbers and then by a second donation round. Regardless of the specific implementation, the system converges on a distribution of funding that reflects the information distributed across the entire scientific community, with a minimal investment of time and effort.
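The algorithmic carry-forward is essentially power iteration: hold the donation weights fixed and repeat the round until the funding vector stops changing. A minimal sketch, reusing the hypothetical base, f, and W (and the numpy import) from the previous example:

```
def iterate_to_convergence(base, W, f, tol=1.0, max_rounds=1_000):
    """Carry fixed donation weights forward (power iteration) until the
    incoming-funds vector changes by less than `tol` dollars per scientist."""
    incoming = np.zeros_like(base)
    for _ in range(max_rounds):
        new_incoming = f * (base + incoming) @ W
        if np.abs(new_incoming - incoming).max() < tol:
            break
        incoming = new_incoming
    return (1 - f) * (base + incoming)   # converged research budgets

print(iterate_to_convergence(base, W, f))
```

Because the donation fraction f is below 1 and the weight matrix is row-stochastic, the iteration is guaranteed to converge.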

CHALLENGES

Although the basic principle is simple and transparent, its practical implementation requires some additional considerations. First, we have to decide who can participate in the system; for example, it could involve everyone with an academic appointment at an accredited institution. Second, as in the current proposal peer-review system, conflicts of interest must be rigorously prevented. A well-designed automated approach may eliminate most problems: coauthorship and shared affiliations can be detected automatically from scientific information databases, and algorithms may efficiently detect fraudulent reciprocal donation loops or cartels, which should be forbidden and penalized.
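As an illustration of how such automated checks might look, the sketch below flags reciprocal donation pairs and three-way donation loops in a toy donation graph. The data and the choice of cycle length are hypothetical; a production system would scan an entire funding round and search for longer cycles as well.

```
from itertools import combinations

# Toy anonymized donation graph: donations[donor] = recipients this round.
donations = {
    "a": {"b", "c"},
    "b": {"a"},        # a <-> b: reciprocal pair worth inspecting
    "c": {"d"},
    "d": {"a"},        # a -> c -> d -> a: a three-way loop
}

def reciprocal_pairs(donations):
    """Pairs of scientists who donate directly to each other."""
    return {
        (x, y) for x, y in combinations(donations, 2)
        if y in donations.get(x, set()) and x in donations.get(y, set())
    }

def three_way_loops(donations):
    """Directed donation cycles of length three (possible cartels)."""
    loops = set()
    for a, outs in donations.items():
        for b in outs:
            for c in donations.get(b, set()):
                if len({a, b, c}) == 3 and a in donations.get(c, set()):
                    loops.add(tuple(sorted((a, b, c))))
    return loops

print(reciprocal_pairs(donations))  # {('a', 'b')}
print(three_way_loops(donations))   # {('a', 'c', 'd')}
```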

Funding agencies will naturally play a central role in the development, application, and refinement of the proposed system. For instance, self-organized fund allocation (SOFA) could be set up to run within specific domains, subdomains, or even smaller topic areas, e.g., chemistry, environmental chemistry, or marine chemistry. This allows funding agencies and policy makers to set budgets according to programmatic priorities. Stable funding for expensive infrastructure and long-term contracts could continue to be allocated by the existing funding system; alternatively, staying closer to the new approach, researchers could be allowed to put forward large common projects or infrastructures as “supernodes” eligible for funding in SOFA. It may also be convenient to provide some generic donation options, such as “redistribute my funding equally to all female scientists” or “to all scientists younger than 30 years old.” These and other elaborations may further ensure a reliable and balanced system. Clearly, a cautious approach is needed: a transparent, multidisciplinary team effort involving the funding agencies to design, monitor, and evaluate pilot projects that pave the way for larger scale implementation.

OPPORTUNITIES

Although there are obvious challenges and uncertainties in implementing such a novel approach, there are also opportunities that go beyond solving the excessive overhead and unreliability of the current system. There are at least four commonly recognized issues that can be addressed in one stroke:

  1. Systematic biases with regard to ethnicity or gender can be objectively measured and mitigated. For instance, a bias against funding women may be corrected by raising the funding to each female scientist by a fixed percentage.
  2. Excessive inequality in funding can be controlled by tuning the mandatory donation fraction. Simulations suggest that a 50% donation fraction results in funding inequality that approximates that of the current system, whereas a very small donation fraction results in a highly egalitarian distribution because scientists simply retain most of their base funding (Bollen et al. 2014); the simulation sketch after this list illustrates the effect.
  3. Newcomers always receive the guaranteed base funding without being obliged to spend excessive time applying for project funding. A reduced mandatory donation fraction for early career scientists could strengthen their position even further.
  4. The “ivory tower” effect could be reduced by letting a percentage (say 10%) of the funds be distributed by the public, allowing transparent input on societally desirable research directions. This would additionally stimulate researchers to communicate their ideas to the public.
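To illustrate point 2 above, here is a small Python simulation sketch. The population size, donation-preference graph, and round count are all hypothetical, and a toy random graph will not reproduce the published figures; it only shows the qualitative effect of tuning the donation fraction f on the Gini coefficient of the resulting budgets.

```
import numpy as np

def gini(x):
    """Gini coefficient: 0 = perfectly equal, near 1 = maximally unequal."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(0)
n = 1_000
base = np.ones(n)                     # equal base budgets (arbitrary units)

# Hypothetical donation preferences: each scientist splits his or her
# donation equally over 5 randomly chosen colleagues.
W = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)               # no self-donation
    W[i, rng.choice(others, size=5, replace=False)] = 1 / 5

for f in (0.1, 0.5, 0.9):
    incoming = np.zeros(n)
    for _ in range(200):              # iterate rounds to (near) convergence
        incoming = f * (base + incoming) @ W
    kept = (1 - f) * (base + incoming)
    print(f"f = {f:.1f}  ->  Gini = {gini(kept):.3f}")
```

In this toy setting, inequality rises with f because a larger share of every budget is exposed to the community’s uneven donation preferences, while a small f leaves budgets close to the equal base, consistent with the simulations of Bollen et al. (2014).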

RISKS, BARRIERS, AND BRIDGES

The proposed system would immediately save billions of dollars that are now spent on proposal submission and reviewing. Although it has the potential to solve a range of broadly felt issues with our present system of science funding, it remains impossible to foresee all consequences. A donation system may lead to higher well-being among researchers (Brosnan and de Waal 2003) than the present competition-oriented model, but at the same time, the crowd-based aspect will reward those who most openly communicate their work and plans, encouraging “salesmanship” at the cost of thoughtfulness.

A central challenge will be to ensure that the system remains responsive to societal needs. Will the wisdom of the crowd converge to priorities and objectives that meet societal needs? Our proposal includes the ability of policy makers to direct funding to particular domains and constituencies. Clearly, funding agencies will remain uniquely positioned to provide guidance and know-how for bridging societal and scientific objectives. Government program managers would continue to be highly engaged in the process, but their role would shift toward designing useful classification structures, for instance defining subdomains, and managing crowd-based assessments within those domains rather than the laborious task of evaluating scientific excellence. Instead of directing funds to scientists, the agencies would work collaboratively with scientists and decision makers to leverage shared resources to support both scientific excellence and programmatic obligations.

Fortunately, implementation is not an “all or nothing” matter. One could run small-scale trials with fractions of the national research budget alongside the existing system. This might in fact soon be realized in the Netherlands, where the Dutch parliament approved a motion directing the national science funding agencies to experiment with new models of funding allocation. Such tests afford the opportunity to conduct repeated cycles of evaluation that can inform gradual improvements to the system as it is scaled up.

The funding model we propose may seem a disruptive innovation, but society can no longer afford to lose billions to a complex and costly machinery with unclear performance. The present system has served us well for more than half a century and may be perceived as “tested and proven,” yet we have reached a point where incremental adjustments seem unlikely to repair its broadly recognized shortcomings. The situation we face may be an example of how scaling up can lead to fundamentally unsustainable overhead, as observed in systems ranging from businesses (Zenger 1994) to societies (Tainter 1988). A carefully planned experiment with a SOFA system may provide a bridge to more efficient and reliable alternatives.

ACKNOWLEDGMENTS

The authors express their gratitude to Kate Coronges and Alessandro Vespignani of the Network Science Institute, Northeastern University, Boston, Massachusetts, for their tremendously helpful feedback and comments, which helped us to significantly improve this manuscript. We also thank the organizers and participants of the national workshop on increasing grant submission pressure (“Aanvraagdruk”) and improving NWO grant request procedures, organized by the Netherlands Organization for Scientific Research (NWO) in April 2017 (https://www.nwo.nl/beleid/nwo+werkconferenties+2017/nationale+werkconferentie).

LITERATURE CITED

Azoulay, P., J. S. G. Zivin, and G. Manso. 2009. Incentives and creativity: evidence from the academic life sciences. National Bureau of Economic Research (NBER) Working Paper No. 15466. NBER, Cambridge, Massachusetts, USA. https://doi.org/10.3386/w15466

Bollen, J. 2018. Who would you share your funding with? Nature 560:143. https://doi.org/10.1038/d41586-018-05887-3

Bollen, J., D. Crandall, D. Junk, Y. Ding, and K. Börner. 2014. From funding agencies to scientific agency: collective allocation of science funding as an alternative to peer review. EMBO Reports 15:131-133. https://doi.org/10.1002/embr.201338068

Brosnan, S. F., and F. B. M. de Waal. 2003. Monkeys reject unequal pay. Nature 425:297-299. https://doi.org/10.1038/nature01963

Fang, F. C., A. Bowen, and A. Casadevall. 2016. NIH peer review percentile scores are poorly predictive of grant productivity. eLife 5:e13323. https://doi.org/10.7554/eLife.13323

Gordon, R., and B. J. Poulin. 2009. Cost of the NSERC science grant peer review system exceeds the cost of giving every qualified researcher a baseline grant. Accountability in Research: Policies and Quality Assurance 16:13-40. https://doi.org/10.1080/08989620802689821

Herbert, D. L., A. G. Barnett, and N. Graves. 2013. Australia’s grant system wastes time. Nature 495:314. https://doi.org/10.1038/495314d

Tainter, J. 1988. The collapse of complex societies. Cambridge University Press, Cambridge, UK.

Woolley, A. W., C. F. Chabris, A. Pentland, N. Hashmi, and T. W. Malone. 2010. Evidence for a collective intelligence factor in the performance of human groups. Science 330:686-688. https://doi.org/10.1126/science.1193147

Zenger, T. R. 1994. Explaining organizational diseconomies of scale in R&D: agency problems and the allocation of engineering talent, ideas, and effort by firm size. Management Science 40:708-729. https://doi.org/10.1287/mnsc.40.6.708

Address of Correspondent:
Johan Bollen
9191 E 10th St.
Bloomington, IN 47401
USA
jbollen@indiana.edu
Fig. 1. Fund allocation in the current proposal peer-review system (left) versus the proposed self-organized funding system (right); see text.