How should governments allocate resources to research projects? The usual answer is through a mixture of strategic allocations in areas of national significance, recurrent funding for those with a proven pedigree, and responsive funding for the ‘best’ new ideas. Together, these sustain a diverse research ecology with a strong research infrastructure, supporting a research community in which no blind spots emerge.
Grass-roots – responsive mode – funding is always a bit of a gamble for all those involved. Funders have limited resources, and developing proposals is incredibly time-consuming. The ‘oil’ in the system is therefore researchers’ willingness to spend their scarce time developing proposals that might not be funded.
That willingness depends strongly on the system’s credibility in delivering fair outcomes – ensuring that if you play the game, follow the rules and come up with a good idea, then you’ve got a decent chance of being funded. And that willingness is strongly influenced by relative success rates: if good research proposals are being rejected with no chance of funding, then the funding system starts to have more in common with a lottery than with a credible, objective governance tool. Arguments vary, but empirical data suggest that there is a credibility threshold at a success rate of around 25–33%.
It’s plausible that three-quarters of proposals just aren’t good enough – we’ve all been asked to review weak proposals, and most of us accept that sometimes we just get it wrong. But when success rates fall as low as 2%, as the Dutch Social Sciences open Ph.D. call reportedly did in recent years, there are strong negative effects, with people unwilling to invest significant time in developing proposals.
This undermines the role of responsive funding in supporting risky, path-breaking research, and the Matthew effect means that it is people with existing resources, not necessarily those with the best ideas, who have the time to write proposals for these “funding lotteries”. That is a real science policy problem, and one which has also affected the UK’s ESRC, as noted by Phil Ward: the outcomes for November 2014 showed that more than half of the excellently-scoring proposals were not funded.
Phil asked what the solution to this problem might be, noting that the ESRC “combing through 144 applications to identify the 14 to be funded seems like a colossal waste of their time. It’s inefficient, and it’s not good to appear as such with the Nurse Review just around the corner”. One suggestion he made was a two-stage process, as was previously used in UK research council Research Programmes and is now common in European Joint Programming Initiatives and ERA-NET schemes.
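The scale of the waste is easy to quantify from Phil’s figures. A minimal back-of-envelope sketch – the 20 working days per proposal is purely an illustrative assumption, not a number from the ESRC data:

```python
# Rough estimate of applicant effort spent on unfunded proposals.
# The 144 submitted / 14 funded figures are from Phil Ward's post;
# 20 working days per proposal is an ASSUMED illustrative cost.

def wasted_effort(submitted, funded, days_per_proposal=20):
    """Return (success rate, person-days spent on rejected proposals)."""
    success_rate = funded / submitted
    wasted_days = (submitted - funded) * days_per_proposal
    return success_rate, wasted_days

rate, wasted = wasted_effort(144, 14)
print(f"Success rate: {rate:.1%}")               # prints "Success rate: 9.7%"
print(f"Person-days on rejected bids: {wasted}")  # prints 2600
```

Even on these conservative assumptions, a single call consumes thousands of person-days of unrewarded proposal-writing – the ‘colossal waste’ in concrete terms.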
He also argued for more demand management at the level of universities, to stop proposals with no chance of being funded from going forward. But that’s asking demand management to make some fairly fine calls, with 45% of those submitted ranking as at least ‘very good, significant value with the chance to make a real contribution’.
Could universities, as private institutions with their own politics, really be trusted to make a distinction between holding back a superstar Prof’s very good proposal to allow a post-doc’s excellent one to go in? Why swap a large pool of peer reviewers for a tiny one, full of internal politics? Why ask researchers to spend their costly time to run their own institutionally-based research councils?
If the effect is, as desired, a further upward shift in proposal quality, then demand management effectively adds to the lotterisation of the system: you buy a ticket with an excellent proposal and hope the stars shine on you. And ultimately, if you believe universities can be trusted to identify the best research proposals (and that’s by no means uncontroversial), then why have research councils at all?
Clearly, there is some scope for more top-down management, but the fundamental point is that there’s a fine line between ‘nurturing’ proposals and stitch-ups that primarily benefit established researchers over innovative ones. That could be avoided by a process with more stages, giving applicants certainty before full projects are finalised. But that risks a massive administrative burden – a bureaucracy squeezing out academic input and favouring box-ticking and, inevitably, conservatism.
These problems also reflect the shortage of cash. I was lucky enough to be in the UK during the boom years of the early 2000s, when there was suddenly as much money as there were good ideas. That money was well invested in creating a generation of researchers who are now aggressively chasing every opportunity with brilliant ideas – with the well-known consequences.
It’s not just Research Council research funds that are under pressure – their admin budgets are too. There is a real risk that administrative concerns will lead them to drop this essentially bottom-up tool in favour of top-down, centrally guided funds. This can be seen in the popularity of the idea of massive research centres addressing strategic themes (or, depending on your position, today’s craze). But these do at least have the merit of being easy to administer compared to the headache of deciding which of 144 excellent proposals get the nod.
We’ve been here before: the US tried to address this time-wasting problem back in the 1980s. Their efforts stalled, so it could be that the problem is simply intractable – “you want dynamism, you get waste”. Any replacement needs not just to address the symptoms of administrative burden but, primarily, to ensure there are incentives for science-system renewal that provide substantial public benefit.
Perhaps the 10% success rate is just a blip arising from one meeting in the ESRC’s case, but clearly, funding rates far below the credibility threshold are a growing problem across Europe, and one needing our serious attention.
(This post is a condensed form of a spontaneous Twitter conversation involving @Nightingale_P, @heravalue, @AdamCommentism, @kieronflanagan, @frootle and @Cash4Questions, arising from @frootle’s original post; all errors and omissions remain the responsibility of @heravalue.)
Photo appears courtesy of catherinecronin under a CC BY-SA 2.0 License.