The University of Southampton
Public Policy|Southampton

Consultation response | Research Integrity Inquiry

Science and Technology Committee

A response from the University of Southampton | March 2017

Written evidence submitted by Dr Zacharias Maniadis (with input from Dr Thomas Gall), University of Southampton

Zacharias Maniadis is an associate professor of economics at the University of Southampton. His research examines the credibility of science and scientists’ incentives through the lens of economic modelling. This submission emphasizes the importance of economic theory in analysing reforms of incentives and practices in science, which need to be assessed rigorously before implementation.

Executive Summary

The Importance of Rigorous Assessment

1. State control over science has been the subject of heated political debate since the 1980s. According to Oxbridge professors Partha Dasgupta and Paul David, two early proponents of the economic study of science: “ … the best preventative against blind and costly social experimentation that we can recommend is a prior investment in acquiring and disseminating deeper scientific understanding of the subject of concern.”(i) These words of caution are highly relevant in today’s environment, where substantial concern has emerged about the conduct and the social role of science.

2. The January 2017 ‘Integrity in Research’ POSTnote enumerated a series of proposed reforms (intended to ‘re-align incentives for researchers’), such as incentivising replications, placing less emphasis on positive results, and even criminalising misconduct. This submission will argue that the field of economics has a longstanding focus on identifying and assessing reforms that change individual incentives, and that economic evidence should therefore be sought: it will be tremendously useful in devising and implementing any reforms to re-align incentives for researchers.

3. There is no lack of suggestions for reform put forward by practitioners in the respective fields, but they are only seldom backed by formal theoretical analysis or sound plans for empirical evaluation. The absence of rigorous evaluation entails a high risk of ending up in a regime of worse credibility.(ii) A greatly underappreciated point is that various tools of economic analysis (both theoretical and experimental) can be fruitfully employed to assess proposals for reforming biomedical research practices.(iii) In particular, careful economic analysis can inform us about unintended consequences of policies and about ‘market outcomes’ that aggregate researchers’ behaviour. Importantly, each type of reform has costs, which need to be weighed carefully against the benefits. This weighing needs to be transparent and rigorous, and this is where economic theory has a great role to play (especially the subfields of welfare economics and cost-benefit analysis).

4. The feasibility of assessment by means of such methodologies is illustrated by a number of successful examples where similar economic evidence has informed real policies. Economist and Nobel laureate Alvin Roth and colleagues combined mathematical models with laboratory experiments to examine the best way to address inefficiency in the market for new physicians in the USA and Canada.(iv) They conducted experiments to examine the performance of mechanisms with good theoretical properties in simulated markets with human participants. Their analysis inspired the creation of ‘centralized clearinghouses’ for the markets they studied. Economic modelling and laboratory experiments have also informed optimal auction design for radio spectrum licenses,(v) and they have been employed to study the efficiency implications of creating a market of tradable ‘emission permits’ for polluting companies.(vi)

5. Somewhat surprisingly, there seems to be little rigorous evidence on possible outcomes guiding policies such as ‘Amending the REF’ and ‘Extending the funding horizon of junior researchers’. Standard economic policy analysis could clarify the potential trade-offs and greatly enhance our understanding of these policies. Such analysis could utilize mathematical modelling of incentives, insights from laboratory experiments, or Randomized Controlled Trials in the field. It is noteworthy that these approaches are cost-efficient and can be conducted at short notice. Pure mathematical modelling requires only the researchers’ time as an input. A first empirical evaluation of theoretically desirable reforms in social environments could be conducted through laboratory experiments with human participants, simulating the modelled scenarios. Such experiments could be conducted within 6-9 months, and their costs (on top of researchers’ time) would be in the range of £50,000-£100,000 for a large series of trials. Scaling up to a Randomized Controlled Trial in the field (where this is possible) would require a time horizon of 2-3 years and a substantially larger budget per trial.

Extreme Punishments and Audits are Unnecessary

6. An example can illustrate the power of economic theory. The 2017 POSTnote raises the question of whether misconduct should be criminalised. Clearly, such punishment would only pertain to extreme forms of misconduct, such as data alteration and fabrication, which greatly distort our knowledge. Criminalisation is strongly opposed by many researchers, who feel that it could instil a culture of fear around admitting well-intentioned mistakes.

7. Simple economic and game-theoretic reasoning indicates that such measures are not necessary in order to reduce the rate of extreme misconduct.(vii) Suppose that researchers competing to publish in elite journals can either behave honestly, engage in questionable research practices, or commit misconduct. Importantly, misconduct carries a high personal cost, stemming potentially from psychological aversion or from the consequences if it is detected. Questionable research practices, which seem to fall into an ethical grey area, carry a positive but moderate personal cost. Pure misconduct conveys a large advantage in the probability of publication, while questionable research practices convey a small advantage (compared to pure honesty).

8. A game-theoretic model incorporating the main strategic interactions and individuals’ motivations shows that a policy of striking down questionable research practices (not extreme misconduct) is the best option. The idea is the following: if a new policy makes it difficult for everyone to gain a ‘small’ advantage in publishing, the average researcher will perceive a relatively level playing field (apart from misconduct, which has a high cost). This means that misconduct is no longer necessary in order to avoid falling into a disadvantageous position, and therefore its high cost makes it less attractive. Similar simple models have been used extensively in the literature on the economics of crime, often influencing criminal law.(viii)

9. A corollary of this analysis is that a policy such as enforcing ‘Checklists’ will also significantly diminish extreme misconduct, even in the absence of extreme punitive and monitoring measures. This is because Checklists will render questionable research practices more costly. Accordingly, if one worries about abominable practices such as data fabrication, there is no need to criminalise them in order to reduce their frequency.
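The logic of paragraphs 7-9 can be sketched as a simple contest model. The sketch below is purely illustrative and is not the calibrated model of the cited discussion paper: the ‘scores’, the reward and the cost comparison are hypothetical numbers, chosen only to show how removing small questionable-practice advantages lowers the return to misconduct.

```python
# A minimal sketch of the game-theoretic argument in paragraphs 7-9.
# Publication is modelled as a two-researcher contest: your probability
# of publishing is your "score" divided by the total score of both.
# All numbers are hypothetical assumptions for illustration only.

REWARD = 100.0               # value of an elite publication (arbitrary units)
SCORE = {
    "honest": 1.0,           # baseline
    "qrp": 2.0,              # small edge from questionable research practices
    "misconduct": 4.0,       # large edge from fabrication/alteration
}

def pub_prob(mine: str, rival: str) -> float:
    """Contest success function: own score as a share of total score."""
    return SCORE[mine] / (SCORE[mine] + SCORE[rival])

def gain_from_misconduct(rival: str) -> float:
    """Extra expected reward from misconduct instead of honesty,
    given what the rival researcher is doing."""
    return REWARD * (pub_prob("misconduct", rival) - pub_prob("honest", rival))

# When rivals exploit QRPs, honesty falls behind, so misconduct pays more;
# blocking QRPs (e.g. via checklists) lowers the return to misconduct, so a
# given personal cost of misconduct deters more researchers.
print(f"return to misconduct vs QRP rivals:    {gain_from_misconduct('qrp'):.1f}")
print(f"return to misconduct vs honest rivals: {gain_from_misconduct('honest'):.1f}")
```

With these assumed numbers, any personal cost of misconduct lying between the two printed values deters misconduct only once the small QRP advantage has been blocked, which is the mechanism the paragraph describes.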

The Incentives for Replication

10. A key challenge in the current scientific environment is how to incentivise replications. It can be shown that, if the social criterion is to achieve results that are as credible as possible, a small number of replications (on the order of three for each finding) can greatly enhance the credibility of published results.(ix) This is so even if replications themselves are subject to bias, either in favour of or against the initial result.(x) The question is how to incentivise replications in an environment where they are not considered particularly creative research and may be met with hostility by one’s peers. In general, researchers have no incentive to make the effort needed to document and codify the knowledge necessary to allow replications of their findings.
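The claim that a handful of replications greatly enhances credibility can be illustrated with a standard Bayesian post-study-probability calculation, in the spirit of the cited work. The prior, statistical power and significance level below are hypothetical illustrative values, not estimates taken from the referenced papers.

```python
# How quickly does the probability that a published finding is true rise
# with successful replications? A Bayesian sketch under the standard
# assumptions of independent studies with identical power and
# false-positive rate. Parameter values are illustrative only.

def post_study_probability(prior: float, power: float, alpha: float,
                           n_positive: int) -> float:
    """P(effect is true | n_positive independent significant results),
    by Bayes' rule."""
    true_path = prior * power ** n_positive          # true effect, all detected
    false_path = (1 - prior) * alpha ** n_positive   # no effect, all false positives
    return true_path / (true_path + false_path)

prior, power, alpha = 0.10, 0.80, 0.05  # hypothetical field-wide values
for n in range(1, 5):  # original study plus up to three replications
    print(f"{n} significant result(s): "
          f"P(true) = {post_study_probability(prior, power, alpha, n):.3f}")
```

Under these assumed values the probability that the finding is true climbs from well below certainty after a single significant study to near certainty after about three successful replications, consistent with the paragraph's claim that a small number of replications suffices.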

11. The classical dichotomy between tacit and codified knowledge is very relevant to understanding a modern researcher’s decision about how much effort to devote to providing the codification and materials that facilitate future replication efforts. Following Dasgupta and David, such decisions will be a function of the incentive systems involved.

12. From this perspective, one may also consider changing researchers’ incentives in the direction of increasing the reward for having one’s work replicated.(xi) Such a scheme would have the potential to improve relationships between colleagues and align private and social returns, providing a more appropriate payoff to the socially beneficial activity of codifying the implicit knowledge needed for replication. Of course, as argued above, this proposal would need to be carefully assessed before actual implementation.

The State of Meta-Research in the UK

13. It is perhaps little surprise that increasing concerns about an alleged ‘credibility crisis’ in science have induced the development of tools for the scientific study of the crisis itself. In particular, a new academic field that conducts ‘research-on-research’ using rigorous methodology is continually being refined. This field is called ‘meta-research’,(xii) and its key objective is to give the scientific community a bird’s-eye view of the overall literature.

14. It is widely accepted that basic research tends to be underprovided if left to market forces, as the producers of insights from basic research do not generally appropriate the fruits of their work. One frequently employed solution is the subsidisation of basic research. In the new field of meta-research, this inefficiency problem is particularly exacerbated. Meta-research has high social benefits, because it allows for a rigorous assessment of the credibility of the existing literature. However, it is well known that in many scientific disciplines this type of work is not considered as prestigious as original research, and it is also very time-demanding. Added to this is the fear that replicating existing studies might jeopardize relationships with fellow researchers and, in the case of early-career researchers, tenure decisions. Accordingly, the private marginal benefit to individual researchers of conducting this type of research is much lower than the social marginal benefit.

15. In the US, a country with a long tradition of privately initiated philanthropy, this problem has been chiefly addressed by a very large charity, the Arnold Foundation. This institution is particularly active in supporting meta-research across domains, including randomised interventions and the development of networks such as METRICS, a multidisciplinary centre at Stanford University, California.

16. The UK has a long tradition in the quantitative synthesis of research results, being home to the Cochrane Collaboration, a famous network supporting meta-analysis. However, meta-research goes beyond statistics, benefiting from the collaboration of biomedical researchers, sociologists of science, psychologists, economists and even evolutionary biologists. It is an open question whether the UK can sustain this type of research in the absence of a similarly large charity sector.

i P Dasgupta and PA David (1994). "Toward a new economics of science." Research Policy 23.5, pp. 487-521.

ii JPA Ioannidis (2012). "Why science is not necessarily self-correcting." Perspectives on Psychological Science 7.6, pp. 645-654.

iii T Gall, JPA Ioannidis and Z Maniadis (2016). "The Credibility Crisis in Research: Can Economics Tools Help?" R&R, PLoS Biology.

iv JH Kagel and AE Roth (2000). "The dynamics of reorganization in matching markets: a laboratory experiment motivated by a natural experiment." Quarterly Journal of Economics 115, pp. 201-235.

v J Ledyard, D Porter and A Rangel (1997). "Experiments testing multi object allocation mechanisms." Journal of Economics and Management Strategy 6, pp. 639-675.

vi TN Cason, L Gangadharan and C Duke (2003). "Market power in tradable emission markets: a laboratory testbed for emission trading in Port Phillip Bay, Victoria." Ecological Economics 46.3, pp. 469-491.

vii T Gall and Z Maniadis (2016). "Evaluating Solutions to the Problem of False Positives in Science." University of Southampton Discussion Paper 1504.

viii RA Posner (1985). "An economic theory of the criminal law." Columbia Law Review 85.6, pp. 1193-1231.

ix Z Maniadis, F Tufano and JA List (2014). "One swallow doesn't make a summer: New evidence on anchoring effects." The American Economic Review 104.1, pp. 277-290.

x Z Maniadis, F Tufano and JA List (2017). "How Important is Replication in Economics? A Model and Pilot Study." Forthcoming, Economic Journal.

xi Z Maniadis, F Tufano and JA List (2015). "How to make experimental economics research more reproducible: Lessons from other disciplines and a new proposal." In Replication in Experimental Economics. Emerald Group Publishing Limited, pp. 215-230.

xii JPA Ioannidis et al. (2015). "Meta-research: evaluation and improvement of research methods and practices." PLoS Biology 13.10, e1002264.
