Five things to consider when designing a policy to measure research impact [Originally published in The Conversation]

By Andrew Gunn, Researcher in Higher Education Policy, University of Leeds and Michael Mintrom, Professor of Public Sector Management, Monash University

This year will see the Australian government pilot¹ new ways to measure the impact of university research.

As recommended by the Watt Review², the Engagement and Impact Assessment will encourage universities to ensure academic research produces wider economic and social benefits.

This fits into the National Innovation and Science Agenda³, in which taxpayer funds are targeted at research that will have a beneficial future impact on society.

Education Minister Simon Birmingham said⁴ the pilots will test

“how to measure the value of research against things that mean something, rather than only allocating funding to researchers who spend their time trying to get published in journals”.

This move to measure the non-academic impact of research introduces many new challenges that were not previously relevant when evaluation focused solely on academic merit. New research⁵ highlights some of the key issues that need to be addressed when deciding how to measure impact.

1. What should be the object of measurement?

Research impact evaluations need to trace out a connection between academic research and “real world” impact beyond the university campus. These connections are enormously diverse and specific to a given context. They are therefore best captured through case studies.

When analysing a case study, the main issues are what counts as impact and what evidence is needed to prove it. When considering this, Australian policymakers can use recent European examples⁶ as a benchmark.

For instance, in the UK’s Research Excellence Framework (REF) – which assesses the quality of academic research – the only impacts that can be counted are those directly flowing from academic research submitted to the same REF exercise.

To confirm the impact, the beneficiaries of research (such as policymakers and practitioners) are required to provide written evidence. This creates a narrow definition of impact, because impacts that cannot be verified, or that are not based on submitted research outputs, do not count.

This has been a cause of frustration for some UK researchers, but the high threshold does ensure the impacts are genuine and flow from high-quality research.

2. What should be the timeframe?

There are unpredictable time lags between academic work being undertaken and that work having an impact. Some research may be quickly absorbed and applied, whereas other impacts, particularly those from basic research, can take decades to emerge.

For example, a study of time lags in health research⁷ found the average lag from research to practice to be 17 years. It should be noted, though, that time lags vary considerably by discipline.

Only in hindsight can the value of some research be fully appreciated. Research impact assessment exercises therefore need to be set to a particular timeframe.

Here, policymakers can learn from previous trials, such as the one conducted by the Australian Technology Network and the Group of Eight in 2012⁸. This exercise allowed impacts related to research undertaken during the previous 15 years.

3. Who should be the assessors?

It is a long-established convention that academic excellence is judged by academic peers. Evaluations of research are typically undertaken by panels of academics.

However, if these evaluations are extended to include non-academic impact, does this mean there is now a need to include the views of end-users of research? If so, people outside academia would need to be involved in the evaluation of academic research.

In the 2014 UK REF, over 250 “research users” (individuals from the private, public or charitable sectors) were recruited to take part in the evaluation process. However, their involvement was restricted to assessing the impact component of the exercise.

This option is an effective compromise: it maintains the principle of academic peer review of research quality while also including end-users in the assessment of impact.

4. What about controversial impacts?

In many instances the impact of academic research on the wider world is a positive one. But some impacts are controversial – such as those of fracking, genetically modified crops, nanotechnologies in food, and stem cell research – and these need to be carefully considered.

Such research may have considerable impact, but in ways that make it difficult to establish a consensus on how scientific progress impacts “the public good”. Research such as this can trigger societal tensions and ethical questions.

This means that impact evaluation also needs to consider non-economic factors – such as quality of life, environmental change, and public health – even though it is difficult to place dollar values on these things.

5. When should impact evaluation occur?

Impact evaluation can occur at various stages in the research process. For example, a funder may invite research proposals where the submissions are assessed based on their potential to produce an impact in the future.

An example of this is the European Research Council’s Proof of Concept Grants, where researchers who have already completed an ERC grant can bid for follow-on funding to turn their new knowledge into impacts.

Alternatively, impacts flowing from research can be assessed in a retrospective evaluation. This approach identifies impacts where they already exist and rewards the universities that have achieved them.

An example of this is the Standard Evaluation Protocol (SEP) used in the Netherlands, which assesses both the quality of research and its societal relevance.

A novel feature of the proposed Australian system is the assessment of both engagement and impact⁹ as two distinct things. This means there isn’t one international example to simply replicate.

Although Australia can learn from some aspects of evaluation in other countries, the Engagement and Impact Assessment pilot is a necessary stage to trial the proposed model as a whole.

The pilot – which will test the suitability of a wide range of indicators and methods of assessment for both research engagement and impact – means the assessment can be refined before a planned national rollout in 2018.

Notes

1 AUSTRALIAN RESEARCH COUNCIL. 2017 pilot to test impact, business engagement of researchers [online]. Australian Research Council, Australian Government, 2016. [viewed 16 January 2017]. Available from: http://www.arc.gov.au/news-media/media-releases/2017-pilot-test-impact-business-engagement-researchers

2 SHAW, C. Watt report suggests financial incentives for measuring research impact [online]. The Conversation, 2015. [viewed 16 January 2017]. Available from: https://theconversation.com/watt-report-suggests-financial-incentives-for-measuring-research-impact-51815

3 MAZZAROL, T. Will the National Innovation and Science Agenda deliver Australia a world class National Innovation System? [online]. The Conversation, 2015. [viewed 16 January 2017]. Available from: https://theconversation.com/will-the-national-innovation-and-science-agenda-deliver-australia-a-world-class-national-innovation-system-52081

4 DEPARTMENT OF EDUCATION AND TRAINING MEDIA CENTRE, THE. 2017 pilot to test impact, business engagement of researchers [online]. The Department of Education and Training Media Centre, 2016. [viewed 16 January 2017]. Available from: https://ministers.education.gov.au/birmingham/2017-pilot-test-impact-business-engagement-researchers

5 GUNN, A. and MINTROM, M. Evaluating the non-academic impact of academic research: design considerations. Journal of Higher Education Policy and Management [online], 2017, vol. 39, n.1, pp. 20-30. [viewed 16 January 2017]. DOI: 10.1080/1360080X.2016.1254429

6 GUNN, A. and MINTROM, M. Higher Education Policy Change in Europe: Academic Research Funding and the Impact Agenda. European Education [online], 2016, vol. 48, n.4, pp. 241-257. [viewed 16 January 2017]. DOI: 10.1080/10564934.2016.1237703

7 MORRIS, Z.S., WOODING, S. and GRANT, J. The answer is 17 years, what is the question: understanding time lags in translational research. Journal of the Royal Society of Medicine [online], 2011, vol. 104, n. 12, pp. 510-520. [viewed 16 January 2017]. Available from: http://journals.sagepub.com/doi/full/10.1258/jrsm.2011.110180

8 AUSTRALIAN TECHNOLOGY NETWORK and GROUP OF EIGHT. Guidelines for completion of case studies in ATN/Go8 EIA impact assessment trial [online]. Group of Eight, 2012. [viewed 16 January 2017]. Available from: https://www.go8.edu.au/sites/default/files/docs/eia_trial_guidelines_final_mrb.pdf

9 TAYLOR, S. When measuring research, we must remember that ‘engagement’ and ‘impact’ are not the same thing [online]. The Conversation, 2016. [viewed 16 January 2017]. Available from: https://theconversation.com/when-measuring-research-we-must-remember-that-engagement-and-impact-are-not-the-same-thing-56745


External links

European Research Council – Proof of Concept – https://erc.europa.eu/proof-concept

Research Excellence Framework – http://www.ref.ac.uk/

Research Excellence Framework – Information for research users – http://www.ref.ac.uk/about/users/

VSNU – Quality Assurance Research: Standard Evaluation Protocol – http://www.vsnu.nl/en_GB/sep-eng.html

Read the original post:

https://theconversation.com/five-things-to-consider-when-designing-a-policy-to-measure-research-impact-71078

This text was published under a Creative Commons License — Attribution-NoDerivatives.

How to cite this post [ISO 690/2010]:

THE CONVERSATION. Five things to consider when designing a policy to measure research impact [Originally published in The Conversation] [online]. SciELO in Perspective, 2017 [viewed ]. Available from: https://blog.scielo.org/en/2017/01/16/five-things-to-consider-when-designing-a-policy-to-measure-research-impact-originally-published-in-the-conversation/
