Australian Research Council review of research excellence, engagement and impact leaves important policy questions unanswered

By Frank Larkins

In 2020 the Australian Research Council commissioned a review of the Excellence in Research for Australia (ERA) and the Engagement and Impact (EI) assessment programs. This review was in response to increasing concerns, principally from sections of the university community, about the transparency, costs, relevance, credibility, and academic and financial benefits of the two exercises.

In 2020 and 2021 an advisory committee assessed the 112 submissions received from individuals and organisations in response to a consultation paper, along with other data. The advisory committee's report, containing 22 recommendations, was presented to the ARC and released in June 2021. An accompanying action plan from the ARC accepting all the recommendations was released at the same time. It is most encouraging that the committee and the ARC acknowledged shortcomings in the present assessment processes and conceded that the ERA and EI rating scales and benchmarks needed to be reviewed. The ARC has resolved to make significant changes to the assessment exercises. However, the action plan as released lacks details of how the major shortcomings will be addressed. The approach has been to establish another expert panel and more working parties to advise on solutions.

An opportunity has been missed: the 17 members of the advisory committee, with their breadth of experience, identified problems but did little to canvass options for solving them.

A major conclusion of the review committee was that the existing ERA framework is not now as ‘fit for purpose’ as it may have been for previous rounds. The committee acknowledged that the value of the ERA as a stocktake of research had declined and that the objective of identifying emerging research areas and opportunities for further development had not been achieved. They highlighted a serious deficiency: that ‘world standard’ was not well defined and may not have been applied consistently between citation-analysis disciplines and peer-reviewed disciplines. These are damning criticisms vindicating major concerns expressed by university researchers and policy makers in recent years.

The revised vision statement recommended by the advisory committee to underpin further exercises was accepted by the ARC. One can anticipate that the vision statement in its generalised form (‘That rigorous and transparent research assessment informs and promotes Australian universities’ pursuit of research that is excellent, engaged with community, industry and government, and delivers social, economic, environmental and cultural impact’) would not be considered controversial by stakeholders. The central challenge is translating the words into actions so that the research community can be confident that future exercises will deliver the ‘rigorous and transparent research assessment’ promised. This has not universally been the case with past exercises, especially in relation to the publication and citation metrics used to establish world standards and the methodological inconsistencies in assessment practices between science-related disciplines and humanities and social sciences-related disciplines.

The advisory committee stated that Australian university research should be assessed in the future with ‘greater robustness, transparency, inclusivity and utility’ to make the outcome more accessible and insightful for stakeholders. The policies to achieve these noble goals are not adequately addressed in the report or in the ARC response. Policy changes are essential to restore the confidence of the university community in the integrity of the outcomes of the ERA and EI exercises.

The major challenge will be to establish revised methodologies acceptable to the community within a very short timeframe, given that the ARC has advised that the new ERA and EI rating scales will be published in the first half of 2022 in time for the proposed next ERA exercise in 2023 and the next EI assessment in 2024. Universities will be recovering from the adverse effects of the COVID-19 pandemic with reduced staffing levels within this timeframe.

Since Government funding for university research is not directly aligned with the outcomes of the ERA and EI exercises, a longer lead time to the next rounds, allowing more consultation and preparation, is appropriate.

The ERA and EI exercises, if progressed, should be deferred until at least 2025 and 2026 respectively to give universities time to establish new ‘COVID normal’ operational frameworks.

A central pillar of the ERA assessment, vital to its credibility, is knowledge of the quantitative and qualitative metrics used to establish the world standards for the various Field of Research (FOR) disciplines. To date, there has been little transparency in this process, with world standard benchmarks in some science and technology disciplines apparently declining from the 2012 to the 2018 exercise. Furthermore, there are clear inconsistencies between metrics-based and peer review assessment methodologies. These deficiencies can have adverse consequences for researchers when universities make policy decisions about research priorities and funding allocation for different disciplines.

Anomalies were apparent in a 2019 study comparing the number and percentage of Australian universities rated above world standard in 2018 and 2012 for various research disciplines. A clear illustration is the comparison of the metrics-based Mathematical Sciences (MS) discipline outcomes with the predominantly peer-reviewed Information and Computing Sciences (ICS) discipline outcomes. In 2018, 93% of universities submitting in MS (25 of 27 universities) were assessed to be above world standard, compared with only 41% in ICS (14 of 34 universities). In 2012, only 41% of universities submitting in MS (11 of 27) were above world standard, and 20% (7 of 35) in ICS. This outcome, and several other assessments, warrant greater scrutiny.

It is not credible that the 25 of 27 Australian universities undertaking Mathematical Sciences research that were assessed to be above world standard in the 2018 ERA exercise would be ranked this highly globally by their peers.

The earlier study of the outcomes from the 2018 ERA exercise also highlighted that many more universities were rated above world standard in the science-related disciplines than in the humanities and social sciences-related disciplines (compare figures 1 and 2 of the reference). Confidence in the ERA processes, as viewed by many researchers, declined markedly as a result of the published assessment outcomes from the 2018 round.

The review committee did acknowledge stakeholder concerns that the peer review methodology was not comparable to the citation analysis methodology and that the concept of ‘world standard’ may be unintentionally set higher for peer review disciplines than for citation analysis disciplines. They did not have a solution to this problem, so it was referred to an expert working group. It should be acknowledged that reconciling the disparity within a ‘one size fits all’ approach is an insoluble problem. Therefore:

The ERA assessment round should be separated into two distinct categories, one for the sciences and another for the humanities and social sciences, with different rating scales and benchmark metrics for what constitutes ‘world standard’ or ‘world best practice’ in each category.  

In reviewing the benchmarking of Australian universities’ research discipline performance against international norms and the accompanying ERA rating scales, it would be more realistic to establish publication and citation benchmarks for science disciplines against a small group of English-speaking developed countries that are our main competitors, rather than using global average performances. One approach would be to use research outcomes from the USA, UK, Canada and New Zealand to establish the ERA performance benchmarks.

The ERA performance benchmarks should be established using data from a small group of English-speaking developed countries to restore integrity in the ranking process.

The Engagement and Impact (EI) exercise planned for 2024 will see the engagement indicators for each FOR chosen by universities, because the committee has acknowledged that research engagement is discipline and university specific. No suitable metrics exist that can quantify research engagement relationships in a one-size-fits-all approach. Furthermore, the committee stated that there is currently no set of quantitative indicators that adequately captures the broad range of research impacts.

Acceptable comparative assessment of performance between universities and the development of a valid EI rating scale are elusive and unlikely to be achieved. The rating scale is planned to be revised by December 2021, but no insight has yet been given as to the kind of amendments that will be made. The definitions for impact and the assessment methodology have also been identified for revision, with submission guidelines to be published by June 2022. The ARC will look to a working party for guidance in an area where international experience indicates assessment is fraught with many difficulties.

The expense and time commitments associated with the EI exercise cannot be justified, given the questionable benefits and the limitations associated with defining and assessing metrics that will be discipline specific, predominantly qualitative, highly subjective and widely variable between universities.

The ERA and EI review conducted by the ARC has been important in acknowledging the deficiencies in the current approaches. If the ARC is to fulfil its aim of providing ‘greater robustness, transparency, inclusivity and utility’ to streamline the process and make the outcomes more credible, accessible and insightful for stakeholders it has much work to do within a very short timeframe. The methodological barriers to real success, to restore confidence for the stakeholders in the exercises, may be insurmountable.

FPL July 2021.

Professor Emeritus Frank Larkins is an Honorary Professorial Fellow from the Melbourne Centre for the Study of Higher Education and School of Chemistry, at The University of Melbourne.
