The ARC’s attempt to establish an authoritative ranked journal list in 2009, in preparation for the first ERA exercise, caused consternation rather than consensus. After decades of collecting and auditing journal data sets built on the gold standard of peer review, the HERDC had come to be recognised as an essentially quantitative exercise that rewarded volume in the allocation of funding to universities.
For the ERA 2015 exercise, the ARC published a Submission Journal List, formulated after taking advice from peak discipline bodies and submissions from universities. The variability in the editorial rigour of publications on this unranked eligible journal list raised no concern because, by developing an information technology platform, the System to Evaluate the Excellence of Research (SEER), the ARC has ensured that quality control is finely tuned to the needs of each discipline. ARC-appointed authorities, such as ERA reviewers and expert panel members, access a dashboard showing the pattern of journal and other publications for institutions in their area of authority and are therefore best placed to make qualitative judgements on the reputation of a journal, book publisher, exhibition venue, or other mechanism of peer management.
The all-encompassing nature of the ERA research data evaluation makes the distinction between “traditional” and “non-traditional” outputs redundant. The Non-Traditional Research Output (NTRO) designation was instigated by the ARC as an administrative necessity to identify non-HERDC-eligible outputs such as creative works and, more recently, translations and reports to external bodies. Each NTRO submitted for evaluation also required a research statement outlining the research background, contribution, and significance of the particular work. Consequently, for the past two iterations of the ERA at least, the evaluation of creative works and other NTROs has been more secure than that of many so-called “traditional” research publications, since these outputs have had to pass through three layers of peer review. They have been scrutinised through editorial or peer-review selection processes by publishers, gallery directors, curators, and selection panels before publication. Post-publication, the outputs have undergone verification and evaluation by Research Deans and Officers in each university and, finally, by external ERA reviewers. In addition, a number of universities, such as the University of Sydney, appoint external peer assessors to oversee all creative research outputs collected in their research data repositories. Peer judgement remains fundamental to evaluating all research, and the ARC has returned to first principles, using the latest data management and visualisation tools to construct comprehensive measures of quality for NTROs. In bringing recognition to creative arts research, the ARC has fortuitously developed a model that can become the standard procedure across all disciplines, including those currently using citation metrics, for evaluating research publications in future ERA exercises.
The ERA exercise has made manifest the ability of disciplinary authorities to evaluate research on a granular five-point scale of quality. Logically, it follows that the old HERDC two-step allocation of 5 research points for “A1” books and 1 research point for all other modes of publication or dissemination could be reconfigured as a three-step 5/3/1 point allocation for individual research outputs.
The allocation of the highest ranking to books was always absurdly anomalous in its discipline exclusivity. Few would dispute that most books published by reputable academic publishers represent a substantial investment of research time. However, so does a major discovery in physics or a massive longitudinal study in social science reported in a journal article; equally, so does a major architectural, information technology, or engineering innovation reported at a conference. The same applies to a significant novel, film, theatrical, or musical piece that provides insights into a culture, historical moment, or social issue; to a major original public artwork that encapsulates place and identity; or to an important exhibition that shifts perceptions of meaning and time.
Several Australian universities have developed protocols for internal recognition of a 5-point book equivalent for creative arts research and for more granular ranking beyond the binary 5/1. The universal application of a tripartite hierarchical ranking of outputs in future developments of the ERA exercise would bring the institutional ranking of the quality of individual outputs closer to the ultimate five-point scaling at the discipline level. Significantly, a “Substantial”, “Major”, and “Standard” designation, for example, could be applied to outputs regardless of their form or mode of production. Because the designations would not be tied to particular publication forms, the focus would be on the quality of the research: a particular book would not automatically rank as “Substantial”, while a ground-breaking journal article could.
The outmoded 5/1 binary ranking of research used in HERDC has no place in the tractable ERA methodology if it is to continue to present a fine-grained picture of research quality performance by discipline within each Australian institution.
Professor Ross Woodrow is Director of the Griffith Centre for Creative Arts Research and Deputy Director (Research) at the Queensland College of Art, Griffith University, Brisbane. He received his BA from the University of Queensland. His MPhil and PhD from the University of Sydney were both traditional thesis submissions, although he has supervised to completion more than twenty studio-based doctorates along with a number of thesis submissions. He is the founding Executive Editor of the peer-reviewed journal Studio Research and has gained extensive experience with the ERA both in the preparation of submission data and as an ERA Reviewer. As an artist and theorist, he has a publication profile that includes a wide range of traditional and creative outputs.