The lack of consensus on quality, and on how non-traditional research outputs were valued by the respective institutions, was obvious. To give a few examples: one university put up a folio of creative works – i.e. one countable output – comprising upwards of 60 poems, plus full books of poems. As a poet myself, knowing the difficulties of publishing, I was surprised by the suggestion that such enormous quantities were required to constitute equivalence with a single research output such as a peer-reviewed scholarly article. This approach suggested that the poet-researcher’s outputs had been more or less gathered up and retrospectively lassoed by a research statement, without demonstrating a strong practice research purpose; or perhaps it was a gaming strategy on the part of the university.
In another example from a different university, the submitted sample of creative works included images of the covers of outputs such as published books, along with industry indicators of quality such as review excerpts and lists of prizes and accolades that the respective works had garnered. But there were no excerpts of the creative works themselves.
The work itself, I thought, must surely be necessary to demonstrate quality in creative practice research, rather than relying purely on an author’s reputation or industry success, which we all know can be fickle markers.
In that same assessment round, there were other approaches to submitting samples and ‘evidence’ too, revealing a very broad spectrum of understanding of what constitutes quality creative practice research, and of how that quality might be demonstrated.
I have heard similar anecdotes from colleagues and peers who have also been assessors. We are clearly faced with problems of understanding the relationship between quality and quantity, and with how to demonstrate quality in fields that haven’t developed in the academy according to ‘traditional’ methods.
What I enjoy about ‘evaluation’ of creative practice research in all its contexts – whether that be examination of PhDs, peer-reviewing for publications, assessing for ERA, or even evaluating my own outputs and writing a statement for internal processes – is the call to connect with the work itself, and with the extent to which that work stretches not only practice or idea but the articulation of practice and idea. This ‘connecting’ with the breadth of peers’ research tempers the ‘labour’ of evaluation tasks for me (to an extent, of course!). The attentiveness required for proper evaluation of creative practice research is a privilege that we ought to emphasise in developing ‘national standards’. But where we perhaps stray is in ‘evaluating’ according to arbitrary industry markers that don’t necessarily have much to do with research – awards and other accolades, the number of reviews, and so on. These markers can be important factors in terms of whom one’s work reaches and how its impacts might be measured, but they can’t be leaned on as labour ‘work-arounds’ in place of a fuller focus on the quality of the research offered by the work itself. So, what standards might be set nationally to ensure that submissions focus on the quality of the research itself?
Some considerations and questions towards the next generation of ERA: First, there is the labour involved in writing research statements, preparing submissions, and assessing both internally (at university level) and nationally.
Our current methods are labour- and cost-intensive; the significant administrative burden was, of course, outlined in the ARC submission to the interim report of the Universities Accord.
Yet peer review is, really, non-negotiable. What processes might we collectively put forward to guide university collection and national assessment?
Second, what is the relationship between creative practice research and the industries that support and disseminate its public engagement and impact lifecycles? To what extent do these external or industry markers of approval indicate that good research has occurred? To what extent are we leveraging those markers to retrofit strong research statements around our creative practice outputs, and what are the implications of this retrofitting for research quality?
Third, to what extent do we want universities to have autonomy over their own priorities when it comes to assessing creative practice research? We know that universities assign points and metrics to outputs in different ways, usually driven by institutional priorities. How might national standards enable and/or restrict this institutional autonomy?
Fourth, is it possible to compare our creative and artistic rigour with traditional academic rigour? As I have gleaned from my own encounters within institutional settings, there persists an attitude that creative practice researchers and their research can be naive. Why is this? Perhaps because we’re made to squeeze practitioner knowledges into modes of reporting that aren’t necessarily a good fit. National guidelines for assessing quality would help to establish standards and dispel such attitudes.
And finally, are there some creative practices that are better suited to academe than others? How so, and why? And how might assessment processes help us to expand academic frontiers to encompass, rather than to exclude or ignore, certain practices?
To close, I want to offer a few provocations towards the future of assessment and what might be possible:
- What if we allowed ourselves to go deeper into exploratory territory and to recognise that the goal is ‘knowledge creation’, not overproduction?
- What if discipline fields came together to strategise ‘our way’ of saying what is worth doing and how best to go about it? Of course, this should not preclude interdisciplinary discussion.
- How might we place emphasis on the enablers of quality creative practice research? What do people need for their disciplines and creative contributions to thrive?
- And a final, wild thought: Could we extract ourselves from metrics to focus instead on researcher projects and creative practice research ecologies? What if we shifted emphasis from ‘points accrual’ towards evaluation of sustained research narratives that demonstrate longer-term goals and interdisciplinary thought leadership?
Jessica Wilkinson is Associate Professor in Creative Writing at RMIT University. Jessica is a writer, critic, scholar and editor whose research interests include: poetry and poetics; contemporary poetry; poetic biography; ‘nonfiction poetry’; experimental/radical writing; literary theory. She is the founding and Managing Editor of ‘Rabbit: a journal for nonfiction poetry’ (2011–present) and of the Rabbit Poets Series.