‘Real world’ applications of creative practice research
Whilst creative practice research might be ‘naturally outward looking’, seeking audiences outside the academy with apparent ‘real world applications’, we need to consider this more deeply as a measure of research value. How might we value research that has cultural but not commercial significance, or that sits at the edge of accepted practice, where ‘real world applications’ are not immediately obvious? Perhaps ‘relevance’ is a better term than ‘application’?
Relevance, impact and being ‘outward looking’ do not need to mean that the work is mainstream or safe. Research needs to be challenging and on the edge. We need a way of understanding ‘impact’ that includes the radical, and therefore the potentially marginal.
Creative practice research has the potential to be many things. How do we find models that do not (inadvertently) limit the scope of the kind of research we can do? Some standards and generalisations are important, but others are dis-abling.
On diversity
Creative practice research has a very important contribution to make to the way that the university thinks about diversity, as we are able to work with our various mediums and modalities to convey diverse knowledges. In some ways the ‘non’ of our research (as in ‘non-traditional’) points to the possibility of more diverse approaches to research that go far beyond the narrow focus of the academic journal.
The difference between the universities in terms of their size and location can determine not only what kinds of research questions are of interest, but the kind of work that is possible. How do we then evaluate work from a regional context where there might be different access to resources, communities, galleries etc? (This of course is true for all disciplines, not only creative practice research).
On risk
Risk is important. A space where people can take risks, practise at the margins and have the chance of new discovery needs to be protected, because it doesn’t exist in the industry. In reality, there is a fine balance to strike: making impact, cultivating industry partners, bringing in money and funding, and doing something radical. These things don’t traditionally align. Can we recognise this, resist conflating them, and make room for it all?
Representation
Not enough creative practice researchers sit on panels and advisory boards. There are not enough of us participating in the key conversations: there was no creative representation, for example, on the ERA transition working group. We may be small in number compared to other researchers, but if we want to do this properly we need to be included; surely this would help the process. We have many peak bodies, including the DDCA, that are engaged and want to be of service. We hope the steering committees recognise this and call on us for advice on the things that affect us.
We need greater diversity in the pool of assessors. This would require a bigger investment from the ARC.
World standards
The contemporary art world is such an incredibly complex and multifarious entity, particularly outside of our classic Western understanding. Even before we start evaluating ideas of quality within a creative arts context, how are we then supposed to understand the standing or credibility or rigour of so many different institutions across so many cultural contexts?
Quality assurance is more important than world standing because world standing is, if anything, becoming more nebulous as a category. We are trying to reach consensus in a field where that doesn’t exist.
There are many internationally rigorous projects in regional parts of Australia that are far more important and rigorous than what happens in New York, Berlin or London. We need to shift those politics, and perhaps the cultural cringe that sits underneath this notion of ‘world standard’.
Excellence and quality
Quality assurance was a big part of the early stages of the discussion about the evaluation of non-traditional research outputs. There is a lack of clarity about how we articulate it, and it would be really useful for us to work on that articulation collectively. There is also the question of the pure esteem of presentation venues being used as proxies for excellence. How do we balance this tendency towards evaluating ‘excellence’, which suits certain kinds of institutions, against evaluating quality, which might have a more disciplinary character? Perhaps that is a real problem we have to work out collectively, because the danger is that, as we try to resolve that tension nationally, we potentially reintroduce an institutional segmentation between haves and have-nots, along the lines of ‘excellent’ institutions and ‘not excellent’ institutions. We have to be sensitive to those different institutional positions, and to the potential of the ‘excellence’ discourse to reintroduce some of the distinctions that, as practitioners, we are hopefully moving away from.
It’s a great idea to try to have a national, multi-artform conversation about what excellence looks like, but we ought to be careful about focusing entirely on excellence. Quality and impact also need to come into this conversation, which might make room for disciplinary differences. In creative writing, for example, demonstrating ‘impact’ is incredibly hard (a writer is not having individual conversations with their readers), but there are other things a writer can demonstrate around quality.
Working on interdisciplinary teams
If people are working with non-creative researchers on projects, what is their role? Are they equal researchers, or just the engagement and impact producers brought in to make the ‘proper’ research look good?
We need to come into these teams able to clearly articulate what we are researching in relation to our own disciplines. That means we cannot simply say: it’s an unfolding process, and I’ll just see what comes up. If we want to be on interdisciplinary teams, and want colleagues from other disciplines to come on board, to be included in our work and to know how to support it, we do need to proceed from some shared methodological approaches. We need to share some languages.
People have had varying experiences with this: from being a ‘producer for hire’ to being taken seriously by their research team as a researcher with a particular and valued skill set relevant to producing knowledge (as opposed to good looking artefacts).
Funding
Some have found that creative practice researchers shy away from identifying as academic researchers when making funding applications to arts bodies, because they think it will disadvantage them. However, when speaking internally about the same project, they will describe it in terms of research and claim that the work has been ‘peer-reviewed’ by the funding body, when in reality it was peer-reviewed according to very different criteria.
Some feel that it isn’t at all fair for academics to apply for money from external grant funders like the Australia Council, because academics have a stability of employment, and by applying for that money they will be taking it from artists who are far less resourced.
Equally, there are pressures on academics to bring in funding and for creative practice researchers these arts organisations are some of the only avenues available to us.
Some academics will apply for this funding outside the university system (as independent practitioners) and then take paid leave in order to undertake the work. This doesn’t seem right: a university salary and arts money at the same time. And then the output is reported as a research output.
The questions that need to be asked are: what kind of research are we doing, why are we doing it, who is it for and who’s involved? These things matter when it comes to funding and where that funding is coming from.
There is a big difference between applying for a grant to do your art work, versus applying for a grant to run a project that would then employ other people to be involved and paid.
More conversations need to be had with arts funding bodies about how to manage these tricky realities. Perhaps we could consider separate panels for independent artists and for academics.
In one experience of applying for a DECRA, the reviewers’ main focus was on methodology. The main point required in the rejoinder was a defence of creative-practice-based methodology, which had been rigorously prepared in the knowledge that it would be a potential point of failure. The application went to a HASS panel that asked very basic questions, showing a complete lack of understanding of this as a methodology.
Perhaps this is a problem with the FOR codes system. In the sciences one knows, more or less, that the application will be assessed by someone well versed in the methodology being proposed. For our disciplines this is not at all true; it may very well go to theorists rather than practitioners. Separate panels, organised by methodology, could address this problem.
Another attendee does have a DECRA, which was awarded without any track record of traditional outputs; they were all creative outputs. HASS has the same challenges as the creative visual arts. If you want to preserve the purity of visual art, the more introspective, self-reflective scholarship of the field, then the ARC might not be the context for that. Whereas if you’re able to leverage your research into a more interdisciplinary space, that’s a different kind of research, and the ARC might be a good funding path for it.
You’ll often see Future Fellows or Laureates who have a creative practice, but that’s not necessarily the lead piece; it’s often a research question about something else. They might use the methods or the methodology, but the project is framed quite specifically as being about something else. That raises interesting questions about what an excellent creative-practice-based research question is. Does it have to be about something else, or can it be about the practice, the creative process itself? The evidence from the ARC, in particular but not exclusively, is that it needs to be about something other than the creative practice itself.
Datasets
What do we do about the presence of subjectivity in this field? Is objective evaluation possible? What are the markers? How do we determine them? Are they always emerging? Can we maintain datasets that assist with this?
Datasets are used widely, globally, to inform rankings, including employer reputation. Do we have the data we need to be part of that conversation, or is it too complicated, too hard? It is going to take too long to develop datasets for creative research. The quick wins for universities are in the non-creative disciplines, which can quickly gather the data that feeds rankings.
It’s far easier for universities to feed into those global ranking systems with existing citation data. But even the citation disciplines have very biased data, probably to do with how big the university is, how big its marketing budget is, and so on. Even though it’s fraught with problems, we have still seen over the last ten years a proliferation of rankings, which just pushes universities to invest more in the STEM disciplines.