Thursday 5 January 2017

Reviewing Peer Review

Internal peer review has become increasingly prevalent in universities across the UK. The trend is the result of a push by the research councils for institutions to manage the quality of their applications better, but also of an implicit need to give academics every possible advantage in the increasingly competitive world of grant-winning.

In some ways, an internal peer-review system is a no-brainer. Showing your application to others for comment prior to submission is an obvious step, right? Well, yes and no.

Yes, because an objective view can help you see things more clearly, and honest and frank discussion can give you the tough love you need.

No, because academia isn’t always about sharing. In some disciplines it’s about keeping things secret and setting yourself apart from your competitors.

Thus, many academics are naturally divided: they want to make their proposals as good as possible, but they don’t want to show their hand too widely. This either discourages them from seeking feedback at all or leads them to seek it only from friends, who may not have the necessary distance to give useful, constructive comments.

For me, the main point of internal peer review was to overcome this: to get academics used to sharing their proposals with people outside their own group before submission. So when it came to creating an internal peer-review system at Kent, I thought the best approach would be a fairly light-touch one. I didn’t want a draconian, bureaucratic behemoth that would make people clutch their proposals to their breasts ever more tightly.

The system was introduced in 2011 and was framed like this: anyone applying for a research-council grant or a large grant—and by ‘large’ we suggested more than £100,000 in the humanities, more than £200,000 in the social sciences, and more than £300,000 in the natural sciences—or for their first substantial grant, needed to get input from two reviewers.

One reviewer should have knowledge of your field, and so would be able to say whether the work you were proposing was valid and hadn’t been done already. The other should have knowledge of the funder, so that they could say whether or not it was the kind of thing the funder would be interested in, however valid the research.

In theory, this should have got around the phone-a-friend culture of much internal peer review, while not causing too much additional anxiety among researchers. However, we further forced the reviewers’ hands by asking them: if you had to give a reason to reject the proposal, what would it be? We gave them a tick-box list of possible options: Was it too incremental, too radical, or too discipline-specific? Was it too cheap or too expensive? Too theoretical or too applied? Were the methods inappropriate, or the team inexperienced?

After a year I reviewed how the system was working.

Initial data suggested that there had been a slight increase in success rates. I likened this improvement to the aggregation of marginal gains that was the foundation of Team Sky’s success in the Tour de France.

However, I recognised the problems with the system. First, having two reviewers meant that the whole process took longer, and it was often difficult to find two reviewers available and willing to do the job. Second, many reviewers bristled at the idea of responding within a template, and refused to complete the ‘reasons for rejection’ section at all.

As a result, we found that we were defaulting to seeking only one review from a small pool of ‘second reviewers’ who we knew had sat on the research councils’ peer-review panels. Unfortunately, this meant that a large burden was being put on relatively few shoulders. Thus, last year I revised the system and decided that we should seek only one review, but from a much wider pool. To do that, I needed to build a peer-review college.

I asked for volunteers. I was half expecting everyone to step back at that point or look shiftily at their feet. Instead, the response was heartening. In the end we established a college of 56 peers, all of whom are experienced reviewers, panellists and grant winners, and all of whom are willing to help their colleagues.

A session run by the Association of Research Managers and Administrators, in which I took part a few years ago, discussed what would make a good peer-review system. As well as the necessary and obvious suggestions—such as useful and timely feedback—the panel highlighted the importance of buy-in. This is vital. Internal peer review, however good, is useful only if it is used.

With a peer-review college made up of willing volunteers, we now have a strong foundation. The reviewers have bought into the system, and by so doing I hope they will act as advocates for it within their schools.

I don’t want a strong, centralised, authoritarian system, but one embedded locally and discreetly. Our role is merely to facilitate. In time, I hope that even this will become unnecessary and people will wonder if there was ever a time when seeking robust, objective review from colleagues was not a natural part of the grant-winning process. I’m sure that day will come.

This article first appeared in Funding Insight in May 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
