
Open Science

The future of responsibly evaluating research


Decisions about where to publish research are often clouded by the pressure that poor metrics create. Sam Rose, Publisher at Hindawi, reports from LIS-Bibliometrics, a conference focused on the shift to responsible metrics and appropriate evaluation of research.

A few weeks ago, in what feels like a different world, before conferences got canceled and most of us started to work from home, I attended the annual meeting of LIS-Bibliometrics, held at Leeds University (UK) in early March. The conference is aimed primarily at librarians and university staff working in research evaluation and analytics, and there were also representatives and stakeholders from across the industry. I came away feeling optimistic about the future of responsible metrics and working with researchers and peers to put forward alternative open metrics as part of our commitment to open science. 

I was particularly excited to meet a group of delegates with a common interest in moving towards productive metrics and away from Impact Factors, as well as in generally practicing more open principles. My favourite buzzwords of the day were “responsible metrics” and “responsible research evaluation”. Meeting attendees with institutional roles such as “Officer of Responsible Research Evaluation” showed me the importance of this movement among the librarian workforce, and what I believe is now an ongoing commitment to driving it forward. 

The Declaration on Research Assessment (DORA), which aims to halt the use of Impact Factors in assessing researchers’ individual contributions, was of course at the top of everyone’s agenda. Hindawi is a DORA signatory and values many different types of metrics for measuring the reach of the primary literature. For me, it was particularly interesting to take part in the discussion between universities, all of which were at different stages of DORA implementation, and many of which face the challenge of winning support from other departments within their institution to make this change. It was heartening to see and hear the passion of Cardiff University, the University of Kent, and the University of Southampton, who are clear advocates of DORA and are each aiming to drive change at their research institutions. 

Recent DORA signees were particularly in need of support and guidance from other institutions, and there couldn’t have been a better place to find it than the LIS-Bibliometrics meeting, with these passionate advocates sharing their experiences.

Driving the discussions throughout the day was the opening speech by Elizabeth Gadd (Loughborough University, UK; Chair of LIS-Bibliometrics Committee) who presented her five predictions that she expects to see by 2030:

  1. Additional rather than alternative evaluation approaches; there will be no one metric.

  2. Funders will have a greater impact on responsible research evaluation.

  3. The REF-after-next (the UK’s Research Excellence Framework) will be a metrics-based system for STEM.

  4. Learned societies who cannot fund themselves through journal publishing will play a role in research evaluation.

  5. Greater complexity in bibliometrics, including different output types (e.g. preprints, alternative content).

I for one hope that, for at least some of the above, we get there sooner than 2030, and I know that Hindawi will be actively participating in the discussions as they move forward. In particular (relating to predictions 2 and 4), I believe it is time for funders and learned societies to take the lead in research evaluation and change the way things are done, to the benefit of scientific advancement and society. This will allow greater interaction with publishers such as ourselves and can help drive open metrics alongside open science. 

Continuing this theme of open metrics, Dr Steven Hill, Director of Research England, UK, discussed the trends that are helping to reshape research evaluation. He firmly believes things are becoming more open and collaborative. He explained that a broader range of output types are, and should be, considered by bodies evaluating research, particularly with more interactivity in articles rather than static outputs. Whether analyzing bigger and more complex datasets of citations (the metadata behind the metadata) will give a fuller picture of an article remains to be seen, and perhaps this is where new services such as Scite will become more useful in assessing the impact of citations.

As well as Scite, Dr Hill showcased other new tools using AI which have the potential to revolutionize research assessment. One of the tools, developed by UNSILO, is now being used by some publishers to triage submitted papers, and Hindawi is currently in active discussions with them to see how this new tool can help our own peer review process. Arguably more significantly, peer reviewer robots are being developed. While this use of robots may feel terrifying to many researchers, the technology could help to address one of the biggest problems in scholarly communications: the time available for the academic community to keep contributing to peer review as the volume of scientific output continues to increase. Keep in mind that just one robot working alongside a human could provide a 50% efficiency increase in peer review and greatly reduce the burden on the existing reviewer pool. If the tools are shown to work, I don’t think any of us should dismiss the use of AI before we’ve looked into it in more detail.

From a publisher’s perspective, it was great to hear institutions spreading the message that researchers should publish their work where it will be read by the most appropriate audience, and that this should take priority over metrics. It is a far more supportive message than “publish in high impact journals”, and a clear U-turn away from the current system, which rewards researchers based on the Impact Factors of the journals they publish in. That system not only makes a largely irrelevant metric the primary (and in some cases sole) focus of evaluation, but also opens the process to misuse and abuse.

Over the course of the day (a full video recording of which is available here), the delegates’ main concerns about the future of responsible metrics were clear: that publishers and commercial vendors will lock institutions into products and specific datasets (something open science would guard against); that we might all be too resistant to change, even if change is something we’ve all agreed to; and that the major funders need to invest in tools and analytics to drive the change from their side, much as funders have driven open access. 

These concerns were, responsibly of course, confirmed by an audience vote as the biggest challenges institutions are facing. For me it was clear that research institutions expect vendors to lead the way in this area. However, there is a common interest in all stakeholders within the research lifecycle being part of the conversation, and we at Hindawi can’t wait to continue our role in driving forward alternative open metrics as part of our commitment to open science.

This blog post is distributed under the Creative Commons Attribution License (CC-BY). The illustration is by Hindawi and is also CC-BY.
