Nature journals have signed up to the principles of the San Francisco Declaration on Research Assessment. Credit: Pavel Zhelev/Alamy

Nature Research will this week formally sign up to the principles outlined in the San Francisco Declaration on Research Assessment, commonly known as DORA. Nature Research (the Nature-branded journals, Scientific Reports, Scientific Data and the Nature Partner Journals) has long been editorially aligned with the principles described in DORA, particularly the need to move away from the inappropriate use of the journal impact factor. (A collection of relevant editorials is available on our journal-metrics web page.)

As long ago as 2005, Nature was expressing concern about the problematic dependence on journal impact factors when individual scientists are assessed by their institutions and funders (see Nature 435, 1003–1004; 2005). A journal’s citation distribution is heavily skewed by a few very highly cited papers, which undermines any fundamental usefulness of the impact factor, and the belief that a researcher’s strengths can be measured by such a statistic is self-evidently absurd. So, too, is the misguided belief that numbers of citations are the only measure of a paper’s scientific value.

Scientists have justifiably complained about the abuse of impact factors for years, and continue to do so. That’s not to deny that the factor has some value as an indicator of a journal’s cumulative scientific impact. But so do other measures, such as the immediacy index, the eigenfactor score and the article-influence score. Indeed, in assembling these and other indicators last year on the new Nature Research journal-metrics page, we created the ‘two-year median citation score’ as a less-skewed complement to the impact factor. We also provided definitions of each metric, to help readers understand what each one really means and to provide context for how our journals are performing.
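To see why skew matters, the contrast between a mean-based and a median-based statistic can be made concrete. The sketch below, in Python, uses purely hypothetical citation counts and simplifies both calculations (a real impact factor divides citations by the indexer’s count of citable items); it is meant only to show how a single blockbuster paper moves the mean while leaving the median untouched.

```python
from statistics import mean, median

# Hypothetical citation counts for the articles a journal published over
# two years: most papers are modestly cited, one is a blockbuster.
citations = [2, 3, 3, 4, 5, 5, 6, 8, 10, 250]

# The impact factor is, in essence, a mean: citations received in a year
# to items from the previous two years, divided by the number of items.
mean_score = mean(citations)      # 29.6, inflated by the single outlier

# A median-based score, like the two-year median citation score, reports
# the midpoint of the same distribution, so one highly cited paper
# cannot dominate it.
median_score = median(citations)  # 5.0, what a typical paper receives

print(f"mean (impact-factor-like): {mean_score:.1f}")
print(f"median citation score:     {median_score:.1f}")
```

In this toy example, one paper with 250 citations drags the mean to 29.6, while the median of 5 describes what a typical paper in the journal actually receives. That robustness to outliers is exactly what makes a median-based score a useful complement to the impact factor.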

Over recent months, we have examined researchers’ opinions about metrics, and what matters to them when choosing where to submit their work. And in the second half of 2016, we carried out a survey of authors.

Some 985 authors from Nature Research and more than 2,500 from Springer Nature overall, who had published a research article during 2015–16, gave us their views, with the largest groups of respondents coming from Europe (47%), Asia and the Middle East (19%) and the United States (15%).

The survey showed a demand for publishers to provide more information about their journals: 85% of authors said that information on journal performance is important to them when deciding where to submit their work, but 48% thought that publishers did not provide enough. For junior researchers with less publishing experience, this information is particularly important.

The survey also revealed that authors were deeply interested in the quantitative and qualitative details of a journal’s peer-review process. Journal choice was influenced by these and other experiences, including interactions with journal editors, an understanding of a journal’s readership, and the overall reputation of a journal and its publisher. The survey did confirm that, despite knowledge of its limitations, the impact factor remains a key metric for researchers, although alternative metrics were considered by many to be as important for journal choice.

Since the survey, we have attempted to provide more-accessible information about what the different metrics mean, and about aspects of the peer-review process that researchers care about. The latter is particularly important, given that we employ some 300 professional editors dedicated to delivering efficient and robust peer review.

Accordingly, we have improved the Nature Research metrics page to provide extra information on median times for all the key stages of the submission-to-publication workflow. We’ve also created a new infographic with short, simple explanations of each of the metrics we now offer, which we’ve released under a CC BY licence so that anyone, anywhere, can use it.