Research Assessment

Authorship practices must evolve to support collaboration and open science

Journal authorship practices have not sufficiently evolved to reflect the way research is now done. Improvements to support teams, collaboration, and open science are urgently needed.

Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions

We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we …

From policy to practice: Lessons learned from an open science funding initiative

In the past few years, there has been a notable shift in the open science landscape as more countries and international agencies release recommendations and implementation guidelines for open scholarship. In August 2022, the US White House Office of …

Premiering pre-registration at PLOS Biology

Pre-registration promises to address some of the problems with traditional peer review. As we publish our first Registered Report, we take stock of two years of submissions and the future possibilities of this approach.

Research assessment using a narrow definition of “research quality” is an act of gatekeeping: A comment on Gärtner et al. (2022)

Gärtner et al. (2022) propose a system for quantitatively scoring the methodological rigour of papers during the hiring and promotion of psychology researchers, with the aim of advantaging researchers who conduct open, reproducible work. However, the …

Responsible Research Assessment Should Prioritize Theory Development and Testing Over Ticking Open Science Boxes

We appreciate the initiative to seek ways to improve academic assessment by broadening the range of relevant research contributions and by considering a candidate’s scientific rigor. Evaluating a candidate's ability to contribute to science is a …

Risk of Bias in Reports of In Vivo Research: A Focus for Improvement

The reliability of experimental findings depends on the rigour of experimental design. Here we show limited reporting of measures to reduce the risk of bias in a random sample of life sciences publications, significantly lower reporting of …