Open and Reproducible Science Summaries
The symbol ◈ stands for non-peer-reviewed work.
The symbol ⌺ stands for summaries on the topic of Diversity, Equity, and Inclusion.
Trust Your Science? Open Your Data and Code (Stodden, 2011)◈
Main Takeaways:
- Computational results are prone to errors that can propagate into final published conclusions.
- To allow independent replication and reproducible work, release the scripts and data files and, where figures are produced with tools such as MATLAB, provide the code or interface used to generate them.
- Standards for code quality call for more precise definitions of verification, validation, and error quantification in scientific computing.
- The research workflow involves changes made to the data, including analysis steps, that affect how the data are interpreted.
- To conclude, open data is a prerequisite for verifiable research.
Quote
“Science has never been about open data per se, but openness is something hard fought and won in the context of reproducibility” (p. 22).
Abstract
This is Victoria Stodden's perspective on reproducibility in the computational sciences. It discusses the reproducibility, replicability, and repeatability of code produced across the sciences. Stodden also discusses the rising prominence of computational science in the digital age and what that means for the future of science and data collection.
APA Style Reference
Stodden, V. C. (2011). Trust your science? Open your data and code. https://doi.org/10.7916/D8CJ8Q0P
You may also be interested in
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Open Data in Qualitative Research (Chauvette et al., 2019)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Using OSF to Share Data: A Step-by-Step Guide (Soderberg, 2018)
- The digital Archaeologists (Perkel, 2020)
Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
Main Takeaways:
- This editorial discusses the value of pre-registration and replication, given that not all conducted studies end up being published.
- Direct replication adds data that increases the precision of effect size estimates for meta-analytic research. Without direct replication, it is difficult to identify false positives.
- Conceptual replications are more popular than direct replications, as they separate a phenomenon from its original operationalisation and thus contribute to our theoretical understanding of the effect.
- Direct replication encourages generalisability of effects, providing evidence that the effect was not due to sampling, procedural or contextual error.
- If direct replication produces negative results, this improves the identification of boundary conditions for real effects.
- The benefit of a registered report is that the feedback provided from peer review allows researchers to improve their experimental design.
- Following peer review, the manuscript can be resubmitted for review and acceptance or rejection based on feedback.
- Successful proposals tend to be high-powered, high quality, and faithful replication designs.
- One benefit of a registered report is that this can be all done before the research is conducted.
- Peer reviewers focus on the methodological quality of the research; because review happens before results are known, conflicts of interest are reduced and reviewers can provide a fairer assessment of the manuscript.
- Original studies can report exaggerated effect sizes. When a study is replicated, the estimated effect size usually decreases, as replications typically use larger samples.
- Registered reports enable both exploratory and confirmatory analyses, but a clear distinction between the two is required. More trust can be placed in confirmatory analyses because they follow a pre-specified plan, which ensures the interpretability of the reported p values.
Quote
“No single replication provides the definitive word for or against the reality of an effect, just as no original study provides definitive evidence for it. Original and replication research each provides a piece of accumulating evidence for understanding an effect and the conditions necessary to obtain it. Following this special issue, Social Psychology will publish some commentaries and responses by original and replication authors of their reflections on the inferences from the accumulated data, and questions that could be addressed in follow-up research.” (p. 139)
Abstract
Professor Brian Nosek and Professor Daniël Lakens provide an editorial on how pre-registration and registered reports are used at the journal Social Psychology to increase the credibility of published results and findings.
APA Style Reference
Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137-141. https://doi.org/10.1027/1864-9335/a000192
You may also be interested in
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered reports (Jamieson et al., 2019)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
Education and Socio-economic status (APA, 2017b) ◈⌺
Main Takeaways:
- Children from low socio-economic status backgrounds take longer to develop academic skills than children from higher socio-economic status groups (e.g., poorer cognitive development), leading to lower academic achievement.
- Children from low socio-economic status backgrounds are less likely to have access to experiences that support reading acquisition and reading competence.
- As a result of fewer learning materials and experiences at home, children from low socio-economic status enter high school with literacy skills 5 years behind their affluent age-matched peers.
- Children from lower socio-economic status households are twice as likely as those from high SES households to show learning related behaviour problems.
- High school dropout rates are higher for students from low-income families than for those from high-income families.
- Placing low socio-economic status students in higher-quality classrooms helps them earn more disposable income and makes them more likely to attend college, live in affluent neighbourhoods, and save more income for retirement.
- Students from low socio-economic status are less likely to have access to resources about colleges (e.g. career offices and familial experience with higher/further education) and are more at-risk of being in debt to student loans than their affluent peers.
- Low income students are less likely to succeed in STEM disciplines, 8 times less likely to obtain a bachelor’s degree by the age of 24 and have less career-related self-efficacy when it came to vocational aspirations than high income students.
- These problems are worsened for people of colour, women, people who are disabled and LGBTIQ-identified individuals.
Abstract
This fact sheet explains the impact of socioeconomic status on educational outcomes.
APA Style Reference
APA (2017, July). Education and Socioeconomic Status [Blog post]. Retrieved from https://www.apa.org/pi/ses/resources/publications/education
You may also be interested in
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
Ethnic and Racial minorities and socio-economic status (APA, 2017) ◈ ⌺
Main Takeaways:
- The relationship between SES, race and ethnicity is intimately intertwined. Communities are segregated by socio-economic status, race and ethnicity. Low economic development, poor health conditions and low levels of educational attainment are often comorbidities shared in these communities.
- Discrimination hinders the social mobility of ethnic and racial minorities. In the US, 39% of African American children and adolescents and 33% of Latino children and adolescents are living in poverty, more than double the 14% poverty rate for non-Latino, White and Asian children and adolescents.
- Minority racial groups are more likely to experience multidimensional poverty than their White counterparts. American Indian/Alaska Native, Hispanic, Pacific Islander, and Native Hawaiian families are more likely than Caucasian and Asian families to live in poverty.
- “African Americans (53%) and Latinos (43%) are more likely to receive high-cost mortgages than Caucasians (18%; Logan, 2008).” (p.9).
- African American unemployment rates are double those of Caucasian Americans. African American men working full time earn only 72% of the average earnings of Caucasian men and 85% of the earnings of Caucasian women.
- African Americans and Latinos are more likely to attend high-poverty schools than Asian Americans and Caucasians. From 2000 to 2013, the gap in dropout rates between racial groups narrowed significantly; dropout rates remained highest for Latinos, followed by African Americans and Whites.
- High achieving African American students may be exposed to less rigorous curriculums, attend schools with fewer resources, and have teachers who expect less of them academically than similarly situated Caucasian students.
- 12% of African American college graduates were unemployed, which is more than double the rate of unemployment among all college graduates in the same age range.
- Racial and ethnic minorities have worse health than that of White Americans.
- Health disparities stem from economic determinants, education, geography, neighbourhood, environment, lower-quality care, inadequate access to care, inability to navigate the system, provider ignorance or bias, and stress.
- “At each level of income or education, African American have worse outcomes than Whites. This could be due to adverse health effects of more concentrated disadvantage or a range of experiences related to racial bias (Braveman, Cubbin, Egerter, Williams, & Pamuk, 2010).” (p.10).
- In pre-retirement years, Hispanics and American Indians are much less likely than Whites, African Americans, and Asians to have any health insurance. Negative net worth, zero net worth, and not owning a home in young adulthood are linked to depressive symptoms independent of other socio-economic indicators.
- Hispanics and African Americans report a lower risk of psychiatric disorder relative to White counterparts, but those who become ill tend to have more persistent disorders.
- African Americans, Hispanics, Asians, American Indians, and Native Hawaiians have higher rates of post-traumatic stress disorder than Whites, which is not explained by socio-economic status or a history of psychiatric disorders. However, discrimination is a factor that contributes to increasing mental health disorders among the Asian and African American communities (e.g., compared to the White community, African American communities are more frequently diagnosed with schizophrenia, a low-prevalence but serious condition).
Abstract
Learn how socioeconomic status affects the lives of many racial and ethnic minorities.
APA Style Reference
APA (2017, July). Ethnic and Racial Minorities & Socioeconomic Status [Blog post]. Retrieved from https://www.apa.org/pi/ses/resources/publications/minorities
You may also be interested in
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
Faculty promotion must assess reproducibility (Flier, 2017) ⌺
Main Takeaways:
- Inadequate training, increased competition, problems in peer review and publishing, and occasionally scientific misconduct are some of the variables behind irreproducible research in the biomedical field.
- Diverse causes make finding solutions for the problem of irreproducibility difficult, especially, as they must be implemented by independent constituencies including funders and publishers.
- Academic institutions can and must do better to make science more reliable. One of the most effective (but least discussed) measures is to change how we appoint and promote our faculty members.
- Promotion criteria have changed over time. Committees now consider how well a candidate participates in team science, but we still depend on imperfect metrics for judging research publications, and our ability to assess reliability and accuracy is underdeveloped.
- Reproducibility and robustness are under-emphasised when job applicants are evaluated and when faculty members are promoted.
- Currently, reviewers for promotion committees are asked to assess how a field would be different without a candidate's contributions, and to survey a candidate's accomplishments, scholarship, and recognition.
- The promotion process should also encourage evaluators to say whether they feel candidates’ work is problematic or over-stated and whether it has been reproduced and broadly accepted. If not, they should say whether they believe widespread reproducibility is likely or whether work will advance the field.
- Applicants should also be asked to critically evaluate their research, including unanswered questions, controversies and uncertainties. This signals the importance of assessment and creates a mechanism to judge a candidate’s capacity for critical self-reflection.
- Evaluators should be asked to consider how technical and statistical issues were handled by candidates. Research and discovery is not simple and unidirectional, and evaluators should be sceptical of candidates who oversimplify.
- Institutions need to incentivise data sharing and transparency. Efforts are more urgent as increasingly interdisciplinary projects extend beyond individual investigators’ expertise.
- Success will need creativity, pragmatism and diplomacy, because investigators bristle at any perceived imposition on their academic freedom.
Quote
“Over time, efforts to increase the ratio of self-reflection to self-promotion may be the best way to improve science. It will be a slog, but if we don’t take this on, formally and explicitly, nothing will change.” (p.133)
Abstract
Research institutions should explicitly seek job candidates who can be frankly self-critical of their work, says Jeffrey Flier.
APA Style Reference
Flier, J. (2017). Faculty promotion must assess reproducibility. Nature, 549(7671), 133. https://doi.org/10.1038/549133a
You may also be interested in
- Publication metrics and success on the academic job market (Van Dijk et al., 2014)
- Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al., 2018)
Women and Socio-economic status (APA, 2010)◈ ⌺
Main Takeaways:
- Socioeconomic status encompasses quality of life attributes and opportunities and privileges afforded to people in society.
- Socio-economic status is a consistent and reliable predictor of outcomes across the lifespan.
- Low socio-economic status and its correlates (e.g., lower educational achievement, poverty and poor health) affect society.
- Inequities in health distribution, resource distribution and quality of life are increasing in the US and globally.
- Socio-economic status is a key factor in determining the quality of life for women and, by extension, strongly affects the lives of children and families.
- Inequities in wealth and quality of life for women are long-standing and exist both locally and globally.
- Women are more likely to live in poverty than men.
- Men are paid more than women despite similar levels of education and fields of occupation.
- Reduced income for women, coupled with longer life expectancy and greater responsibility for raising children, increases the probability that women face economic disadvantages.
- The pay gap has narrowed over time, but progress has recently plateaued.
- Women with a high school diploma are paid 80% of what men with the same qualifications are paid.
- Single-mother families are more than 5 times as likely to live in poverty as married-couple families.
- Pregnancy affects work and educational opportunities for women and costs associated with pregnancy are higher for women than men.
- 46% of women believed they have experienced gender discrimination.
- Pregnant women with low socio-economic status report more depressive symptoms, suggesting the third trimester may be more stressful for low-income women.
- At 2 and 3 months postpartum, women with low incomes have been found to experience more depressive symptoms than women with high incomes.
- Women with insecure and low-status jobs with little to no decision-making authority experience higher-levels of negative life events, insecure housing tenure, more chronic stressors, and reduced social support.
- Depression and anxiety have increased significantly for poor women in developing countries undergoing restructuring.
- Women with low incomes are more likely to develop alcoholism and drug addiction, influenced by social stressors linked to poverty.
- Improved balance in gender roles and obligations, pay equity, poverty reduction, and renewed attention to the maintenance of social capital would help redress gender disparities in mental health.
- SES also affects physical health: women living with breast cancer are 11% more likely to die if they live in lower-SES communities.
- Low-income women who have no insurance have the lowest rates of mammography screening among women aged 40-64, increasing their risk of death from breast cancer.
- Obesity and staying obese from adolescence to young adulthood is linked to poverty among women.
- Relative to HIV-positive men, women with HIV in the US have disproportionately low incomes.
Abstract
Learn how socioeconomic status affects the lives of women.
APA Style Reference
APA. (2017, July). Women & Socioeconomic Status [Blog post]. Retrieved from https://www.apa.org/pi/ses/resources/publications/women
You may also be interested in
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Disability and Socio-economic status (APA, 2010)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology (Skitka et al., 2020) ⌺
Main Takeaways:
- This article investigates whether there is a gender gap in Social/Personality Psychology syllabi.
- One factor contributing to gender gaps is whose work we choose to teach in graduate seminars.
- It is hypothesised that one link in the broad chain of factors contributing to the gender gap in eminence is that female authors are likely to be under-represented on graduate course syllabi compared to their male peers (gender gap hypothesis).
- Reasons why female authors might be under-represented on course syllabi could be varied.
- Instructors may internalise cultural prejudices and biases favouring men over women. This might result in a greater preference for male over female-authored papers (i.e., bias hypothesis).
- Another possibility is that instructors might prefer older over contemporary papers (i.e., classic hypothesis).
- Yet another possibility is that there are more male-authored papers available to include in syllabi than female-authored papers (i.e., availability hypothesis).
- The present study investigates whether there is a gender gap in representation on graduate level syllabi and whether it is explained by preference for classic over contemporary papers or relative availability of male- versus female-authored manuscripts.
- Method: The authors identified every social and/or personality PhD program in the US using the Social Psychology Network’s PhD ranking list and Graduate Programs GeoSearch.
- 120 programs were identified and a list of social/personality faculty names and email addresses for each program were put together by going to psychology department websites.
- Main interest was in courses for first-year graduate students.
- Inclusion criteria for syllabi were: (1) course name includes words: social or personality, (2) course was at the graduate level.
- Papers cited in the syllabi were coded for the following characteristics: gender of all authors, each author’s h-index, total number of authors, journal where the article was published, number of citations the article received since publication and topic in social/personality psychology.
- To understand whether the gender representation on graduate syllabi is (or is not) consistent with the number of high-quality papers from which instructors can select, the study obtained the names of all authors, authorship order, and year of publication for all papers published in the Journal of Personality and Social Psychology from 1965 to 2017 and in the Personality and Social Psychology Bulletin from 1974 until April 2018. These journals accounted for 33% of the readings on the sampled course syllabi and served as benchmarks.
- Results: Less than 30% of papers referenced on syllabi were written by female first authors.
- The gender gap on syllabi differed as a function of instructor gender and the decade in which papers were published: female instructors assigned recently published papers (post-1990) and female first-authored papers at rates significantly higher than their male counterparts.
- Difference in inclusion rates of female first-authored paper could not be explained by preference for classic over contemporary papers in syllabi or relative availability of female first-authored papers in the published literature.
- The gender gap differed depending on the content of the course. Male and female authors were approximately equally represented on graduate-level syllabi for topics such as prejudice, close relationships, culture, and health. The gender gap was much larger in syllabi for topics such as best practices, replicability, attitude change, and persuasion.
- Male and female-authored papers included on syllabi had similar citation rates, although they had different h-index scores.
- Increasing representation of female scholars’ work on graduate course syllabi would have beneficial consequences, moving toward greater gender inclusiveness in social/personality psychology.
Abstract
We contacted a random sample of social/personality psychologists in the United States and asked for copies of their graduate syllabi. We coded more than 3,400 papers referenced on these syllabi for gender of authors as well as other characteristics. Less than 30% of the papers referenced on these syllabi were written by female first authors, with no evidence of a trend toward greater inclusion of papers published by female first authors since the 1980s. The difference in inclusion rates of female first-authored papers could not be explained by a preference for including classic over contemporary papers in syllabi (there was evidence of a recency bias instead) or the relative availability of female first-authored papers in the published literature. Implications are discussed.
APA Style Reference
Skitka, L. J., Melton, Z. J., Mueller, A. B., & Wei, K. Y. (2020). The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology. Personality and Social Psychology Bulletin, 0146167220947326. https://doi.org/10.1177/0146167220947326
You may also be interested in
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2020)
- Against Eminence (Vazire, 2017)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020) ⌺
Main Takeaways:
- The goal of the study is to empower post-doctoral researchers as active participants in their mentoring relationships, emphasising mentees' contributions to shaping more productive interactions and to developing their own skills as future mentors.
- The study used several metrics to assess the success of this collaborative, multi-institutional approach, in which the National Research Mentoring Network (NRMN) and the Committee on Institutional Cooperation Academic Network (CAN) provided mentor facilitator training for faculty and senior administrators and mentoring-up training for post-doctoral researchers.
- Background: “Establishing a functioning consortium needs buy-in and high-level cooperation from all partners. Prior to initiating programming, all potential institutional representatives set initial goals to address campus needs for mentor-up skill development for post-docs and mentor facilitator training for staff and faculty, establish sustainable communities of practice for mentor training and develop mechanisms for central coordination, outreach to campus constituents, templates for recruitment of participants and strategies to sustain collaboration and develop mechanisms for central coordination, outreach to campus constituents, templates for recruitment of participants, and strategies to sustain collaboration.” (p.4)
- Method: “The seven Core Principles [of “Mentoring-UP”] are: 1. Two-way communication, 2. Aligning expectations, 3. Assessing understanding, 4. Fostering independence, 5. Ethics, 6. Addressing equity and inclusion, 7. Promoting professional development. This curriculum provided postdocs opportunities for: i.) self-evaluation and reflection to become aware of their personal biases, attitudes, and behaviors; ii.) exploring strengths, weaknesses, and challenges in their interpersonal and professional relationships; iii.) understanding and learning how to use the mentor principles; and iv.) focusing on cognitive processes that may lead to behavioral changes and strategies to facilitate those changes in a process-based approach over 1.5–2 day workshops.” (p.5)
- Method: “The 1.5–2 day workshops included case studies and activities that: i.) engage mentors in peer discussion of a mentor framework; ii.) explore strategies to improve mentoring relationships; iii.) address mentoring problems; iv.) reflect on mentoring philosophies; v.) and create mentoring action plans to model the interactive, collaborative, and problem-solving ways to develop and implement this set of trainings in the future. The training goals provided tools and mechanisms to implement mentor training venues at the participating institutions, thereby establishing sustainable Mentor-training programs for undergrads, graduate students, postdocs and faculty” (p.5).
- “A specific NRMN-CAN survey was developed for all four postdoc cohorts to ascertain whether mentor training: i.) influenced career progression; ii.) impacted the postdocs’ relationship with their PIs; and iii.) components of the mentor training that were implemented by the postdoc mentees... A dedicated NRMN-CAN survey for faculty and senior administrators was also developed to ascertain whether participation in Mentor Facilitator training led to: i.) implementation of training workshops on their campuses; ii.) the level and number of participants; iii.) and whether facilitated sessions were carried out in partnership with others.” (p.5)
- Results: Post-doctoral students reported improvements in their mentoring proficiency and improved relationships with the Principal Investigators. 29% of post-doc respondents transitioned to faculty positions, and 85% of these respondents were under-represented and 75% were female. 59 out of 120 faculty and administrators provided mentor training to over 3000 undergraduate, graduate and postdoctoral students and faculty on their campus for the duration of this project.
- The findings showed that the majority of post-doctoral researchers indicate that mentor training positively influenced their relationship with their mentors in several domains (e.g., confidence building). In addition, the curriculum guided most post-doctoral researchers to better understand their mentoring needs, develop strategies to manage their mentoring relationships, and feel empowered to make critical career decisions about pursuing an academic career. Early-career scientists also stated they had more confidence to pursue an academic career, with increased self-efficacy and advocacy.
- Impressively, 29% of the responding postdocs, predominantly females (75%) and underrepresented postdocs (85%) have successfully migrated to faculty. Some postdocs also indicated that their mentor training and experiences were valuable skills when applying for academic positions and definitely aided in adapting to responsibilities as a faculty mentor.
Abstract
Changing institutional culture to be more diverse and inclusive within the biomedical academic community is difficult for many reasons. Herein we present evidence that a collaborative model involving multiple institutions of higher education can initiate and execute individual institutional change directed at enhancing diversity and inclusion at the postdoctoral researcher (postdoc) and junior faculty level by implementing evidence-based mentoring practices. A higher education consortium, the Big Ten Academic Alliance, invited individual member institutions to send participants to one of two types of annual mentor training: 1) “Mentoring-Up” training for postdocs, a majority of whom were from underrepresented groups; 2) Mentor Facilitator training—a train-the-trainer model—for faculty and senior leadership. From 2016 to 2019, 102 postdocs and 160 senior faculty and administrative leaders participated. Postdocs reported improvements in their mentoring proficiency (87%) and improved relationships with their PIs (71%). 29% of postdoc respondents transitioned to faculty positions, and 85% of these were underrepresented and 75% were female. 59 out of the 120 faculty and administrators (49%) trained in the first three years provided mentor training on their campuses to over 3000 undergraduate and graduate students, postdocs and faculty within the project period. We conclude that early stage biomedical professionals as well as individual institutions of higher education benefited significantly from this collaborative mentee/mentor training model.
APA Style Reference
Risner, L. E., Morin, X. K., Erenrich, E. S., Clifford, P. S., Franke, J., Hurley, I., & Schwartz, N. B. (2020). Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of underrepresented postdoctoral researchers and promote institutional diversity and inclusion. PloS one, 15(9), e0238518. https://doi.org/10.1371/journal.pone.0238518
You may also be interested in
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2020)
An index to quantify an individual’s scientific research output (Hirsch, 2005)
Main Takeaways:
- The author introduces the h index as a tool to quantify (in an unbiased way) the scientific output of researchers.
- Scientists who earn a Nobel prize have unquestionably relevant and impactful research. How do we quantify the impact and relevance of the work produced by other researchers?
- Current evaluation criteria are based on number of publications, number of citations for each paper, journal where papers were published and impact parameters. All these parameters are likely to be evaluated differently by different people.
- h is a preferable index for evaluating a researcher's scientific output.
- Total number of papers: measures productivity, yet does not measure the importance or impact of the papers.
- Total citation count measures total impact, yet it is hard to obtain and may be inflated by a few “big hits” that are not representative of the individual when they are one of many co-authors. Another disadvantage is that this measure gives higher weight to review articles than to original research articles.
- Citations per paper allow comparison of scientists of different ages, yet the measure is hard to obtain, rewards low productivity, and penalises high productivity.
- Number of significant papers (defined as number of papers with citations higher than a certain number - let’s say “y” ). While this measure eliminates the disadvantages of the other measures (mentioned above), y is arbitrarily defined and will favour or disfavour individuals randomly. It needs to be adjusted for levels of seniority.
- Number of citations to each of the q most-cited papers (let’s say q=5). While it overcomes many of the disadvantages mentioned above, this measure does not yield a single number and is difficult to obtain and compare.
- The h index overcomes all the disadvantages of the measures mentioned above (a minimal computation sketch follows this list).
- The higher the h, the more accomplished the scientist is. H should increase with time.
- H will smoothly level off as the number of papers increases instead of a discontinuous change in slope.
- In reality, not all papers contribute to the h index. This is especially the case of papers with low citations when the h index of the researcher is already an appreciable number.
- H cannot decrease with time.
- Contrary to other parameters, the h index measures cumulative achievement and remains meaningful over time, even after a scientist has stopped publishing.
- The author suggests that h ≈ 12 might be a typical value for advancement to tenure (associate professor) and that h ≈ 18 might be a typical value for advancement to full professor.
- However, a single number can never give more than a rough approximation of an individual's multi-faceted profile, and many other factors need to be considered in combination when evaluating an individual.
- Differences in typical h values across fields are expected (determined by the average number of papers produced by each scientist in a specific field and by the size of the field). Moreover, scientists working in non-mainstream areas will not achieve the same high h values as those working in highly topical areas.
- A high h is a reliable indicator of high accomplishment; the opposite is not always true.
- Although self-citations can obviously increase a scientist's h, their effect on h is much smaller than on the total citation count.
- Nobel prize winners typically have an h index of around 30, meaning their success did not occur in one stroke of luck but reflects a sustained body of scientific work.
- H index could be important for rankings of groups or departments in the chosen area and administrators could be interested in this.
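Hirsch's definition lends itself to a direct computation: sort a researcher's citation counts in descending order and find the largest rank h at which the h-th most-cited paper still has at least h citations. Below is a minimal Python sketch of that calculation; the function name and the example citation counts are illustrative, not taken from the paper.

```python
def h_index(citations):
    """Largest h such that the researcher has h papers with at least
    h citations each (Hirsch, 2005)."""
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        # The paper at position `rank` must itself have at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Illustrative example: three papers have at least 3 citations each,
# but there are not four papers with at least 4, so h = 3.
print(h_index([10, 8, 3, 2, 1, 0]))  # -> 3
```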
Abstract
I propose the index h, defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher.
APA Style Reference
Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National academy of Sciences, 102(46), 16569-16572. https://doi.org/10.1073/pnas.0507655102
You may also be interested in
- High Impact = High Statistical Standards? Not Necessarily So (Tressoldi et al., 2013)
- The Leiden Manifesto for research metrics (Hicks et al., 2015)
High Impact = High Statistical Standards? Not Necessarily So (Tressoldi et al., 2013)
Main Takeaways:
- The present study investigates whether there are differences in statistical standards of papers published in journals with high and low impact.
- Journals with the highest impact factor are often taken to be a measure of high scientific value and rigorous methodological quality.
- The present study investigated how often null hypothesis significance testing and alternative methods are used in leading scientific journals compared to journals with lower impact factors.
- How many studies published in journals with the highest impact factor adopt the recommendations of basing their conclusions on their observed effect sizes and confidence intervals on those effect sizes? Are there differences with journals with lower impact factors in which editorial policy requires the adoption of these recommendations?
- Method: Four journals with high impact factors and three journals with lower impact factors were chosen. “Our aim was to compare across journals, using all relevant articles, noting that many variables could contribute to any differences we found.” (p.3).
- Results: In 89% of Nature articles and 42% of Science articles, p values were reported without any mention of confidence intervals, effect sizes, prospective power, or model estimation, whereas in the other journals, both high and lower impact factor, most articles report confidence intervals and/or effect size measures.
- The best reporting practice was present in 80% of articles published in New England Journal of Medicine and Lancet, while this dropped to less than 30% for articles published in Science and less than 11% in the articles published in Nature journals.
- Reporting confidence intervals and effect sizes does not guarantee that researchers use them in the interpretation of their findings or refer to them in the text.
- The lack of interpretation of confidence intervals and effect sizes means that merely observing a high percentage of articles reporting them may overestimate the impact of the statistical reform (a minimal reporting sketch follows the quote below).
Quote
“It is not sufficient merely to report ESs and CIs—they need to be used as the basis of discussion and interpretation.” (p.6).
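To make that recommendation concrete, here is a minimal Python sketch of the kind of reporting the authors call for: an effect size (Cohen's d) with a confidence interval, to be discussed alongside any p value. The data and helper name are purely illustrative, and the interval uses a common large-sample approximation for the standard error of d rather than any method from the paper.

```python
import math

def cohens_d_with_ci(group1, group2, z=1.96):
    """Cohen's d for two independent groups with an approximate 95% CI."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # Pooled standard deviation from the two sample variances.
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Large-sample approximation to the standard error of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Illustrative data: report and interpret the effect size and its interval,
# not just whether p < .05.
d, ci = cohens_d_with_ci([5.1, 6.2, 5.8, 6.0, 5.5], [4.2, 4.9, 5.0, 4.4, 4.7])
print(f"d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```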
Abstract
What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors.
APA Style Reference
Tressoldi, P. E., Giofré, D., Sella, F., & Cumming, G. (2013). High impact= high statistical standards? Not necessarily so. PloS one, 8(2), e56180. https://doi.org/10.1371/journal.pone.0056180
You may also be interested in
- An index to quantify an individual’s scientific research output (Hirsch, 2005)
- The Leiden Manifesto for research metrics (Hicks et al., 2015)
Disability and Socio-economic status (APA, 2010) ◈ ⌺
Main Takeaways:
- The Americans with Disabilities Act assures equal opportunities in education and employment for people with disabilities and prohibits discrimination based on disability.
- Despite the Disabilities Act, people with disabilities remain over-represented among America’s poor and under-educated.
- The federal government has two major programs to assist individuals with disabilities: the Social Security Disability Insurance and the Supplemental Security Income.
- Social Security Disability Insurance (SSDI) is a program for workers who have become disabled and unable to work after paying Social Security taxes for at least 40 quarters. In this program, a higher prior income yields higher SSDI payments.
- The Supplemental Security Income is a welfare program for individuals with low income, fewer overall resources and no or an abbreviated work history.
- Current federal benefit for a single person using Supplemental Security Income is $735 a month.
- Despite these programs, people with disabilities are more likely to be unemployed and live in poverty.
- For individuals who are blind and visually impaired, unemployment rates exceed 70 percent while for people with intellectual and developmental disabilities, the unemployment rate exceeds 80 percent. Also, one in ten veterans with disabilities are unemployed.
- The American Association of People with Disabilities estimates that two thirds of people with disabilities are of working age and want to work.
- There are disparities in median incomes for people with and without disabilities, such that individuals with disabilities often earn lower incomes.
- A study surveyed human resources and project managers about perceptions of hiring persons with disabilities. Results show professionals held negative perceptions related to productivity, social maturity, interpersonal skills and psychological adjustment of persons with disabilities.
- Disparities in education have been ongoing for generations. 20.9% of individuals 65 years and older without a disability failed to complete high school, compared with 25.1% and 38.6% of older individuals with a non-severe or severe disability, respectively.
- Great disparities exist when comparing attainment of higher degrees. 15.1% of the population aged 25 and over with disability obtain a bachelor’s degree, whereas 33% of individuals in the same age category with no disability attain the same educational status.
- Individuals with a disability experience increased barriers to obtaining health care as a result of accessibility concerns, such as transportation, problems with communication and insurance.
- Family members who provide care to individuals with chronic or disabling conditions are themselves at risk of developing emotional, mental and physical health problems due to complex caregiving situations.
Abstract
Learn how socioeconomic status affects individuals with disabilities.
APA Style Reference
APA (2010). Disability & Socioeconomic Status [Blog post]. Retrieved from https://www.apa.org/pi/ses/resources/publications/disability
You may also be interested in
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
Registered reports (Jamieson et al., 2019)
Main Takeaways:
- A Stage I article is submitted with the title page, abstract, introduction, methods, analysis plan, and conclusions of a study before the research is carried out.
- A Stage I submission includes the article and a cover letter confirming that all support and approval are in place, a timeline for completing the study, and a statement confirming that the authors will share the raw data, digital materials, and analyses.
- Authors confirm registration of the Stage I article with the Open Science Framework or another repository.
- The method section includes a justification of the sample size relative to the research question, a description of the participants, the problems investigated with a priori justification, procedures for determining inclusion and exclusion criteria, and a clear protocol for the experimental procedures.
- Data analysis: outline and justify how the data will be treated, including all pre-processing steps.
- Stage I papers undergo a two-step review: they are triaged by an editorial team before passing to peer review. Peer reviewers assess the importance of the research question, the introduction, the plausibility and quality of the hypotheses, the methodological quality and appropriateness of the data analysis plan, and the validity of the inferential conclusions to be drawn from the data.
- The review has one of three outcomes: conditional approval, a revise decision, or rejection. A revise decision allows authors to respond to the editors' and reviewers' criticisms; rejection ends the review process.
- Following conditional approval, authors submit a Stage II article. Stage II article must be consistent with Stage I report. Hypotheses, rationale, and reasoning approved in Stage I must reappear in Stage II.
- Stage II provides a complete and final report of the approved Stage I article, which also includes raw data, digital materials and analyses. Stage II focuses on quality of data reported, soundness of conclusions drawn from data and consistency with arguments and reasoning.
- Stage II reviewers ask: Are the data sufficiently resolved to support the conclusions? Do the data answer the authors' proposed hypotheses? Do the introduction and analyses match the Stage I submission? Are any unregistered, post-hoc analyses justified, methodologically sound, and informative? Are the conclusions consistent with the collected data?
- Editor can ask for revisions or reject Stage II articles.
Abstract
Professor Randall K. Jamieson and colleagues provide an editorial on registered reports and how the format works at the Canadian Journal of Experimental Psychology.
APA Style Reference
Jamieson, R. K., Bodner, G. E., Saint-Aubin, J., & Titone, D. (2019). Editorial: Registered reports. Canadian Journal of Experimental Psychology, 73, 3-4.
You may also be interested in
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
The mental health of PhD researchers demands urgent attention (Nature, 2019)
Main Takeaways:
- 29% of 5700 respondents to a survey in 2017 listed their mental health as an area of concern while less than half of those sought help for anxiety or depression caused by their PhD study.
- A new survey of 6,300 graduate students from around the world shows that 71% are satisfied with their experience of research, while 36% have sought help for anxiety or depression related to their PhD.
- How can graduate students be both broadly satisfied and increasingly unwell? One reason might lie in the fact that one-fifth of respondents report being bullied and experiencing harassment or discrimination.
- Although universities should take more effective action, only a quarter of respondents said their institution provides support, while a third said they sought help elsewhere.
- Another reason for graduate students to be broadly satisfied, but increasingly unwell is that career success is measured by publications, citations, funding and impact. To progress, a researcher must hit high scores in all of these measures.
- Most students embark on a PhD as a foundation of an academic career. They believe they will have the freedom to discover and invent. However, problems arise when autonomy is reduced or removed, which occurs when targets for funding, impact and publications become part of the universities’ formal monitoring and evaluation systems.
- As student’s supervisors judge their success or failure, it is not surprising many feel unable to open up about vulnerabilities or mental-health concerns.
- Solutions do not solely lie in institutions doing more to provide on-campus mental health support, but also in the recognition of ill mental health as a consequence of excessive focus on measuring performance.
- Much has been written about how to overhaul the system and find a better way to define success in research, including promoting that many non-academic careers are open to researchers.
- The academic system is making young people ill and the research community needs to protect and empower the next generation of researchers.
Abstract
Without systemic change to research cultures, graduate-student mental health could worsen.
APA Style Reference
Nature. (2019). The mental health of PhD researchers demands urgent attention. Nature, 575, 257-258. https://www.nature.com/articles/d41586-019-03489-1
You may also be interested in
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Seeking an exit plan (Woolston, 2020)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2020)
Postdocs in crisis: science risks losing the next generation (Nature, 2020)
Main Takeaways:
- Post-doctoral researchers often spend years in a succession of short-term contracts, which creates immense anxiety and uncertainty.
- Nature conducted a survey with postdocs and asked how the pandemic is affecting their current and future career plans, their health and well-being; and whether they feel supported by their supervisors.
- Respondents span 93 countries (and different fields), but most are from the US and Europe.
- Results show that the pandemic adds to postdocs' distress: it has worsened career prospects, and supervisors have not done enough to support postdocs during the pandemic.
- 51% of respondents are considering leaving active research due to work-related mental health concerns.
- All efforts to help workers are welcome but on their own, small measures will not be enough to save many academic science careers.
- Universities cannot be expected to bear this extra cost. Universities are already feeling the consequences of the pandemic for their finances. This is especially the case of institutions dependent on income from international students’ fees.
- Global student mobility will be much lower than usual in the coming academic year, and some institutions will lose a good fraction of their fee income as a result.
- In places where research is cross-subsidized by tuition-fee income, contract research workers such as postdocs are most vulnerable to losing their jobs; women and people from minority groups, who form a high share of the post-doctoral workforce, will likely be disproportionately affected.
- As many postdocs are looking to leave their posts now, anticipating worse to come, research and university leaders must find innovative ways to support early-career researchers.
- Principal investigators should show flexibility, patience and support for everyone in their group.
- Principal investigators and their institutions must push harder than ever for accessible mental health services.
Abstract
The pandemic has worsened the plight of postdoctoral researchers. Funders need to be offering more than moral support.
APA Style Reference
Nature. (2020). Postdocs in crisis: science cannot risk losing the next generation. Nature, 585. https://doi.org/10.1038/d41586-020-02541-9
You may also be interested in
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020) ⌺
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Seeking an exit plan (Woolston, 2020)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2020)
Boosting research without supporting universities is wrong-headed (Nature, 2020b) ⌺
Main Takeaways:
- Coronavirus lockdowns have precipitated a crisis in university funding and academic morale.
- Universities all over the world closed their doors. Classes and some research activities were moved online.
- Staff were given little or no time to prepare and few resources or training to help them with online teaching.
- Fewer students are expected to enrol in the coming academic year, instead waiting until institutions open fully. This means young people will lose a year of their education and universities will lose out financially.
- Governments have plans to boost post-lockdown research but these plans will be undermined if universities make job cuts and end up with staff shortages. Universities need support at this crucial time.
- Low- and middle-income countries face extra challenges from sudden transition to online learning. The main concern is for students unable to access digital classrooms (those who live in areas without fast, reliable and affordable broadband or where students have no access to laptops, tablets, smartphones and other essential hardware).
- Teachers report that students have struggled to keep up since lockdowns began. Students from poorer households in remote regions travel to the nearest city to access the Internet and pay commercial internet cafes to download course materials. To solve this, governments and funding bodies need to accept that students and universities should be eligible for the same kinds of temporary emergency funding that other industries are asking for.
- Governments have denied requests to negotiate with universities or delayed decisions. In high-income countries, this is partly because universities are functioning and might be seen as less deserving of government help than businesses and professions that had no choice but to close. In poorer countries, public funding for universities is under threat because economies have crashed during lockdowns.
- Cuts in universities’ budgets will disproportionately affect poorest students and more vulnerable members of staff (those with fixed-term contracts).
- Students and staff on short-term contracts would welcome more support from academic colleagues in senior positions and from others with permanent positions.
- Colleagues should make the case to managers that failing to provide more help to low-income students, or cutting the number of post-doctoral staff and teaching fellows, harms the next generation of researchers and teachers. It will also reduce departments' capacity to teach and increase the load on those who remain.
- Cutting back on scholarly capacity while increasing spending on research and development is wrong-headed, slowing down economic recovery and jeopardising plans to make research more inclusive.
Abstract
Universities face a severe financial crisis, and some contract staff are hanging by a thread. Senior colleagues need to speak up now.
APA Style Reference
Nature. (2020). Boosting research without supporting universities is wrong-headed. Nature, 582, 313-314. https://www.nature.com/articles/d41586-020-01788-6
You may also be interested in
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Seeking an exit plan (Woolston, 2020)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2020)
Seeking an exit plan (Woolston, 2020)
Main Takeaways:
- The full impact of the COVID-19 pandemic on scientific careers might not be known for years, but hiring freezes and other signs of turmoil at universities are shaking faith in academia as a career option.
- A growing number of PhD students and other early-career researchers start to look at careers in industry, government and other sectors.
- It is unclear how many of these researchers will eventually leave academia out of choice or necessity, but a significant academic exodus is expected.
- It is suggested that the shortage of tenured and tenure-track university positions will deepen in coming years. History shows that, in the United States, recession coincided with a strong shift towards gig or temporary work.
- Academic escapees have to prepare themselves to navigate a new career landscape. As competition for industry jobs will be stiff, it is important to learn how to emphasise skills developed in university careers.
Abstract
The pandemic is prompting some early-career researchers to rethink their hopes for a university post. By Chris Woolston.
APA Style Reference
Woolston, C. (2020). Seeking an ‘exit plan’ for leaving academia amid coronavirus worries. Nature, 583, 645-646. https://doi.org/10.1038/d41586-020-02029-6
You may also be interested in
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020) ⌺
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010) ◈ ⌺
Main Takeaways:
- Individuals who identify as lesbian, gay, bisexual and/or transgender (LGBT) are especially susceptible to socio-economic disadvantages.
- Socioeconomic status is inherently linked to the rights, quality of life, and general well-being of LGBT persons.
- Low-income LGBT individuals and same-sex/gender couples have been found to be more likely to receive cash assistance and food stamp benefits than heterosexual individuals or couples.
- Transgender adults were nearly 4 times more likely to have a household income of less than $10,000 per year relative to the general population.
- Raising the federal minimum wage benefits LGBT individuals and couples in the United States.
- An increase in minimum wage should reduce poverty rates by 25% for same-sex/gender female couples and 30% for same-sex/gender male couples.
- Due to an increase in minimum wage, poverty rates are projected to fall for the most vulnerable individuals in same-sex/gender couples, including African American couples, couples with children, people with disabilities, individuals under 24 years of age, people without high school diplomas or the equivalent, and those living in rural areas.
- Socio-economic position may be linked to experiences of discrimination.
- Gay and bisexual men who earned higher income were less likely to report discrimination relative to those in lower socio-economic positions.
- Discrimination against and unfair treatment of LGBT persons remain legally permitted in many jurisdictions. 47% of transgender individuals report being discriminated against in hiring, firing and promotion, and over 25% report having lost a job due to discrimination based on gender identity.
- A lack of acceptance and fear of persecution lead many LGBT youth to leave their homes and live in transitional housing or on the street.
- Many LGBT youth may be rejected by their family of origin or caregivers and forced to leave home as minors.
- LGBT youth experience homelessness at a disproportionate rate.
- LGBT homeless youth are more likely than their homeless heterosexual counterparts to have poorer mental and physical health outcomes.
- Although since 2015 states must issue marriage licenses to same-sex couples and recognise same-sex unions, legal barriers continue to exist.
- Workplace and housing discrimination contribute to increasing socio-economic status disparities for LGBT persons and families.
- 20 states and the District of Columbia prohibit workplace discrimination based on sexual orientation and gender identity, while 18 states have no laws prohibiting workplace discrimination against LGBT people.
- In a previous study, 19% of transgender individuals reported being refused a home or apartment and 11% reported being evicted because of their gender identity or expression.
Abstract
Evidence indicates individuals who identify as lesbian, gay, bisexual and/or transgender (LGBT) are especially susceptible to socioeconomic disadvantages. Thus, SES is inherently related to the rights, quality of life and general well-being of LGBT persons.
APA Style Reference
APA (2010). Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status. [Blog post]. Retrieved from https://www.apa.org/pi/ses/resources/publications/lgbt
You may also be interested in
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
The Focus on Fame distorts Science (Innes-Ker, 2017) ◈ ⌺
Main Takeaways:
- The author argues that instead of focusing on individual merit it is important that science is focused on scientific ideas and collaborative groups.
- Asking whether you are famous is the wrong question. It focuses on the individual scientist, as if science were a lonely enterprise of hopeful geniuses.
- We should focus on ideas and knowledge and refining those ideas.
- The h-index is not an objective measure. It presupposes that peer-reviewed papers are solid and that citations are a proxy for quality.
- Science is argued to advance in an evolutionary manner. A wealth of ideas is produced, but only some are selected and survive depending on scientific merit and social process (production of papers, citations and engagement of groups of scientists).
- Ideas that engage groups of scientists will grow and change and bring knowledge closer to the truth. Ideas that are not interacted with, on the other hand, will likely die. This is far from focus on eminence and individual fame prevalent in science.
- Competition is a factor but cooperation is vital.
- For ideas to survive, multiple labs need to engage with them as champions or severe adversarial testers.
- If we focus on who may become eminent, we lose some power of the scientific process.
- Eminent scientists would be nowhere without collaborators and adversaries willing to engage with the ideas.
- The tendency to overwhelmingly publish only positive results, with no clear avenue for publishing failures to confirm, means scientists are not grappling with the field as it really is.
- Recent work to improve methods, statistics and publishing practices is an example of collaboration.
- In science, scientific ideas are the ones that need to be stress-tested, not scientists.
- We need to move away from the cultural market model of science, which focuses on individuals rather than on the robustness of ideas. Science is a low-yield, high-risk business.
- Assigning individual merit based on productivity and citation encourages poor scientific practices and discourages collaboration and argumentative engagement with ideas. It results in a waste of talent.
- Objectivity in Science is not a characteristic of individual researchers, but a characteristic of scientific communities.
Abstract
The 2016 symposium on Scholarly Merit focused on individual eminence and fame. I argue, with some evidence, that the focus on individual merit distorts science. Instead we need to focus on the scientific ideas, and the creation of collaborative groups.
APA Style Reference
Innes-Ker, Å. (2017). The Focus on Fame Distorts Science. https://psyarxiv.com/vyr3e/
You may also be interested in
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Fame: I’m Skeptical (Ferreira, 2017) ◈ ⌺
Main Takeaways:
- The author argues that fame and quality sometimes diverge and that reliance on fame helps to perpetuate stereotypes that keep women and underrepresented groups from participating in science.
- Most of us believe we have the respect of our peers and acknowledge we wish to be admired and viewed as successful and important.
- No psychologist and no rational person would deny that evaluating people and the quality of their work is necessary and inevitable in any field.
- We like to admit the most promising candidates to graduate programs, hire the best faculty, tenure only those who have long productive careers, and reward scientists with prizes if they contributed more than most to uncovering the nature of psychological processes.
- We must not conflate fame and scientific quality, integrity and impact.
- All of us can point to colleagues who did excellent work but are barely known, or who did not become famous until long after their research careers had ended.
- Some scientists are well known because they have been called out for unethical practices, including data fabrication and other forms of cheating.
- We need to discriminate between two questions: (i) what one must do to become famous and (ii) what leads a person to end up famous. While the second question is merely an attempt to reconstruct someone’s path to fame, the motivations of the first question need to be challenged.
- Fame should not be a goal in science and valuing people or ideas because they are famous comes at a risk.
- Fame should be viewed with caution and scepticism to avoid temptation to assume that if someone is famous, their work is significant.
- Fame perpetuates discrimination and leads us to overlook excellent people and work.
- Science is based on critical thinking. As such, we should never hesitate to question the ideas of someone who is famous.
- We should not refuse to view the work of famous people positively or refuse to give it its due, but we must be careful not to assume an idea is useful simply because the person behind it is famous.
Abstract
Fame is often deserved, emerging from a person’s significant and timely contributions to science. It is also true that fame and quality clearly sometimes diverge: many people who do excellent work are barely known, and some people are famous even though their work is mediocre. Reliance on fame and name recognition when identifying psychologists as candidates for honors and awards helps to perpetuate a range of stereotypes and prevents us from broadening participation in our field, particularly from women and underrepresented groups. The pursuit of fame may also be contributing to the current crisis in psychology concerning research integrity, because it incentivizes quantity and speed in publishing. The right attitude towards fame is to use it wisely if it happens to come, but to focus our efforts on conducting excellent research and nurturing talent in others.
APA Style Reference
Ferreira, F. (2017). Fame: I'm Skeptical. https://psyarxiv.com/6zb4f/
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017) ◈ ⌺
Main Takeaways:
- Why do we care about judging scientific merit? There is a need for a system to determine whether to award tenure and promotion to faculty members, which has led to the development of criteria to judge and measure the scholarly merit of individuals.
- Science is a collective enterprise whose goal is to explain and understand the natural world and to build knowledge. Science cares about advancements and discoveries, not about individuals.
- Individual scientists are valued to the extent that they further the goals of the collective system. However, science comprises lab workers, scientists, institutions, agencies, and broader society.
- At the organisational level, several features facilitate scientific discovery: organisational autonomy, organisational flexibility, moderate scientific diversity, and frequent and intense interaction among scientists with different viewpoints.
- An individual scientist contributes to scientific discovery directly through their own scientific products or indirectly by positively affecting other aspects of the system.
- More senior graduate students train incoming graduate students; when they are good at this, the output of an entire lab can skyrocket as a result.
- Graduate students not only conduct their own personal research but their presence in the lab facilitates scientific progress of others.
- Scientists promote the productivity of other scientists by reviewing manuscripts, sharing data, creating and serving in scientific organisations, and developing scientific tools and paradigms used by others.
- Individual research scientists do not have resources to create large research centres, but can organise conferences and symposia, create and contribute to scientific discussion platforms, and make their research protocols and data easily shareable.
- Scholarly merit should include an individual’s system-level contributions, not only their productivity.
Abstract
When judging scientific merit, the traditional method has been to use measures that assess the quality and/or quantity of an individual’s research program. In today’s academic world, a meritorious scholar is one who publishes high quality work that is frequently cited, who receives plentiful funding and scientific awards, and who is well regarded among his or her peers. In other words, merit is defined by how successful the scholar has been in terms of promoting his or her own career. In this commentary, I argue that there has been an overemphasis on measuring individual career outcomes and that we should be more concerned with the effect that scholars have on the scientific system in which they are embedded. Put simply, the question we should be asking is whether and to what extent a scholar has advanced the scientific discipline and moved the field forward collectively.
APA Style Reference
Pickett, C. (2017). Let's Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit. https://psyarxiv.com/tv6nb/
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
“Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)◈ ⌺
Main Takeaways:
- Fame is about visibility – who is seen. Ample evidence documents the influence of heuristics in determining who is visible, and whose contribution is considered important.
- Explicit and implicit beliefs about competence influence peer review when methodological quality or potential impact is ambiguous.
- The author is sceptical about the extent to which fame is shaped by the quality of one’s work rather than by confidence, dominance, persistence and demographics.
- As the pace of academic life accelerates, the pressure to depend on shortcuts in gatekeeping and evaluation will continue to grow.
- While the scientific community cannot remove implicit biases, there are ways to deflect their impact.
- Reviews of submitted work should be blind to identity and demographics, letting the quality of the product stand on its own.
Quote
“We specify criteria for good science flexibly but explicitly and in detail, including thorough and accurate contextualisation in relevant previous work, methodological rigour; innovation and problem solving and implications for theory, future research and/or intervention. We should insist on diversity in career stage, gender, ethnicity and perspective instead of inviting first people who come to mind for invited opportunities such as conference talks, contribution to edited volumes, awards, and participation in committees that determine the direction of our field. We can resist temptation to track women and minorities into high profile, high-demand services roles, thinking that this solves problems of diversity in science. When, in fact, it does not.” (p.7)
Abstract
To be famous is to be widely known, and honored for one’s achievements. The process by which researchers achieve fame or eminence is skewed by heuristics that influence visibility; implications of these heuristics are magnified by a snowball effect, in which current fame leads to bias in ostensibly objective metrics of merit, including the distribution of resources that support future excellence. This effect may disproportionately hurt women and minorities, who struggle with both external and internalized implicit biases regarding competence and worth. While some solutions to this problem are available, they will not address the deeper problems of defining what it means for research to “make a difference” in our field and in society, and consistently holding our work to that criterion.
APA Style Reference
Shiota, M. N. (2017) “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science. https://psyarxiv.com/4kwuq
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017) ◈ ⌺
Main Takeaways:
- The author argues that our current methods of scientific rewards are based on identifying research eminence. This reward system is not in line with scientific values of transparency and universalism and undermines scientific quality.
- Why do we accord knowledge derived from scientific method a privileged position relative to common sense, appeals to authority figures, or other forms of rhetoric?
- If scientists depend on their own expertise as justification to prioritise their claims, we are no better placed to make truth-claims than religious, political and other leaders.
- Instead, science’s claim on truth comes not from its practitioners’ training and expertise, but rather from its strong adherence to norms of transparency and universalism.
- Universalism means scientists reject claims of special authority. It matters far less who did the research than how it was done.
- How do we square scientific ideals of universalism with scientific culture that fetishizes lone scientific genius?
- We need to recognise the methods used to produce a scientific claim are more important than eminence of a person who produced it.
- Focusing primarily on individual researcher excellence hurts psychological science, as eminence reflects values that are counterproductive to maximising scientific knowledge.
- The current system privileges quantity over quality, and the outcome of research over the process itself.
- Systematic biases (e.g., structural sexism, racism, and status bias) affect how we identify who qualifies as eminent under the status quo.
- Gender, nationality, race or institution should not matter to measure research quality.
- Structural changes should be initiated to help researchers reward and evaluate quality research (i.e., work that is reproducible, transparent and open, and likely to be high in validity).
- We can do a much better job to recognise and reward many activities researchers do that support scientific discovery beyond publishing peer reviewed articles (e.g. develop scientific software, generate large datasets, write data analytic code and construct tutorials to teach others to use it).
- We need to re-evaluate ways to measure researchers’ excellence in light of value and promise of team-driven research. After all, science is a communal endeavour.
- To combat the structural and systematic problems linked to recognising eminence, double-blind peer review needs to be considered as standard practice for journal publication, grant funding and awards committees.
- Technological solutions could even be developed to allow departments to blind applications in the early stages of faculty hiring, as blinding is associated with higher levels of diversity.
Abstract
The scientific method has been used to eradicate polio, send humans to the moon, and enrich understanding of human cognition and behavior. It produced these accomplishments not through magic or appeals to authority, but through open, detailed, and reproducible methods. To call something “science” means there are clear ways to independently and empirically evaluate research claims. There is no need to simply trust an information source. Scientific values thus prioritize transparency and universalism, emphasizing that it matters less who has made a discovery than how it was done. Yet, scientific reward systems are based on identifying individual eminence. The current paper contrasts this focus on individual eminence with reforms to scientific rewards systems that help these systems better align with scientific values.
APA Style Reference
Corker, K. S. (2017). Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values. https://psyarxiv.com/yqfrd
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Don’t let transparency damage science (Lewandowsky & Bishop, 2016)
Main Takeaways:
- Scientific communities have launched initiatives to increase transparency, open critique, and data sharing.
- Good researchers include all perspectives but their openness can be abused by opponents who aim to stall inconvenient research.
- Science is prone to attacks but rigour and transparency helps researchers and their universities respond to valid criticism.
- Open data practices should be adopted and scientists should not regard all requests for data as harassment.
- When researchers cannot share their data, they should explain why. Confidentiality issues need to be considered, and researchers need control over how data will be used when participants have agreed to share it.
- Engagement with critics is a fundamental part of scientific practice. Researchers may feel obliged to respond even to trolls but can ignore abusive or illogical critics that make the same points.
- Minor corrections and clarifications after publication should not be seen as a stigma against fellow researchers.
- Publications should thus be seen as living documents, with corrigenda accepted (even if unwelcome) as part of scientific progress.
- Self-censorship affects academic freedom and discussion. Publication retractions should be reserved for fraud or grave errors, but often are demanded by people who do not like a paper’s conclusions.
- Complaints may be used against researchers doing lawful but contentious science. Harassed scientists often feel alone; they should not have to tolerate harassment, whether it targets their race or gender or their controversial science.
- Training and support should help researchers cope with harassment.
Abstract
Professor Stephan Lewandowsky and Professor Dorothy Bishop explain how the research community should protect its members from harassment, while encouraging openness and transparency as it is essential for science.
APA Style Reference
Lewandowsky, S., & Bishop, D. (2016). Research integrity: Don't let transparency damage science. Nature, 529(7587), 459-461. https://doi.org/10.1038/529459a
You may also be interested in
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2020) ⌺
Main Takeaways:
- The COVID-19 pandemic disrupted the scientific enterprise.
- Policymakers and institutional leaders have started to respond in order to reduce the pandemic’s impact on researchers.
- For this study, authors reached out to US- and Europe-based scientists across institutions, career stages and demographic backgrounds.
- The survey solicited information about working hours and how time allocations changed since the onset of the pandemic, and asked scientists to report a range of individual and family characteristics, as these features moderate the effects of the pandemic.
- The sample was self-selected, and it is likely that those who felt strongly about sharing their situations, whether they experienced large positive or negative changes due to the pandemic, were the ones who chose to participate.
- They found a decline in total working hours with the average dropping from 61 hours per week pre-pandemic to 54 hours at time of survey.
- Only 5% of scientists report they worked 42 hours or less before the pandemic. This share increased to 30% of scientists during the pandemic.
- Time devoted to research changed most during the pandemic: total working hours decreased by 11% on average, but research time declined by 24%.
- Scientists working in fields that rely on physical laboratories and on time sensitive experiments report largest declines in research time (in the range of 30-40% below pre-pandemic levels).
- Fields that are less equipment intensive (e.g., mathematics, statistics, computer science and economics) report lowest declines in research time. The difference to other fields can be as large as fourfold.
- There are differences between male and female respondents in how the pandemic influenced their work.
- Female scientists and scientists with young dependents report that their ability to devote time to research has been affected, and the effects are additive: the largest impact was on female scientists with young dependents.
- Individual circumstances of researchers best explain changes in time devoted to research during the pandemic.
- Career stage and facility closures did not contribute to changes in time allocated to research when everything else was held constant; gender and having young dependents played major roles.
- Female scientists reported a 5% larger decline in research time than male scientists, but scientists with at least one child 5 years old or younger experienced a 17% larger decline in research time.
- Having multiple dependents was linked to a further 3% reduction in time spent on research. Scientists with dependents aged 6-11 years were less affected.
- This indicates that the gender discrepancy may be due to female scientists being more likely to have young children as dependents.
- Results indicate that the pandemic influences members of the scientific community differently.
- Shelter at home is not the same as work from home, when dependents are also at home and need care.
- Unless adequate childcare services are available, researchers with young children continue to be affected irrespective of reopening plans of institutions.
- Pandemic will likely have longer-term impacts that are important to monitor. Further efforts to track effects of pandemic on the scientific workforce need to consider household circumstances.
- Uniform policies do not consider individual circumstances and may have unintended consequences and worsen pre-existing inequalities.
- The disparities may worsen as institutions begin the process of reopening given that different priorities for bench sciences versus work with human subjects or field-work travel may lead to new disparities across scientists.
- Funders seeking to support high-impact programs may adopt a similar approach, favouring proposals that are more resilient to uncertain future scenarios.
- Senior researchers may have incentives to avoid the in-person interactions that facilitate mentoring and hands-on training of junior researchers.
- Impact of changes on individuals and groups of scientists could be large in short- and long-term, worsening negative impacts among those at a disadvantage.
- We need to consider consequences of policies adopted to respond to pandemic, as they may disadvantage under-represented minorities and worsen existing disparities.
Quote
“The disparities we observe and the likely surfacing of new impacts in the coming months and years argue for targeted and nuanced approaches as the world-wide research enterprise rebuilds.” (p.882)
Abstract
COVID-19 has not affected all scientists equally. A survey of principal investigators indicates that female scientists, those in the ‘bench sciences’ and, especially, scientists with young children experienced a substantial decline in time devoted to research. This could have important short- and longer-term effects on their careers, which institution leaders and funders need to address carefully.
APA Style Reference
Myers, K. R., Tham, W. Y., Yin, Y., Cohodes, N., Thursby, J. G., Thursby, M. C., ... & Wang, D. (2020). Unequal effects of the COVID-19 pandemic on scientists. Nature Human Behaviour, 4, 880-883. https://doi.org/10.1038/s41562-020-0921-y
You may also be interested in
- The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology (Skitka et al., 2020)
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020) ⌺
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Against Eminence (Vazire, 2017)
Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
Main Takeaways:
- Quality, productivity, visibility and impact are judged by the department in terms of scientific merit. These evaluations are subjective.
- Letters from distinguished referees provide a detailed and qualitative assessment of the candidate’s future potential and the nature of the individual’s scientific contribution.
- Letter writers are prone to fads, biases and personal idiosyncrasies that can positively or negatively affect the chances of tenure or promotion.
- Quantity of publication is a reasonable measure of the researcher’s productivity. If one does not worry about the prestige of the journal, most articles get published. However, it does not inform us about the quality of the researcher’s work.
- The quantity of publications weighted by journal impact factor tells us how often, on average, articles in that journal get cited. The problem is that it focuses on the average: some articles are cited heavily and some are not cited at all. Also, prestigious journals are conservative in what they publish.
- The number of citations is a useful measure of how often a researcher is cited over their career. However, working on a controversial topic, in an area that appeals to a broad audience, or in a ‘hot’ area can give some researchers a unique advantage.
- The h-index is the largest number h such that the researcher has h publications each cited at least h times; it reflects both the quantity and quality of impact. The i10-index is the number of publications cited at least 10 times (see the sketch after this list).
- Grants and contracts show that a scholar has proposed programs of research that have been systematically evaluated and valued.
- Editorship shows scholar’s work is recognised in their field. Invited service on a grant panel is another recognition of success in one’s professional endeavours.
- Awards are a useful measure of recognition by peers and measure quality of work instead of citation to work.
- Honorary doctorates are recognitions by broader academic audiences of merit of a scholar’s work.
- There are not many big psychological theorists left. Some would say the shift represents a natural progression as the field becomes more and more of a natural science.
- The big thinkers of yesterday might be taken aback by the amount of work done in modern times.
- As the field relies more on neuroimaging and behavioural experiments, work risks shrinking towards small-scale psychology without theory; when such work is connected to a larger theory, it contributes to that larger theory. Big thinking pays off.
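As a concrete illustration of the h-index and i10-index described in the list above, here is a minimal Python sketch (not from Sternberg's article; the citation counts are hypothetical) showing how both metrics can be computed from a list of per-paper citation counts.

def h_index(citations):
    # Largest h such that h papers each have at least h citations
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    # Number of papers cited at least 10 times
    return sum(1 for c in citations if c >= 10)

# Hypothetical citation counts for one researcher's publications
citations = [52, 30, 18, 12, 9, 7, 3, 1, 0]
print(h_index(citations))    # 6: six papers each have at least 6 citations
print(i10_index(citations))  # 4: four papers are cited at least 10 times

The sketch makes the difference between the two measures visible: the h-index rewards sustained citation across many papers, whereas the i10-index simply counts papers above a fixed threshold.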
Quote
“Most of us in academia go through a series of increasingly more challenging evaluations—first to get the PhD, next at the time of hiring, then at the time of reappointment, subsequently at the time of tenure, and finally at the time of promotion to full professor. And when we go through these evaluations, we almost inevitably wonder whether the criteria by which we will be judged are fair and whether the criteria, whatever they are, will be applied fairly.” (p.877)
Abstract
The purpose of this symposium is to consider new ways of judging merit in academia, especially with respect to research in psychological science. First, I discuss the importance of merit-based evaluation and the purpose of this symposium. Next, I review some previous ideas about judging merit—especially creative merit—and I describe some of the main criteria used by institutions today for judging the quality of research in psychological science. Finally, I suggest a new criterion that institutions and individuals might use and draw some conclusions.
APA Style Reference
Sternberg, R. J. (2016). “Am I famous yet?” Judging scholarly merit in psychological science: An introduction. Perspectives on Psychological Science, 11(6), 877-881. https://doi.org/10.1177/1745691616661777
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Against Eminence (Vazire, 2017) ◈ ⌺
Main Takeaways:
- The author argues that the drive for eminence is inherently at odds with scientific values and that insufficient attention to this problem is partly responsible for the recent crisis of confidence in psychology and other sciences.
- Transparency makes it possible for scientists to discriminate robust from shaky findings.
- The Replicability crisis shows a system without transparency does not work.
- Those in charge of setting scientific norms and standards should strive to increase transparency and bolster our confidence that we can trust published research.
- However, many high-level decisions in science are made with a different goal in mind: to increase impact.
- Professional societies and journals prioritise publishing attention-grabbing findings to boost visibility and prestige.
- Seeking eminence is at odds with scientific value and affects scientific gatekeepers’ decisions.
- Editors influenced by the status of submitting authors or prestige of institutions violate the basic premise of science. Science work should be evaluated on its own merit, irrespective of the source.
- Lack of transparency in science is a direct consequence of the corrupting influence of eminence seeking.
- Gatekeepers control incentive structures that shape individual researchers’ behaviour. Therefore they have a bigger responsibility to uphold scientific values and most power to erode those values.
- Individual researchers’ desire for eminence threatens the integrity of the research process.
- All researchers are human and desire recognition for their work. However, there is no good reason to amplify this human drive and encourage scientists to seek fame.
- The glorification of eminence also reinforces inequalities in science. If scientists are evaluated based on ability to attract attention, those with the most prestige will be heard the loudest. Certain groups are overrepresented at a high level of status.
- Eminence propagates privilege and raises barriers to entry for others.
- How should scientific merit be evaluated? What does this mean for committees that must select one or a few winners?
- First, it is important to admit that a large number of scientists meet the objective criteria for these recognitions (i.e., do sound science).
- It is also important to admit that the selection of one or a few individuals is then not based on merit but on preference or partiality.
- It is fine to select or recognise members who exemplify their values, but this should not be confused with exceptional scientific merit.
- Whenever possible (for tenure, promotion, and when journal space or grant funds permit), we should attempt to reward scientists whose work reaches a more objective threshold of scientific rigour or soundness instead of selecting scientists based on fame.
Abstract
The drive for eminence is inherently at odds with scientific values, and insufficient attention to this problem is partly responsible for the recent crisis of confidence in psychology and other sciences. The replicability crisis has shown that a system without transparency doesn’t work. The lack of transparency in science is a direct consequence of the corrupting influence of eminence-seeking. If journals and societies are primarily motivated by boosting their impact, their most effective strategy will be to publish the sexiest findings by the most famous authors. Humans will always care about eminence. Scientific institutions and gatekeepers should be a bulwark against the corrupting influence of the drive for eminence, and help researchers maintain integrity and uphold scientific values in the face of internal and external pressures to compromise. One implication for evaluating scientific merit is that gatekeepers should attempt to reward all scientists whose work reaches a more objective threshold of scientific rigor or soundness, rather than attempting to select the cream of the crop (i.e., identify the most “eminent”).
APA Style Reference
Vazire, S. (2017). Against eminence. https://doi.org/10.31234/osf.io/djbcw
You may also be interested in
- The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology (Skitka et al., 2020)
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2020)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
Main Takeaways:
- The article begins with indicators of scientific achievement before discussing some important precautions about the implications of these measures.
- One measure is the lifetime career award that is based on the scientist’s cumulative record that spans over their career lifetime. This indicator is perceived as more reliable than early career award, as an early career award is founded on a much smaller and likely less representative sample of that career. This limitation also applies to an award for a single publication such as best article recognition, as it may not predict the citations that the article later receives in the literature. The article argues that it makes most sense to concentrate on assessing lifetime contributions to psychological science.
- Another measure to assess scientific achievement is an invitation to write a definitive handbook chapter, as it indicates that the scientist is a widely recognised expert on this specific topic.
- A final measure to indicate that a scientist is well known is the simple count of total citations. However, this may be assessed by several alternative citation measures (e.g. h-index and i10).
- However, these measures suffer from poor predictive validity: the relationships between the various predictors and criterion variables tend to be small to moderate, not large enough to make fine discriminations among scientists. Moreover, these predictors are contaminated by other potentially biasing factors (e.g. gender, ethnicity, specialty, methodology, ideology, affiliation, and publication type).
- In addition, these measures suffer from poor interjudge reliability, as when a psychologist receives mixed reviews after submitting a manuscript to a high-impact journal. For instance, one referee recommends publishing the manuscript with minor revisions, while another advises an outright rejection. This frustrates the author and the editor, but shows that the discipline lacks a strong consensus on what constitutes a contribution to the science or to a specific area.
- This lack of agreement should be reduced if evaluators operate with a larger sample of contributions, such as lifetime career awards. However, the same interjudge-reliability problem applies: one committee member may argue that a scientist deserves the award, while a minority of the committee may disagree with the final decision and argue that another scientist deserves it. In the end, “the committee chair can then only assure the dissenters that their preferred candidate will most definitely emerge the winner in the next award cycle.” (p.890).
Quote
“Eminence in any scientific discipline will therefore be directly proportional to actual contributions. This expectation would be especially strong given that scientists purport to make inferences based on empirical fact and logical reasoning. Not only would peer assessments prove highly objective, but scientists’ self-assessments of their own contributions should depart relatively little from colleagues in the best position to evaluate their work. In short, a strong consensus should permeate all evaluations. One specific manifestation of this consensus would appear in the awards and honors bestowed on those scientists who have devoted a whole career to producing high-impact work. That is what would happen ideally, but does that happen in fact? And even if the ideal is closely approximated in most sciences, is it also reasonably attained in psychological science?” (p.888)
Abstract
More than a century of scientific research has shed considerable light on how a scientist’s contributions to psychological science might be best assessed and duly recognized. This brief overview of that empirical evidence concentrates on recognition for lifetime career achievements in psychological science. After discussing both productivity and citation indicators, the treatment turns to critical precautions in the application of these indicators to psychologists. These issues concern both predictive validity and interjudge reliability. In the former case, not only are the predictive validities for standard indicators relatively small, but the indicators can exhibit important non-merit-based biases that undermine validity. In the latter case, peer consensus in the evaluation of scientific contributions is appreciably lower in psychology than in the natural sciences, a fact that has consequences for citation measures as well. Psychologists must therefore exercise considerable care in judging achievements in psychological science—both their own and those of others.
APA Style Reference
Simonton, D. K. (2016). Giving credit where credit’s due: Why it’s so hard to do in psychological science. Perspectives on Psychological Science, 11(6), 888-892. https://doi.org/10.1177/1745691616660155
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
Main Takeaways:
- The author asks the reader to take the perspective of the individual who has the final say in making a tenure, promotion, or hiring decision. The author also asks that you imagine the difference between the fallible human state we are in on such an occasion and what it would be like to be omniscient when making such decisions.
- The author argues that there are two types of eminence: deep eminence and surface eminence. Deep eminence is what an omniscient evaluator would recognise: whether the candidate is doing work that moves some part of the discipline toward “capital-T Truth”. Surface eminence is the basis for our mere-mortal judgment of ‘tenurability’ in a candidate.
- Citation data predict early prominence, at least at the extremes of the citation distribution. However, longitudinal studies with large sample sizes are required to investigate this question.
Quote
“Diener suggests that the discipline will progress more rapidly if the most highly productive individuals are allowed to be even more productive, which will occur by further unburdening these worthies from their teaching responsibilities. I have three quick reactions to this point. One is that, to a considerable extent, his proposal has already been adopted. The typical teaching assignments in research universities are very substantially lower than they were in the days when modern experimental psychology took off. And it is not unusual to see less productive scholars with teaching assignments that involve, say, larger sections of undergraduates, as well as carrying out other service activities. Second, in many places it is still possible for successful grantees to “buy out” some of their teaching time. By definition, these are members of the publishing crew or they would not have the grant money that allows this exchange. And third, and most importantly, let’s revisit what a university is for. One of its primary goals is to develop the human capital of society. In order to keep faith with the funders of (at least the public) universities, we should be leery of allowing that mission to slip too low in our goal hierarchy.” (p.914)
Abstract
In this article, I review, comment upon, and assess some of the suggestions for evaluating scientific merit as suggested by contributors to this symposium. I ask the reader to take the perspective of the individual who has the final say in making a tenure, promotion, or hiring decision. I also ask that one imagine the difference between the fallible human state we are in on such an occasion and what it would be like to be omniscient when making such decisions. After adopting the terminology of “deep” and “surface” eminence, I consider what an omniscient being would take into account to determine eminence and to guide decision-making. After discussing how some proposed improvements in assessing merit might move us closer to wise decisions, I conclude by noting that both data and judgment are, and will continue to be, necessary. A clerk cannot determine eminence.
APA Style Reference
Foss, D. J. (2016). Eminence and omniscience: Statistical and clinical prediction of merit. Perspectives on Psychological Science, 11(6), 913-916. https://doi.org/10.1177/1745691616662440
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Improving Departments of Psychology (Diener, 2016)
Main Takeaways:
- Although we have excellent universities, our selection-based approach to talent and productivity is incomplete for creating the very best departments. What can we do to improve our departments?
- The current approach to excellence in scholarship rests largely on hiring the right individuals, those who have the right talent and motivation.
Abstract
Our procedures for creating excellent departments of psychology are based largely on selection—hiring and promoting the best people. I argue that these procedures have been successful, but I suggest the implementation of policies that I believe will further improve departments in the behavioral and brain sciences. I recommend that we institute more faculty development programs attached to incentives to guarantee continuing education and scholarly activities after the Ph.D. degree. I also argue that we would do a much better job if we more strongly stream our faculty into research, education, or service and not expect all faculty members to carry equal responsibility for each of these. Finally, I argue that more hiring should occur at advanced levels, where scholars have a proven track record of independent scholarship. Although these practices will be a challenge to implement, institutions do ossify over time and thus searching for ways to improve our departments should be a key element of faculty governance.
APA Style Reference
Diener, E. (2016). Improving departments of psychology. Perspectives on Psychological Science, 11(6), 909-912. https://doi.org/10.1177/1745691616662865
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Varieties of Fame in Psychology (Roediger III, 2016)
Main Takeaways:
- How do we determine the quality and impact of an individual and their research in psychological science? The author uses the progression of fame to illustrate how the system works, discussing routes to fame but emphasising that, more likely than not, the fame we achieve is ‘local’ and not long lasting across time.
- Once a researcher succeeds in graduate school and obtains a job in academia, industry or a research institute, it is time to move up in one’s career; the author outlines several actions one could take.
Quote
“Fame is local, both by area and by time. This point has been made by scholars in other contexts (usually politics or other historical figures), but it is as true of psychology as of any other field. As with other writers in this series, the best advice is to do the research, the writing, and the teaching that you are passionate about. Fame may or may not come for a time, but should not be an all-consuming concern. Even if it comes, it will soon fade away” (p.887)
Abstract
Fame in psychology, as in all arenas, is a local phenomenon. Psychologists (and probably academics in all fields) often first become well known for studying a subfield of an area (say, the study of attention in cognitive psychology, or even certain tasks used to study attention). Later, the researcher may become famous within cognitive psychology. In a few cases, researchers break out of a discipline to become famous across psychology and (more rarely still) even outside the confines of academe. The progression is slow and uneven. Fame is also temporally constricted. The most famous psychologists today will be forgotten in less than a century, just as the greats from the era of World War I are rarely read or remembered today. Freud and a few others represent exceptions to the rule, but generally fame is fleeting and each generation seems to dispense with the lessons learned by previous ones to claim their place in the sun.APA Style Reference
Roediger III, H. L. (2016). Varieties of fame in psychology. Perspectives on Psychological Science, 11(6), 882-887. https://doi.org/10.1177/1745691616662457
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016) ⌺
Main Takeaways:
- Women’s scientific contributions in psychology may not be as numerous or influential as those of men.
- What is the magnitude of the current eminence gender gap?
- Women’s modest inroads into this list of eminent psychologists deserve respect, given the lag between obtaining a doctorate and attaining eminence and the formidable barriers that women once faced in pursuing scientific careers.
- Psychologists judge eminence by observing signs such as memberships in selective societies, career scientific achievement awards and honorary degrees.
- Do men exceed women on both the quantity and the impact of publications that underlie the h index?
- Are these metrics tainted by unfair bias against women?
- Does the h-index identify potential socio-cultural and individual causes of the eminence gap?
- Women’s publications are cited less than men’s, and this gap was larger in psychology.
- Women received roughly 20% fewer citations in psychology, with the gap varying across subfields.
- The gender gap on the h index and similar metrics has two sources: women publish less than men, and their articles receive fewer citations.
- Metrics assessing scientific eminence may be tainted by prejudicial bias against female scientists in obtaining grant support, publishing papers, or gaining citations of published papers.
- If women psychologists are disadvantaged in publishing their work, any bias may be limited to culturally masculine topics or male-dominated research areas.
- Such topics and areas are no doubt becoming rarer in psychology, given that women receive most US doctorates.
- Men’s greater overall citation counts partly reflect higher rates of self-citation; women self-cite less often.
- This, in turn, reflects men’s larger corpus of citable papers.
- Evidence for prejudicial gender bias is limited and ambiguous, given that most studies are correlational rather than experimental.
- Little is known about possible gender bias in awards for scientific eminence such as science prizes and honorary degrees, which are imperfect indicators of the importance of scientists’ contributions.
- Female scientists’ lower rates of publication and citation may also reflect causes other than bias.
- Broader socio-cultural factors shape individual identities and motivations.
- Nature and nurture affect role occupancy, so men and women are distributed differently across social roles.
- Women are expected to excel in the communal qualities of warmth and concern for others, and men in the agentic qualities of assertiveness and mastery.
- Women are over-represented in less research-intensive but more teaching-intensive ranks and in part-time positions.
- Gender norms that discourage female agency may put women at a disadvantage in gaining status in departmental and disciplinary networks and in garnering resources.
- Stereotypes can erode women’s confidence in their ability to become highly successful scientists.
- Eminence gender gaps in psychology and other sciences may shrink further over time as new cohorts of scientists advance in their careers.
- Women’s representation among PhD earners has increased dramatically over recent decades.
Abstract
Women are sparsely represented among psychologists honored for scientific eminence. However, most currently eminent psychologists started their careers when far fewer women pursued training in psychological science. Now that women earn the majority of psychology Ph.D.’s, will they predominate in the next generation’s cadre of eminent psychologists? Comparing currently active female and male psychology professors on publication metrics such as the h index provides clues for answering this question. Men outperform women on the h index and its two components: scientific productivity and citations of contributions. To interpret these gender gaps, we first evaluate whether publication metrics are affected by gender bias in obtaining grant support, publishing papers, or gaining citations of published papers. We also consider whether women’s chances of attaining eminence are compromised by two intertwined sets of influences: (a) gender bias stemming from social norms pertaining to gender and to science and (b) the choices that individual psychologists make in pursuing their careers.APA Style Reference
Eagly, A. H., & Miller, D. I. (2016). Scientific eminence: Where are the women?. Perspectives on Psychological Science, 11(6), 899-904. https://doi.org/10.1177/1745691616663918
You may also be interested in
- The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology (Skitka et al., 2020)
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020)
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Improving Departments of Psychology (Diener, 2016)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Intrinsic and Extrinsic Science: A Dialectic of Scientific Fame (Feist, 2016)
Main Takeaways:
- Fame and the desire for a legacy provide meaning to one’s existence.
- Scientists are not the only group driven by a desire to be famous. What does it mean to be famous in science and how do we measure scientific fame?
- There is intrinsic motivation to follow one’s interest, curiosity, gut and intuition for important and undiscovered topics, while there is extrinsic motivation to follow money, grants and/or what is being published in top-tier journals.
- There is a continuum of fame. At one end there is ‘mundane or imitative’ science - someone conducts a replication or a slight advance of already published research. Its impact is little more than personal, influencing the person conducting it but few others.
- ‘Normal’ science is when one takes an idea or theory from within an existing theoretical paradigm and tests it. Most scientific research falls in the normal category. Its impact is regional and/or narrowly national.
- Next is ‘creative science’ - moderate- to high-impact science that is heavily cited by other scholars in the field and sometimes garners regional, national, or even international awards.
- Finally, there is ‘rare transformational/revolutionary’ science that changes the entire field and whose impact is both internal and historic.
- If the peer-reviewed article is the currency of a scientific career, funding is its bread and butter; for most scientists, research is not possible without securing increasing amounts of money.
- Generative publications are not only highly cited themselves but also generate other works that are highly cited. If they generate enough new works of high impact, the original publication can be transformative.
- Once published, articles are either ignored or exert some kind of influence on the field.
- Publication and citation counts are reliable and robust measures of creative output in science.
- Scientists could cite any and all work that affects their current research, but this appears not to be the case. Papers with several authors are more likely to be cited, owing to greater exposure.
- Other, more integrated measures of productivity have been developed to correct some of these problems. The h index is the largest number h such that an author has h publications cited at least h times each, with the remaining articles receiving fewer citations (a short computational sketch follows this list).
- Traditional citation-based metrics are affected by the time lag between when an article is published and when citation indexes catch up.
- Altmetrics measures impact derived from online and social media data. Altmetrics assesses article outcomes such as: the number of times an article is viewed, liked, downloaded, discussed, saved, cited, tweeted, blogged, or recommended.
- Altmetric data accrue faster than traditional citation counts and the h index because they are counted immediately upon publication, with real-time updates.
- Publications are a necessary but not sufficient condition for citations; in general, those who publish the most are cited the most.
- It is important to remember that there are individuals who publish a lot but are rarely cited, and others who publish little but are heavily cited.
- One can do very good work but the field may or may not pay much attention to it.
- Many heavily cited papers make a methodological or statistical advance and are of practical, not theoretical, importance.
- Psychologists would do well to understand the difference between individual success and disciplinary success: what is good for one’s career is not always what is good for science.
- Researchers have begun to make recommendations to authors, editors, and instructors of research methodology to increase replicability, such as pre-registering predictions, increasing transparency, clearly justifying sample sizes, and publishing raw data.
- Most psychological scientists find a way to marry their intrinsic interests with its extrinsic reward and impact.
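To make the h-index definition above concrete, here is a minimal sketch; the function is a generic illustration of the definition, not code from the article, and the citation counts are made-up data.

```python
def h_index(citation_counts):
    """Return the h index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # the top `rank` papers each have at least `rank` citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3: three papers have >= 3 citations each
```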
Quote
“Finding that sweet spot between the two extremes of joy and recognition may be the best definition of success in science that we can come up with. So if I were to recommend a strategy for up and coming scientists it might be this: develop a research program that combines intrinsic fascination and interest with extrinsic recognition and career advancement. Follow your heart and your head. Explore and develop the riskier, more potentially transformative and creative lines of research at the same time that you develop the safer, more fundable ideas. This might occur by developing two separate lines of research, or better yet, by finding one research program that is both intrinsically motivated and then other people also recognize, appreciate, and reward you for it. If you can do both of these, you stand the best chance of surviving, succeeding, and maybe even becoming famous in the competitive world of academic psychological science” (p.897)
Abstract
In this article, I argue that scientific fame and impact exists on a continuum from the mundane to the transformative/ revolutionary. Ideally, one achieves fame and impact in science by synthesizing two extreme career prototypes: intrinsic and extrinsic research. The former is guided by interest, curiosity, passion, gut, and intuition for important untapped topics. The latter is guided by money, grants, and/or what is being published in top-tier journals. Assessment of fame and impact in science ultimately rests on productivity (publication) and some variation of its impact (citations). In addition to those traditional measures of impact, there are some relatively new metrics (e.g., the h index and altmetrics). If psychology is to achieve consensual cumulative progress and better rates of replication, I propose that upcoming psychologists would do well to understand that success is not equal to fame and that individual career success is not necessarily the same as disciplinary success. Finally, if one is to have a successful and perhaps even famous career in psychological science, a good strategy would be to synthesize intrinsic and extrinsic motives for one’s research.APA Style Reference
Feist, G. J. (2016). Intrinsic and extrinsic science: A dialectic of scientific fame. Perspectives on Psychological Science, 11(6), 893-898. https://doi.org/10.1177/1745691616660535
You may also be interested in
- The Focus on Fame distorts Science (Innes-Ker, 2017)
- Fame: I’m Skeptical (Ferreira, 2017)
- Let’s Look at the Big Picture: A System-Level Approach to Assessing Scholarly Merit (Pickett, 2017)
- “Fame” is the Problem: Conflation of Visibility With Potential for Long-Term Impact in Psychological Science (Shiota, 2017)
- Why a Focus on Eminence is Misguided: A Call to Return to Basic Scientific Values (Corker, 2017)
- Am I Famous Yet? Judging Scholarly Merit in Psychological Science: An Introduction (Sternberg, 2016)
- Against Eminence (Vazire, 2017)
- Giving Credit Where Credit’s Due: Why It’s So Hard to Do in Psychological Science (Simonton, 2016)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- Eminence and Omniscience: Statistical and Clinical Prediction of Merit (Foss, 2016)
- Taking Advantage of Citation Measures of Scholarly Impact: Hip Hip h Index! (Ruscio, 2016)
- Varieties of Fame in Psychology (Roediger III, 2016)
- Improving Departments of Psychology (Diener, 2016)
Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Main Takeaways:
- Replication is fundamental to the scientific process. However, it is fraught with difficulties, leading many researchers to avoid replication altogether.
- However, there is one specific type of replication that researchers need to be wary of: same-team replications. This type of replication is more likely to lead to spurious confirmation as a result of allegiance and confirmation bias.
- Consistent results are seen as a sign of being a good scientist, leaving no room for objection. As a result, proponents of original theories can shape the literature and control the publication venues that they largely select. In turn, this moulds the results, wording and interpretation of studies that are eventually published.
- Psychological science may be prone to implicit bias and a desire for a ‘clean’ narrative, as opposed to reflecting the messy reality.
- There is large flexibility in definitions, uses of cut-offs, modelling and statistical handling of the data, leading to large room for exploratory analyses and vibration of effects (cf. Simmons et al., 2011).
- Researchers in psychological science perform research in single teams, as opposed to larger collaborative ones. Hence, it’s common practice for these researchers to keep their data, protocols and analyses private.
- “Nevertheless, in fields that have high levels of inbreeding and one team has the lion's share of the major papers, it is likely that submitted papers will hit either one of the team members or an affiliate or devoted follower in the peer-review stage.” (p.409).
- Conscious, subconscious and unconscious bias are very common and related to the scientific discovery process. Independent replication is necessary to alleviate bias and understand true effects of associations and interventions.
- Failure to replicate is not a bad outcome if replications are conducted appropriately and with proper attention to study design and conduct.
- The scientific community (i.e. investigators, funders, reviewers and editors) should facilitate and endorse independent replications. Also, the scientific community should be protecting and promoting this independence, while discouraging and preventing obedient and obliged replications.
Quote
“In several scientific fields, funders, journals, and peers may create disincentives towards replication efforts, considering them second-rate, me-too efforts unworthy of funding and prestigious publication.” (p. 408)
Abstract
Replication is essential for validating correct results, sorting out false-positive early discoveries, and improving the accuracy and precision of estimated effects. However, some types of seemingly successful replication may foster a spurious notion of increased credibility, if they are performed by the same team and propagate or extend the same errors made by the original discoveries. Besides same-team replication, replication by other teams may also succumb to inbreeding, if it cannot fiercely maintain its independence. These patterns include obedient replication and obliged replication. I discuss these replication patterns in the context of associations and effects in the psychological sciences, drawing from the criticism of Coyne and de Voogd of the proposed association between type D personality and cardiovascular mortality and other empirical examples.APA Style Reference
Ioannidis, J. P. (2012). Scientific inbreeding and same-team replication: type D personality as an example. Journal of psychosomatic research, 73(6), 408-410. https://doi.org/10.1016/j.jpsychores.2012.09.014
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
The Nine Circles of Scientific Hell (Neuroskeptic, 2012)
Main Takeaways:
- Neuroskeptic provides a humorous take on Dante’s nine circles of hell, where each circle corresponds to a different questionable research practice that becomes increasingly problematic for scientific integrity.
- The first circle is Limbo, which is reserved for people who turn a blind eye to scientific sins or reward others who engage in them (e.g., by giving them grants).
- The second circle is Overselling, which is reserved for people who overstate the importance of their work to get grants or write better papers.
- The third circle is Post-hoc Storytelling- the scientist fires arrows at random; if a finding is noticed, a demon will explain at length or ramble that it aimed for this precise finding all along.
- The fourth circle is P-value Fishing, which is reserved for those who “try every statistical test in the book” until they get a p-value of less than .05.
- The fifth circle is Creative Use of Outliers, which is reserved for those who exclude “inconvenient” data points.
- The sixth circle is Plagiarism - presenting another individual’s work as one’s own.
- The seventh circle is the Non-publication of Data- scientists can free themselves from this circle if they write an article about it; however, the drawers containing these articles are locked.
- The eighth circle is the Partial Publication of Data, where sinners are chased at random and prodded by demons, in analogy of the selective reporting and massaging of data.
- The ninth circle is Inventing Data, which is reserved for Satan himself (i.e., people who make up their data).
Abstract
In the spirit of Dante Alighieri’s Inferno, this paper takes a humorous look at the fate that awaits scientists who sin against best practice.APA Style Reference
Neuroskeptic. (2012). The nine circles of scientific hell. Perspectives on Psychological Science, 7(6), 643-644. https://doi.org/10.1177/1745691612459519
You may also be interested in
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al, 2018)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
Check for publication integrity before misconduct (Grey et al., 2020)
Main Takeaways:
- The integrity of a publication can be compromised via accidental errors such as typos, transcription errors or incorrect analyses, or via intentional errors such as image manipulation, data falsification and plagiarism.
- “How publication integrity was compromised is secondary to whether the paper is reliable. Unreliable data or conclusions are problems irrespective of the cause” (p. 167)
- “Resources for editors ... focus on how to manage communications, rather than on how to evaluate reliability and validity. The net effect is inaction: readers remain uninformed about potential problems with a paper, and that can lead to wasted time and resources, and sometimes put patients at risk” (p. 167).
- The authors believe that a major obstacle to evaluating the integrity of publications is a lack of tools. Accordingly, they present the REAPPRAISED checklist to help readers, journal editors and anyone else assess a paper’s publication integrity. It was originally designed for clinical and animal studies but can also be used more broadly. Irrespective of whether misconduct is suspected, the checklist should be used to facilitate the identification and correction of flawed papers, thus preventing wasted time and resources and protecting patients.
- The REAPPRAISED checklist facilitates evaluation through 11 categories, covering: ethical oversight and funding, research productivity and investigator workload, validity of randomisation, plausibility of results and duplicate data reporting.
- The authors would like to see the checklist used during both manuscript review and post-publication evaluation. They believe that since the checklist separates the assessment of publication integrity from the investigation of research misconduct, it will speed up evaluations. It could also be published alongside retractions and corrections.
- “If multiple concerns are identified [by the checklist], or the concerns identified are often associated with misconduct, the entire body of an author’s work should be systematically assessed” (p. 169).
Quote
“The use of REAPPRAISED will lead to more detailed, efficient, consistent and transparent evaluations of publication integrity, thus faster and more accurate reporting of corrections and retractions...People using the tool will be able to help refine it as they gain experience, and it will help them to develop standards to assess the integrity of publications and act accordingly” (p.169).
Abstract
A tool that focuses on papers — not researcher behaviour — can help readers, editors and institutions assess which publications to trust.APA Style Reference
Grey, A., Bolland, M. J., Avenell, A., Klein, A. A., & Gunsalus, C. K. (2020). Check for Publication Integrity before Misconduct. Nature, 577, 167–169. doi:10.1038/d41586-019-03959-6
You may also be interested in
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Stop ignoring misconduct (Kornfeld & Titus, 2016)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
Credibility of preprints: an interdisciplinary survey (Soderberg et al., 2020)
Main Takeaways:
- The present study conducted a survey to collect data about cues that could be displayed on preprints to help researchers assess their credibility.
- Preprints are not part of the long peer-review journal process, but the findings can be made available sooner, thus encouraging new work and discoveries by others. This can only happen if preprints are judged as credible. There is some skepticism about the credibility of preprints, particularly in fields for which the concept is new.
- Researchers need to keep up with emerging evidence in their field and explore new ideas that might inform their research. However, time is a valuable commodity, and effective filters are required to decide whether to continue reviewing a piece of research or to stop and move on.
- No prior research had assessed credibility cues on preprints. The authors investigated which cues are considered important for credibility judgements about preprints and how this varies across disciplines and career stages.
- Method: 3759 researchers from several disciplines (e.g. medicine and psychology) were given a final survey that included questions in four categories: engagement information (e.g. familiarity and favourability of preprints), importance of cues for credibility (e.g. links to available data and independent verification of findings), credibility of service characteristics (e.g. how the user engages with service), and demographics (e.g. age and discipline).
- Results: Most disciplines, especially the social sciences and psychology, favoured preprints. Only 51% of people in the field of medicine favoured preprints. Graduate students and post-doctoral students favoured preprints the most, while full professors favoured preprints the least.
- Results: Individuals who favour and use preprints tended to judge the credibility of preprints as highly important based on cues related to information about open science content and independent verification of author claims, whereas they viewed peer review information and author information as less important. The opposite pattern was observed for individuals who do not favour or use preprints.
- Results: In early 2020, very few preprint services mention these highly important cues.
- The authors concluded that cues related to openness and independent verification of author assertions were rated more highly than cues related to author identities, peer review and usage indicators.
- The opposite pattern of findings was observed for researchers more skeptical of preprints.
- There was a broad agreement that transparency of research content (e.g. pre-registration) and evidence of independent verification of content and research claims were the most important to assess credibility of preprints. This pattern was common across all disciplines.
- These shared sets of cues can be applied across scholarly preprint communities to improve the assessment of research credibility.
- Preprint services could improve support of preprint readers’ assessment of research credibility by implementing some of the highly relevant cues with each preprint.
Abstract
Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services do not provide the heuristic cues of a journal’s reputation, selection, and peer-review processes that, regardless of their flaws, are often used as a guide for deciding what to read. We conducted a survey of 3759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility, and peer views and author information were rated as less important. As of early 2020, very few preprint services display any of the most important cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.APA Style Reference
Soderberg, C.K., Errington, T.M., & Nosek, B.A. (2020). Credibility of preprints: an interdisciplinary survey of researchers. Royal Society of Open Science, 7, 201520. http://doi.org/10.1098/rsos.201520
You may also be interested in
The digital Archaeologists (Perkel, 2020)
Main Takeaways:
- Computation plays an ever-larger part in science, yet scientific articles rarely include their underlying code. Even when code is included, it can be difficult for others, and for the original author, to execute it.
- Programming languages and computing environments evolve, and code that works flawlessly one day can fail the next.
- Researchers need to maximise the future reusability of their code, and reproducibility-minded scientists need to up their documentation game.
- Deficiencies in code organisation, and code fragments that are run out of order, can be addressed by breaking code into modules, implementing code tests, and using version control to track changes and record which version produced each set of results (a minimal test sketch follows this list).
- New versions of software language are created that are not backwards compatible, making it difficult to reproduce the results.
- Finding the code does not mean it is obvious how to use it.
- It is important to document key details of data normalisation, to make it easier to reproduce the findings.
- Developing reproducible resources takes time: cleaning and documenting code, creating test suites, archiving data sets and reproducing computational environments. In addition, there are few incentives for these behaviours, there is a lack of consensus on what a reproducible article should look like, and computational systems continue to evolve, making it harder to predict which strategies will endure.
- Reproducibility ranges from scientists repeating their own analyses to peer reviewers showing that the code works and applying algorithms to fresh data. The best thing to do is release your source code, so others can browse it and rewrite it as needed.
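As an illustration of the ‘code tests’ point above, here is a minimal sketch assuming a hypothetical analysis function and pytest as the test runner; none of it comes from the article itself.

```python
# analysis.py - a hypothetical module holding one small, documented analysis step.
def normalise(values):
    """Scale a list of numbers to the 0-1 range (min-max normalisation)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


# test_analysis.py - run with `pytest`; a failing test flags silent breakage
# when the code, its dependencies or the computing environment changes.
def test_normalise_spans_unit_interval():
    assert normalise([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]

def test_normalise_handles_constant_input():
    assert normalise([3.0, 3.0]) == [0.0, 0.0]
```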
Quote
“Software is a living thing. And if it’s living it will eventually decay, and you will have to repair it.” (p.658).
Abstract
A computational challenge dares scientists to revive and run their own decades-old code. By Jeffrey M. Perkel.APA Style Reference
Perkel, J. M. (2020). The digital archaeologists. Nature, 584(7822), 656-658. https://media.nature.com/original/magazine-assets/d41586-020-02462-7/d41586-020-02462-7.pdf
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)◈
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Open Data in Qualitative Research (Chauvette et al., 2019)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Using OSF to Share Data: A Step-by-Step Guide (Soderberg, 2018)
How much Free Labor? Estimating Reviewers’ Contribution to Publishers (Aczel & Szaszi, 2020) ◈
Main Takeaways:
- This manuscript aimed to estimate the reviewers’ contribution to the publication system in terms of time and salary-based monetised value.
- The journal article is the main product of the academic publishing system. This product is a co-production of scientists and publishers. Scientists provide value by their professional work producing some new knowledge about the world and they provide value by being peer reviewers to validate and improve other scientists’ manuscripts.
- The peer reviewer is rarely recognised and never compensated within this system, although they provide their time and professional knowledge to assess the scientific value and validity of submissions to improve the quality of the manuscript.
- Peer reviewers are, simply put, the gatekeepers of the publishing system, so they work on several manuscripts that go through the review system but are finally rejected.
- The work conducted by peer reviewers is important for academic publishing, but its magnitude is unknown; the paper aims to quantify it.
- Estimating the salary-based monetised value of the time reviewers annually work for publishers requires three main parameters: the number of peer reviews per year, the time spent on one review, and the hourly wage of reviewers.
- Using Publons data, the share of reviewed manuscripts that are accepted was estimated at 55%, while 45% were rejected after review.
- The global number of citable documents (i.e. accepted and published articles, reviews, and conference papers published in journals) is 3,900,066. Assuming this reflects a 55% acceptance rate, the number of rejected manuscripts is estimated at 3,190,963.
- The total number of reviews done is 18,082,124 and the authors estimate the number of hours spent on one review as 6 hours.
- Results: The total time reviewers worked on peer review in 2019 is over 100 million hours, equivalent to roughly 12,385 years of review work; the corresponding monetary value is above 200 million US dollars for the UK, 600 million for China and 1.1 billion for the USA (the arithmetic is sketched after this list).
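As a rough illustration of that arithmetic, here is a minimal sketch using only the figures quoted in the summary; the country-level monetisation step needs each country’s share of reviews and an hourly wage, which appear here only as clearly labelled placeholders, not values from the paper.

```python
total_reviews = 18_082_124   # estimated reviews completed in 2019 (from the summary)
hours_per_review = 6         # assumed hours spent on one review

total_hours = total_reviews * hours_per_review
total_years = total_hours / (24 * 365)   # calendar-year equivalent

print(f"Total review time: {total_hours:,} hours (~{total_years:,.0f} years)")
# -> roughly 108 million hours, about 12,385 years, matching the reported estimate

# Monetisation step, with hypothetical placeholder inputs (NOT from the paper):
usa_share_of_reviews = 0.25   # placeholder share of reviews attributed to the USA
usa_hourly_wage_usd = 40.0    # placeholder hourly wage
usa_value = total_hours * usa_share_of_reviews * usa_hourly_wage_usd
print(f"Illustrative (placeholder) US value: ${usa_value:,.0f}")
```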
Abstract
In this paper, we estimated the time, as well as the salary-based monetary value, of the work scientific journal publishers received in 2019 from reviewers in the peer-review process. In case of uncertainty, we used conservative estimates for our parameters, therefore, the true values are likely to be above our results. The total time reviewers worked on peer-reviews in 2019 is over 100 million hours, equivalent to over 12 thousand years. The estimated monetary value of the time reviewers spend on reviews can be calculated for individual countries. In 2019, for the USA it was over 1.1 billion, for China over 600 million, and for the UK over 200 million USD.APA Style Reference
Aczel, B., & Szaszi, B. (2020, October 9). How Much Free Labor? Estimating Reviewers’ Contribution to Publishers. https://doi.org/10.31222/osf.io/5h9z4
You may also be interested in
Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions (Higginson & Munafo, 2016)
Main Takeaways:
- Scientists are trained to be objective and pursue the discovery of knowledge via exploratory and confirmatory research. However, scientists are human and work within incentive structures that may consciously or unconsciously shape their behaviours.
- The incentive structure is an ecosystem within which scientists strive to maximise their fitness via their publication record; one might predict that individual scientists would strategically adapt, consciously or unconsciously, to these pressures and modulate their research strategy to boost their career success.
- The authors’ first model used optimality theory to predict the rational strategy of a scientist who possesses finite resources and seeks to maximise the career value of their publications.
- The authors observed that as the weight given to novel findings increases, the total number of publications declines. This is because most exploratory studies are not published: they have low statistical power and therefore tend to yield non-significant findings.
- From an individual career perspective, when statistical power is low, it is better to run many exploratory studies, thus increasing the probability of false positives than to run a smaller number of well-powered studies.
- As the weight given to novel findings increases, so does the investment in exploratory studies, and the proportion of papers drawing erroneous conclusions rises above 50%.
- A second model was created to predict how characteristics of the current scientific ecosystem influence the total scientific value of research. Current incentive structures (e.g. recruitment processes) place weight on findings published in journals with a high Impact Factor and may consider only the best few publications of any individual.
- The model showed that the scientific value of research is not maximised when scientists try to maximise their own success within this ecosystem. If a small number of novel findings count heavily towards career progression, scientists are encouraged to focus all their research effort on underpowered exploratory work to maximise the number of publications.
- The model indicates that incentive structures could be redesigned so that the optimal strategy for individual scientists align with the optimal conditions for the advancement of knowledge.
- Put simply, a small reduction in the weight given to novel findings, and in how quickly the value of additional publications diminishes, would shift individual incentives away from a focus on exploratory work; confirmatory work would then be more likely to be conducted, increasing the total scientific value of research.
- As journal editors become more stringent about sample size, the likelihood that published studies are correct increases towards 100%. More confirmatory studies are carried out, so the number of studies published increases. When the required sample size is very large, however, the number of exploratory studies approaches zero, leading to a decline in the total scientific value of research. Journals should therefore be more stringent about required statistical power and sample size.
- Current incentive structures would be appropriate if editorial and peer review practices were more stringent regarding sample size, statistical power and the strength of statistical evidence required of studies.
- Considering more of a researcher’s output, and giving less weight to novel findings, when making appointment and promotion decisions would encourage changes in researchers’ behaviour that improve the scientific value of research. Journals and journal editors could also increase the stringency of editorial and peer review by requiring larger sample sizes and greater statistical stringency (a minimal power sketch of the small-versus-large-study trade-off follows this list).
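The following is a minimal sketch of the trade-off the model formalises, not the authors’ optimality model itself; the effect size, prior probability of a true hypothesis, participant budget and the normal-approximation power formula are all my assumptions.

```python
from scipy.stats import norm

alpha = 0.05          # conventional significance threshold
d = 0.5               # assumed true effect size for real effects
prior_true = 0.1      # assumed share of tested hypotheses that are actually true
budget = 2000         # assumed total participants per group across all studies

def approx_power(n_per_group, d, alpha=0.05):
    """Normal approximation to the power of a two-sided two-sample test."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

for n_per_group in (20, 100, 500):
    n_studies = budget // n_per_group
    power = approx_power(n_per_group, d, alpha)
    true_pos = n_studies * prior_true * power          # expected real discoveries
    false_pos = n_studies * (1 - prior_true) * alpha   # expected false positives
    share_wrong = false_pos / (true_pos + false_pos)
    print(f"n={n_per_group:>3}: {n_studies:>3} studies, power={power:.2f}, "
          f"significant findings={true_pos + false_pos:.1f}, "
          f"share erroneous={share_wrong:.0%}")
```

Under these assumptions, many small studies yield more significant findings overall, but a far larger share of them are erroneous, which is the conflict between individual incentives and scientific value described above.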
Quote
“Current incentive structures in science, combined with existing conventions such as a significance level of 5%, encourage rational scientists to adopt a research strategy that is to the detriment of the advancement of scientific knowledge. Given finite resources, the importance placed on novel findings, and the emphasis on a relatively small number of publications, scientists wishing to accelerate their career progression should conduct a large number of exploratory studies, each of which will have low statistical power. Since the conclusions of underpowered studies are highly likely to be erroneous [2], this means that most published findings are likely to be false [5]. The results of our model support this conclusion. Indeed, given evidence that with sufficient analytical flexibility (known as p-hacking) almost any dataset can produce a statistically significant (and therefore publishable) finding [16], our results are likely to be conservative. There is therefore evidence from both simulations and empirical studies that current research practices may not be optimal for the advancement of knowledge, at least in the biomedical sciences.” (p.8).
Abstract
We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we develop an optimality model that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study. We show that, for parameter values derived from the scientific literature, researchers acting to maximise their fitness should spend most of their effort seeking novel results and conduct small studies that have only 10%±40% statistical power. As a result, half of the studies they publish will report erroneous conclusions. Current incentive structures are in conflict with maximising the scientific value of research; we suggest ways that the scientific ecosystem could be improved.APA Style Reference
Higginson, A. D., & Munafò, M. R. (2016). Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biology, 14(11), e2000995, https://doi.org/10.1371/journal.pbio.2000995.
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
- Small sample size is not the real problem (Bacchetti, 2013)
- Measurement error and the replication crisis (Loken & Gelman, 2017)
- Misuse of power: in defence of small-scale science (Quinlan, 2013)
- Bite-Size Science and Its Undesired Side Effects (Bertamini & Munafo, 2012)
- Confidence and precision increase with high statistical power (Button et al., 2013)
Testing an active intervention to deter researchers’ use of questionable research practices (Bruton et al., 2019)
Main Takeaways:
- Questionable research practices produce tangible and direct harm, for example by affecting prescribed medical care or wasting research funds.
- Reforming current practices will be a gradual process at best. It is not entirely clear how best to improve current practices and scientists are often resistant to change.
- The present study tested an intervention that uses a direct psychological means of encouraging research integrity, aiming to reduce questionable research practices.
- Method: 201 participants had to complete a brief writing task. The participants were split into the consistency condition (i.e. the participants had to write how research integrity is modelled in their work and how it is consistent with their core ethical standards) or control condition (i.e. participants had to write about why fabrication, falsification and plagiarism are ethically objectionable).
- Participants were asked to complete two questionnaires: 1) ratings of 15 questionable research practices, indicating the extent to which each practice was ethically defensible and their willingness to engage in it; and 2) ratings of the impact on others of engaging in questionable research practices.
- Results: The consistency intervention had no significant effect on respondents’ reactions regarding the defensibility of the Questionable Research Practices or their willingness to engage in them.
- Results: Participants in the control condition expressed lower perceptions of the defensibility of Questionable Research Practices and lower willingness to engage in them.
- The authors concluded that the consistency intervention did not differ from the control intervention in respondents’ reactions to Questionable Research Practices, and may have had the unwanted effect of inducing increased rationalisation about these practices.
Abstract
Introduction: In this study, we tested a simple, active “ethical consistency” intervention aimed at reducing researchers’ endorsement of questionable research practices (QRPs).Methods: We developed a simple, active ethical consistency intervention and tested it against a control using an established QRP survey instrument. Before responding to a survey that asked about attitudes towards each of fifteen QRPs, participants were randomly assigned to either a consistency or control 3–5-min writing task. A total of 201 participants completed the survey: 121 participants were recruited from a database of currently funded NSF/ NIH scientists, and 80 participants were recruited from a pool of active researchers at a large university medical center in the southeastern US. Narrative responses to the writing prompts were coded and analyzed to assist post hoc interpretation of the quantitative data.Results: We hypothesized that participants in the consistency condition would find ethically ambiguous QRPs less defensible and would indicate less willingness to engage in them than participants in the control condition. The results showed that the consistency intervention had no significant effect on respondents’ reactions regarding the defensibility of the QRPs or their willingness to engage in them. Exploratory analyses considering the narrative themes of participants’ responses indicated that participants in the control condition expressed lower perceptions of QRP defensibility and willingness.Conclusion: The results did not support the main hypothesis, and the consistency intervention may have had the unwanted effect of inducing increased rationalization. These results may partially explain why RCR courses often seem to have little positive effect.APA Style Reference
Bruton, S. V., Brown, M., Sacco, D. F., & Didlake, R. (2019). Testing an active intervention to deter researchers’ use of questionable research practices. Research integrity and peer review, 4(1), 1-9. https://doi.org/10.1186/s41073-019-0085-3
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
Raising research quality will require collective action (Munafo, 2019)
Main Takeaways:
- Individual achievement is highlighted as important for funding, appointments, promotions, tenure, prizes and so on, but this ignores deeds that benefit the community, such as producing usable tools or shared code, which should be valued explicitly.
- If the scientific community wants to move towards a transparent model of research, open-research practices need to be rewarded. If it wants researchers to work well in large collaborations, it needs to train them in communication skills and collective self-scrutiny.
- Researchers must reflect on how and why questionable research practices and undesirable behaviours arise and persist. What flaws are being spread by institutions’ cultures and practices? Have solid cumulative work, replication studies and null findings been dis-incentivised by a focus on groundbreaking findings?
- Institutions are working together to determine how their cultural practices undermine the value of replication, verification and transparency.
- Figuring out which system-level changes are needed and how to make them happen will be someone’s primary responsibility, not a volunteer activity. What changes might ensue?
- Institutions should make data sharing and other open-research practices an explicit criterion for promotion.
- Universities need to act collectively. Changes to incentives at a single institution are not enough to make new behaviours stick, as these practices can be seen as a form of tax on an individual scientist’s career.
- Only if changes occur across many institutions will the impacts permeate scientific culture. The same is true for training: universities must agree that all graduates reach common standards.
- When it comes to changing the culture of science, numerous initiatives link members of the research community to support robust transparent research such as the UK Reproducibility Network, Center for Open Science in the United States, the QUEST Center in Germany, the Research on Research Institute with eight participating countries and other grassroots networks of researchers in many countries.
Quote
“But these cultural changes might falter. Culture eats strategy for breakfast — grand plans founder on the rocks of implicit values, beliefs and ways of working. Top-down initiatives from funders and publishers will fizzle out if they are not implemented by researchers, who review papers and grant proposals. Grass-roots efforts will flourish only if institutions recognize and reward researchers’ efforts. Funders, publishers and bottom-up networks of researchers have all made strides. Institutions are, in many ways, the final piece of the jigsaw. Universities are already investing in cutting-edge technology and embarking on ambitious infrastructure programmes. Cultural change is just as essential to long-term success.” (p. 183).
Abstract
Institutions must act together to reform research culture, says Marcus Munafò.APA Style Reference
Munafò, M. (2019). Raising research quality will require collective action. Nature, 576(7786), 183. DOI: 10.1038/d41586-019-03750-7
You may also be interested in
- Promoting an open research culture (Nosek et al., 2015)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- A journal club to fix science (Orben, 2019)
Small sample size is not the real problem (Bacchetti, 2013)
Main Takeaways:
- Widespread poor research practices raise difficult questions about how to bring about improvements.
- Bacchetti argued that the positive predictive value of ‘p < 0.05’ is an unacceptably poor measure of the evidence that a study provides: it ignores the distinction between p = .049 and p < .0001, thus wasting information.
- Estimated effects, confidence intervals and exact p values should be considered when interpreting a study’s results, and these make power irrelevant for interpreting completed studies (see the sketch after this list).
- The author argues that any specific result is not weaker evidence because of a small sample size per se than the same p value would be with a larger sample size.
- Each additional subject produces a smaller increment in projected scientific or practical value than the previous one; if efficiency is defined as projected value per subject (or per animal sacrificed), it will therefore be worse with a larger planned sample size.
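As a brief sketch of why estimates and intervals carry the information that a bare ‘significant/non-significant’ label discards, the following compares two hypothetical two-group studies that yield the same two-sided p-value; the known-variance normal approximation and the specific numbers are my assumptions, not Bacchetti’s analysis.

```python
from scipy.stats import norm

p = 0.03                      # the same observed two-sided p-value in both studies
z = norm.ppf(1 - p / 2)       # z statistic implied by that p-value

for n_per_group in (20, 200):             # small versus large study (sigma = 1 assumed)
    se = (2 / n_per_group) ** 0.5         # standard error of the mean difference
    effect = z * se                       # observed mean difference giving this p-value
    ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se
    print(f"n={n_per_group:>3}: p={p}, estimate={effect:.2f}, "
          f"95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

The exact p-value is identical, but the small study implies a much larger and far less precisely estimated effect, which is exactly the information an estimate-and-interval reading preserves.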
Quote
“Power calculations therefore should not overrule cost–efficiency and feasibility, and this is impossible in real research practice anyway. Manipulation of the design, conduct analysis and interpretation of studies towards producing more ‘interesting’ results is a serious problem, as is selective dissemination of studies’ results, but these are not caused by small sample size.” (p.585).
Abstract
Dr Peter Bacchetti provides a commentary on Power failure: why small sample size undermines the reliability of neuroscience and discusses that small sample size does not undermine the reliability of neuroscience.APA Style Reference
Bacchetti, P. (2013). Small sample size is not the real problem. Nature Reviews Neuroscience, 14(8), 585-585. https://doi.org/10.1038/nrn3475-c3
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
- Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions (Higginson & Munafo, 2016)
- Measurement error and the replication crisis (Loken & Gelman, 2017)
- Misuse of power: in defence of small-scale science (Quinlan, 2013)
- Bite-Size Science and Its Undesired Side Effects (Bertamini & Munafo, 2012)
- Confidence and precision increase with high statistical power (Button et al., 2013)
A journal club to fix science (Orben, 2019)
Main Takeaways:
- Early-career researchers are taught about the replication issues and failures that have challenged results of the previous decade. Connecting through social media lets them sidestep conventional hierarchies and scrutinise current research practices.
- Science needs researchers who care about better research to stay invested, and this will not happen by telling the next generation of scientists to sit back and hope. Early-career researchers should not wait passively for coveted improvements; they can create communities and push for bottom-up change.
- ReproducibiliTea was started by Sophia Crüwell, Amy Orben and Sam Parsons in 2018 in the experimental psychology department at the University of Oxford to promote a stronger open-science community and to start conversations about reproducibility. The initiative is now active at more than 27 universities in 8 countries.
- In each meeting, a scientific paper lays the groundwork for a conversation. Concerns vary from field to field and institution to institution, so each club focuses on the aspects of scientific methods and systems that concern its members most (e.g. how to pursue open access, pre-registration and data sharing when supervisors warn that these practices could undermine their careers).
- ReproducibiliTea has helped trainees to return to their lab groups and advocate for change. Sometimes this approach works and sometimes it does not. Members say how valuable the journal club is and how it reflects the way they want to see science practised.
- Creating a ReproducibiliTea club is easy and visible, does not require jumping over bureaucratic hurdles, and does not need senior support or funding. The groups mostly consist of about a dozen psychology researchers, but they also attract people from other departments.
- ReproducibiliTea puts open science on the radar of other academics and senior staff.
Quote
“In practice, I have found our meetings underscore the idea that open science is a process, not a one-time achievement or a claim to virtue...One attendee told me, “Before, I thought everything was black and white in open science, and now I see there are caveats and difficulties and things to overcome.” ReproducibiliTea’s low-key grass-roots meetings will encourage a new generation of scientists to feel motivated to master these challenges.” (p.465).
Abstract
ReproducibiliTea can build up open science without top-down initiatives, says Amy Orben.APA Style Reference
Orben, A. (2019). A journal club to fix science. Nature, 573(7775), 465. https://doi.org/10.1038/d41586-019-02842-8
You may also be interested in
Preregistered Direct Replications in Psychological Science (Lindsay, 2017)
Main Takeaways:
- Authors of pre-registered direct replications need to make a compelling case, to psychologists in general, for why the replication will make a valuable contribution to understanding a phenomenon or theory.
- The main criterion to be published in psychological science should be related to theoretical significance.
- Direct replications should reproduce the original methods and procedures as closely as possible, with the goal to assess the same effect as the original study.
- The aim of a direct replication is to create conditions that experts agree test the same hypotheses in the same way as the original study. Researchers conducting a pre-registered direct replication should consult the author or authors of the original article.
- These direct replications will be subject to external review, which will consist of an author of the target piece, together with two independent experts.
- Pre-registered direct replications should include justification for sample sizes and outcome-independent quality control checks to maximise the credibility of reported findings.
- Researchers are strongly encouraged to submit proposals for pre-registered direct replications for review before data collection. After the data is collected, the manuscript reporting the replication will be reviewed with the expectation that it will be accepted if the agreed-upon criteria are met.
- Pre-registered direct replications are limited to 1500 words, excluding Methods and Results section. Authors should provide online Supplemental Materials that would be of interest to experts.
Quote
“And, as Walter Mischel (2009) noted in an APS Observer Presidential column, replications sometimes yield more nuanced results that spark new hypotheses and contribute to the elaboration of psychological theories.” (p.1192).
Abstract
A commentary by Professor Stephen D. Lindsay on implementing pre-registered direct replications in the journal of Psychological Science.APA Style Reference
Lindsay, D. S. (2017). Preregistered Direct Replications in Psychological Science. Psychological Science, 28(9), 1191-1192. https://doi.org/10.1177/0956797617718802
You may also be interested in
- Easy preregistration will benefit any research (Mellor & Nosek, 2018)
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- From pre-registration to publication: a non-technical primer for conducting meta-analysis to synthesize correlation data (Quintana, 2015)
- Pre-registration is Hard, And Worthwhile (Nosek et al., 2019)
- Preregistration of Modeling Exercises May Not Be Useful (MacEachern & Van Zandt, 2019)
Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
Main Takeaways:
- Public data sharing remains one of the most contested topics in open science. Data should be made accessible for re-analysis in a secure, reliable and competently managed repository.
- Although there is a positive attitude towards open science, some researchers argue whether data sharing will benefit the careers of early career researchers.
- The present study investigated attitudes towards open science and public data sharing in general, as such attitudes shape not only the research practice of the individual but also that of undergraduate students, postgraduate students, post-doctoral researchers, colleagues and the wider scientific community.
- Method: 337 members completed 14 items, comprising rating scales and open-ended questions, measuring attitudes toward open science and public data sharing (e.g. what are the long-term consequences if a researcher shares raw data as part of a publication?).
- Method: Attitudes toward open science were separated into hopes and fears.
- Results: More hopes were related to open science and data sharing attitudes than fears. Hopes and fears were highest for ECRs, whereas for professors, hopes and fears were the lowest.
- Attitudes towards open science and public data sharing were positive but there were fears that sharing data may have negative consequences for an individual’s career (e.g. data scooping).
- Professors exhibited the least hopes and fears concerning the consequences of open science and data sharing.
Quote
“This is, of course, true, but the idea of OS is transparency, and the question whether transparency and a higher commitment to data sharing and OS practices will eventually decrease QRPs and, thus, increase the robustness and replicability of psychological effects remains to be determined empirically.” (p.259).
Abstract
Central values of science are, among others, transparency, verifiability, replicability, and openness. The currently very prominent Open Science (OS) movement supports these values. Among its most important principles are open methodology (comprehensive and useful documentation of methods and materials used), open access to published research output, and open data (making collected data available for re-analyses). We here present a survey conducted among members of the German Psychological Society (N = 337), in which we applied a mixed-methods approach (quantitative and qualitative data) to assess attitudes toward OS in general and toward data sharing more specifically. Attitudes toward OS were distinguished into positive expectations (“hopes”) and negative expectations (“fears”). These were uncorrelated. There were generally more hopes associated with OS and data sharing than fears. Both hopes and fears were highest among early career researchers and lowest among professors. The analysis of the open answers revealed that generally positive attitudes toward data sharing (especially sharing of data related to a published article) are somewhat diminished by cost/benefit considerations. The results are discussed with respect to individual researchers’ behavior and with respect to structural changes in the research system.APA Style Reference
Abele-Brehm, A. E., Gollwitzer, M., Steinberg, U., & Schönbrodt, F. D. (2019). Attitudes toward open science and public data sharing. Social Psychology, 50, 252-260. https://doi.org/10.1027/1864-9335/a000384
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Open Data in Qualitative Research (Chauvette et al., 2019)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Using OSF to Share Data: A Step-by-Step Guide (Soderberg, 2018)
- The digital Archaeologists (Perkel, 2020)
Sharing Data and Materials in Psychological Science (Lindsay, 2017)
Main Takeaways:
- Psychological Science asks authors to make data and materials available to reviewers and the public, with participant anonymity preserved. When sharing is not ethically permitted or practically feasible, the corresponding author needs to offer a brief explanation.
- The reported analyses should be made available to reviewers when doing so is ethically permitted and practically feasible. Easy access to the data enables reviewers to assess the substantive claims of the manuscript.
- Psychological Science asks authors who submit a manuscript to explain how reviewers access data for review purposes or, if reviewers cannot access the data, to explain why.
- Researchers are encouraged to specify data-sharing plans in their Institutional Review Board applications and informed consent forms. It can be argued that if the consent form did not mention data sharing, it may be permissible to share non-identifiable data with reviewers.
- Researchers owe it to the people who participated to make the data available and so improve the extent to which the data contribute to science. It can take a lot of work to prepare an archive of non-identifiable data, together with coding schemes, analysis scripts, etc., that is sufficiently clear for other researchers to understand the dataset. This is part of being a scientist, and the burden shrinks with practice and good workflow processes.
- Authors must make their best judgments as to the granularity of the shared data, balancing costs and benefits. If multiple measures were collected and only a subset is reported, this needs to be made clear and the dropped measures made available to reviewers.
- The reviewers do not have to look at the data linked to the submission, it is their choice if they want to examine the data. The reviewers are asked to report on whether or not they looked at the data and if it affected their evaluation of the manuscript.
- Material sharing gives reviewers access to stimulus materials, measures, computer code for running experiments, simulations or data analyses, etc. Having access to materials helps reviewers to evaluate claims made in the manuscript.
- It may not be ethically appropriate to give reviewers easy access to the materials and there are challenges with sharing materials for research conducted in languages other than English. Authors must make their best judgments to balance the cost and benefits of giving reviewers access to materials.
- Data and materials should be shared; this improves the impact and interpretation of a manuscript. Some materials are copyrighted and cannot be shared, and authors who have invested heavily in developing materials may not want to give them away to other scientists. Put simply, materials need not always be shared, but authors should be encouraged to share them. Doing so benefits not only the field but also the authors themselves.
Quote
“In at least some areas of psychology, it is getting harder to publish. Expectations for methodological rigor and statistical sophistication have risen sharply in the past 6 years. Preregistering one’s research plans, testing sufficient numbers of subjects to attain respectable power or precision, avoiding p-hacking and HARKing (“hypothesizing after the results are known”; Kerr, 1998), replicating one’s findings— these new norms make it harder to produce a primary research report that tells a good story. The upside is that more of our stories will turn out to be true” (p.702).
Abstract
A commentary by Professor Stephen D. Lindsay on sharing data and materials in the journal of Psychological Science.APA Style Reference
Lindsay, D. S. (2017). Sharing Data and Materials in Psychological Science. Psychological Science, 28(6), 699-702. https://doi.org/10.1177/0956797617704015
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)◈
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Open Data in Qualitative Research (Chauvette et al., 2019)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Using OSF to Share Data: A Step-by-Step Guide (Soderberg, 2018)
- The digital Archaeologists (Perkel, 2020)
We Have to Break Up (Cialdini, 2009)
Main Takeaways:
- Mediation is used by psychologists to locate causality and sophisticated psychometric techniques allow mediational accounts of our major findings through the analysis of ancillary data.
- Human conduct is broadly and distinctly cognitive; psychology would have failed had it not focused systematic scrutiny on cognitive variables and their roles in behaviour. It also makes sense that psychological science values submissions combining several related studies.
- Data collection and recruitment in the field takes longer than in the laboratory. The package of multiple-study research reports takes numerous years and getting permission to conduct experiments in a naturally occurring environment can take as long as completing several laboratory investigations.
- The author wants to leave academic psychology, arguing that the field has become too lax in its responsibilities to the public: people deserve to know the pertinence of research to their lives, as they paid for that research and are entitled to know what has been learned about them with their money.
- To improve academic psychology, more value needs to be reassigned to field research than has been the case in recent times: it should be taught regularly in graduate methods classes, recognised with awards, and given more grace and space in the loftiest of our journals.
Quote
“Finally, truly natural human activities don’t lend themselves to the collection of the kinds of secondary data on which to base mediational analyses; participants in many of the contexts I have employed (e.g., automobile dealerships, hospital parking garages, amusement parks, recycling centers, hotel guestrooms) do not feel bound or inclined to offer such data in order to help some researcher distinguish among theoretical models.” (p. 5).
Abstract
Three mostly positive developments in academic psychology—the cognitive revolution, the virtual requirement for multiple study reports in our top journals, and the prioritization of mediational evidence in our data—have had the unintended effect of making field research on naturally occurring behavior less suited to publication in the leading outlets of the discipline. Two regrettable consequences have ensued. The first is a reduction in the willingness of researchers, especially those young investigators confronting hiring and promotion issues, to undertake such field work. The second is a reduction in the clarity with which nonacademic audiences (e.g., citizens and legislators) can see the relevance of academic psychology to their lives and self-interest, which has contributed to a concomitant reduction in the availability of federal funds for basic behavioral science. Suggestions are offered for countering this problem.APA Style Reference
Cialdini, R. B. (2009). We have to break up. Perspectives on psychological science, 4(1), 5-6. https://doi.org/10.1111/j.1745-6924.2009.01091.x
You may also be interested in
Swan Song Editorial (Lindsay, 2019)
Main Takeaways:
- Eich instituted badges designating articles with open data, open materials and pre-registration. He asked authors to disclose all data exclusions, manipulations and measures, and how they determined sample size. In addition, word-count limits on the Method and Results sections were removed to allow authors to report key details of their studies, analyses and findings.
- A few pre-registered direct replications have been published in Psychological Science to date, and more are in the pipeline. Psychological Science has also published several articles stressing evidence for the absence of a non-trivial effect (i.e. evidence supporting the null).
- Approaches such as Bayes factors and equivalence tests are used to assess the strength of evidence for the null; this is an important step forward for the life sciences. Corrigenda and retractions have been issued at the behest of authors who stepped forward and requested them themselves.
- Researchers should be saluted for owning up to the mistakes when appropriate. Errors came to light because authors posted their data and other scientists examined them and found evidence of problems. This is progress.
- Badges are not an end in themselves. The aim is to make it easy for scientists to access data and materials that support reproducibility, robustness and replicability, and to encourage detailed pre-registrations that allow researchers and readers to discriminate between confirmatory and exploratory analyses.
- Critics argue that commercial publishers' profits are high, that the goals of scientists and those of commercial publication systems diverge, and that many people are blocked from accessing reports of publicly funded research.
- Early career researchers rely on preprints and demonstrate their virtues, and academic Twitter is hostile to journals that keep articles behind a paywall.
- Journals can set policies that quickly and dramatically increase transparency. Professional societies use revenue from journals to support their pro-science agendas; they must think about how and why they publish journals and how they can continue to do so in the future.
Abstract
A commentary by Professor Stephen D. Lindsay about his time as the editor of the journal of psychological science.APA Style Reference
Lindsay, D. S. (2019). Swan Song Editorial. Psychological Science, 30(12), 1669-1673. https://doi.org/10.1177/0956797619893653
You may also be interested in
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- A 21 Word Solution (Simmons et al., 2012)◈
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- How scientists can stop fooling themselves (Bishop, 2020b)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Trust Your Science? Open Your Data and Code (Stodden, 2011)◈
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- A consensus-based transparency checklist (Aczel et al., 2020)
- Tell it like it is (Anon, 2020)
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
Editorial of Psychological Bulletin (Albarracin, 2015)
Main Takeaways:
- Readers of Psychological Bulletin are accustomed to accurate, balanced treatments of a subject. Whenever quantitative synthesis is possible, precision and accuracy take the form of well-executed meta-analysis, a method used across successive editorial periods.
- More significant than whether a review is quantitative or qualitative is whether it contributes a cohesive, useful theory. Many of the reviews from recent decades involve integrations of data that highlight fundamental variables and their structure, causal processes, and factors that initiate or disrupt those processes.
- Qualitative reviews can accelerate theoretical advancement, regularly knit connections among subfields of psychology, and may produce the full theory-testing meta-analyses appearing in the journal.
- Whereas Psychological Review is the outlet for theory development and specification, Psychological Bulletin can be the outlet for theory testing achieved through systematic research synthesis.
- Authors should be mindful of the need to write for a general psychology audience. The prose must be intelligible, the topic interesting, and interdisciplinary and applied implications explicit.
- Psychological Bulletin generates a cohesive, authoritative, theory-based, and complete synthesis of scientific evidence in the field of psychology.
- Reviews must present a problem and offer an intellectual solution. Focused questions about the phenomenon are important to move the field of psychology forward.
- Authors should make all possible attempts at estimating and reducing review biases, including translating reports from foreign languages, examining publication bias and surveying the grey literature.
- The methods of review and effect-size calculation should be properly reported, including details about the coding process and indexes of inter-coder reliability, and authors should verify that the reported methods could be replicated by readers of the article.
Quote
“Critical reviews of psychological problems offer an opportunity for the discipline’s self-study and self-actualization. To continue to support this mission, I am interested in submissions of reviews answering questions the discipline must regularly confront: What is psychology? What questions have psychologists not fully answered yet in a given area or set of areas? Or, what are the fundamental, indispensable constructs of our discipline? Psychological Bulletin is ideally suited to answer those questions through systematic review articles. Psychological Bulletin is also an optimal forum for scientific debates about the magnitude and replicability of psychological phenomena. Scientific error as well as voluntary and accidental misreporting, not to mention the occasional case of fraud, undoubtedly reduce the contribution of virtually any primary study considered in isolation. In recent years, concerns with error and scientific misconduct have received a great deal of attention within and outside of the discipline, but pointing fingers at individual researchers or idealizing the contribution of a particular form of replication is unlikely to alter the cumulative mandate of science. Instead, well-conducted research syntheses will continue to gamer advantage from our collective contributions to excellence in psychological science. I foresee Psychological Bulletin at the center of those endeavors.” (p.5).
Abstract
An editorial by Professor Dolores Albarracin about how to make a robust systematic review and how to get it published in Psychological Bulletin.APA Style Reference
Albarracín, D. (2015). Editorial. Psychological bulletin, 141(1), 1-5. https://doi.org/10.1037/bul0000007
You may also be interested in
Business Not as Usual (Eich, 2014)
Main Takeaways:
- Research Articles and Research Reports were limited to 4000 and 2500 words and included all of the main text (i.e. introduction, method, results and discussion), along with notes, acknowledgement and appendices.
- The new limits on Research Articles and Research Reports are 2000 and 1000 words, respectively. The word count will include the introduction, discussion, notes, acknowledgement and appendices but not the method and results.
- Editors and external referees will evaluate submissions against three questions: What will the reader learn about psychology that they did not know before? Why is that knowledge important for the field? How are the claims made in the article justified by the methods used?
- Authors are asked to preview their answers to these questions as part of the manuscript submission.
- The aim is to make the preview exercise manageable for all parties, while helping everyone be on the same page.
- Authors are asked to report the total number of observations that were excluded and the criterion for exclusion, all tested experimental conditions, including failed manipulations; all administered measures and items and how they determined their sample sizes.
- The manuscript submission portal will have a new section for Disclosure Statement items. Submitting authors check each item in order for their manuscript to proceed to editorial evaluation. Authors declare that they have disclosed all of the required information for each study in the submitted manuscript.
- Psychological Science will promote open scientific practices. Present norms do not provide strong incentives for individual researchers to share data, materials or the research process.
- Journals could provide incentives for scientists to adopt open practices by acknowledging them in publication. The challenge is to establish which open practices should be acknowledged, what criteria must be met to earn acknowledgement, and how acknowledgement would be displayed within the journal.
Quote
“Null-hypothesis significance testing (NHST) has long been the mainstay method of analyzing data and drawing inferences in psychology and many other disciplines. This is despite the fact that, for nearly as long, researchers have recognized essential problems with NHST in general, and with the dichotomous (“significant” vs. “nonsignificant”) thinking it engenders in particular. The problems that pervade NHST are avoided by the new statistics—effect sizes, confidence intervals, and meta analysis.” (p.5).
Abstract
An editorial by Professor Eric Eich about submitting a manuscript in Psychological Science.APA Style Reference
Eich, E. (2014). Business not as usual. Psychological science, 25(1), 3. https://doi.org/10.1177/0956797613512465
You may also be interested in
Trade-Offs in the Design of Experiments (Wiley, 2009)
Main Takeaways:
- Experiments should always include rigorous attempts to identify and reduce unintended influences of one subject on another and make appropriate use of any multi-level statistical design. There can be no argument that treatments must be assigned randomly to experimental units.
- This note focuses on the pervasiveness of trade-offs in the design of experiments, the inappropriate and appropriate uses of multiple tests of a hypothesis and the dangers that counteract the benefits of large samples.
- The decision to include blocking in an experimental design requires repetition of a treatment on any one experimental unit. It may involve measuring the response of each individual more than once or obtaining measurements in each location or in each block of time more than once.
- The advantage of this blocking is the information it produces about variation among blocks. This information can be used to test what might be called the secondary hypotheses of the experiment and the sources of variation that influence the responses to the treatment.
- Repeated measures of each experimental unit or subject can also be obtained. Even if the units were not predicted to differ intrinsically in their response to treatments, there remains the possibility of errors of measurement.
- If the accuracy or precision of measurements were low, variation in repeated measurements of any one unit could be greater than the variation in means between units under any one treatment or between treatments.
- Standardisation involves a trade-off with the generality of the results. The results become constrained by those conditions. The advantage of reducing variation in the responses of subjects is counterbalanced by a disadvantage in the generality of the results.
- Multiple measurements raise the question: is the experimenter interested in each of the measured variables as a separate response to the treatment, or in any of a number of possible responses to the treatment? Only in the latter case is it appropriate to adjust criteria for statistical significance (see the sketch after this list).
- Investigators should detect all possible biases and detect the smallest effects of the treatment possible. Large samples allow detection of small effects because of the influence of sample size on the estimation of sample means. Small samples have more variation among samples, and this variation is a source of bias.
- Bias is some unsuspected systematic difference between experimental units receiving different treatments. Randomisation of treatments among subjects reduces bias but, in the real world, does not remove all systematic differences between treatments, because randomisation does not ensure that all features of experimental units are equally distributed among the treatment groups.
- If the sample size is small, an experiment can only detect a large effect of treatment. Only a large difference in a confounding variable can produce an apparent effect of treatment.
- A large bias is likely to be noticed by the investigator or by reviewers. A large experiment is subject to bias remaining after randomisation of a finite sample. Large random samples are less likely to have large differences. Small differences remain likely. Small differences from bias in large samples can reach a criterion for statistical significance, just as small differences from treatments can.
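As a companion to the takeaway on multiple measurements above, here is a minimal simulation of the 'each versus any' distinction: with several null outcome measures, the chance that any one of them crosses p < .05 inflates well beyond .05 unless the criterion is adjusted. The five independent measures, group sizes and the Bonferroni adjustment are illustrative assumptions, not details from Wiley's comment.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_sims, n_measures, n = 5000, 5, 30   # assumed: 5 outcome measures, no true effects

    any_hit_raw = any_hit_bonf = 0
    for _ in range(n_sims):
        # p values for 5 independent null outcomes in a two-group study (n per group)
        ps = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
              for _ in range(n_measures)]
        any_hit_raw += min(ps) < 0.05                # "any response", no adjustment
        any_hit_bonf += min(ps) < 0.05 / n_measures  # Bonferroni-adjusted criterion

    print(f"P(any of {n_measures} null measures 'significant'): "
          f"unadjusted = {any_hit_raw / n_sims:.2f}, Bonferroni = {any_hit_bonf / n_sims:.2f}")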
Abstract
This comment supplements and clarifies issues raised by J. C. Shank and T. C. Koehnle (2009) in their critique of experimental design. First, the pervasiveness of trade-offs in the design of experiments is emphasized (Wiley, 2003). Particularly germane to Shank and Koehnle’s discussion are the inevitable trade-offs in any decisions to include blocking or to standardize conditions in experiments. Second, the interpretation of multiple tests of a hypothesis is clarified. Only when interest focuses on any, rather than each, of N possible responses is it appropriate to adjust criteria for statistical significance of the results. Finally, a misunderstanding is corrected about a disadvantage of large experiments (Wiley, 2003). Experiments with large samples raise the possibility of small, but statistically significant, biases even after randomization of treatments. Because these small biases are difficult for experimenters and readers to notice, large experiments demonstrating small effects require special scrutiny. Such experiments are justified only when they involve minimal human intervention and maximal standardization. Justifications for the inevitable trade-offs in experimental design require careful attention when reporting any experiment.APA Style Reference
Wiley, R. (2009). Trade-Offs in the Design of Experiments. Journal of Comparative Psychology, 123(4), 447-449. https://doi.org/10.1037/a0016094 [ungated]
You may also be interested in
A guideline for whom? (Furukawa, 2016)
Main Takeaways:
- Systematic reviews and meta-analyses need critical evaluation of each included and excluded trial and a critical overview of the totality of the evidence thus selected. The guideline's authors have themselves conducted a number of influential randomised controlled trials of psychotherapies.
- The first group who would use this guideline are people who have developed their own programme of psychotherapy, especially those with a strong allegiance to the therapies examined in their own randomised controlled trials; such allegiance risks instilling the corresponding expectations in the participants recruited into those trials.
- The second group who would use this guideline would be people who conduct systematic reviews and critical appraisal of psychotherapy literature, especially when they pay good attention to risks of bias pertaining to proper randomisation, blinding or intention-to-treat principle and publication bias.
- More recent trials are at high risk of bias due to researcher allegiance than was previously the case.
Quote
“How should the ultimate consumers of medical literature (i.e. patients, families and policy makers) use this guideline? Do they remain at the mercy of the bulk of literature consisting of the original randomised control trials that follow this guideline, of systematic reviews that ignored this guideline, of systematic reviews that ignored this guideline in their evidence synthesis, and of practicing psychotherapists who may be all too easily convinced of the effectiveness of the therapies that they practice?” (p.2).
Abstract
A commentary by Professor T.A. Furukawa about How to prove that your therapy is effective, even when it is not: A guideline.APA Style Reference
Furukawa, T. A. (2016). A guideline for whom? Epidemiology and psychiatric sciences, 25(5), 439. https://doi.org/10.1017/S2045796015000955
You may also be interested in
- Most psychotherapies do not really work, but those that might work should be assessed in biased studies (Ioannidis, 2016)
- How to prove that your therapy is effective, even when it is not: a guideline (Cuijpers & Cristea, 2016)
Most psychotherapies do not really work, but those that might work should be assessed in biased studies (Ioannidis, 2016)
Main Takeaways:
- Psychotherapies may need to be tested under biased conditions, but the bias should be of the right type. Exploiting the weak spots of randomised trials (not concealing treatment allocation from assessors of outcome, analysing only the participants who completed the intervention while ignoring dropouts, using multiple outcome instruments and selectively reporting only the significant ones, and not publishing results unless they are positive) represents clear cheating.
- Treatment dropouts and losses to follow-up are frequent even in short-term studies and, indeed, they often reflect lack of effectiveness or poor tolerability.
- Imputation methods are better than ignoring missing observations, but still leave substantial uncertainty. All this means is that at least the other improper and easy-to-handle biases should be eliminated.
- There is absolutely no reason nowadays for a trial not to be performed with robust randomisation, allocation concealment and pre-specified outcomes and not to get published as pre-specified.
- The author leaves room to modify the analysis plan if something exploratory could not be expected a priori. However, this still needs to be transparently acknowledged, the modified analysis plan justified and results interpreted with caution.
- Conversely, psychotherapies emerge from theoretical speculation and currently reach patients with little pre-screening. The odds of success are probably weaker for psychotherapies than for drugs.
- However, raising the expectations of participants by boosting the placebo effect is not a bad idea: most psychotherapies that are effective probably work primarily through the placebo effect.
- An additional bias is small sample size, since small trials are more susceptible to the five 'improper' biases than larger ones. Even so, small trials are useful for assessing whether we would be wasting resources by running large trials on low-yield experimental therapies.
- Large studies at an early stage make sense only if there is a reasonable chance to see a clinically meaningful effect and that clinically meaningful effect is small, thus requiring a large study to detect it.
Quote
“However most psychotherapies that do not work even against nothing will be quickly screened out with small trials, failing even this favourably biased test. Again, incentives should reward publishing such ‘negative’ results and save the field from wasting effort chasing spurious claims.” (p.2).
Abstract
A commentary by Professor John Ioannidis about How to prove that your therapy is effective, even when it is not: A guideline.APA Style Reference
Ioannidis, J. P. A. (2016). Most psychotherapies do not really work, but those that might work should be assessed in biased studies. Epidemiology and psychiatric sciences, 25(5), 436. https://doi.org/10.1017/S2045796015000888
You may also be interested in
- A guideline for whom? (Furukawa, 2016)
- How to prove that your therapy is effective, even when it is not: a guideline (Cuijpers & Cristea, 2016)
Misuse of power: in defence of small-scale science (Quinlan, 2013)
Main Takeaways:
- One unfortunate conclusion is that the results of any small sample study are probably misleading and possibly worthless.
- It can be perfectly acceptable to publish research based on a sample size as small as N = 16.
- The author does not want researchers to ignore statistical power but it is troubling to think that an unresolved scientific controversy exists because, fundamentally, the issues reside in studies of low statistical power.
Quote
“Indeed, by exploiting established statistical tests together with computation of the Bayes factor, it is relatively easy to expose the strength of evidence for an experimental hypothesis relative to that of the null hypothesis even with small sample.” (p.585).
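The quote above pairs established statistical tests with a Bayes factor to weigh evidence for the experimental hypothesis against the null even with a small sample. Below is a minimal Python sketch of that idea using the BIC approximation to the Bayes factor (Wagenmakers, 2007) for a one-sample t-test; the simulated N = 16 data and the choice of this particular approximation are illustrative assumptions, not part of Quinlan's commentary.

    import numpy as np
    from scipy import stats

    # Hypothetical small sample (N = 16) for a one-sample test of H0: mean = 0.
    rng = np.random.default_rng(0)
    x = rng.normal(0.6, 1.0, 16)
    n = len(x)

    t, p = stats.ttest_1samp(x, 0.0)        # classical test statistic and p value

    # BIC approximation to the Bayes factor (Wagenmakers, 2007):
    # BF01 ~ exp((BIC_H1 - BIC_H0) / 2), with BIC = n*ln(RSS/n) + k*ln(n).
    rss_h0 = np.sum(x ** 2)                 # residuals with the mean fixed at 0
    rss_h1 = np.sum((x - x.mean()) ** 2)    # residuals with the mean estimated
    bic_h0 = n * np.log(rss_h0 / n) + 1 * np.log(n)   # k = 1 (variance only)
    bic_h1 = n * np.log(rss_h1 / n) + 2 * np.log(n)   # k = 2 (mean + variance)
    bf10 = np.exp((bic_h0 - bic_h1) / 2)    # evidence for H1 relative to H0

    print(f"t({n - 1}) = {t:.2f}, p = {p:.3f}, approximate BF10 = {bf10:.2f}")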
Abstract
Dr Philip Quinlan provides a commentary on Power failure: why small sample size undermines the reliability of neuroscience and discusses that small sample size does not undermine the reliability of neuroscience.APA Style Reference
Quinlan, P. T. (2013). Misuse of power: in defence of small-scale science. Nature Reviews Neuroscience, 14(8), 585-585. https://doi.org/10.1038/nrn3475-c1
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
- Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions (Higginson & Munafo, 2016)
- Small sample size is not the real problem (Bacchetti, 2013)
- Bite-Size Science and Its Undesired Side Effects (Bertamini & Munafo, 2012)
- Confidence and precision increase with high statistical power (Button et al., 2013)
- Measurement error and the replication crisis (Loken & Gelman, 2017)
Bite-Size Science and Its Undesired Side Effects (Bertamini & Munafo, 2012)
Main Takeaways:
- Advocates of short reports have claimed that they allow faster communication of results, are easier to assimilate into the literature, easier for people outside the field to understand, easier for editors and reviewers to process, and allow a more dynamic exchange of fresh ideas, even if some of those ideas turn out to be wrong.
- In addition, short reports lead to an increased pressure on researchers to produce a quantifiable output.
- Citation impact should not be seen as a superior measure of impact, especially once it is adjusted for length. If the same findings were written up in a short and a long format and both articles were cited equally, the impact per page would be higher for the short article; however, it would be misleading to say the short article achieved greater impact than the long one.
- Short articles in journals means that more articles can be published, they allow the editor to decide quickly about acceptance or rejection of the manuscript, thus speeding up the processing of a manuscript.
- If replication is fundamental to the scientific method, then an advantage of multiexperiment papers is that replication is inherent and usually rather stringently defined.
- Short articles are also more prone to citation amnesia, especially when a tight word-count criterion has to be met. Findings appear more newsworthy when the discussion of previous relevant work is less detailed, so there is pressure on authors not to go into great depth when researching and discussing previous work. Put simply, ignorance allows researchers to discover “new” things.
- Bite-size articles make the problem of false positives, or flukes, worse; we are all aware of the need for results to be replicated (see the sketch after this list).
- Long articles with multiple experiments show whether an effect can be replicated and supported by converging evidence.
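A minimal simulation of the sample-size point flagged in the takeaways above: holding the false-positive rate at alpha = .05 while power falls with sample size, a larger share of the significant (and hence publishable) results from small studies are false positives. The 50/50 mix of real and null effects, the assumed effect size of d = 0.3 and the group sizes are illustrative choices, not figures from the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def false_positive_share(n, n_studies=4000, d=0.3):
        """Share of significant results that are false positives, assuming half of
        all studied effects are real (d = 0.3) and half are null, with alpha = .05."""
        sig_true = sig_false = 0
        for real in (True, False):
            for _ in range(n_studies // 2):
                a = rng.normal(d if real else 0.0, 1.0, n)
                b = rng.normal(0.0, 1.0, n)
                if stats.ttest_ind(a, b).pvalue < 0.05:
                    if real:
                        sig_true += 1
                    else:
                        sig_false += 1
        return sig_false / (sig_true + sig_false)

    for n in (20, 200):   # small versus large per-group sample size
        print(f"n = {n}: share of significant results that are false positives "
              f"= {false_positive_share(n):.2f}")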
Quote
“By far the most popular and influential measure of quality for journals is their Impact Factor. This is computed on the basis of citations, and therefore should reflect influence in the field. As critics have often pointed out, it does not distinguish between citations that confirm and extend the original findings and citations that criticize and debunk them. In this sense, journals have little formal incentive to minimize bias in the effects reported and to minimize false alarms, although we have no doubt that every good editor is trying to do that...By combining different measures, such as number of articles, Impact Factor, and publication bias, we could arrive at better measures of the quality of a journal in reporting interesting but replicable and valid results. At the moment, there are sophisticated tools to count articles published and number of citations…Despite increased interest in bibliometrics, there is also a growing consciousness of the limitations of any individual index, especially because as soon as one index achieves the status of the measure of choice, with practical implications, authors and institutions will start to adapt and play the system.” (p.70).
Abstract
Short and rapid publication of research findings has many advantages. However, there is another side of the coin that needs careful consideration. We argue that the most dangerous aspect of a shift toward “bite-size” publishing is the relationship between study size and publication bias. Findings based on a single study or a study based on a limited sample size are more likely to be false positive, because the false positive rate remains constant, whereas the true positive rate (the power) declines as sample size declines. Pressure on productivity and on novelty value further exacerbates the problem.APA Style Reference
Bertamini, M., & Munafò, M. R. (2012). Bite-size science and its undesired side effects. Perspectives on Psychological Science, 7(1), 67-71. https://doi.org/10.1177/1745691611429353
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
- Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions (Higginson & Munafo, 2016)
- Misuse of power: in defence of small-scale science (Quinlan, 2013)
- Small sample size is not the real problem (Bacchetti, 2013)
- Confidence and precision increase with high statistical power (Button et al., 2013)
- Measurement error and the replication crisis (Loken & Gelman, 2017)
Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
Main Takeaways:
- The American Psychological Association asks authors to sign an agreement that their data will be available to individuals who wish to re-analyse them to verify the claims put forth in the paper. Whether this works in practice had not previously been studied systematically.
- The present study examined the willingness to share data for re-analysis linked to strength of evidence and quality of reporting of statistical results.
- Method: Wicherts et al. contacted corresponding authors of 141 papers published in the second half of 2004 in one of four high-ranking journals published by the American Psychological Association and determined whether the effects of outliers contributed to statistical outcomes.
- Method: Analyses focused on studies from the Journal of Personality and Social Psychology and the Journal of Experimental Psychology: Learning, Memory, and Cognition, as authors publishing in these journals were more willing to share data than those publishing in the other journals.
- Method: They included tests results that were complete (i.e. test statistic, degrees of freedom, and p-value reported) and reported as significant effects.
- Results: Higher p-values were more likely in papers from which no data were shared.
- Conclusions: Reluctance to share data was associated with weaker evidence and a higher prevalence of apparent errors in the reporting of statistical results; the unwillingness to share was most pronounced when reporting errors had a bearing on statistical significance (a recomputation sketch follows this list).
- Statistically rigorous researchers archive data better and are more attentive to statistical power than less statistically rigorous researchers.
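The reporting errors discussed above can be screened for mechanically. Below is a minimal sketch in the spirit of consistency-checking tools such as statcheck: it recomputes p values from reported test statistics and degrees of freedom and flags mismatches. The reported values and the tolerance are made up for illustration and are not taken from Wicherts et al.

    from scipy import stats

    # Hypothetical reported results (test, statistic, df, reported p).
    reported = [
        ("t", 2.30, 28, .029),
        ("t", 1.95, 40, .030),    # reported as significant; the recomputed p is not
        ("F", 4.50, (1, 60), .038),
    ]

    for kind, stat, df, p_rep in reported:
        if kind == "t":
            p_new = 2 * stats.t.sf(abs(stat), df)   # two-tailed p from t and df
            label = f"t({df})"
        else:
            p_new = stats.f.sf(stat, *df)           # p from F and (df1, df2)
            label = f"F({df[0]}, {df[1]})"
        status = "consistent" if abs(p_new - p_rep) < 0.005 else "inconsistent"
        print(f"{label} = {stat}: reported p = {p_rep}, recomputed p = {p_new:.3f} -> {status}")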
Quote
“Best practices in conducting analyses and reporting statistical results involve, for instance, that all co-authors hold copies of the data, and that at least two of the authors independently run all the analyses (as we did in this study). Such double-checks and the possibility for others to independently verify results later should go a long way in dealing with human factors in the conduct of statistical analyses and the reporting of results” (pp.6-7).
Abstract
The widespread reluctance to share published research data is often hypothesized to be due to the authors’ fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies.APA Style Reference
Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PloS one, 6(11), e26828. https://doi.org/10.1371/journal.pone.0026828
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Open Data in Qualitative Research (Chauvette et al., 2019)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Using OSF to Share Data: A Step-by-Step Guide (Soderberg, 2018)
Confidence and precision increase with high statistical power (Button et al., 2013)
Main Takeaways:
- High-powered studies can generate formally statistically significant differences even for a ‘trivial’ effect. In studies with low statistical power, however, an observed effect must be large to pass p < .05, and this does not mean that the true effect is large, or even exists at all.
- Moreover, this concern applies only if a p-value threshold of .05 is used to reject or implicitly accept the null hypothesis. Larger studies protect against inferences from trivial effect sizes by enabling better estimation of the magnitude of the true effect (see the sketch after this list). To improve matters, researchers would need to move from significance testing to effect sizes and confidence intervals.
- In addition, the true effect size is not known a priori; what is considered trivial can only be determined when the effect size is known.
- High power provides greater precision in the estimation of the actual effect size so that researchers can assess their importance or triviality with confidence.
- There needs to be greater emphasis on effect size and confidence intervals than on significance testing. The use of significance testing in the absence of any mention of effect size, confidence intervals or prospective power remains the norm.
- Some may argue that effect size is not relevant to the theoretical models they wish to test. That may be true if the models are imprecise about effect sizes. However, data from low-powered studies are not useful for testing a theoretical model because they provide little opportunity to find conclusive evidence for or against a model and therefore provide limited scope for model refinement.
- It would be wonderful if small studies and their research environment were devoid of biases and if all small studies on a particular question of interest could be perfectly integrated. This has not happened. To achieve this would require a major restructuring of the incentives for publishing papers, and especially for publishing novel and positive findings. Simply changing to another p-value threshold does not solve the problem.
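A minimal sketch of the precision argument referenced in the takeaways above: as the per-group sample size grows, the confidence interval around the estimated effect narrows, so importance or triviality can be judged with more confidence. The assumed true effect of 0.2 SD and the sample sizes are illustrative choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    for n in (25, 100, 400):               # per-group sample sizes
        a = rng.normal(0.2, 1.0, n)        # assumed true effect of 0.2 SD
        b = rng.normal(0.0, 1.0, n)
        diff = a.mean() - b.mean()
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        crit = stats.t.ppf(0.975, 2 * n - 2)   # df approximated as 2n - 2
        print(f"n per group = {n:>3}: estimate = {diff:+.2f}, "
              f"95% CI width = {2 * crit * se:.2f}")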
Abstract
Dr Katie Button and colleagues respond to all commentaries on Power failure: why small sample size undermines the reliability of neuroscience and discusses that small sample size does undermine the reliability of neuroscience.APA Style Reference
Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Confidence and precision increase with high statistical power. Nature Reviews Neuroscience, 14(8), 585-585. https://doi.org/10.1038/nrn3475-c4
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
- Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions (Higginson & Munafo, 2016)
- Misuse of power: in defence of small-scale science (Quinlan, 2013)
- Small sample size is not the real problem (Bacchetti, 2013)
- Measurement error and the replication crisis (Loken & Gelman, 2017)
- Bite-Size Science and Its Undesired Side Effects (Bertamini & Munafo, 2012)
The Leiden Manifesto for research metrics (Hicks et al., 2015)
Main Takeaways:
- Data are increasingly used to govern science. Evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied.
- As scientometricians, social scientists and research administrators, we have watched with increasing alarm the pervasive misapplication of indicators to the evaluation of scientific performance.
- Everywhere, supervisors ask PhD students to publish in high-impact journals and acquire external funding before they are ready. Researchers and evaluators still exert balanced judgement. Yet the abuse of research metrics has become too widespread to ignore.
- Scientists searching for literature with which to contest an evaluation find the material scattered in what are, to them, obscure journals to which they lack access.
- The manifesto therefore offers ten principles as a distillation of best practice in metrics-based research assessment.
Quote
“Abiding by these ten principles, research evaluation can play an important part in the development of science and its interactions with society. Research metrics can provide crucial information that would be difficult to gather or understand by means of individual expertise. But this quantitative information must not be allowed to morph from an instrument into the goal. The best decisions are taken by combining robust statistics with sensitivity to the aim and nature of the research that is evaluated. Both quantitative and qualitative evidence are needed; each is objective in its own way. Decision-making about science must be based on high-quality processes that are informed by the highest quality data.” (p.431).
Abstract
Use these ten principles to guide research evaluation, urge Diana Hicks, Paul Wouters and colleagues.
APA Style Reference
Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: the Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. https://doi.org/10.1038/520429a
You may also be interested in
- High Impact = High Statistical Standards? Not Necessarily So (Tressoldi et al., 2013)
- An index to quantify an individual’s scientific research output (Hirsch, 2005)
Publication bias in the social sciences: Unlocking the file drawer (Franco et al., 2014)
Main Takeaways:
- Editors and reviewers may prefer statistically significant results and reject sound studies that fail to reject the null hypothesis (i.e. publication bias). As a result, authors may not write up and submit papers that have null findings or have their own preferences to not pursue the publication of null results.
- We leveraged Time-sharing Experiments in the Social Sciences (TESS), in which researchers propose survey-based experiments to be run on nationally representative samples. The paper compares the statistical results of published and unpublished TESS experiments.
- Method: The paper analysed the entire online archive of TESS studies conducted between 2002 and 2012. The analysis was restricted to 221 studies. The outcome of interest is the publication status of each TESS experiment.
- Results: Although around half of the total studies in our sample were published, only 20% of those with null results were published. In contrast, ~60% of studies with strong results and 50% of those with mixed results were published. Although more than 20% of the studies in our sample had null findings, less than 10% of published articles based on TESS experiments report such results. Although the direction of these results may not be surprising, the observed magnitude is remarkably large.
- Results: 15 authors reported that they abandoned the project because they believed that null results have no publication potential even if they found the results interesting personally. 9 authors reacted to null findings by reducing the priority of writing up the TESS study and focusing on other projects. 2 authors whose studies “didn’t work out” eventually published papers supporting their initial hypotheses using findings obtained from smaller convenience samples.
- Researchers might be wasting effort and resources in conducting studies that have already been executed in which the treatments were not efficacious.
- If future researchers conduct similar studies and obtain statistically significant results by chance, then the published literature will incorrectly suggest stronger effects. Hence, even if null results are characterized by treatments that “did not work” and strong results are characterized by efficacious treatments, authors’ failures to write up null findings still adversely affects the universe of knowledge.
- A vital part of developing institutional solutions to improve scientific transparency would be to better understand the motivations of researchers who choose to pursue projects as a function of results.
Quote
“Creating high-status publication outlets for these studies could provide such incentives. The movement toward open-access journals may provide space for such articles. Further, the pre-analysis plans and registries themselves will increase researcher access to null results. Alternatively, funding agencies could impose costs on investigators who do not write up the results of funded studies. Last, resources should be deployed for replications of published studies if they are unrepresentative of conducted studies and more likely to report large effects.” (p.1504).
Abstract
We studied publication bias in the social sciences by analyzing a known population of conducted studies—221 in total—in which there is a full accounting of what is published and unpublished. We leveraged Time-sharing Experiments in the Social Sciences (TESS), a National Science Foundation–sponsored program in which researchers propose survey-based experiments to be run on representative samples of American adults. Because TESS proposals undergo rigorous peer review, the studies in the sample all exceed a substantial quality threshold. Strong results are 40 percentage points more likely to be published than are null results and 60 percentage points more likely to be written up. We provide direct evidence of publication bias and identify the stage of research production at which publication bias occurs: Authors do not write up and submit null findings.
APA Style Reference
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505. https://doi.org/10.1126/science.1255484
You may also be interested in
- The File-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta analysis (Rosenberg, 2005)
- The “File Drawer Problem” and Tolerance for Null Results (Rosenthal, 1979)
GRADE: an emerging consensus on rating quality of evidence and strength of recommendations (Guyatt et al., 2008)
Main Takeaways:
- Guideline developers around the world are inconsistent in how they rate quality of evidence and grade strength of recommendations. The British Medical Journal has requested in its “Instructions to Authors” on bmj.com that authors should preferably use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for grading evidence when submitting a clinical guidelines article.
- This article will explain why many organisations use formal systems to grade evidence and recommendations and why this is important for clinicians. The authors will focus on the GRADE approach to recommendations.
- A rigorous system for rating the quality of evidence is required. For example, observational studies that show inconsistent results indicate that the evidence is of very low quality.
- Insufficient attention to the quality of evidence risks inappropriate guidelines and recommendations, which may harm patients' wellbeing. Recognising the quality of evidence helps to prevent such errors.
- Guidelines and recommendations must therefore indicate whether (a) the evidence is high quality and the desirable effects clearly outweigh the undesirable effects, or (b) there is a close or uncertain balance. A simple, transparent grading of the recommendation can effectively convey this key information.
- There are limitations to formal grading of recommendations. Like the quality of evidence, the balance between desirable and undesirable effects reflects a continuum. Some arbitrariness will therefore be associated with placing particular recommendations in categories such as “strong” and “weak.”
- Grading systems that are simple with respect to judgments both about the quality of the evidence and the strength of recommendations facilitate use by patients, clinicians, and policy makers.
- Detailed and explicit criteria for ratings of quality and grading of strength will make judgments more transparent to those using guidelines and recommendations.
- To achieve transparency and simplicity, the GRADE system classifies the quality of evidence in one of four levels—high, moderate, low, and very low. Some of the organisations using the GRADE system have chosen to combine the low and very low categories. Evidence based on randomised controlled trials begins as high quality evidence, but our confidence in the evidence may be decreased for several reasons, including: Study limitations, Inconsistency of results, Indirectness of evidence, Imprecision and Reporting bias.
- Although observational studies (e.g. case-control and cohort studies) start with a “low quality” rating, grading upwards may be warranted if the magnitude of the treatment effect is very large (e.g. hip replacement), if there is evidence of a dose-response relation or if all plausible biases would decrease the magnitude of an apparent treatment effect.
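A minimal sketch of the rating logic in the two bullets above (this is an illustration only, not an official GRADE tool; the function and level names are assumptions):

```python
# Minimal sketch (not an official GRADE tool): evidence quality starts at a level
# set by study design and is then moved down (e.g. imprecision, inconsistency)
# or up (e.g. very large effect) within the four GRADE levels.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(design, downgrades=0, upgrades=0):
    """design: 'rct' starts at 'high'; 'observational' starts at 'low'."""
    start = 3 if design == "rct" else 1
    score = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[score]

print(grade_quality("rct", downgrades=1))          # moderate (e.g. downgraded for imprecision)
print(grade_quality("observational", upgrades=1))  # moderate (e.g. upgraded for a very large effect)
```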
Quote
“When the desirable effects of an intervention clearly outweigh the undesirable effects, or clearly do not, guideline panels offer strong recommendations. On the other hand, when the trade-offs are less certain—either because of low quality evidence or because evidence suggests that desirable and undesirable effects are closely balanced—weak recommendations become mandatory. In addition to the quality of the evidence, several other factors [e.g. uncertainty about the balance between desirable and undesirable effects, uncertainty or variability in values and preferences, and uncertainty about whether the intervention represents a wise use of resources] affect whether recommendations are strong or weak” (p. 336).
Abstract
Guidelines are inconsistent in how they rate the quality of evidence and the strength of recommendations. This article explores the advantages of the GRADE system, which is increasingly being adopted by organisations worldwide.
APA Style Reference
Guyatt, G. H., Oxman, A. D., Vist, G. E., Kunz, R., Falck-Ytter, Y., Alonso-Coello, P., & Schünemann, H. J. (2008). GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ, 336(7650), 924-926. https://doi.org/10.1136/bmj.39489.470347.AD
You may also be interested in
Seeking Congruity Between Goals and Roles: A New Look at Why Women Opt Out of Science, Technology, Engineering, and Mathematics Careers (Diekman et al., 2010) ⌺
Main Takeaways:
- We present a new perspective on women’s underrepresentation in STEM by proposing that interest in some careers and disinterest in others results from the intersection of people’s goals and their preconceptions of the goals afforded by different careers. We hypothesize that people perceive Science, Technology, Engineering and Mathematics (STEM) careers as being especially incompatible with an orientation to care about other people (i.e. communion). Because women in particular tend to endorse communal goals, they may be more likely than men to opt out of STEM careers in favor of careers that seem to afford communion.
- These trends suggest that to explain women’s absence in STEM fields, research should focus on factors that differentiate careers in STEM from other careers. We hypothesize that a critical but relatively unexplored factor may be that many non-STEM careers are perceived as fulfilling communal goals.
- We thus examined (a) whether communal-goal affordances are perceived to differ between STEM and other careers, and (b) whether communal-goal endorsement inhibits STEM interest, given consensual beliefs about the goals these careers afford.
- Method: 333 introductory psychology students provided goal-affordance ratings and information about their mathematics and science experience. Our goal was to determine predictors of differential interest in STEM, male-stereotypic/non-STEM (MST), and female-stereotypic (FST) careers. To create scales reflecting these different stereotypic categories, we used archival and primary data.
- Method: For each core career, participants rated how much they considered the career to fulfill agentic goals (power, achievement, and seeking new experiences or excitement) and communal goals (intimacy, affiliation, and altruism).
- Method: Because career interest was our critical dependent measure, participants rated their interest in the core careers, as well as additional careers.
- Method: Participants rated several goals according to “how important each of the following kinds of goals is to you personally,” on scales ranging from 1 (not at all important) to 7 (extremely important).
- Method: Self-efficacy and experience. Measures of self-efficacy included the scientific, mechanical, and computational subscales of the Kuder Task Self-Efficacy Scale as well as participants’ estimated grades in STEM classes.
- Results: The authors found that STEM careers, relative to other careers, were perceived to impede communal goals. Moreover, communal-goal endorsement negatively predicted interest in STEM careers, even when controlling for past experience and self-efficacy in science and mathematics.
- STEM careers are perceived as inhibiting communal goals: When individuals highly endorse communal goals, they are less interested in STEM. If women perceive STEM as antithetical to highly valued goals, it is not surprising that even women talented in these areas might choose alternative career paths.
- Certainly, traditionally studied predictors of STEM interest, such as agentic motivations or self-efficacy, continue to be critical factors. Our argument is not that the study of communal motivations should replace agentic motivations or self-efficacy, but that this traditional approach overlooks critically important information.
Quote
“It is ironic that STEM fields hold the key to helping many people, but are commonly regarded as antithetical (or, at best, irrelevant) to such communal goals. However, the first step toward change is increasing knowledge about this belief and its consequences. Interventions could not only provide opportunities for girls and young women to succeed in mathematics and science but also demonstrate how STEM fields involve helping and collaborating with other people. For example, our current research investigates how portraying science or engineering careers as more other-oriented fosters positivity.” (p.1056).
Abstract
Although women have nearly attained equality with men in several formerly male-dominated fields, they remain underrepresented in the fields of science, technology, engineering, and mathematics (STEM). We argue that one important reason for this discrepancy is that STEM careers are perceived as less likely than careers in other fields to fulfill communal goals (e.g., working with or helping other people). Such perceptions might disproportionately affect women’s career decisions, because women tend to endorse communal goals more than men. As predicted, we found that STEM careers, relative to other careers, were perceived to impede communal goals. Moreover, communal-goal endorsement negatively predicted interest in STEM careers, even when controlling for past experience and self-efficacy in science and mathematics. Understanding how communal goals influence people’s interest in STEM fields thus provides a new perspective on the issue of women’s representation in STEM careers.
APA Style Reference
Diekman, A. B., Brown, E. R., Johnston, A. M., & Clark, E. K. (2010). Seeking congruity between goals and roles: A new look at why women opt out of science, technology, engineering, and mathematics careers. Psychological science, 21(8), 1051-1057. https://doi.org/10.1177/0956797610377342
You may also be interested in
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2019)
- Against Eminence (Vazire, 2017)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology (Skitka et al., 2020)
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020)
Communism, Universalism and Disinterestedness: Re-examining Contemporary Support among Academics for Merton’s Scientific Norms (Macfarlane & Cheng, 2008)
Main Takeaways:
- This paper re-examines Merton’s influential account of academic values and develops a contemporary interpretation of the values that underpin commitment to the academic ‘vocation’.
- The four norms of Mertonian science are: Communism (i.e. the results of their research should be the common property of the whole scientific community), disinterestedness (i.e. scientists should have no emotional or financial attachments to their work), universalism (i.e. Academic knowledge should transcend race, class, political and/or religious barriers) and Organised skepticism (i.e. the expectation that academics will continually challenge conventional wisdom in their discipline).
- The paper tests the contemporary relevance of Merton’s institutional norms through a web-based survey of academic values. In questioning the contemporary sway of Merton’s norms they will be contrasted with an alternative set of academic values: capitalism (or individualism), interestedness and particularism.
- Method: A web-based survey instrument was designed to test out the extent to which academics agree with three of Merton’s four institutional norms. Four value statements were designed to test out the extent to which respondents either agreed or disagreed with communism, universalism and disinterestedness.
- Results: The authors observed that there is strong support for communism followed by universalism and weak support for disinterestedness.
- The results of this limited survey indicate that there is still substantial support for at least one of Merton’s norms, namely communism. However, it is also clear that they are being re-shaped by a number of contemporary trends and influences. While most academics in this survey were supportive of communism as a norm there are more capitalist and individualistic forces at work. These policies are designed to maximise the commercial benefits derived from academic work for individual universities rather than share the results of research for the benefit of all.
- Academics are encouraged to seek a public profile rather than shy away from self promotion opportunities.
- The pressures of performativity mean that academics can no longer afford to be as committed as they might like to be to disinterestedness as a norm. Indeed, the survey indicates more opposition than support for this norm. ‘Interestedness’ appears to have displaced disinterestedness. Most respondents indicated that they were comfortable with the idea of contributing to public debate in areas that fall outside their expertise. Contemporary academics may regard such interventions as a right as a citizen and as a legitimate extension of academic freedom.
- Many academics are also prepared to direct or re-direct their efforts toward available funding opportunities. Gaining funding for research and directing one’s research agenda in the direction of the contemporary concerns of public policymakers is a fact of life that has re-shaped attitudes to ‘disinterested’ research.
Quote
“In contrast with communism, capitalism as a norm would imply a belief in maximising individual financial return on academic endeavour in a market economy. Here, conceptualising the role of research as an income generation activity affirms this alternative norm. Particularism implies a belief that knowledge is individually constructed on the basis of social experiences and political forces. It implies a rejection of absolute social, cultural or religious values in favour of moral relativism. A particularist stance might further be characterised by opposition to the cultural hegemony of Western products, systems and modes of thought. Finally, interestedness is closely related to the belief that academic enquiry can never be a value-free, dispassionate analysis of the observed ‘facts’. It is a norm that essentially rejects the positivist paradigm.” (p.77).
Abstract
This paper re-examines the relevance of three academic norms to contemporary academic life – communism, universalism and disinterestedness – based on the work of Robert Merton. The results of a web-based survey elicited responses to a series of value statements and were analysed using the weighted average method and through crosstabulation. Results indicate strong support for communism as an academic norm defined in relation to sharing research results and teaching materials as opposed to protecting intellectual copyright and withholding access. There is more limited support for universalism based on the belief that academic knowledge should transcend national, political, or religious boundaries. Disinterestedness, defined in terms of personal detachment from truth claims, is the least popular contemporary academic norm. Here, the impact of a performative culture is linked to the need for a large number of academics to align their research interests with funding opportunities. The paper concludes by considering the claims of an alternate set of contemporary academic norms including capitalism, particularism and interestedness.
APA Style Reference
Macfarlane, B., & Cheng, M. (2008). Communism, universalism and disinterestedness: Re-examining contemporary support among academics for Merton’s scientific norms. Journal of Academic Ethics, 6(1), 67-78. https://doi.org/10.1007/s10805-008-9055-y
You may also be interested in
The bases for generalisation in scientific methods (Rousmaniere, 1909)
Main Takeaways:
- In that all true induction involves generalizing on the basis of particulars, the question of the conditions under which various numbers of particulars are required for a generalization stands as a fundamental question in discussing inductive methods.
- The most marked contrast within the field of generalization is between those cases where we draw our conclusions with apparently reckless trust in a few examples and those where we build our foundations very nearly as wide as the superstructure. Roughly, the difference is between the more exact sciences and the use of statistical methods.
- One part of scientific investigation which calls most for judgment and wide knowledge centres on the question of where uniformity lies in the field under study, and hence which parts should be sampled if one is to win a fair idea of, say, the quality of a boatload of grain. Sometimes this method of using a carefully selected group is called into play where the investigator's interest is in pointing out or studying the very variation itself.
- Where such uniformity as is found belongs to the field as a whole, it is true that we also sometimes call for a small group instead of a single example in testing for the nature of that field, but such a group is very different, from a logical point of view, from the selected group. There is no selection here; variation is generally and vaguely accepted as possible, not specifically placed.
Quote
“So long as the conditions that determine variation are believed standard, that is, in play in all the groups compared, the use of averages to represent the different groups is legitimate, but under the suggestion that difference in race varied those conditions in part, belief in the genuineness of that representation withers away.” (p. 205).
Abstract
Rousmaniere discusses the bases for generalisations concerning scientific evidence.
APA Style Reference
Rousmaniere, F. H. (1909). The bases for generalization in scientific methods. The Journal of Philosophy, Psychology and Scientific Methods, 6(8), 202–205. https://doi.org/10.2307/2011346
You may also be interested in
False-Positive Citations (Simmons et al., 2017)
Main Takeaways:
- When results in the scientific literature disagree with our intuition, researchers should be able to trust the literature enough to question our beliefs rather than to question the findings. The authors were questioning the findings. Something was broken.
- The authors knew many researchers who readily admitted to dropping dependent variables, conditions, or participants to achieve significance. Everyone knew it was wrong, but they thought it was wrong the way it is wrong to jaywalk. The authors decided to write “False-Positive Psychology” when simulations revealed that it was wrong the way it is wrong to rob a bank.
- An article cannot be influential if it is not read, and no one likes to read boring or hard-to-understand articles, so the authors tried to make sure that their article was accessible and at least a little bit entertaining. To help accomplish this, they ran two experiments demonstrating how p-hacking could allow them to find significant evidence for an obviously false hypothesis. It was not hard to generate a false hypothesis to test, but in a field that seemed ready to believe in lots of things, it was hard to generate one that was obviously false.
- The authors knew that their article could not lead to real change if they just complained about the problem, so they spent a long time thinking about solutions, seeking one that would require the least of researchers and journals while achieving the most for knowledge and truth. They wanted to ask researchers to simply describe what they did in their studies; specifically, they proposed that researchers be required to disclose how they arrived at their sample sizes and to report all of their measures, manipulations, and exclusions.
- If the authors could go back to 2010, they would not recommend that researchers be required to have more than 20 observations per cell, because that requirement led people to focus on the wrong aspect of disclosure; instead, they would emphasize that you cannot consistently get underpowered studies to work without p-hacking (or an implausible amount of luck). In addition, they would modify the n > 20 rule in two ways: choosing a larger reference point and not advocating a strict sample-size cutoff.
- Since 2010, the authors have come to believe in preregistration. First, it gives researchers the freedom to conduct analyses that could, if disclosed only afterward, seem suspicious, such as excluding participants who failed an attention check or running an unusual statistical test. Second, it is simply a more verifiable form of disclosure: preregistration is the only way for authors to irrefutably demonstrate that their key analyses were not p-hacked, and it makes them immune to suspicions of p-hacking. Preregistration is now routine in the authors’ own labs, and, for anyone in the business of collecting and analyzing new data, they see no counterargument to doing it.
- Preregistration does not restrict the ability to conduct exploratory analyses; it merely allows the researcher and the reader to properly distinguish between analyses that were planned and exploratory. In addition, it does not prevent researchers from publishing results that do not confirm their hypothesis; the critical aspect of preregistration is not the prediction that the researcher makes but, rather, the methodological and analytical plan that the researcher specifies.
- It is perfectly acceptable to simply pose a research question and describe exactly how you intend to answer it, and it is perfectly acceptable to publish a finding that did not conform to your original prediction.
Quote
“How could any half serious scientist actively oppose a rule requiring authors to accurately describe their research?...In 2010, approximately 0% of researchers were disclosing all of their methodological details, posting their data and materials, and preregistering their studies. Today, disclosure, data posting, and preregistration are slowly becoming the norm, particularly among the younger generation of researchers. We would like to think that our article had something to do with all of this, but honestly, it is impossible to say, because hundreds of psychologists have worked incredibly hard to improve our science. Without them, our article would have had no influence whatsoever. And without our article, these changes may have happened anyway. It was time. It is time.” (p.256).
Abstract
We describe why we wrote “False-Positive Psychology,” analyze how it has been cited, and explain why the integrity of experimental psychology hinges on the full disclosure of methods, the sharing of materials and data, and, especially, the preregistration of analyses.
APA Style Reference
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2018). False-positive citations. Perspectives on Psychological Science, 13(2), 255-259. https://doi.org/10.1177/1745691617698146 . [ungated]
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Pandemic darkens postdocs’ work and career hopes (Woolston, 2020)
Main Takeaways:
- The pandemic has shuttered or reduced the output of academic labs globally, slashed institutional budgets and threatened the availability of grants, fellowships and other postdoctoral funding sources. The fallout adds up to a major challenge for a group of junior researchers who were already grappling with limited funds, intense job competition and career uncertainties.
- 1% of respondents say that they have been diagnosed with COVID-19, and another 9% suspect that they have had the infection but were never tested. But concerns go far beyond the presence or absence of the virus. Some 61% of respondents say that the pandemic has negatively affected their career prospects, and another 25% say that its cumulative effects on their career remain uncertain.
- Worries about one’s professional future are especially widespread in South America, where 70% of respondents say their careers have already suffered since the start of the pandemic.
- Belief that the pandemic had already negatively affected career prospects was also common in North and Central America (68%), Australasia (68%), Asia (61%), Africa (59%) and Europe (54%). In China, where the virus was first detected, 54% of respondents said their career had already suffered and 25% said they weren’t sure.
- Perceived impacts varied by area of study. Slightly less than half of researchers in computer science and mathematics thought that their career prospects had suffered, compared with 68% of researchers in chemistry, 67% in ecology and evolution, and 60% in biomedicine.
- The impact of the pandemic has now joined the list of the top concerns in the minds of postdocs. Asked to name the three primary challenges to their career progression, 40% of respondents point to the economic impact of COVID-19, nearly two-thirds (64%) note the competition for funding, and 45% point to the lack of jobs in their field.
- 13% of respondents say they have already lost a postdoc job or an offer of one as a result of the pandemic, and 21% suspected the virus had wiped out a job but weren’t sure. More than one-third of researchers in South America report already losing a job, compared with 11% in Europe and 12% in North and Central America.
- 60% of respondents are currently working abroad, a circumstance that only amplifies the pandemic’s potential impact. On top of everything else, many worry about the pandemic’s effect on their visas and their ability to stay in their new country.
- Experiments aren’t the only scientific activities that can suffer during a pandemic. 59% of respondents said that they had more trouble discussing ideas with their supervisor or colleagues, and 57% said that the pandemic had made it harder to share their research findings.
- The survey included questions about supervisors, a role that takes on extra importance during a crisis. 54% of respondents said that their supervisor had provided clear guidance on managing their work during the pandemic, but 32% said that they weren’t receiving that sort of support from above. 29% of respondents strongly or somewhat disagreed that their adviser has done everything they can to support them during the pandemic.
- Female respondents (28%) were more likely than male respondents (25%) to think that their supervisors fell short. The free-comment section of the survey underscores how the pandemic has strained some supervisor–postdoc relationships.
- Some postdocs have found small consolations in the pandemic. Although 26% of respondents say that the pandemic has somewhat or significantly impaired their ability to write papers, 43% say that writing has become easier.
Abstract
Nature’s survey of this key segment of the scientific workforce paints a gloomy picture of interrupted research and anxiety about the future, reports Chris Woolston.
APA Style Reference
Woolston, C. (2020). Pandemic darkens postdocs' work and career hopes. Nature, 585(7824), 309-312. https://doi.org/10.1038/d41586-020-02548-2
You may also be interested in
- Seeking an exit plan (Woolston, 2020)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2019)
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020) ⌺
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Against Eminence (Vazire, 2017)
The importance of stupidity in scientific research (Schwartz, 2008)
Main Takeaways:
- Science makes me feel stupid too. It’s just that I’ve gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn’t know what to do without that feeling.
- For almost all of us, one of the reasons we liked science in high school and college is that we were good at it. That can’t be the only reason – fascination with understanding the physical world and an emotional need to discover new things has to enter into it too. But high-school and college science means taking courses, and doing well in courses means getting the right answers on tests. If you know those answers, you do well and get to feel smart.
- Once I faced the fact that nobody knew the answer to my research problem, and that it was up to me to solve it, I solved the problem in a couple of days. (It wasn’t really very hard; I just had to try a few things.) The crucial lesson was that the scope of things I didn’t know wasn’t merely vast; it was, for all practical purposes, infinite. That realization, instead of being discouraging, was liberating.
- First, I don’t think students are made to understand how hard it is to do research. And how very, very hard it is to do important research. It’s a lot harder than taking even very demanding courses. What makes it difficult is that research is immersion in the unknown. We just don’t know what we’re doing. We can’t be sure whether we’re asking the right question or doing the right experiment until we get the answer or the result. Admittedly, science is made harder by competition for grants and space in top journals.
- Productive stupidity means being ignorant by choice. Focusing on important questions puts us in the awkward position of being ignorant. One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time.
Quote
“We don’t do a good enough job of teaching our students how to be productively stupid – that is, if we don’t feel stupid it means we’re not really trying…Science involves confronting our ‘absolute stupidity’. That kind of stupidity is an existential fact, inherent in our efforts to push our way into the unknown... I think scientific education might do more to ease what is a very big transition: from learning what other people once discovered to making your own discoveries. The more comfortable we become with being stupid, the deeper we will wade into the unknown and the more likely we are to make big discoveries.” (p.1771).
Abstract
This is an essay by Dr Martin Schwartz about the importance of stupidity in scientific research and its role in helping researchers understand and answer their research problems.
APA Style Reference
Schwartz, M. A. (2008). The importance of stupidity in scientific research. Journal of Cell Science, 121(11), 1771-1771. https://doi.org/10.1242/jcs.033340
You may also be interested in
Constraints on Generality (COG): A Proposed Addition to All Empirical Papers (Simons et al., 2017)
Main Takeaways:
- When a paper identifies a target population and specifies constraints on generality (COG) of findings, researchers conduct direct replications that sample from the target population, leading to more appropriate tests of reliability of the original claim.
- A COG statement indicates how the sample is representative of the target population, justifying why the subjects, materials, and procedures are representative of broader populations.
- A COG statement does not weaken the claim; rather, it leads the reader to correctly infer that the findings apply to the groups or populations actually tested, such as undergraduate students.
- A COG statement means that unsuccessful extensions establish boundaries on generality rather than counting as “replication failures.”
- A COG statement inspires follow-up studies that build on results by testing generality in populations not originally tested.
- A COG statement encourages reviewers and editors to be more receptive to next-step studies that test the identified constraints.
- A COG statement should be included in all papers, so that editors can favour manuscripts whose well-justified constraints on generality explicitly ground their claims of generality.
- Editors can evaluate whether claims are sufficiently important to justify publication.
- A COG statement incentivises cumulative follow-up research, leading to greater reliability, influence and increased citations.
- A COG statement values rigor, honesty, and accuracy, and supports conclusions justified by evidence and theory, allowing readers to understand the limits of generalisability.
- If science were more cumulative and self-correcting, broad generalisation might be justifiable.
- A COG statement describes known or anticipated limits on a finding, not mediation by unknown factors. It asks in what ways the sample is representative of a broader population.
Abstract
Psychological scientists draw inferences about populations based on samples—of people, situations, and stimuli—from those populations. Yet, few papers identify their target populations, and even fewer justify how or why the tested samples are representative of broader populations. A cumulative science depends on accurately characterizing the generality of findings, but current publishing standards do not require authors to constrain their inferences, leaving readers to assume the broadest possible generalizations. We propose that the discussion section of all primary research articles specify Constraints on Generality (i.e., a “COG” statement) that identify and justify target populations for the reported findings. Explicitly defining the target populations will help other researchers to sample from the same populations when conducting a direct replication, and it could encourage follow-up studies that test the boundary conditions of the original finding. Universal adoption of COG statements would change publishing incentives to favor a more cumulative science.
APA Style Reference
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science, 12(6), 1123-1128. https://doi.org/10.1177/1745691617708630 [ungated]
You may also be interested in
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Most people are not WEIRD (Henrich et al., 2010)
#bropenscience is broken science (Whitaker & Guest, 2020) ⌺
Main Takeaways:
- [Scientists have] been spurred into action by a variety of disappointing stories about irreplicable research – due to both purposeful misconduct and variable guidance for transparent reporting standards – as well as inspired by the pre-existing ideals of ‘open science’.
- It is a very narrow demographic of researchers who have the institutional support to spend time on such projects as well as the fortune to be publicly acknowledged for their hard work.
- However, #bropenscience has also been misunderstood and misrepresented. Not all men are bros, and not all bros are men. A bro will often be condescending, forthright, aggressive, overpowering, and lacking kindness and self-awareness.
- Although bros solicit debate on important issues, they tend to resist descriptions of the complexities, nuances, and multiple perspectives on their argument. Bros find it hard to understand – or accept – that others will have a different lived experience. At its worst, #bropenscience is the same closed system as before.
- Broscience creates new breaks within science such as excluding people from participating in open science generally due to the behaviour of a vocal, powerful and privileged minority. It’s a type of exclusionary, monolithic, inflexible rhetoric that ignores or even builds on structural power imbalances.
- Most researchers don’t fit neatly into many of the broposed solutions, and science is not a monolith. The authors have both dealt with published findings that cannot be reproduced, been driven by frustration at the inefficiency of current research practices and have different work ethics and philosophies. This is a feature, not a bug. A diverse and inclusive definition of open science is necessary to truly reform academic practice.
- At its core, open scholarship reminds researchers why they wanted to conduct research in the first place: to learn and to educate.
- Regardless of individual intentions, groups can easily develop and perpetuate elitist, yet informal social structures, recreating the same biases inherent in society at large. Bro-y culture dominates at the leadership level in science and technology because it always has and there aren’t enough explicit processes to deconstruct these biases.
- To avoid perpetuating ‘bropen’ practices, the authors recommend following three core principles: Understanding: You make the work accessible and clear; Sharing: You make the work easy to adapt, reproduce, and spread; and Participation & inclusion: You build shared ownership and agency with contributors through accountability, equity, and transparency to make the work inviting, relevant, safe, and sustainable for all.
- Inclusive actions that you can take to make science more open to underrepresented minorities include: using a microphone at in person events or providing live transcription and sign language translation for online events so that hard of hearing and autistic colleagues (among others) can engage more effectively.
- Editors and tenured faculty members can and should do the most to improve equity and inclusion in academia.
- What are the actions you can take that will improve scholarship for all? Ultimately, the only way to dismantle structural and systemic biases is to listen to those who experience them.
Quote
“It’s likely infeasible to include all the possible open scholarship elements mentioned above in the readers’ work. Therefore, and to change metaphors, the authors encourage the reader to take a healthy and balanced portion from the open science buffet...Binging from the many different topics that fall under open scholarship will leave you feeling overwhelmed and exhausted...take what you can and what benefits you now, and then come back for more when you have the time and mental space to develop a new skill.” (p.2)
Abstract
Kirstie Whitaker and Olivia Guest ask how open ‘open science’ really is.
APA Style Reference
Whitaker, K., & Guest, O. (2020). #bropenscience is broken science. The Psychologist, 33, 34-37.
You may also be interested in
- Seeking an exit plan (Woolston, 2020)
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2019)
- Against Eminence (Vazire, 2017)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- The Gender Gap: Who Is (and Is Not) Included on Graduate-Level Syllabi in Social/Personality Psychology (Skitka et al., 2020)
- Leveraging a collaborative consortium model of mentee/mentor training to foster career progression of under-represented post-doctoral researchers and promote institutional diversity and inclusion (Risner et al., 2020)
- Prestige drives epistemic inequality in the diffusion of scientific ideas (Morgan et al., 2018)
- On supporting early-career black scholars (Roberson, 2020)
- Open Science Isn’t Always Open to All Scientists (Bahlai et al., 2019)
- The Matthew effect in science funding (Bol et al., 2018)
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
Participant Attentiveness to Consent Forms (Baker & Chartier, 2018)
Main Takeaways:
- Consent forms rarely receive participants’ full attention. To improve attention in this area, it is important to identify and understand what causes an individual to be inattentive.
- It is also important to understand why individuals decide to consent without being informed, as well as their level of competence. The reasons that many participants admittedly do not understand a document but still sign may include the formal and official style of the document, a feeling of time pressure, and an inadequate style of presentation of the materials included on a consent form document.
- It is necessary to be able to identify careless responses that skew results and cause inconsistencies, in order to draw valid and reproducible conclusions.
- The authors predicted that participants would be unlikely to accurately recall the code word prompt; their secondary prediction was that participants would be equally inattentive to the code word in the middle of the form and at the end of the form.
- Method: The authors gave 136 participants a pencil and the sheets necessary to complete the attentiveness task. A set of eight copies of the consent form was created, with the code word placed above a different section of the form in each copy; the consent forms were otherwise identical. Each code word was placed directly above the beginning of a new section.
- Method: When participants arrived, we gave them a consent form to sign. We then told participants to read through the consent form and sign it when they had finished reading. After participants signed the consent form, we read the instructions of the filler task orally as we gave them materials to complete the filler experiment. Following the completion of the filler task, participants were given a third sheet of paper, which we described as a demographic sheet. This sheet included the prompt “what is the code word?” asking for the consent form code word.
- Results: The vast majority of participants did not respond to the prompt asking for a code word; some provided an incorrect guess using words such as pizza, baseball, and password. Code word location and the frequency of correct responses were not significantly associated.
- The results did not support the prediction that correct response rates would differ by the location of the code word; there was no significant difference in attentiveness across groups (the chi-square test used for this comparison is illustrated in the sketch after this list). Correct responses were rare and overall attentiveness to the consent form was low, pointing to a need for ways to address this problem.
- It is assumed that participants are aware of what they have consented to. By giving consent, participants are essentially saying that they understand and accept responsibility for what is expected of them as well as any distress, harm, or otherwise unexpected outcome that may occur. If participants do not know what they are giving consent to, a number of negative outcomes could occur: participants may not perform the procedures of the experiment correctly, leaving their results invalid or difficult to interpret; participants could endanger themselves due to a health complication involved in the experiment that they remain unaware of; participants could endanger others for their lack of understanding of what is expected of them; and other possibilities not specified. When participants give consent, an experimenter should not be expected to provide a repetition of what they have just given consent to.
- Although the current study demonstrated the problem with attentiveness in paper and pencil consent forms, research has shown that online studies may suffer even more in attention levels.
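A minimal sketch of the chi-square test of independence used in this study (the cell counts below are hypothetical splits of the 20 correct and 116 incorrect responses, chosen only for illustration; they are not the study's raw data):

```python
# Minimal sketch (hypothetical counts, not the study's raw data): a chi-square
# test of independence between code-word location and recall success.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: recalled vs. not recalled; columns: early / middle / late placement.
observed = np.array([
    [8, 6, 6],      # hypothetical split of the 20 correct recalls
    [38, 40, 38],   # hypothetical split of the 116 failures to recall
])

chi2, p, dof, expected = chi2_contingency(observed)
n = observed.sum()
phi = np.sqrt(chi2 / n)  # effect size of the kind reported alongside the test
print(f"chi2({dof}, N = {n}) = {chi2:.2f}, p = {p:.2f}, phi = {phi:.2f}")
```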
Abstract
In the present study, we tested to see if participants were attentive to details in the consent form for a psychological experiment before signing it. Our initial hypothesis was that participants might not read attentively, due to perceiving the information to be mundane. Depending on condition, the code word was placed in an early, middle, or late section of the consent form. This experiment allowed us to analyze whether participants read through the consent form, and if they paid more attention to a specific part of the form than others. We asked participants to read through the consent form and sign at the bottom when they were finished. Following their signed consent, we orally gave instructions on how to complete the filler task. At the conclusion of the study, participants were given a prompt to recall the code word. The results of this preregistered study show that, of the 136 participants, only 20 participants correctly recalled the code word. A χ2 test of independence revealed that successfully noticing the code word did not depend on the location on the consent form, χ2(2, N = 136) = 0.67, p = 0.72, φ = 0.07. The results of this study show that students did not differentially respond to different parts of the consent form.
APA Style Reference
Baker, D., & Chartier, C. R. (2018). Participant attentiveness to consent forms. Psi Chi Journal of Psychological Research, 23(2), 142–146. https://doi.org/10.24839/2325-7342.JN23.2.141.
You may also be interested in
Protect students and staff in precarious jobs (Anon, 2020b)
Main Takeaways:
- Coronavirus lockdowns have precipitated a crisis in university funding and academic morale. When lockdowns were announced, universities all over the world closed their doors. They moved classes and some research activity online. But staff were given little or no time to prepare and few resources or training to help them.
- Fewer students are expected to enrol in the coming academic year, instead waiting until institutions open fully. That means that young people will lose a year of their education, and universities will lose out financially. Some governments have plans to boost post-lockdown research, but these will be undermined if universities decide to make job cuts and end up with staff shortages. Universities need support at this crucial time.
- Low- and middle-income countries face extra challenges from the sudden transition to online learning. The main concern is for students who are unable to access digital classrooms. This is especially the case for those who live in areas without fast, reliable and affordable broadband, or where students have no access to laptops, tablets, smartphones and other essential hardware.
- Teachers in many countries have been reporting that students in such situations have been struggling to keep up since lockdowns began. In some cases, students from poorer households in remote regions are having to travel to their nearest city to access the Internet, and to pay commercial Internet cafes to download course materials. There is a way to get the technology to under-served areas, and to avert redundancies. But it requires governments and funding bodies to accept that students and universities should be eligible for the same kinds of temporary emergency funding as other industries are asking for.
- What governments in these countries and other countries need to realize is that the impact of such decisions will fall disproportionately on the poorest students and on more vulnerable members of staff. Job cuts are more likely to affect people whose employment is less secure, such as those on fixed-term contracts. And such staff will, in turn, include people from minority groups, who are often over-represented in contract staff.
- There are smaller actions that institutions and academics can take. Students, and staff on short-term contracts, would welcome more support from academic colleagues in senior positions and from others with permanent positions, for example.
Quote
“These colleagues should make the case to their managers that failing to provide more help to low-income students, or cutting the number of postdoctoral staff and teaching fellows will harm the next generation of researchers and teachers. It will also drastically reduce departments’ capacity to teach and increase the load on those who remain, who are often forced to take on the teaching responsibilities of their former colleagues. Senior colleagues can also request assessments of how any planned redundancies will affect equality and diversity. Cutting back on scholarly capacity is always unwise, but to do so while increasing spending on R&D is wrong-headed. It will slow down economic recovery and jeopardize plans to make research more inclusive. Yet again, the academic precariat finds itself at a disadvantage. Governments, research managers and senior colleagues have a duty to help so that universities can keep these essential and valuable employees.” (p.314).
Abstract
Universities face a severe financial crisis, and some contract staff are hanging by a thread. Senior colleagues need to speak up now.
APA Style Reference
Anon (2020b). Protect students and staff in precarious jobs. Nature, 582, 313-314. https://media.nature.com/original/magazine-assets/d41586-020-01788-6/d41586-020-01788-6.pdf
You may also be interested in
- Seeking an exit plan (Woolston, 2020)
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2019)
- Don’t erase undergrad researchers and technicians from author lists (Fogarty, 2020)
Semiretirement is treating me well—and it made room for a younger scientist (Larson, 2018)
Main Takeaways:
- A university official asked the author to investigate how eliminating mandatory retirement has affected the availability of positions for new assistant professors. The question struck the author as important but not personally relevant, until the author and their colleagues got their results.
- Their initial intuition was that there would be no substantial long-term effect. The authors expected to find that the number of open positions dipped just after the law’s two changes. After all, the number of available tenure-track faculty slots is essentially fixed—at MIT, there are approximately 1000. To create room for a new faculty member, an existing one has to leave. But after a brief dip, the authors reasoned, retirements should return to normal, creating room for new recruits.
- The authors discovered that eliminating the retirement age had reduced the number of new slots for MIT assistant professors by 19%, from 57 to 46 per year. Put simply, without a mandatory retirement age, senior faculty members are much slower to leave. When their paper was published, the author viewed it as just another finding. But eventually, the author reflected seriously on what the results really meant.
- There are simply too many applicants seeking too few positions. The author and other professors older than 65 were blocking the way of many young scholars who seek academic careers. The author started to wonder whether it was time to step aside, but the idea of leaving a job they had been tied to for so long was hard to swallow.
- The dean of the engineering school heard about the paper and asked the author to go over the details with him. It must have resonated, because he briefed the department heads about the need for a flexible after-tenure option that would vacate a position and open the way for a new hire. They soon invented “professor, post-tenure,” tossing out an earlier option with the horrendous name “professor without tenure, retired” (i.e. PWOTR).
- Once “professor, post-tenure” was announced in 2016, the author found it increasingly attractive. It wasn’t the same as “emeritus”—not full retirement. The author could retain their office, teach and supervise students, and be a principal investigator on research grants—all with great flexibility. The author would get to choose which projects they wanted to do and be paid accordingly, up to 49% of their previous salary. They could also access retirement and pension funds.
Quote
“At 74, I in essence removed 9 years from someone else’s career. I should have stepped aside sooner” (p. 3).
Abstract
An article by Professor Richard Larson on how semi-retirement allows older scientists to step back and makes room for young scientists to enter academia.
APA Style Reference
Larson, R. C. (2018). Semiretirement is treating me well—and it made room for a younger scientist. Science. doi:10.1126/science.caredit.aav8986
You may also be interested in
- Seeking an exit plan (Woolston, 2020)
- The mental health of PhD researchers demands urgent attention (Nature, 2019)
- Postdocs in crisis: science risks losing the next generation (Nature, 2020)
The Relation Between Family Socioeconomic Status and Academic Achievement in China: A Meta-analysis (Liu et al., 2020)
Main Takeaways:
- In the current study, the authors conducted a meta-analysis on the relation between SES and academic achievement within the context of Mainland China.
- The meta-analysis aimed to investigate the relation between SES and academic achievement among students in the basic education stage in mainland China, as well as potential moderators of this relation, including year, grade level, type of SES measure, and subject of academic achievement.
- Method: A study was included if it met the following criteria:
Abstract
Academic achievement is one of the most important indicators for assessing students’ performance and educational attainment. Family socioeconomic status (SES) is the main factor influencing academic achievement, but the relation between SES and academic achievement may vary across different sociocultural contexts. China is the most populous developing country with a large number of schooling students in the basic education stage. Chinese schools are unified and managed by the Ministry of Education, but the central and local governments in accordance with their responsibilities share the investment of educational funds. However, the strength of the relation between SES and academic achievement and possible moderators of this relation remain unclear. Therefore, we conducted a meta-analysis on the relation between SES and academic achievement based on 215,649 students from 78 independent samples in the basic education stage from mainland China. The results indicated a moderate relation between SES and academic achievement (r = 0.243) in general. Moderation analyses indicated that the relation between SES and academic achievement gradually decreased in the past several decades; SES has a stronger correlation with language achievement (i.e., Chinese and English) than science/math achievement and general achievement. These findings were discussed from the perspective of governmental policies on education.
APA Style Reference
Liu, J., Peng, P., & Luo, L. (2020). The relation between family socioeconomic status and academic achievement in China: a meta-analysis. Educational Psychology Review, 32(1), 49-76. https://doi.org/10.1007/s10648-019-09494-0
You may also be interested in
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
- Is science only for the rich? (Lee, 2016) ⌺
- Socioeconomic Status and Academic Outcomes in Developing Countries: A Meta-Analysis (Kim et al., 2019)
Funders must mandate and reward open research records (Madsen, 2020) ◈
Main Takeaways:
- Transparent and responsible record-keeping is a pillar of high-quality research. Institutions reward scientists by focusing on quantity, not quality of publications.
- Ralitsa Madsen and Chris Chambers drafted a Universal Funders Policy that mandates and rewards the open deposition of all records linked with a publication.
- The proposal does not apply to all materials generated in the course of a project; such a blanket requirement would be neither beneficial nor pragmatic in the biomedical sciences and could result in a data dump of limited value.
- Standard biomedical publications are based on smaller datasets that are often available only from the relevant author upon reasonable request, a practice that hampers transparency.
- For such a policy to be accepted and work long-term, its implementation route might find inspiration in Plan S developments: an initial phase of consultation with diverse stakeholders, followed by a transition period during which researchers and institutions prepare for the ‘new normal’. Finally, funders will need to enforce the mandate.
Quote
“To change a game, its rules must change. Funders can make open science the norm and improve research culture in the process.” (p.200).
Abstract
Dr Ralitsa Madsen discusses how funders must reward open science norms in order to improve research culture.
APA Style Reference
Madsen, R. (2020). Funders must mandate and reward open research records. Nature, 586, 200. https://doi.org/10.1038/d41586-020-02395-1.
You may also be interested in
The pressure to publish pushes down quality (Sarewitz, 2016)
Main Takeaways:
- The widespread availability of bibliometric data from sources such as Elsevier, Google Scholar and Thomson Reuters ISI makes it easy for scientists to obsess about their productivity and impact, and to compare their numbers with those of other scientists.
- And if more is good, then the trends for science are favourable. The number of publications continues to grow exponentially; it was already approaching two million per year by 2012. More importantly, and contrary to common mythology, most papers do get cited. Indeed, more papers, from more journals, over longer periods of time, are being cited more often.
- Mainstream scientific leaders increasingly accept that large bodies of published research are unreliable. But what seems to have escaped general notice is a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.
- Pervasive quality problems have been exposed for rodent studies of neurological diseases, biomarkers for cancer and other diseases and experimental psychology, amid the publication of thousands of papers.
- So yes, the web makes it much more efficient to identify relevant published studies, but it also makes it that much easier to troll for supporting papers, whether or not they are any good. No wonder citation rates are going up.
Quote
“the enterprise of science is evolving towards something different and as yet only dimly seen. Current trajectories threaten science with drowning in the noise of its own rising productivity...Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.” (p.147).
Abstract
Scientists must publish less, says Daniel Sarewitz, or good research will be swamped by the ever-increasing volume of poor work.
APA Style Reference
Sarewitz, D. (2016). The pressure to publish pushes down quality. Nature, 533(7602), 147-147. doi:10.1038/533147a
You may also be interested in
Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy (Bonney et al., 2009)
Main Takeaways:
- Large-scale projects can engage participants in continental or even global data-gathering networks. Pooled data can be analyzed to illuminate population trends, range changes, and shifts in phenologies. Results can be published in the scientific literature and used to inform population management decisions.
- Developing and implementing public participation projects that yield both scientific and educational outcomes requires careful planning. This article describes the model for building and operating citizen science projects that has evolved at Cornell Laboratory of Ornithology (CLO) over the past two decades. The authors hope that their model will inform the fields of biodiversity monitoring, biological research, and science education while providing a window into the culture of citizen science.
- All of the data contributed to CLO citizen science databases are provided by the public and are available at no charge to anyone, amateur or professional, for any noncommercial use. Maintenance and security are provided by database managers housed within the CLO’s information science department.
- There are 9 steps to develop a citizen science project:
Quote
“Innovative and rigorous statistical analysis methods will be required to handle the massive amounts of monitoring data that will be collected across vast geographic scales. Systems for ensuring high-quality data through interactive technological and educational techniques will have to be developed. Research on the best ways for people to learn through the citizen science process, and on how that process may differ among different cultures and languages, also will be needed. To fulfill these requirements, expertise from a diversity of science, education, engineering, and other fields must be harnessed in a collaborative, integrated research effort.” (p.983).
Abstract
Citizen science enlists the public in collecting large quantities of data across an array of habitats and locations over long spans of time. Citizen science projects have been remarkably successful in advancing scientific knowledge, and contributions from citizen scientists now provide a vast quantity of data about species occurrence and distribution around the world. Most citizen science projects also strive to help participants learn about the organisms they are observing and to experience the process by which scientific investigations are conducted. Developing and implementing public data-collection projects that yield both scientific and educational outcomes requires significant effort. This article describes the model for building and operating citizen science projects that has evolved at the Cornell Lab of Ornithology over the past two decades. We hope that our model will inform the fields of biodiversity monitoring, biological research, and science education while providing a window into the culture of citizen science.
APA Style Reference
Bonney, R., Cooper, C. B., Dickinson, J., Kelling, S., Phillips, T., Rosenberg, K. V., & Shirk, J. (2009). Citizen science: a developing tool for expanding science knowledge and scientific literacy. BioScience, 59(11), 977-984. https://doi.org/10.1525/bio.2009.59.11.9
You may also be interested in
- Next Steps for Citizen Science (Bonney et al., 2014)
- Citizen Science: Can Volunteers Do Real Research? (Cohn, 2008)
- The Increasing Dominance of Teams In Production of Knowledge (Wuchty et al., 2007)
- A Manifesto for Team Science (Forscher et al., 2020)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Next Steps for Citizen Science (Bonney et al., 2014)
Main Takeaways:
- Despite the wealth of information emerging from citizen science projects, the practice is not universally accepted as a valid method of scientific investigation. Scientific papers presenting volunteer-collected data sometimes have trouble getting reviewed and are often placed in outreach sections of journals or education tracks of scientific meetings. At the same time, opportunities to use citizen science to achieve positive outcomes for science and society are going unrealized. The authors offer suggestions for strategic thinking by citizen science practitioners and their scientific peers—and for tactical investment by private funders and government agencies—to help the field reach its full potential.
- During the past two decades, the number of citizen science projects, along with scientific reports and peer-reviewed articles resulting from their data, has expanded tremendously.
- Much of this growth results from integration of the Internet into everyday life, which has substantially increased project visibility, functionality, and accessibility. People who are passionate about a subject can quickly locate a relevant citizen science project, follow its instructions, submit data directly to online databases, and join a community of peers.
- Some people question the practice of citizen science, citing concerns about data quality. With appropriate protocols, training, and oversight, volunteers can collect data of quality equal to those collected by experts. For large projects, where training volunteers and assessing their skills can be challenging, new statistical and high-performance computing tools have addressed data-quality issues (e.g. sampling bias).
- Creating projects to achieve social and scientific objectives requires deliberate design that is attentive to diverse interests, including why and how members of the public would even want to be involved. Investments in infrastructure and partnerships that help to create more local projects with both science and social components could leverage underappreciated knowledge sources, including local and traditional knowledge. Such efforts could also inform the questions and issues pursued through citizen science, leading to new research and a stronger science-society relationship.
- Project developers could also look for opportunities to gather truly important information in ways that are currently going unrealized. For example, citizen science could play a stronger role when natural or human-caused disasters or other unique data-collection opportunities occur.
- Many existing citizen science projects could be enhanced by preparing protocols and volunteer infrastructure to enable scientifically sound data collection during and after recurring disaster situations (e.g., oil spills).
Quote
“Centers for citizen science could create, organize, and synthesize centralized repositories of volunteer-collected data on topics such as water quality, phenology, biodiversity, astronomy, precipitation, and human health. Centers also could help to coordinate questions being asked of citizen science data, methods of answering those questions, and techniques for achieving educational and community-development goals for participants. As such, centers for citizen science would be excellent strategic investments for both private and government foundations.” (p.1437).
Abstract
Strategic investments and coordination are needed for citizen science to reach its full potential.
APA Style Reference
Bonney, R., Shirk, J. L., Phillips, T. B., Wiggins, A., Ballard, H. L., Miller-Rushing, A. J., & Parrish, J. K. (2014). Next steps for citizen science. Science, 343(6178), 1436-1437. DOI: 10.1126/science.1251554
You may also be interested in
- Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy (Bonney et al., 2009)
- Citizen Science: Can Volunteers Do Real Research? (Cohn, 2008)
- The Increasing Dominance of Teams In Production of Knowledge (Wuchty et al., 2007)
- A Manifesto for Team Science (Forscher et al., 2020)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Citizen Science: Can Volunteers Do Real Research? (Cohn, 2008)
Main Takeaways:
- Working with citizen scientists is hardly new. The practice goes back at least to the National Audubon Society’s annual Christmas bird count, which began in 1900. About 60,000 to 80,000 volunteers now participate in that survey. What is new is the number of studies that use citizen scientists, the number of volunteers enlisted in the studies, and the scope of data they are asked to collect.
- But why citizen scientists? Why depend on amateurs who may make mistakes, may not fully understand the context of the study, or may produce data that might be unreliable? Why not hire scientists, graduate students, and field technicians? One obvious reason is money.
- But can citizen scientists learn to use equipment, read results, and collect data that are as accurate, reliable, and usable as those generated by professional researchers? Yes, provided the procedures and how to use them are explained. Nothing being asked of volunteers is so difficult that they cannot do it if they are properly trained.
- Citizen scientists are typically people who care about the wild, feel at home in nature, and have at least some awareness of the scientific process.
- In the end, what have citizen scientists achieved? Has their labor actually helped advance scientific knowledge? Yes.
Quote
“Citizen scientists have also collected data that helped scientists develop guidelines for land managers to preserve habitat. Nevertheless, despite years of practice, the use of citizen science is still an evolving art. “We’re playing it by ear,” says NPS’s Mitchell. “We are designing studies and involving citizen scientists as we go along.” Stevens anticipates that the ATC’s volunteers and other citizen scientists will help provide information that policymakers need to understand ecological changes on the public lands they manage. “The environment belongs to all of us,” Stevens says, adding: “We want to give people a chance to get involved in its preservation in a whole new way.” (p.197).
Abstract
Collaborations between scientists and volunteers have the potential to broaden the scope of research and enhance the ability to collect scientific data. Interested members of the public may contribute valuable information as they learn about wildlife in their local communities.
APA Style Reference
Cohn, J. P. (2008). Citizen science: Can volunteers do real research?. BioScience, 58(3), 192-197. https://doi.org/10.1641/B580303
You may also be interested in
- Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy (Bonney et al., 2009)
- Next Steps for Citizen Science (Bonney et al., 2014)
- The Increasing Dominance of Teams In Production of Knowledge (Wuchty et al., 2007)
- A Manifesto for Team Science (Forscher et al., 2020)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Most people are not WEIRD (Henrich et al., 2010)
Main Takeaways:
- Much research on human behaviour is based on Western, Educated, Industrialized, Rich, and Democratic people (WEIRD).
- They are among the most unusual and psychologically distinct individuals in the world.
- Most research ignores the importance of generalizability and researchers tend to assume cognition and behaviours will be the same across all cultures.
- However, across cultures, there are differences in terms of perceptual illusions, cultural biases and stereotypes.
- There is a need for cross-cultural evidence in order to have a better understanding of cognition and behaviour.
Quote
“Recognizing the full extent of human diversity does not mean giving up on the quest to understand human nature. To the contrary, this recognition illuminates a journey into human nature that is more exciting, more complex, and ultimately more consequential than has previously been suspected” (p.29)
Abstract
To understand human psychology, behavioural scientists must stop doing most of their experiments on Westerners, argue Joseph Henrich, Steven J. Heine and Ara Norenzayan.
APA Style Reference
Henrich, J., Heine, S. & Norenzayan, A. (2010) Most people are not WEIRD. Nature 466, 29. https://doi.org/10.1038/466029a
You may also be interested in
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Constraints on Generality (COG): A Proposed Addition to All Empirical Papers (Simons et al., 2017)
Don’t erase undergrad researchers and technicians from author lists (Fogarty, 2020)
Main Takeaways:
- The author was an intern in a laboratory and was told they would be made an author, but when the work was submitted their contribution was relegated to the acknowledgements section. This was their wake-up call about the need to speak up for themselves regarding authorship, and to speak out against the unfair convention of downplaying the contributions of undergraduates and technicians to scientific research.
- As an undergraduate, the author was excited to be involved in science and to make discoveries, but was not aware that publications are the true currency of science. The author was told they would be first or second author when the work was published, but the specifics were not discussed.
- After leaving the lab, the author stayed in contact with the postdoctoral researcher and the graduate student who took over the project. Eager to be a co-author on their first paper, especially once in graduate school, the author offered to help write the manuscript. When the author emailed to check in, they found out that the paper had been written, submitted and accepted for publication; the author had been excluded from the process and the author list, and was acknowledged only as a technician.
- The author had fulfilled the criteria of being an author according to the guidelines from International Committee of Medical Journal Editors. However, the work was trivialised, as it was done when the author was an undergraduate technician.
- The author argues that they should have initiated explicit conversations about authorship from the beginning with all members of the project, and that they should have written up the results and methods in a format that could be readily included in the paper.
- The community needs to address the fact that undergraduate students and technicians should not be excluded from authorship lists. A researcher’s title does not make their contribution any less significant. Junior researchers are critical to a project’s success and researchers need to make sure that everyone who contributes gets the credit that they deserve.
Abstract
An editorial by Dr Emily Fogarty arguing that technicians and undergraduates should be included as authors and involved in the process of writing a manuscript.
APA Style Reference
Fogarty, E. (2020). Don’t erase undergrad researchers and technicians from author lists. Science. doi:10.1126/science.caredit.abf8865.
You may also be interested in
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Protect students and staff in precarious jobs (Anon, 2020b)
- Contributorship, Not Authorship: Use CRediT to Indicate Who Did What (Holcombe, 2019)
The Increasing Dominance of Teams In Production of Knowledge (Wuchty et al., 2007)
Main Takeaways:
- A shift toward teams also raises new questions of whether teams produce better science. Teams may bring greater collective knowledge and effort, but they are known to experience social network and coordination losses that make them underperform individuals even in highly complex tasks.
- From this viewpoint, a shift to teamwork may be a costly phenomenon or one that promotes low impact science, whereas the highest-impact ideas remain the domain of great minds working alone.
- Method: The authors studied 19.9 million research articles in the Institute for Scientific Information (ISI) Web of Science database and an additional 2.1 million patent records covering research publications in science and engineering since 1955, social sciences since 1956, and arts and humanities since 1975.
- Results: For science and engineering, social sciences, and patents, there has been a substantial shift toward collective research. In the sciences, team size has grown steadily each year and nearly doubled, from 1.9 to 3.5 authors per paper, over 45 years.
- Results: Shifts toward teamwork in science and engineering have been suggested to follow from the increasing scale, complexity, and costs of big science. Surprisingly, then, the authors found an equally strong trend toward teamwork in the social sciences, where these drivers are much less notable.
- Results: The citation advantage of teams has also been increasing over time when teams of fixed size are compared with solo authors. In science and engineering, for example, papers with two authors received 1.30 times more citations than solo-authored papers in the 1950s but 1.74 times more in the 1990s. In general, this pattern prevails for comparisons between teams of any fixed size and solo authors.
- Results: The authors found that removing self-citations produces modest decreases in the relative team impact (RTI) measure in some fields; for example, RTIs fell from 3.10 to 2.87 in medicine and from 2.30 to 2.13 in biology. Removing self-citations can reduce the relative team impact by 5 to 10%, but the relative citation advantage of teams remains essentially intact (an illustrative calculation follows this list).
- Teams now dominate the top of the citation distribution in all four research domains. In the early years, a solo author in science and engineering or the social sciences was more likely than a team to receive no citations, but a solo author was also more likely to garner the highest number of citations.
- A team-authored paper has a higher probability of being extremely highly cited (e.g. a team-authored paper in science and engineering is currently 6.3 times more likely than a solo authored paper to receive at least 1000 citations). Lastly, in arts and humanities and in patents, individuals were never more likely than teams to produce more-influential work.
- First, extraordinarily cited work never appeared to be the domain of solo authors in arts and humanities or in patents. Second, solo authors did produce the papers of singular distinction in science and engineering and the social sciences in the 1950s, but the mantle of extraordinarily cited work had passed to teams by 2000.
- Since the 1950s, the number of researchers has grown as well, which could promote finer divisions of labor and more collaboration. Similarly, steady growth in knowledge may have driven scholars toward more specialization, prompting larger and more diverse teams. However, the authors found that teamwork is growing nearly as fast in fields where the number of researchers has grown relatively slowly.
- Declines in communication costs could make teamwork less costly as well. Shifting authorship norms may have influenced co-authorship trends in fields with extremely large teams (e.g. biomedicine). However, the results hold across diverse fields in which norms for order of authorship, existence of postdoctorates, and prevalence of grant-based research differ substantially.
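The relative team impact figures above can be made concrete with a small worked example. The sketch below is illustrative only: the citation counts are invented, and the definition assumed here (mean citations of team-authored papers divided by mean citations of solo-authored papers, with self-citations optionally excluded) is a plain reading of the measure as described in this summary, not the authors' code.
# Illustrative RTI-style calculation; invented citation counts, assumed definition.
def mean(values):
    return sum(values) / len(values)

# Each paper: (number of authors, total citations, self-citations).
papers = [
    (1, 10, 1), (1, 4, 0), (1, 25, 2),   # solo-authored papers
    (3, 40, 6), (2, 18, 2), (5, 55, 9),  # team-authored papers
]

def relative_team_impact(papers, drop_self_citations=False):
    solo, team = [], []
    for n_authors, citations, self_cites in papers:
        count = citations - self_cites if drop_self_citations else citations
        (solo if n_authors == 1 else team).append(count)
    return mean(team) / mean(solo)

print(f"RTI with self-citations:    {relative_team_impact(papers):.2f}")
print(f"RTI without self-citations: {relative_team_impact(papers, drop_self_citations=True):.2f}")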
Abstract
We have used 19.9 million papers over 5 decades and 2.1 million patents to demonstrate that teams increasingly dominate solo authors in the production of knowledge. Research is increasingly done in teams across nearly all fields. Teams typically produce more frequently cited research than individuals do, and this advantage has been increasing over time. Teams now also produce the exceptionally high impact research, even where that distinction was once the domain of solo authors. These results are detailed for sciences and engineering, social sciences, arts and humanities, and patents, suggesting that the process of knowledge creation has fundamentally changed.
APA Style Reference
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036-1039. DOI: 10.1126/science.1136099
You may also be interested in
- Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy (Bonney et al., 2009)
- Next Steps for Citizen Science (Bonney et al., 2014)
- A Manifesto for Team Science (Forscher et al., 2020)
A Manifesto for Team Science (Forscher et al., 2020)
Main Takeaways:
- The lack of progress is empirically supported by the results of replication studies, whether conducted in the same or in new settings, by cases in which separate teams develop different research strategies and analysis plans for the same research question or dataset, and by cases in which separate teams write code to execute the same analysis. The authors argue that these problems share one cause: insufficient resource investment in the typical psychology study. Team science could help address this cause by efficiently scaling up the resources that can be invested in a given study.
- An individual researcher bears the bulk of the cost of each remedy, but discussions of problems in psychology tend to assume that implementations of these remedies are zero-sum. If implementations for remedies to psychology’s problems are indeed zero-sum, then emphasizing one aspect of rigor necessarily means sacrificing other aspects, as resources are finite.
- The authors suggest that the views that prioritize one aspect of rigor over the others all share an important, mistaken assumption: that the pool of resources available for investment in a given study is, on average, fixed. This assumption holds only because scientists within the current research ecosystem face a collective action problem: as long as scientists are rewarded for publishing more studies, any single scientist who decides to invest more resources in fewer studies will be outcompeted by scientists who invest fewer resources in more studies.
- There are two ways to fix this reward ecosystem: directly change the institutional incentives that prioritize quantity of publications, or devise new institutions (team science institutions) that allow blocs of scientists to increase resource investment in concert.
- Team science institutions solve the collective action problem by coordinating the actions of large groups of scientists. In return for their efforts and resources, the institution gives each individual scientist a publication. Once a group of scientists has signed onto a team science project, the institution serves a coordinating role, pooling the resources of all the individual scientists and focusing them on a common project. This approach is becoming more common in psychology and involves smaller teams developing proposals based on their ideas and submitting them for consideration by the larger consortium. The larger team can even explicitly build in mechanisms to solicit proposals from teams whose perspectives may differ from the scientific mainstream – such as those from non-Western regions.
- However, there are three obstacles to team science: incentivising labour within the collaboration; developing and maintaining infrastructure to coordinate team science activities; and dealing with institutions designed around research conducted by smaller teams. These can be solved by:
Quote
“We believe that the biggest problems in science require many minds. By leveraging the combined talents of many minds, the combined efforts of many labs, and the combined resources of many institutions, we believe that team science will be instrumental in the movement to build more reliable, informative, and rigorous science” (p.12).
Abstract
Progress in psychology has been frustrated by challenges concerning replicability, generalizability, strategy selection, inferential reproducibility, and computational reproducibility. Although often discussed separately, we argue that these five challenges share a common cause: insufficient investment of resources in the typical psychology study. We suggest that the emerging emphasis on team science can help address these challenges by allowing researchers to pool their resources to efficiently and drastically increase the amount of resources available for a single study. However, we also anticipate that team science will create new challenges for the field to manage, such as the potential for team science institutions to monopolize power, become overly conservative, and make mistakes at a grand scale. If researchers can overcome these new challenges, we believe team science has the potential to spur enormous progress in psychology and beyond.
APA Style Reference
Forscher, P. S., Wagenmakers, E., Coles, N. A., Silan, M. A., & IJzerman, H. (2020, May 20). A Manifesto for Team Science. https://doi.org/10.31234/osf.io/2mdxh
You may also be interested in
- Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy (Bonney et al., 2009)
- Next Steps for Citizen Science (Bonney et al., 2014)
- The Increasing Dominance of Teams In Production of Knowledge (Wuchty et al., 2007)
- Citizen Science: Can Volunteers Do Real Research? (Cohn, 2008)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Science faculty’s subtle gender biases favour male students (Moss-Racusin et al., 2012) ⌺
Main Takeaways:
- The present research investigates whether faculty gender bias exists within academic biological and physical sciences, whether it might exert an independent effect on the gender disparity as students progress through the pipeline to careers in science, and finally, whether, given an equally qualified male and female student, science faculty members would show preferential evaluation and treatment of the male student to work in their laboratory. Also, the authors investigated whether faculty members’ perceptions of student competence would help to explain why they would be less likely to hire a female (relative to an identical male) student for a laboratory manager position.
- Hypotheses: Science faculty’s perceptions and treatment of students would reveal a gender bias favoring male students in perceptions of competence and hireability, salary conferral, and willingness to mentor; faculty gender would not influence this gender bias; hiring discrimination against the female student would be mediated (i.e., explained) by faculty perceptions that a female student is less competent than an identical male student; and participants’ preexisting subtle bias against women would moderate (i.e., impact) results, such that subtle bias against women would be negatively related to evaluations of the female student but unrelated to evaluations of the male student.
- Method: 127 faculty participants from biology, chemistry, and physics departments rated the application materials of an undergraduate science student who had applied for a science laboratory manager position and stated an intention to go on to graduate school. Participants believed they were evaluating a real student who would receive their ratings as feedback to help their career development.
- Method: Participants were randomly assigned to one of two student gender conditions: male or female. Using validated scales, participants rated student competence, their own likelihood of hiring the student, selecting an annual starting salary for the student, indicated how much career mentoring they would provide and also had to fill in the Modern Sexism Scale.
- Results: Faculty participants rated the male applicant as significantly more competent and hireable than the (identical) female applicant. These participants also selected a higher starting salary and offered more career mentoring to the male applicant. The gender of the faculty participants did not affect responses, such that female and male faculty were equally likely to exhibit bias against the female student. Mediation analyses indicated that the female student was less likely to be hired because she was viewed as less competent. The authors found that preexisting subtle bias against women played a moderating role, such that subtle bias against women was associated with less support for the female student, but was unrelated to reactions to the male student.
- The female student was seen as less competent and less worthy of being hired than an identical male student, and was offered a smaller starting salary and less career mentoring. This subtle gender bias is important to address because it could translate into large real-world disadvantages in the judgment and treatment of female science students.
- The female student was less likely to be hired than the male student because she was seen as less competent. Faculty participants’ pre-existing subtle bias against women undermined perceptions and treatment of the female, not the male, student, indicating that chronic subtle biases may harm women within academic science (a hypothetical sketch of such mediation and moderation analyses follows this list).
- Female faculty members were just as likely as their male colleagues to favour the male student. Faculty members’ bias was independent of their gender, scientific discipline, age and tenure status, indicating that the bias is not conscious but is unintentionally generated by widespread cultural stereotypes.
- The fact that faculty participants reported liking the female student more than the male student indicates that faculty members are not overtly hostile toward women. Rather, faculty members of both genders are affected by enduring cultural stereotypes about women’s lack of science competence, which translate into biases in student evaluation and mentoring.
- Not only do women encounter biased judgments regarding their competence and hireability, but they also receive less faculty encouragement and smaller financial rewards than identical male counterparts.
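To make the terms “mediated” and “moderated” concrete, here is a minimal, hypothetical sketch using simulated data and ordinary least-squares regressions. The variable names, simulated effect sizes, and the product-of-coefficients shortcut are assumptions for illustration only; this is not the authors’ data or analysis pipeline.
# Hypothetical illustration of mediation and moderation on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 127
male = rng.integers(0, 2, n)                       # 1 = male applicant condition
bias = rng.normal(0, 1, n)                         # e.g. a subtle-bias score
competence = 4 + 0.5 * male + rng.normal(0, 1, n)  # perceived competence
hireability = 2 + 0.6 * competence + 0.1 * male - 0.3 * bias * (1 - male) + rng.normal(0, 1, n)

# Mediation: gender -> perceived competence -> hireability (product of coefficients).
a = sm.OLS(competence, sm.add_constant(male)).fit().params[1]
b = sm.OLS(hireability, sm.add_constant(np.column_stack([male, competence]))).fit().params[2]
print("Indirect (mediated) effect of gender via competence:", round(a * b, 2))

# Moderation: does pre-existing bias interact with applicant gender?
X = sm.add_constant(np.column_stack([male, bias, male * bias]))
print(sm.OLS(hireability, X).fit().params)         # last coefficient = interaction term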
Abstract
Despite efforts to recruit and retain more women, a stark gender disparity persists within academic science. Abundant research has demonstrated gender bias in many demographic groups, but has yet to experimentally investigate whether science faculty exhibit a bias against female students that could contribute to the gender disparity in academic science. In a randomized double-blind study (n = 127), science faculty from research-intensive universities rated the application materials of a student—who was randomly assigned either a male or female name—for a laboratory manager position. Faculty participants rated the male applicant as significantly more competent and hireable than the (identical) female applicant. These participants also selected a higher starting salary and offered more career mentoring to the male applicant. The gender of the faculty participants did not affect responses, such that female and male faculty were equally likely to exhibit bias against the female student. Mediation analyses indicated that the female student was less likely to be hired because she was viewed as less competent. We also assessed faculty participants’ preexisting subtle bias against women using a standard instrument and found that preexisting subtle bias against women played a moderating role, such that subtle bias against women was associated with less support for the female student, but was unrelated to reactions to the male student.
APA Style Reference
Moss-Racusin, C. A., Dovidio, J. F., Brescoll, V. L., Graham, M. J., & Handelsman, J. (2012). Science faculty’s subtle gender biases favor male students. Proceedings of the National Academy of Sciences, 109(41), 16474-16479. https://doi.org/10.1073/pnas.1211286109
You may also be interested in
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2019)
- Against Eminence (Vazire, 2017)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺
- Examining Gender Bias in Student Evaluations of Teaching for Graduate Teaching Assistants (Khazan et al., 2019) ⌺
Contributorship, Not Authorship: Use CRediT to Indicate Who Did What (Holcombe, 2019)
Main Takeaways:
- Science today is highly collaborative, and results usually reflect the work of multiple people. Many of those who do this work are evaluated on their contributions to science when they apply for a job, a promotion, or a grant. One might expect this evaluation to be based simply on one’s contributions to science. In many fields, evaluation is formally based on the list of publications that include the researcher as an author. But there can be a disconnect between those who are authors and those who make contributions. While author typically means “writer”, funders and promotion committee members are interested in various types of substantive contributions—who actually provided the words in the paper is, often, not the paramount concern.
- Many journals and societies specifically enshrine manuscript drafting or revising as a requirement for co-authorship. This is a problem for funders and others interested in both writing related contributions and also contributions that are not writing related.
- More broadly, the concept of authorship itself may no longer be appropriate. With standard practice today, the ambiguity regarding the contribution(s) of a co-author to a paper extends well beyond whether there was a contribution to the writing. Surveys of researchers show strong disagreement both with the most widely used authorship guidelines and also high levels of inconsistency between researchers regarding the categories of contributions that merit authorship.
- To resolve the ambiguity in the meaning of author lists, one would have to ask the authors what the contributions were of each author and what, if any, were the significant contributions of others. Under a contributorship model, authors are required to state this information. And in a world where a number of researchers often contribute one or more of a variety of things to a research project, it seems illogical for researchers not to say something about who did what in a way that is visible and systematically enables credit for those contributions.
- The transparency and removal of ambiguity stemming from a declaration of contributions will have multiple benefits. Universities, laboratories, and institutions make hiring and funding decisions based on what they can determine about a person’s past scientific contributions. These hiring organizations can have very different interests in specific kinds of contributions. When hiring a staff scientist in a large laboratory, or a statistician in a research group, one is looking for different types of contributions than if one is hiring a faculty member expected to run a laboratory. Thus, to best serve the various stakeholders involved in science, what is needed is to know what the significant contributions were to a project and who made them.
- There would be several benefits if a standardised contributorship system were adopted by large numbers of journals, such as:
Quote
“Authorship criteria today in medicine and many other fields are not simple, as they typically involve the combination of a mandatory writing requirement with other criteria, the nature of which (such as being optional or required) vary from journal to journal. The idea of indicating who did what is simpler. There is still the difficult issue of how substantive a contribution needs to be for a researcher to be credited, but this is an issue that any system faces. There seems to be little reason to not move toward a contributorship model; it provides a more inclusive and realistic recognition of their research contributions” (p.9).
Abstract
Participation in the writing or revising of a manuscript is, according to many journal guidelines, necessary to be listed as an author of the resulting article. This is the traditional concept of authorship. But there are good reasons to shift to a contributorship model, under which it is not necessary to contribute to the writing or revision of a manuscript, and all those who make substantial contributions to a project are credited. Many journals and publishers have already taken steps in this direction, and further adoption will have several benefits. This article makes the case for continuing to move down that path. Use of a contributorship model should improve the ability of universities and funders to identify effective individual researchers and improving their ability to identify the right mix of researchers needed to advance modern science. Other benefits should include facilitating the formation of productive collaborations and the creation of important scientific tools and software. The CRediT (Contributor Roles Taxonomy) taxonomy is a machine-readable standard already incorporated into some journal management systems and it allows incremental transition toward contributorship.
APA Style Reference
Holcombe, A. O. (2019). Contributorship, not authorship: Use CRediT to indicate who did what. Publications, 7(3), 48. https://doi.org/10.3390/publications7030048
You may also be interested in
- Boosting research without supporting universities is wrong-headed (Nature, 2020b)
- Protect students and staff in precarious jobs (Anon, 2020b)
- Don’t erase undergrad researchers and technicians from author lists (Fogarty, 2020)
Show the dots in plots (Anon, 2017)
Main Takeaways:
- The type of graph, its dimensions and layout, colour palettes and gradients, the data intervals displayed in the axes, specific data comparisons, and above all, the presence or absence of individual data points, error bars and information on statistical significance, can strongly affect how the graphed dataset is interpreted.
- However, bar graphs are also commonly used to present small samples of continuous data, especially in biomedical fields. There are reasons for this: because of their shape and area, bars are easy to see at a glance; they are therefore effective for comparing data and visualizing trends; and they make it easy to see the relative position of the data along the axes. However, bar graphs can be misleading.
- Moreover, providing only statistical parameters (e.g. mean ± s.d.) can suggest that the data underlying any particular bar are normally distributed and contain no outliers, when this may not be the case.
- Error bars are commonly drawn with the s.e.m., which indicates the precision of the mean, because s.e.m. bars are shorter than error bars representing the s.d., which instead quantifies the variability of the data (the s.e.m. equals the s.d. divided by the square root of the sample size). The authors discourage this practice.
- All these issues can be avoided by displaying every data point. This journal strongly suggests that the individual data points (in addition to error bars and other statistical information) be graphed, in particular for relatively small samples and for bar graphs, and when statistical significance is claimed.
- Data deposition is recommended, as it can provide easy access to colleagues who wish to further analyse or make use of the data, increases reporting transparency, encourages the eventual reproducibility of the findings, ensures data preservation, increases the overall usability of datasets, especially when they are large, and enables convenient citation to the data (with a doi).
- Data presentation should not be an afterthought; the visuals affect how the story is told and perceived. Display items in papers should highlight the relevant data and make their interpretation easy.
- Bar graphs can make comparisons easier to see at a glance, even for continuous variables when categorized. However, the individual data points should still be displayed (see the sketch after this list).
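A minimal matplotlib sketch of the recommendation, using invented data (the group names and values are assumptions for illustration): bars show group means with s.d. error bars, the s.e.m. (s.d. divided by the square root of n) is reported for comparison, and every individual data point is overlaid as jittered dots.
# Toy example: bar of the mean, s.d. error bar, and every individual point shown.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
groups = {"Control": rng.normal(10, 2, 12), "Treated": rng.normal(13, 3, 12)}

fig, ax = plt.subplots()
for i, (label, values) in enumerate(groups.items()):
    mean = values.mean()
    sd = values.std(ddof=1)
    sem = sd / np.sqrt(len(values))   # s.e.m. = s.d. / sqrt(n); always shorter than the s.d.
    print(label, "s.d. =", round(sd, 2), "s.e.m. =", round(sem, 2))
    ax.bar(i, mean, width=0.6, alpha=0.4)
    ax.errorbar(i, mean, yerr=sd, fmt="none", capsize=4, ecolor="black")  # show variability (s.d.)
    jitter = rng.uniform(-0.15, 0.15, len(values))
    ax.scatter(np.full(len(values), i) + jitter, values, s=18, color="black", zorder=3)

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups))
ax.set_ylabel("Measurement (arbitrary units)")
plt.show()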
Quote
“Figure captions should be clear, provide all the necessary statistical information, and guide the reader through the story. In fact, an engaging narrative is structured, showcases the protagonists, and provides relevant context. In bar graphs, the data points are the context. Show them.” (p.1).
Abstract
We encourage our authors to display data points in graphs, and to deposit the data in repositories.
APA Style Reference
Anon (2017). Show the dots in plots. Nature Biomedical Engineering, 1 (0079). https://doi.org/10.1038/s41551-017-0079
You may also be interested in
Bar graphs depicting averages are perceptually misinterpreted: The within-the-bar bias (Newman & Scholl, 2012)
Main Takeaways:
- People who view bar graphs reflexively attend to the bars, and so mistakenly prioritize regions within the bars over equivalent regions outside the bars, even when this is not justified. Thus, when viewers are shown a bar graph that depicts a mean and are then asked to judge the likelihood that a particular value was part of its underlying distribution, they judge points that fall within the bar as more likely than points equidistant from the mean but outside the bar—as if the bar somehow “contained” the relevant data. The authors investigated this bias.
- Method:
Abstract
Perhaps the most common method of depicting data, in both scientific communication and popular media, is the bar graph. Bar graphs often depict measures of central tendency, but they do so asymmetrically: A mean, for example, is depicted not by a point, but by the edge of a bar that originates from a single axis. Here we show that this graphical asymmetry gives rise to a corresponding cognitive asymmetry. When viewers are shown a bar depicting a mean value and are then asked to judge the likelihood of a particular data point being part of its underlying distribution, viewers judge points that fall within the bar as being more likely than points equidistant from the mean, but outside the bar—as if the bar somehow “contained” the relevant data. This “within-the-bar bias” occurred (a) for graphs with and without error bars, (b) for bars that originated from both lower and upper axes, (c) for test points with equally extreme numeric labels, (d) both from memory (when the bar was no longer visible) and in online perception (while the bar was visible during the judgment), (e) both within and between subjects, and (f) in populations including college students, adults from the broader community, and online samples. We posit that this bias may arise due to principles of object perception, and we show how it has downstream implications for decision making.
APA Style Reference
Newman, G. E., & Scholl, B. J. (2012). Bar graphs depicting averages are perceptually misinterpreted: The within-the-bar bias. Psychonomic bulletin & review, 19(4), 601-607. https://doi.org/10.3758/s13423-012-0247-5
You may also be interested in
Use caution when applying behavioural science to policy (IJzerman et al., 2020)
Main Takeaways:
- Researchers in the social and behavioural sciences periodically debate whether their research should be used to address pressing issues in society.
- Confident applications of social and behavioural science findings, then, require first and foremost an assessment of the evidence quality and weighing heterogeneity and the trade-offs and opportunity costs that follow. The scientific community must identify reliable findings that can be applied, have been investigated in the nations for which the application is intended and are derived from investigations using diverse stimuli.
- In the social and behavioural sciences, researchers start by defining the problem(s) in collaboration with the stakeholders most likely to implement the interventions (evidence readiness level 1; ERL1). These concepts can then be further developed in consultation with people in the target settings to gather preliminary information about how settings or context might alter processes (ERL2).
- From there, researchers can conduct systematic reviews and other meta-syntheses to select evidence that could potentially be applied (ERL3). These systematic reviews require a number of bias-detection techniques. Some findings may be reliable, but the onus is on the scientific community to identify which are, which are not, and which generalize. Yet these systematic reviews must still be done with an awareness that currently available statistical techniques do not completely correct for bias and that the resultant findings are at most at ERL3.
- Following this, one can gather information about stimulus and measurement validity and equivalence for application in the target setting (ERL4). Next, researchers and local experts should consider the potential benefits and harms associated with applying potential solutions (ERL5) and generate estimates of effects in a pilot sample (ERL6).
- With preliminary effects in hand, the team can then begin to test for heterogeneity in low-stakes (ERL7) and higher-stakes (ERL8) samples and settings, which would build the confidence necessary to apply the findings in the real target setting or crisis situation (ERL9).
- Even at ERL9, evidence evaluation continues; applications of social and behavioural work, particularly in a crisis, should be iterative, so high-quality evidence is fed back to evaluate the effectiveness of the intervention and to develop critical and flexible improvements.
- Feedback should be grounded in collaboration between basic and applied researchers, as well as with stakeholders, to ensure that the resulting evidence is relevant and actionable. Failure to continually re-evaluate interventions in light of new data could lead to unnecessary harm, where even the best evidence was inadequate to predict the intervention’s real-world effects.
- A benchmarking system such as the ERL requires us to think carefully about the nature of our research that can be applied credibly and guides where research investments should be made.
- Behavioural scientists from different cultures can then discuss how interventions may need to differ in nature across contexts and cultures. The multidisciplinary and multi-stakeholder nature of ERLs requires us to fundamentally rethink how we produce, and communicate confidence in, application-ready findings. The current crisis provides a chance for social and behavioural scientists to question how we understand and communicate the value of our scientific models in terms of ERLs.
- Even if findings are at ERL3 after assessing the evidence quality of primary studies, researchers have little way of knowing how many positive, or unintended negative, consequences an intervention might have when applied to a new situation. The authors are concerned to see social and behavioural scientists making confident claims about the utility of scientific findings for solving COVID-19 problems without regard for whether those findings are based on the kind of scientific methods that would move them up the ERL ladder. The absence of recognised benchmarking systems makes this challenging. While it is tempting to instead qualify uncertainty by using non-committal language about the possible utility of existing findings (for example, ‘may’, ‘could’), this approach is fundamentally flawed because public conversations generally ignore such rhetorical caveats. Scientists should actively communicate uncertainty, particularly when speaking to crises.
Quote
“And now, in 2020, psychologists and other social and behavioural scientists are arguing that our research should inform the response to the new coronavirus disease (henceforth COVID-19)1,2. We are a team mostly consisting of empirical psychologists who conduct research on basic, applied and meta-scientific processes. We believe that scientists should apply their creativity, efforts and talents to serve our society, especially during crises. However, the way that social and behavioural science research is often conducted makes it difficult to know whether our efforts will do more good than harm.” (p.1).
Abstract
Social and behavioural scientists have attempted to speak to the COVID-19 crisis. But is behavioural research on COVID-19 suitable for making policy decisions? We offer a taxonomy that lets our science advance in ‘evidence readiness levels’ to be suitable for policy. We caution practitioners to take extreme care translating our findings to applications.APA Style Reference
IJzerman, H., Lewis, N. A., Przybylski, A. K., Weinstein, N., DeBruine, L., Ritchie, S. J., ... & Anvari, F. (2020). Use caution when applying behavioural science to policy. Nature Human Behaviour, 4(11), 1092-1094. https://doi.org/10.1038/s41562-020-00990-w
You may also be interested in
Why Has the Number of Scientific Retractions Increased? (Steen et al., 2013)
Main Takeaways:
- A substantial fraction of all retractions is due to research misconduct, and there has been an estimated 10-fold increase in retractions for scientific fraud (e.g., data fabrication or falsification) since 1975.
- An explanation for the apparent increase in the rate of fraud is not immediately obvious. If the literature truly does self-correct, then research fraud should ultimately be futile. Yet there is reasonable evidence that scientific misconduct is both common and under-reported.
- Therefore, it is not clear whether the increase in retractions is a result of an increase in the rate of publication of flawed articles or an increase in the rate at which flawed articles are recognized and withdrawn. The goal of this study is to gain a deeper understanding of the increase in retracted scientific publications by analyzing trends in the time interval from publication to retraction.
- Method: The PubMed database of the National Center for Biotechnology Information was used. Information was searched on 3 May 2012, using the limits “retracted publication, English language.” A total of 2,047 articles were identified. Each article was classified according to the cause of retraction, using published retraction notices, proceedings from the Office of Research Integrity (ORI), Retraction Watch (http://retractionwatch.wordpress.com), and other sources (e.g., the New York Times). Retractions were classified as resulting from fraud (e.g., data fabrication or falsification), suspected fraud, scientific error, plagiarism, duplicate publication, other causes (e.g., publisher error, authorship disputes, copyright infringement), or unknown causes.
- Method: An apparent increase in recent retractions might result: (1) if the time to retract has increased in recent years, so that editors are reaching further back in time to retract (e.g., if the introduction of plagiarism-detection software has led to the detection of long-published articles that need to be retracted for plagiarism); (2) if peer scrutiny has increased, so that flawed work is detected more quickly; or (3) if there are reduced barriers to retraction, such that retraction occurs more swiftly (or for different reasons) now than in the past. The time required to retract an article was calculated as the number of months from when a hard-copy version of the article was published in a journal (as opposed to an online electronic version) to when the retraction notice was published (see the sketch after this list). The authors also sorted first authors by name to determine how many retractions were associated with each first author, and compared first authors with a single retraction to first authors with multiple retractions.
- Results: Among 714 retracted articles published in or before 2002, retraction required 49.82 months; among 1,333 retracted articles published after 2002, retraction required 23.82 months. This suggests that journals are retracting papers more quickly than in the past, although recent articles requiring retraction may not have been recognized yet. To test the hypothesis that time-to-retraction is shorter for articles that receive careful scrutiny, time-to-retraction was correlated with journal impact factor (IF). Time-to-retraction was significantly shorter for high-IF journals, but only ~1% of the variance in time-to-retraction was explained by increased scrutiny. The first article retracted for plagiarism was published in 1979 and the first for duplicate publication in 1990, showing that articles are now retracted for reasons not cited in the past. The proportional impact of authors with multiple retractions was greater in 1972–1992 than in the current era. From 1972–1992, 46.0% of retracted papers were written by authors with a single retraction; from 1993 to 2012, 63.1% of retracted papers were written by single-retraction authors.
- The rate of publication has increased, with a concomitant increase in the rate of retraction. Editors are retracting articles significantly faster now than in the past. The reasons for retraction have expanded to include plagiarism and duplicate publication.
- Journals are reaching further back in time to retract flawed work. There has been an increase in the number and proportion of retractions by authors with a single retraction. Discovery of fraud by an author prompts reevaluation of an author’s entire body of work.
- Greater scrutiny of high-profile publications has had a modest impact on retractions. The recent spike in retractions thus appears to be a consequence of changes both in institutional policy and in the behavior of individual authors.
- Similarly, the first articles retracted for error or plagiarism were published in 1979, and the first article retracted for duplicate publication was published in 1990. Retraction is more widely recognized as a remedy for a flawed publication in the modern era, and the reasons for retraction have expanded over time.
- Recognition of serial misconduct has increased in recent years. Although retractions by authors with only one retraction are more common and proportionally more important than in the past, authors with multiple retractions have had a grossly disproportionate impact on retractions from the literature.
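A minimal sketch of the core calculation described above, assuming a hypothetical table of retracted articles (the file and column names are illustrative, not the authors' actual data or code):

```python
import pandas as pd

# Hypothetical table of retracted articles with publication and retraction dates
df = pd.read_csv("retractions.csv", parse_dates=["published", "retracted"])

# Time-to-retraction in months: hard-copy publication to retraction notice
df["months_to_retraction"] = (df["retracted"] - df["published"]).dt.days / 30.44

# Compare articles published in or before 2002 with those published after
early = df.loc[df["published"].dt.year <= 2002, "months_to_retraction"]
late = df.loc[df["published"].dt.year > 2002, "months_to_retraction"]
print(f"<= 2002: {early.mean():.1f} months; after 2002: {late.mean():.1f} months")

# Does greater scrutiny (journal impact factor) relate to faster retraction?
print("corr(IF, months to retraction):",
      round(df["impact_factor"].corr(df["months_to_retraction"]), 3))
```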
Quote
“Data fabrication and falsification are not new phenomena in science. Gregor Mendel, the father of genetics, may have modified or selectively used data to support his conclusions [40] and statistical analysis suggests that Mendel’s “data… [are] biased strongly in the direction of agreement with expectation…. This bias seems to pervade the whole data [set]” [26]. However, there now appear to be lower barriers to retraction as a remedy to correct the scientific literature. Our results (Fig. 5) suggest that the overall rate of retraction may decrease in the future as editors continue to process a glut of articles requiring retraction” (p.8).
Abstract
Background: The number of retracted scientific publications has risen sharply, but it is unclear whether this reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn. Methods and Findings: We examined the interval between publication and retraction for 2,047 retracted articles indexed in PubMed. Time-to-retraction (from publication of article to publication of retraction) averaged 32.91 months. Among 714 retracted articles published in or before 2002, retraction required 49.82 months; among 1,333 retracted articles published after 2002, retraction required 23.82 months (p < 0.0001). This suggests that journals are retracting papers more quickly than in the past, although recent articles requiring retraction may not have been recognized yet. To test the hypothesis that time-to-retraction is shorter for articles that receive careful scrutiny, time-to-retraction was correlated with journal impact factor (IF). Time-to-retraction was significantly shorter for high-IF journals, but only ~1% of the variance in time-to-retraction was explained by increased scrutiny. The first article retracted for plagiarism was published in 1979 and the first for duplicate publication in 1990, showing that articles are now retracted for reasons not cited in the past. The proportional impact of authors with multiple retractions was greater in 1972–1992 than in the current era (p < 0.001). From 1972–1992, 46.0% of retracted papers were written by authors with a single retraction; from 1993 to 2012, 63.1% of retracted papers were written by single-retraction authors (p < 0.001). Conclusions: The increase in retracted articles appears to reflect changes in the behavior of both authors and institutions. Lower barriers to publication of flawed articles are seen in the increase in number and proportion of retractions by authors with a single retraction. Lower barriers to retraction are apparent in an increase in retraction for “new” offenses such as plagiarism and a decrease in the time-to-retraction of flawed work.APA Style Reference
Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why has the number of scientific retractions increased?. PloS one, 8(7), e68397. https://doi.org/10.1371/journal.pone.0068397
You may also be interested in
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
- Stop ignoring misconduct (Kornfeld & Titus, 2016)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Check for publication integrity before misconduct (Grey et al., 2020)
The association between early career informal mentorship in academic collaborations and junior author performance (AlShebli et al., 2020)⌺
Main Takeaways:
- This paper has been retracted: https://retractionwatch.com/2020/12/21/nature-communications-retracts-much-criticized-paper-on-mentorship/
- By mentoring novices, senior members pass on the organizational culture, best practices, and the inner workings of a profession. In this way, the mentor–protégé relationship provides the social glue that links generations within a field.
- The authors study mentorship in scientific collaboration, where a junior scientist is supported by potentially multiple senior collaborators, without them necessarily having formal supervisory roles. The authors also identify 3 million mentor–protégé pairs and survey a random sample, verifying that their relationship involved some form of mentorship.
- Method: This dataset includes records of scientific publications specifying the date of the publication, the authors’ names and affiliations, and the publication venue. It also contains a citation network in which every node represents a paper and every directed edge represents a citation. While the number of citations of any given paper is not provided explicitly, it can be calculated from the citation network for any given year (see the sketch after this list). Additionally, every paper is positioned in a field-of-study hierarchy, the highest level of which comprises 19 scientific disciplines.
- The authors derive two key measures, the discipline of scientists and their impact, as well as additional measures such as the scientists’ gender. Whenever a junior scientist (academic age ≤ 7) publishes a paper with a senior scientist (academic age > 7), the former is defined as a protégé and the latter as a mentor. The authors analyze every mentor–protégé dyad that satisfies all of the following conditions: (i) the protégé has at least one publication during their senior years without a mentor; (ii) the affiliation of the protégé is in the US throughout their mentorship years; (iii) the main discipline of the mentor is the same as that of the protégé; (iv) the mentor and the protégé share an affiliation on at least one publication; (v) during the mentorship period, the mentor and the protégé worked together on a paper whose number of authors is 20 or fewer; and (vi) the protégé does not have a gap of 5 years or more in their publication history.
- Results: The authors find that mentorship quality predicts the scientific impact of the papers written by protégés post mentorship without their mentors. The authors also observed that increasing the proportion of female mentors is associated not only with a reduction in the post-mentorship impact of female protégés, but also with a reduction in the gain of female mentors.
- The authors found that both the “big-shot” experience (the mentors’ scientific impact) and the “hub” experience (the mentors’ number of collaborators) have an independent association with the protégé’s impact post mentorship without their mentors. Interestingly, the big-shot experience seems to matter more than the hub experience, implying that the scientific impact of mentors matters more than the number of their collaborators. The association between the big-shot experience and the post-mentorship outcome persists regardless of the discipline, the affiliation rank, the number of mentors, the average age of the mentors, the protégé’s gender, and the protégé’s first year of publication.
- The present study suggests that female protégés who remain in academia reap more benefits when mentored by males rather than by equally impactful females. The specific drivers underlying this empirical fact could be multifold, such as female mentors serving on more committees, thereby reducing the time they are able to invest in their protégés, or women taking on less-recognized topics that their protégés emulate.
- Additionally, findings also suggest that mentors benefit more when working with male protégés rather than working with comparable female protégés, especially if the mentor is female. These conclusions are all deduced from careful comparisons between protégés who published their first mentored paper in the same discipline, in the same cohort, and at the very same institution.
- One potential explanation could be that, historically, male scientists had enjoyed more privileges and access to resources than their female counterparts, and thus were able to provide more support to their protégés. Alternatively, these findings may be attributed to sorting mechanisms within programs based on the quality of protégés and the gender of mentors.
- The gender-related findings suggest that current diversity policies promoting female–female mentorships, as well-intended as they may be, could hinder the careers of women who remain in academia in unexpected ways. Female scientists, in fact, may benefit from opposite-gender mentorships in terms of their publication potential and impact throughout their post-mentorship careers.
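As a rough illustration of the citation-count derivation mentioned in the method (hypothetical paper IDs and years, not the authors' pipeline), citations accumulated up to a given year can be counted directly from the edge list of the citation network:

```python
from collections import defaultdict

# Hypothetical citation network: (citing paper, cited paper) edges plus pub years
edges = [("p3", "p1"), ("p4", "p1"), ("p4", "p2"), ("p5", "p2")]
year = {"p1": 2010, "p2": 2012, "p3": 2013, "p4": 2015, "p5": 2018}

def citations_by(cutoff_year):
    """Count citations each paper has received from papers published by cutoff_year."""
    counts = defaultdict(int)
    for citing, cited in edges:
        if year[citing] <= cutoff_year:
            counts[cited] += 1
    return dict(counts)

print(citations_by(2015))   # {'p1': 2, 'p2': 1}
```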
Abstract
We study mentorship in scientific collaborations, where a junior scientist is supported by potentially multiple senior collaborators, without them necessarily having formal supervisory roles. We identify 3 million mentor–protégé pairs and survey a random sample, verifying that their relationship involved some form of mentorship. We find that mentorship quality predicts the scientific impact of the papers written by protégés post mentorship without their mentors. We also find that increasing the proportion of female mentors is associated not only with a reduction in post-mentorship impact of female protégés, but also a reduction in the gain of female mentors. While current diversity policies encourage same-gender mentorships to retain women in academia, our findings raise the possibility that opposite-gender mentorship may actually increase the impact of women who pursue a scientific career. These findings add a new perspective to the policy debate on how to best elevate the status of women in science.APA Style Reference
AlShebli, B., Makovi, K., & Rahwan, T. (2020). The association between early career informal mentorship in academic collaborations and junior author performance. Nature communications, 11(1), 1-8. https://doi.org/10.1038/s41467-020-19723-8
You may also be interested in
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2019)
- Against Eminence (Vazire, 2017)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺
- Science faculty’s subtle gender biases favour male students (Moss-Racusin et al., 2012) ⌺
- Examining Gender Bias in Student Evaluations of Teaching for Graduate Teaching Assistants (Khazan et al., 2019) ⌺
Open Data in Qualitative Research (Chauvette et al., 2019)
Main Takeaways:
- This article argues that as a result of epistemological, methodological, legal and ethical issues, not all qualitative data is appropriate for open access.
- Open data allows researchers to test or refute new theories by validating research findings.
- Although data is becoming more available, we need to consider hotly debated issues concerning open data and that not all data is created equally, especially data from qualitative research.
- Qualitative data lose much of their value when decontextualised and therefore require contextualisation; secondary analyses typically occur in teams or between collaborators where insider knowledge is shared.
- Qualitative research designs do not lend themselves to secondary analysis: researchers become part of the research and shape the data, and their preconceptions cannot simply be removed from the analyses.
- Personal knowledge is important for phenomenological research.
- Much of what matters in qualitative data is not captured in transcripts, participants may themselves become active contributors to the research process, and field notes are written by and for the researchers.
- Blanket consent forms have been recommended by some researchers in order to keep the participants’ data indefinitely and to make it potentially reusable by anyone.
- However, confidentiality and anonymity become an issue for participants with open data, especially with small sample sizes.
- In addition, there are other issues pertaining to open data such as sensitive issues, nature of questions and disclosure of information that may be harmful to the individual and researcher.
Quote
“Requirements for data access must consider the uniqueness and context of the data in each qualitative study. Consideration should be given to policies that grant the original research team adequate opportunities for involvement in publication of secondary analyses, perhaps with the rights to authorship to future publications if circumstances warrant. Alternatively, opportunities to comment on the new analysis and interpretation, considering the investigators’ understanding of the unique context of the study, would provide some additional accountability” (p.4).
Abstract
There is a growing movement for research data to be accessed, used, and shared by multiple stakeholders for various purposes. The changing technological landscape makes it possible to digitally store data, creating opportunity to both share and reuse data anywhere in the world for later use. This movement is growing rapidly and becoming widely accepted as publicly funded agencies are mandating that researchers open their research data for sharing and reuse. While there are numerous advantages to use of open data, such as facilitating accountability and transparency, not all data are created equally. Accordingly, reusing data in qualitative research present some epistemological, methodological, legal, and ethical issues that must be addressed in the movement toward open data. We examine some of these challenges and make a case that some qualitative research data should not be reused in secondary analysis.APA Style Reference
Chauvette, A., Schick-Makaroff, K., & Molzahn, A. E. (2019). Open data in qualitative research. International Journal of Qualitative Methods, 18, 1609406918823863. https://doi.org/10.1177/1609406918823863
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
Reply to Nuijten et al.: Reanalyses actually confirm that US studies overestimate effects in softer research (Fanelli & Ioannidis, 2014)
Main Takeaways:
- Nuijten et al. synthesized unscaled meta-regression coefficients. The lack of scaling is problematic because the topics considered in the meta-analyses are very different and the distributions of unscaled effects have several outliers. Nuijten et al. back their claims of no “US effect” by plotting datapoints in histograms, but this violates the very essence of meta-analysis, which is to weight datapoints by an inverse function of their standard error. Moreover, in the authors’ original study the US effect was observed only among behavioral studies, so graphs should be partitioned by method.
- The authors re-ran the same code as Nuijten et al. and obtained the same results as in their original article: the US effect was observed in behavioural studies.
- Nuijten et al.’s reanalyses therefore actually confirm the original conclusions. Nevertheless, additional studies should be encouraged to probe for the potential presence and magnitude of US effects in other disciplines and larger datasets.
Abstract
A reply by Professor Daniele Fanelli and Professor John Ioannidis to Nuijten and colleagues; the authors observe that the findings from the original article remain true even with a different analytical approach.APA Style Reference
Fanelli, D., & Ioannidis, J. P. (2014). Reply to Nuijten et al.: Reanalyses actually confirm that US studies overestimate effects in softer research. Proceedings of the National Academy of Sciences, 111(7), E714-E715. https://doi.org/10.1073/pnas.1322565111
You may also be interested in
Is there a Reproducibility Crisis? (Baker, 2016)
Main Takeaways:
- More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature’s survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.
- The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.
- But sorting discoveries from false leads can be discomfiting. Although the vast majority of researchers in our survey had failed to reproduce an experiment, less than 20% of respondents said that they had ever been contacted by another researcher unable to reproduce their work.
- When work does not reproduce, researchers often assume there is a perfectly valid (and probably boring) reason. What’s more, incentives to publish positive replications are low and journals can be reluctant to publish negative findings. In fact, several respondents who had published a failed replication said that editors and reviewers demanded that they play down comparisons with the original study.
- Nevertheless, 24% said that they had been able to publish a successful replication and 13% had published a failed replication. Acceptance was more common than persistent rejection: only 12% reported being unable to publish successful attempts to reproduce others’ work; 10% reported being unable to publish unsuccessful attempts.
- One-third of respondents said that their labs had taken concrete steps to improve reproducibility within the past five years. Rates ranged from a high of 41% in medicine to a low of 24% in physics and engineering. Free-text responses suggested that redoing the work or asking someone else within a lab to repeat the work is the most common practice. Also common are efforts to beef up the documentation and standardization of experimental methods.
- One of the best-publicized approaches to boosting reproducibility is pre-registration, where scientists submit hypotheses and plans for data analysis to a third party before performing experiments, to prevent cherry-picking statistically significant results later.
- Respondents were asked to rate 11 different approaches to improving reproducibility in science, and all got ringing endorsements. Nearly 90% — more than 1,000 people — ticked “more robust experimental design”, “better statistics” and “better mentorship”. Those ranked higher than the option of providing incentives (such as funding or credit towards tenure) for reproducibility-enhancing practices. But even the lowest-ranked item — journal checklists — won a whopping 69% endorsement.
- About 80% of respondents thought that funders and publishers should do more to improve reproducibility.
Quote
““It’s healthy that people are aware of the issues and open to a range of straightforward ways to improve them,” says Munafo. And given that these ideas are being widely discussed, even in mainstream media, tackling the initiative now may be crucial. “If we don’t act on this, then the moment will pass, and people will get tired of being told that they need to do something.” (p.454).
Abstract
A Nature survey lifts the lid on how researchers view the ‘crisis’ rocking science and what they think will help.APA Style Reference
Baker, M. (2016). Is there a reproducibility crisis? Nature, 533, 452-454. https://doi.org/10.1038/533452a
You may also be interested in
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (Wagge et al., 2019)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
- Research Culture and Reproducibility (Munafo et al., 2020)
Behaviour and the standardization fallacy (Wurbel, 2000)
Main Takeaways:
- Because specific strains of mice, husbandry practice and test protocol may influence behavioural effects of mutations in unpredictable ways, scientists strive for consensus on rigorous standards to maximize reproducibility of results across laboratories.
- Increasing reproducibility of results through standardization, however, accentuates yet obscures the very problem it is meant to solve: the reporting of artefacts that are idiosyncratic to particular circumstances.
- Standardization serves to reduce individual differences within study populations (within-experiment variation) in order to facilitate detection of treatment effects, and to reduce differences between studies (between-experiment variation) in order to maximize the reproducibility of results.
- External validity stands for “how applicable your results are to other situations (environmental contexts), populations or species”. External validity is an inherent feature of a result and will not be affected by standardization. However, standardization increases the risk of detecting effects with low external validity (or of missing effects with high external validity). In contrast, reproducibility can be increased simply by equating situations more carefully and, hence, tells us nothing about external validity. A result that is highly reproducible under highly standardized conditions may therefore poorly generalize to other conditions, whereas high external validity necessarily goes together with high reproducibility, even when conditions are poorly equated between replicate studies.
- It is important to note that using a single standardized genetic or environmental background or test situation for the characterization of mutants makes it impossible to distinguish artefacts from informative effects. Systematic variation of situations is the only means to determine the nature, and demonstrate external validity, of genetic effects on behaviour.
Abstract
An editorial about the standardisation fallacy by Dr Hanno Wurbel.APA Style Reference
Würbel, H. (2000). Behaviour and the standardization fallacy. Nature genetics, 26(3), 263-263. https://doi.org/10.1038/81541
You may also be interested in
Research Culture and Reproducibility (Munafo et al., 2020)
Main Takeaways:
- There is ongoing debate regarding the extent to which research claims are robust and credible.
- Modern research-intensive universities present a paradox. On the one hand, they are dynamic, vibrant institutions where researchers use cutting-edge methods to advance knowledge. On the other, their traditions, structures, and ways of working remain rooted in the 19th century model of the independent scientist. A growing realization of this, and the impact it might have on the performance of research intensive institutions, has led to growing interest in examining and understanding research culture.
- The vast majority of scientists choose their career because they are passionate about their subject and are excited by the possibility of advancing human knowledge. However, this passion can serve as a double-edged sword. When researchers are personally invested in their own research, their ability to objectively analyze data may be negatively affected. They may see patterns in noise, suffer from confirmation bias, and so on. The authors have argued that open research practices – protocol preregistration, data and material sharing, the use of preprints and so on – can protect against these kinds of cognitive biases. Promoting transparency in methods and data sharing should encourage greater self- and peer-appraisal of research methods.
- Although the conventional journal article format, with restrictions on word count and display items may not encourage this, exciting innovations are emerging that offer new approaches to scientific communication – preprint servers, post-publication peer review (e.g., F1000), the 'Registered Reports' article format, and data repositories. Given these innovations, there is really no reason to provide only a partial account of one’s research.
- Open research also highlights the extent to which our current scientific culture relies heavily on trust. This may have been appropriate in the 19th century era of the independent scientist (although even that is debatable), but it does not provide a strong basis for robust science in the highly charged and competitive environment of modern science. At present, it is difficult for research consumers to know whether what is reported in an article is a complete and honest account of what was actually done and found.
- This desire for narrative is reflected in something that many early-career researchers are told – that their data needs to 'tell a story' (i.e. scientists should write in a clear and compelling way). However, the focus on narrative has come to dominate to such an extent that perhaps the story matters more than the truth. Scientists are rarely incentivized by the system for being right – they are rewarded for papers, grants, and so on, but not (directly) for getting the right answer – and their success in writing papers and winning grants often reflects their storytelling rather than their science.
- Is this the fault of the journals? There is a place for high-risk, high-return findings – those which may well be wrong but which, if right, would turn out to be transformative (which essentially is what groundbreaking research is). It is our institutions – their hiring and promotion practices – and, to an extent, the community of scientists itself that fetishize publication in specific journals. By disproportionately lauding and rewarding high-risk, high-return activity, we risk incentivizing science in a manner similar to the way in which the banking system was incentivized before 2008 – a focus on high-return investment vehicles that looked reliable and robust but were in fact built on sand. And that did not end well.
Quote
“Fundamentally, the authors need to better align our research culture with the demands of 21st century research. The authors need to move away from a model that relies on trust in individual researchers towards one where the system is inherently trustworthy. This will require a focus on realigning incentives such that what is good for scientists’ careers is good for scientists, as well as on recognizing that excellence in research is not generated by individuals but by teams, departments, institutions, and international collaborations. These teams require a diverse range of skills, each of which is crucial to the success of the wider effort.” (p.92)
Abstract
There is ongoing debate regarding the robustness and credibility of published scientific research. We argue that these issues stem from two broad causal mechanisms: the cognitive biases of researchers and the incentive structures within which researchers operate. The UK Reproducibility Network (UKRN) is working with researchers, institutions, funders, publishers, and other stakeholders to address these issues.APA Style Reference
Munafò, M. R., Chambers, C. D., Collins, A. M., Fortunato, L., & Macleod, M. R. (2020). Research culture and reproducibility. Trends in Cognitive Sciences, 24(2), 91-93. https://doi.org/10.1016/j.tics.2019.12.002
You may also be interested in
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (Wagge et al., 2019)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
- Is there a Reproducibility Crisis? (Baker, 2016)
Measurement error and the replication crisis (Loken & Gelman, 2017)
Main Takeaways:
- Measurement error adds noise to predictions, increases uncertainty in parameter estimates, and makes it more difficult to discover new phenomena or to distinguish among competing theories. A common view is that any study finding an effect under noisy conditions provides evidence that the underlying effect is particularly strong and robust. Yet, statistical significance conveys very little information when measurements are noisy.
- In noisy research settings, poor measurement can contribute to exaggerated estimates of effect size.
- Should we assume that if statistical significance is achieved in the presence of measurement error, the associated effects would have been stronger without noise? The authors caution against the fallacy of assuming that that which does not kill statistical significance makes it stronger.
- Measurement error and other sources of uncontrolled variation in scientific research therefore add noise.
- It is understandable, then, that many researchers have the intuition that if they manage to achieve statistical significance under noisy conditions, the observed effect would have been even larger in the absence of noise.
- In settings with uncontrolled researcher degrees of freedom, the attainment of statistical significance in the presence of noise is not an impressive feat.
- In the authors’ simulations, for the largest samples the observed effect is always smaller than the original (noise-free) effect; but for smaller N, a fraction of the statistically significant observed effects exceeds the original (see the sketch after this list).
- The authors are concerned researchers are sometimes tempted to use the “iron law” reasoning to defend or justify surprisingly large statistically significant effects from small studies. If it really were true that effect sizes were always attenuated by measurement error, then it would be all the more impressive to have achieved significance.
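A minimal sketch in the spirit of this argument (my own illustration with assumed numbers, not the authors' simulation): add measurement error to both variables, keep only estimates that reach p < .05, and see how often the surviving estimates exceed the error-free correlation.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
rho = 0.15        # assumed true correlation between the error-free measures
noise_sd = 1.0    # assumed SD of measurement error added to each variable
n_sims = 2000

def significant_estimates(n):
    """Correlation estimates that reach p < .05 when both measures are noisy."""
    kept = []
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
        x_obs = x + rng.normal(scale=noise_sd, size=n)   # noisy measurements
        y_obs = y + rng.normal(scale=noise_sd, size=n)
        r, p = pearsonr(x_obs, y_obs)
        if p < 0.05:
            kept.append(r)
    return np.array(kept)

for n in (20, 50, 200, 3000):
    est = significant_estimates(n)
    if est.size:
        share = 100 * np.mean(np.abs(est) > rho)
        print(f"N = {n:4d}: {share:5.1f}% of significant estimates exceed "
              f"the error-free correlation of {rho}")
```

With small N, only implausibly large estimates can clear the significance threshold, so the surviving estimates routinely exceed the error-free effect; with very large N, the attenuated (and smaller) effect dominates.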
Quote
“A key point for practitioners is that surprising results from small studies should not be defended by saying that they would have been even better with improved measurement. Furthermore, the signal-to-noise ratio cannot in general be estimated merely from internal evidence. It is a common mistake to take a t-ratio as a measure of strength of evidence and conclude that just because an estimate is statistically significant, the signal-to-noise level is high. It is also a mistake to assume that the observed effect size would have been even larger if not for the burden of measurement error...measurement error (or other uncontrolled variation) should not be invoked automatically to suggest that effects are even larger”. (p.585).
Abstract
The assumption that measurement error always reduces effect sizes is false.APA Style Reference
Loken, E., & Gelman, A. (2017). Measurement error and the replication crisis. Science, 355(6325), 584-585.
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
- Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions (Higginson & Munafo, 2016)
- Small sample size is not the real problem (Bacchetti, 2013)
- Bite-Size Science and Its Undesired Side Effects (Bertamini & Munafo, 2012)
- Confidence and precision increase with high statistical power (Button et al., 2013)
Markets for replication (Brandon & List, 2015)
Main Takeaways:
- Dreber et al. take the innovative approach of considering whether prediction markets can play a crucial role; their study focuses on the replicability of recent publications in top psychology journals.
- Brandon and List argue that the crux of just about every empirical study is the P value. Researchers pose a null hypothesis meant to capture the status quo line of thinking. Data are then analyzed and, if the P value is below .05, the researcher rejects the null hypothesis in favour of the alternative. However, the mechanics of the inference problem call this simple approach into question: inference relies not only on reported P values, but also on priors and the power of the test.
- If the scientific community wants to identify true findings, it needs replications. Even when only a small false-positive rate is allowed (a P value of 0.01), roughly three successful replications are needed before one can be very confident that the observed relationship is a true relationship (see the sketch after this list). Furthermore, as the scientific community allows a larger false-positive rate, more replications are necessary.
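A minimal sketch of the updating logic behind this claim, using illustrative values for the prior, false-positive rate and power (my own numbers, not the authors'):

```python
# Probability that a finding is real after successive significant results,
# under assumed (purely illustrative) values for the prior, alpha and power.
prior = 0.01   # assumed share of tested hypotheses that are actually true
alpha = 0.01   # per-study false-positive rate (P value threshold)
power = 0.80   # per-study probability of detecting a true effect

p_true = prior
for k in range(1, 4):
    # Bayes' rule after observing one more statistically significant result
    p_true = (power * p_true) / (power * p_true + alpha * (1 - p_true))
    print(f"after {k} significant result(s): P(relationship is real) = {p_true:.3f}")
```

With these numbers, a single significant result leaves the finding more likely false than true, and it takes about three consistent results before the probability approaches 1; a larger false-positive rate would require more.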
Quote
“Although these ideas can work on the margin, unless there is a sweeping change in academic culture, the returns to publishing original work will always dominate work on replications. Prediction markets, such as those used in Dreber et al. (5), offer a different type of incentive for replications: financial returns. Imagine a market wherein academics can trade on the outcome of replications and a small cut on transactions funds the work of actually conducting the replications. Such a market may suffer the liquidity problems that have doomed other prediction markets (e.g., Intrade), but in light of the ideas currently on the table, this one is worth strong consideration.” (p.15268).
Abstract
A response by Professors Alec Brandon and John A. List who reply to a paper by Professor Anna Dreber and colleagues.APA Style Reference
Brandon, A., & List, J. A. (2015). Markets for replication. Proceedings of the National Academy of Sciences, 112(50), 15267-15268. https://doi.org/10.1073/pnas.1521417112
You may also be interested in
Ten common statistical mistakes to watch out for when writing or reviewing a manuscript (Makin & de Xivry, 2019)
Main Takeaways:
- In the authors’ view, the most appropriate checkpoint to prevent erroneous results from being published is the peer-review process at journals, or the online discussions that can follow the publication of preprints. The primary purpose of this commentary is to provide reviewers with a tool to help identify and manage these common issues.
- To promote further discussion of these issues, and to consolidate advice on how to best solve them, the authors encourage readers to offer alternative solutions to ours by annotating the online version of this article (by clicking on the ’annotations’ icon). This will allow other readers to benefit from a diversity of ideas and perspectives.
- The authors hope that greater awareness of these common mistakes will help make authors and reviewers more vigilant in the future so that the mistakes become less common. The 10 mistakes are:
Abstract
Inspired by broader efforts to make the conclusions of scientific research more robust, we have compiled a list of some of the most common statistical mistakes that appear in the scientific literature. The mistakes have their origins in ineffective experimental designs, inappropriate analyses and/or flawed reasoning. We provide advice on how authors, reviewers and readers can identify and resolve these mistakes and, we hope, avoid them in the future.APA Style Reference
Makin, T. R., & de Xivry, J. J. O. (2019). Science Forum: Ten common statistical mistakes to watch out for when writing or reviewing a manuscript. Elife, 8, e48175. DOI: 10.7554/eLife.48175
You may also be interested in
Comparing journal-independent review services (ASAPbio, 2020) ◈
Main Takeaways:
- Preprinting not only accelerates the dissemination of science, but also enables early feedback from a broad community. Therefore, it’s no surprise that there are many innovative projects offering feedback, commentary, and peer reviews on preprints. Such feedback can range from the informal (tweets, comments, annotations, or a simple endorsement) to the formal (an editor-organized process that can provide an in-depth assessment of the manuscript leading to a formal acceptance/endorsement like in a journal). This organized, journal-independent peer review might accomplish several goals: it can provide readers with context to evaluate the paper and foster constructive review that is focused on improving the science rather than gatekeeping for a particular journal. It can also be used as a way to validate the scientific content of a preprint, supporting its value as a citable reference for the scientific literature. When preprints are submitted to a journal, journal-independent peer review can be used by editors to speed up their editorial decisions. Additionally, since 15 million hours of reviewers’ time is wasted every year in re-reviewing revised manuscripts, transparent peer review on preprints could be one way to make the entire publishing process more efficient for reviewers, authors, and editors alike.
- Preprint Review does not currently have formal journal participation outside of eLife, but it can be said to provide a journal recommendation because one of the outcomes is acceptance at eLife. Peerage of Science provides authors with recommendations on which journal to submit to; Peer Community In and Review Commons do not.
- Peerage of Science is the only service in our comparison that sends all submitted manuscripts for peer review. Preprint Review attempts to send all manuscripts for review but is sometimes limited by workload and availability of editors. Peer Community In only performs review if an associate editor accepts to handle the manuscript within 20 days, and Review Commons selects papers for review in consultation with an editorial board, looking for significant advancements for the field.
- The services take a variety of approaches to transparency. At Preprint Review, posting the preprint and reviews is mandatory. Peer Community In posts reviews publicly and publishes a recommendation text with a DOI only if the paper is accepted by the service; otherwise they are transferred confidentially to the author. Both Peerage of Science and Review Commons allow authors to opt-in to post reviews publicly. Reviewers are named by default (though they can opt-out) when reviewing for Peer Community In, whereas the identity of reviewers is not communicated to the authors by default in the case of Review Commons, Peerage of Science, and Preprint Review.
Abstract
An editorial about comparing journal-independent peer reviews.APA Style Reference
ASAPbio (2020, July). Comparing journal-independent review services [Blog post]. Retrieved from https://asapbio.org/comparing-review-services
You may also be interested in
- Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System (Mahoney, 1977)
- Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial (van Rooyen et al., 1999)
- The Peer Reviewers’ Openness Initiative: incentivising open research practices through peer review (Morey et al., 2016)
Problems with p values: why p values do not tell you if your treatment is likely to work (Price et al., 2020)
Main Takeaways:
- In null hypothesis significance testing, the p value is a conditional probability, conditioned on the null hypothesis being true. Authors calculate p values because they are easy to compute, not because they are the probability researchers actually want.
- P values cannot be used to indicate if the null hypothesis is true or false. It is incorrectly assumed that if p = .05 there is a 1 in 20 probability that the data arose by chance alone.
- The p value exaggerates the weight of evidence against the null hypothesis. The scientific community needs to discuss the false discovery rate, which is the proportion of reported discoveries that are false positives. The false discovery rate can be calculated from the power, the type I error rate and an estimate of the prevalence of true effects among the many ideas that scientists test (see the sketch after this list).
- The type I error rate is the long-term probability of a false positive for a single experiment repeated with exact replication. It is not the same as the false discovery rate, which applies to a single run of each experiment.
- Even when experiments are perfectly designed and executed without bias, missing data or multiple statistical testing, the false discovery rate in null hypothesis significance testing using a p < .05 threshold has been estimated to be in the range 30%-60%, depending on the field of research and the power of the study.
- In biomedical research, researchers often do not know the effect size as it is frequently small, sampling is difficult and variance is often large and poorly known. Researchers only do the experiment once and have only a one-point estimate of the p value.
- Additionally, scientific theories are only weakly predictive and do not generate precise numerical quantities that can be checked in quantitative experiments as is possible in the physical sciences. Null hypothesis significance testing does not perform well under these circumstances. The p value is strictly the probability of obtaining data as extreme or more so, given the null hypothesis is true.
- The most common error was to claim that the p value is the probability that the data was generated by chance alone. Beyond misinterpretations of p values, there are also widespread problems with multiple testing, sometimes inadvertent, which grossly inflates the proportion of false positive results (i.e. p-hacking).
- However, confidence intervals are also misinterpreted and are often used to produce results identical to a p < 0.05 significance test. Reducing the p value threshold will reduce the false positive rate at the cost of an increase in the false negative rate, particularly in under-powered studies. A more intuitive methodology is Bayesian statistics, which calculates the probability that a hypothesis is true given the data; this is mostly what researchers and readers actually assume the p value to be.
- This is much more meaningful than the frequentist CI, which is again based on performance over many repetitions but is measured only once. In the past, the two major objections to Bayesian methods have been the difficulty of calculating intractable integrals and the use of prior probabilities.
- We urge authors and editors to demote the prominence of p values in journal articles, have the actual null hypothesis formally stated at the beginning of the article and promote the use of the more intuitive (but harder to calculate) Bayesian statistics.
- The p value in null hypothesis significance testing is conditioned on the null hypothesis being true.
- This means that a p value of 0.05 does not mean that the probability our data arose by chance alone is 1 in 20.
- In fact, the chance of us mistakenly rejecting the null hypothesis and concluding we have a successful treatment is more in the region of 30%–60%.
- Scientific journals and textbooks need to be explicit on how p values are used and defined.
- Use of the more intuitive Bayesian statistics should become more widespread.
- The main point of the article is that frequentism only works in repeated-testing scenarios and does not work for one-time experiments. Finally, researchers should move towards Bayesian statistics and forego frequentist statistics.
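A minimal sketch of the false discovery rate calculation the authors refer to, using illustrative values for power, type I error and prevalence (my own numbers, chosen to show how estimates in the 30%–60% range can arise):

```python
# False discovery rate implied by a p < .05 threshold, for illustrative values
# of statistical power and the prevalence of true effects among tested ideas.
def false_discovery_rate(alpha, power, prevalence):
    """Proportion of 'significant' results that are false positives."""
    true_positives = power * prevalence
    false_positives = alpha * (1 - prevalence)
    return false_positives / (true_positives + false_positives)

for prevalence in (0.5, 0.1):
    for power in (0.8, 0.2):
        fdr = false_discovery_rate(alpha=0.05, power=power, prevalence=prevalence)
        print(f"prevalence = {prevalence:.1f}, power = {power:.1f} -> FDR = {fdr:.0%}")
```

With only 10% of tested ideas being true effects, the printed false discovery rate runs from roughly 36% at 80% power to about 69% at 20% power, which illustrates the range quoted above.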
Abstract
Dr Robert Price and colleagues argue about the problems of p values and support the idea that researchers should adopt Bayesian statistics.APA Style Reference
Price, R., Bethune, R., & Massey, L. (2020). Problem with p values: why p values do not tell you if your treatment is likely to work. Postgraduate medical journal, 96(1131), 1. http://dx.doi.org/10.1136/postgradmedj-2019-137079
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
- Confidence and precision increase with high statistical power (Button et al., 2013)
- Misuse of power: in defence of small-scale science (Quinlan, 2013)
- Small sample size is not the real problem (Bacchetti, 2013)
- Measurement error and the replication crisis (Loken & Gelman, 2017)
- Bite-Size Science and Its Undesired Side Effects (Bertamini & Munafo, 2012)
Socioeconomic Status and Academic Outcomes in Developing Countries: A Meta-Analysis (Kim et al., 2019)
Main Takeaways:
- This study is the first meta-analytic effort to focus on socio-economic status (SES) and academic outcomes in developing countries.
- As a growing number of countries reach minimum thresholds of school expansion and quality, and as they move toward privatization, families play an increasingly large role in the social stratification process, resulting in a stronger relation between SES and academic outcomes.
- Meta-analyses to date have tended to exclude developing countries, despite the potential theoretical contributions such studies can make.
- The paper aims to answer: What is the overall association between SES and student achievement in developing countries, and how does the strength of this association differ by countries’ economic level of development, type of SES measure, grade level, and gender? Likewise, what is the overall association between SES and student attainment in developing countries, and how does its strength differ by the same moderators?
- How does study quality influence the ES of studies, and are there any differences in study quality by country studied, year of publication, and publication type?
- Since there have not been any meta-analyses of SES and educational outcomes in developing countries to date, there is little information about how the effect size might differ between achievement and attainment.
- Studies collectively point to a positive association between SES and educational attainment but do not shed light on the overall strength of the relation between SES and attainment and how this compares to the relation between SES and achievement.
- Method: This meta-analysis included 49 empirical studies representing 294 correlations reporting the relation between SES and academic outcomes. All studies used students as the unit of analysis, and represented a total sample of 2,828,216 students, with samples ranging from 70 to 2,248,598.
- To be included in the meta analysis:
Abstract
Despite the multiple meta-analyses documenting the association between socioeconomic status (SES) and achievement, none have examined this question outside of English-speaking industrialized countries. This study is the first meta-analytic effort, to the best of our knowledge, to focus on developing countries. Based on 49 empirical studies representing 38 countries, and a sample of 2,828,216 school-age students (grades K–12) published between 1990 and 2017, we found an overall weak relation between SES and academic outcomes. Results for attainment outcomes were stronger than achievement outcomes, and the effect size was stronger in more economically developed countries. The SES-academic outcome relation was further moderated by grade level and gender. There were no differences in the strength of the relation by specific SES measures of income/consumption, education, and wealth/home resources. Our results provide evidence that educational inequalities are wider in higher income countries, creating a serious challenge for developing countries as they expand school access.APA Style Reference
Kim, S. W., Cho, H., & Kim, L. Y. (2019). Socioeconomic Status and Academic Outcomes in Developing Countries: A Meta-Analysis. Review of Educational Research, 89(6), 875-916. https://doi.org/10.3102/0034654319877155
You may also be interested in
- Is science only for the rich? (Lee, 2016) ⌺
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺
- ls There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
- The Relation Between Family Socioeconomic Status and Academic Achievement in China: A Meta-analysis (Liu et al., 2020)
How scientists can stop fooling themselves over statistics (Bishop, 2020b)
Main Takeaways:
- Just as lab scientists are not allowed to handle dangerous substances without safety training, researchers should not be allowed near a p value or similar measures of probability until they can demonstrate that they understand what it means.
- Preconceived notions lead us to see structure that is not there, whereas new data that contradict our views tend to be ignored.
- People underestimate how noisy small samples can be and conduct studies that lack the necessary power to detect an effect.
- The more variables are investigated, the more likely at least one p value is to be significant purely by chance (a type I error).
- Basic statistical training is insufficient, and can even be counterproductive by instilling misplaced confidence.
- Simulated data allow students to discover how easy it is to obtain significant results that are false, and that small sample sizes are usually useless for demonstrating a moderate difference (a minimal simulation sketch follows this list).
- Researchers need to build lifelong habits to avoid being led astray by confirmation bias.
- It is easy to forget papers that counter our own instincts, even when those papers have no flaws. Keeping track of such papers helps us understand our blind spots and how to avoid them.
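A minimal simulation sketch in Python (not from Bishop's article; the sample size and ten-variable design are illustrative assumptions) of the points above: with purely null data, a small study that tests many variables will frequently turn up at least one "significant" p value.

```python
# Minimal sketch (not from Bishop, 2020b): purely null data, small groups,
# ten outcome variables per study. How often does a study find at least one
# "significant" result anyway?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_group, n_variables = 1000, 15, 10

studies_with_false_positive = 0
for _ in range(n_studies):
    found_significant = False
    for _ in range(n_variables):
        a = rng.normal(size=n_per_group)   # group A, no true effect
        b = rng.normal(size=n_per_group)   # group B, no true effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            found_significant = True
    studies_with_false_positive += found_significant

print(f"Studies with at least one 'significant' result: "
      f"{studies_with_false_positive / n_studies:.0%}")
# Expect roughly 1 - 0.95**10, i.e. about 40%, even though no effect exists anywhere.
```

Simulations of this kind are the teaching exercise Bishop recommends: students see directly how often noise alone clears the p < 0.05 bar.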
Abstract
Sampling simulated data can reveal common ways in which our cognitive biases mislead us.APA Style Reference
Bishop, D. (2020). How scientists can stop fooling themselves over statistics. Nature, 584(7819), 9. https://doi.org/10.1038/d41586-020-02275-8
You may also be interested in
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System (Mahoney, 1977)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- A consensus-based transparency checklist (Aczel et al., 2020)
- Tell it like it is (Anon, 2020)
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Variability in the analysis of a single neuroimaging dataset by many teams (Botvinik-Nezer et al., 2020)
Main Takeaways:
- The present study investigated the degree and effect of analytical flexibility on functional magnetic resonance imaging (fMRI) results in practice. The authors estimated the beliefs of researchers in the field concerning degree of variability in analysis outcomes using prediction markets to test whether peers in the field could predict the findings.
- Method: 70 teams were provided with the raw data and an optional preprocessed version of the dataset, and were asked to analyse the data to test 9 ex-ante hypotheses. They were given up to 100 days to report whether each hypothesis was supported on the basis of a whole-brain-corrected analysis.
- Method: Teams were instructed to perform the analysis as they would in their own laboratory and to report binary decisions based on their own criteria for whole-brain-corrected results for specific regions.
- Results: The analytical flexibility resulted in sizable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset.
- It is hard to estimate the reproducibility of single studies performed using a single analysis pipeline. Teams with highly correlated underlying unthresholded statistical maps showed different hypothesis outcomes.
- Prediction markets on the outcomes of the analyses showed that researchers generally over-estimated the likelihood of significant findings across hypotheses, reflecting a marked optimism bias among researchers in the field.
- Complex datasets should be analysed with several analysis pipelines and by more than one research team. fMRI can still provide reliable answers to scientific questions, as shown by the strong meta-analytical consensus across teams, by several large-scale studies, and by the replication of many fMRI findings (an illustrative aggregation sketch follows this list).
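As a purely illustrative sketch (not the authors' actual image-based meta-analysis, which is more involved; the per-team z values below are hypothetical), the logic of aggregating evidence across teams can be shown with Stouffer's method for combining z scores:

```python
# Illustrative only: Stouffer's method for combining per-team z-scores into one
# consensus test. The paper's image-based aggregation across teams is more
# involved; the team_z values here are hypothetical.
import numpy as np
from scipy import stats

def stouffer_combined_p(team_z):
    """Combine independent per-team z-scores for one hypothesis (one-sided p)."""
    z = np.asarray(team_z, dtype=float)
    combined_z = z.sum() / np.sqrt(len(z))   # Stouffer's Z
    return combined_z, stats.norm.sf(combined_z)

# Several teams each report modest evidence; combined, the consensus is clear.
team_z = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
z, p = stouffer_combined_p(team_z)
print(f"Combined Z = {z:.2f}, one-sided p = {p:.4f}")
```

The point mirrors the paper's finding: individually noisy or divergent team-level results can still yield a stable consensus once aggregated.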
Abstract
Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.APA Style Reference
Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., Dreber, A., Huber, J., Johannesson, M., ... & Avesani, P. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582, 84-88. https://doi.org/10.1038/s41586-020-2314-9
You may also be interested in
- Next Steps for Citizen Science (Bonney et al., 2014)
- Citizen Science: Can Volunteers Do Real Research? (Cohn, 2008)
- The Increasing Dominance of Teams In Production of Knowledge (Wuchty et al., 2007)
- A Manifesto for Team Science (Forscher et al., 2020)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases (Lilienfeld et al., 2015)
Main Takeaways:
- Scientific thinking necessitates clarity, including clarity in writing. In turn, clarity hinges on accuracy in the use of specialized terminology.
- Researchers, students and allied health researchers should be as explicit as possible about what they are and are not saying, as terms in these disciplines readily lend themselves to confusion and misinterpretation.
- If students are allowed, or worse, encouraged, to be imprecise in their language concerning psychological concepts, their thinking about these concepts is likely to follow suit.
- Some psychological terms are inaccurate or misleading; others are frequently misused, ambiguous, oxymoronic, or pleonastic.
Quote
“We modestly hope that our admittedly selective list of 50 terms to avoid will become recommended, if not required, reading for students, instructors, and researchers in psychology, psychiatry, and similar disciplines. Although jargon has a crucial place in these fields, it must be used with care, as the imprecise use of terminology can engender conceptual confusion. At the very least, we hope that our article encourages further discussion regarding the vital importance of clear writing and clear thinking in science, and underscores the point that clarity in writing and thinking are intimately linked. Clear writing fosters clear thinking, and confused writing fosters confused thinking. In the words of author McCullough (2002), “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”” (p.11).
Abstract
The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science by curbing terminological misinformation and confusion. To this end, we present a provisional list of 50 commonly used terms in psychology, psychiatry, and allied fields that should be avoided, or at most used sparingly and with explicit caveats. We provide corrective information for students, instructors, and researchers regarding these terms, which we organize for expository purposes into five categories: inaccurate or misleading terms, frequently misused terms, ambiguous terms, oxymorons, and pleonasms. For each term, we (a) explain why it is problematic, (b) delineate one or more examples of its misuse, and (c) when pertinent, offer recommendations for preferable terms. By being more judicious in their use of terminology, psychologists and psychiatrists can foster clearer thinking in their students and the field at large regarding mental phenomena.APA Style Reference
Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100. https://doi.org/10.3389/fpsyg.2015.01100
You may also be interested in
Is science only for the rich? (Lee, 2016) ⌺
Main Takeaways:
- Few countries collect detailed data on socioeconomic status, but the available numbers consistently show that nations are wasting the talents of underprivileged youth who might otherwise be tackling challenges in health, energy, pollution, climate change and a host of other societal issues. And it’s clear that the universal issue of class is far from universal in the way it plays out. Here, Nature looks at eight countries around the world, and their efforts to battle the many problems of class in science.
- Students from poor districts therefore end up being less prepared for university-level science than are their wealthier peers, many of whom attended well-appointed private schools.
- That also puts the students at a disadvantage in the fiercely competitive applications process: only about 40% of high-school graduates in the lowest-income bracket enrolled in a university in 2013, versus about 68% of those born to families with the highest incomes. The students who do get in then have to find a way to pay the increasingly steep cost of university. Between 2003 and 2013, undergraduate tuition, fees, room and board rose by an average of 34% at state-supported institutions, and by 25% at private institutions, after adjusting for inflation. The bill at a top university can easily surpass US$60,000 per year. Many students are at least partly supported by their parents, and can also take advantage of scholarships, grants and federal financial aid. Many, like Quasney, work part time.
- But if graduate students have to worry about repaying student loans, that can dissuade them from continuing with their scientific training.
- In China:
Abstract
Around the world, poverty and social background remain huge barriers in scientific careers.APA Style Reference
Lee, J. J. (2016). Is science only for the rich?. Nature, 537(7621), 466-467.
You may also be interested in
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
- The Relation Between Family Socioeconomic Status and Academic Achievement in China: A Meta-analysis (Liu et al., 2020)
- Socioeconomic Status and Academic Outcomes in Developing Countries: A Meta-Analysis (Kim et al., 2019)
Psychology’s Renaissance (Nelson et al., 2018)
Main Takeaways:
- Researchers now understand that the old ways of collecting and analyzing data produce results that are not diagnostic of truth and that a new, more enlightened approach is needed. Thousands of psychologists have embraced this notion. The improvements to our field have been dramatic. This is psychology’s renaissance.
- The authors believe that this “file-drawer explanation” is incorrect. Most failed studies are not missing. They are published in our journals, masquerading as successes. The file-drawer explanation becomes transparently implausible once its assumptions are made explicit. It assumes that researchers conduct a study and perform one (predetermined) statistical analysis. If the analysis is significant, then they publish it.
- P-hacking provides the real solution to the paradox. P-hacking is the only honest and practical way to consistently get underpowered studies to be statistically significant.
- P-hacking makes it dramatically easier to generate false-positive findings, so much so that, for decades, p-hacking enabled researchers to achieve the otherwise mathematically impossible feat of getting most of their underpowered studies to be significant. P-hacking has long been the biggest threat to the integrity of our discipline.
- False positives are bad. Publishing them can cause scientists to spend precious resources chasing down false leads, policy makers to enact potentially harmful or ineffective policies, and funding agencies to allocate their resources away from hypotheses that are actually true. When false positives populate the scientific literature, we can no longer distinguish between what is true and what is false, undermining the very goal of science.
- P-hacking is a pervasive problem precisely because researchers usually do not realize that they are doing it or appreciate that what they are doing is consequential. The most straightforward way to prevent researchers from selectively reporting their methods and analyses is to require them to report less selectively. At the bare minimum, this means requiring authors to disclose all of their measures, manipulations, and exclusions, as well as how they determined their sample sizes.
- Overcoming this concern requires realizing that preregistrations do not tie researchers’ hands, but merely uncover readers’ eyes. Preregistering does not preclude exploration, but it does communicate to readers that it occurred. Preregistering allows readers to discriminate between confirmatory analyses, which provide valid p-values and trustworthy results, and exploratory analyses, which provide invalid p-values and tentative results.
- Replications have traditionally been deemed failures when the effect described by the original study is not statistically significant in the replication. This approach has two obvious flaws. First, a replication attempt could be nonsignificant simply because its sample size is too small. Second, a replication attempt could be significant even if the effect size is categorically smaller than in the original.
- One tool for correcting selective reporting is p-curve analysis. The p-curve is the distribution of statistically significant p-values from a set of studies; its shape can be used to diagnose whether a literature contains replicable effects (a small simulation sketch follows this list).
- Discussions of fraud typically focus on two questions: How common is it and how can we stop it? Estimating the frequency of fraud is very difficult. Some blatantly detectable fraud is prevented by vigilant coauthors, reviewers, or editors and, thus, not typically observed by the rest of the field. The fraud that gets through those filters might be noticed by a very small share of readers. Of those readers, a very small number might ever make their concerns known.
- Meta-analytic thinking has its benefits. It allows inferences to be based on larger and potentially more diverse samples, promotes collaboration among scientists, and incentivizes more systematic research programs. Nevertheless, meta-analytic thinking not only fails to solve the problems of p-hacking, reporting errors, and fraud, it dramatically exacerbates them.
- Why are p-values so heavily relied upon? The authors think it is because there is actually no compelling reason to abandon their use. It is true that p-values are imperfect, but, for the types of questions that most psychologists are interested in answering, they are no more imperfect than confidence intervals, effect sizes, or Bayesian approaches. The biggest problem with p-values is that they can be mindlessly relied upon; however, when effect size estimates, confidence intervals, or Bayesian results are mindlessly relied upon, the results are at least as problematic. It is not the statistic that causes the problem, it is the mindlessness.
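A minimal simulation sketch (not the authors' p-curve implementation, which includes formal statistical tests; the effect size and sample sizes here are assumptions for illustration) of why the p-curve's shape is diagnostic: significant p-values pile up near zero when a true effect exists, but are roughly uniform when it does not.

```python
# Minimal sketch, not the authors' p-curve tool: compare the distribution of
# statistically significant p-values with and without a true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def significant_pvalues(true_effect, n_studies=5000, n_per_group=30):
    """Simulate two-group studies; keep only the p-values below .05."""
    kept = []
    for _ in range(n_studies):
        treatment = rng.normal(loc=true_effect, size=n_per_group)
        control = rng.normal(loc=0.0, size=n_per_group)
        if (p := stats.ttest_ind(treatment, control).pvalue) < 0.05:
            kept.append(p)
    return np.array(kept)

for label, effect in [("no true effect", 0.0), ("true effect, d = 0.5", 0.5)]:
    ps = significant_pvalues(effect)
    print(f"{label}: {(ps < 0.025).mean():.0%} of significant p-values fall below .025")
# With no effect, significant p-values are flat (about 50% below .025); with a real
# effect they are right-skewed (far more below .025), which is what p-curve detects.
```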
Quote
“It might make sense for new graduate students to erroneously think 12 participants per cell will be a sufficiently large sample size to test a counterintuitive attenuated interaction hypothesis, but it would not make sense for a full professor to maintain this belief after running hundreds of experiments that should have failed. It is one thing for a very young child to believe that 12 peas are enough for dinner and quite another for a chronically starving adult to do so.” (p.515)
Abstract
In 2010–2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call Psychology’s Renaissance. We begin by describing how psychologists’ concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and preregistration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that meta-analytical thinking increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.APA Style Reference
Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology's renaissance. Annual review of psychology, 69, 511-534. https://doi.org/10.1146/annurev-psych-122216-011836
You may also be interested in
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)◈
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Constraints on Generality (COG): A Proposed Addition to All Empirical Papers (Simons et al., 2017)
- Most people are not WEIRD (Henrich et al., 2010)
- How scientists can stop fooling themselves (Bishop, 2020b)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Trust Your Science? Open Your Data and Code (Stodden, 2011)◈
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- A consensus-based transparency checklist (Aczel et al., 2020)
- Tell it like it is (Anon, 2020)
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Speaker Introductions at Internal Medicine Grand Rounds: Forms of Address Reveal Gender Bias (Files et al., 2017)⌺
Main Takeaways:
- The authors hypothesize that female speakers in this professional setting are more often addressed by first name or equivalent than their male counterparts during speaker introductions.
- The authors examined the association between gender and address practices used during formal introductions of speakers in Internal Medicine Grand Rounds (IMGR).
- Method: 134 unique grand rounds presentations listed in a video archive library were accessed and reviewed. Of the 124 grand rounds reviewed, 83 had more than 1 introduction (introducer introducing the speaker) with each introduction representing an opportunity for the introducer to utilize the appropriate professional title. Each grand round had between one and five introducers. The authors analyzed the form of address used in up to 5 speaker introductions in each grand round.
- Results: Female introducers were more likely than male introducers to use professional titles when introducing any speaker during the first form of address. Female dyads utilized formal titles during the first form of address 97.8% of the time, compared with male dyads, who utilized a formal title 72.4% of the time. In mixed-gender dyads where the introducer was female and the speaker male, formal titles were used 95.0% of the time. Male introducers of female speakers utilized professional titles 49.2% of the time.
- In this study, women introduced by men at IMGR were less likely to be addressed by their professional title than were men introduced by men. In contrast, women introducers were more formal in both same- and mixed-gender interactions.
- The findings demonstrate that female introducers compared with male introducers were more likely to use professional titles when introducing any speaker, male or female, during the first form of address. However, there were striking differences in how males utilized their informal introduction style depending on whether the speaker was a man or woman.
- While women consistently and nearly universally introduced both male and female speakers by their formal titles during the first form of address, men used a male speaker’s formal title 72% of the time but acknowledged female speakers with their professional title only 49.2% (31/63) of the time.
- Female introducers, whose use of formal titles during the first form of address was already high, showed no change in their utilization of formal titles. When all introductions by men were included, the rate of professional-title use increased slightly, but a gender difference remained. Despite multiple opportunities to acknowledge the speakers’ credentials, the title of Dr. was withheld by male introducers from 41.3% of female speakers compared with only 24.3% of male speakers.
- This study supports what many female physicians have experienced and discussed informally; the withholding of their professional titles when they are referenced or addressed by their male colleagues. Perhaps this is made more noticeable by their finding that women use formal titles close to 100% of the time for both the men and the women they introduce. This formal practice by women may engender an expectation of reciprocity, thus, further amplifying the disparity.
- While the authors did find that men are less formal overall and withhold the professional title of Dr. during the first form of address from over one quarter of male speakers, it is important to view the experience from the perspective of the female speaker. As she prepares to assume the podium for her formal presentation, she will hear her formal title from almost all female introducers, but there is less than a 50% likelihood that a male introducer will set the tone in the first form of address by calling her “Doctor.”
- The significance of these linguistic biases lies in the fact that they implicitly communicate stereotypes to the individual, in this case women in medicine, and thereby contribute to the transmission and maintenance of socially shared stereotypes which ultimately have the potential to affect both the recipient and the audience.
- Overt discrimination is usually obvious and well recognized by those experiencing it, whereas more subtle forms of gender bias are difficult to describe, explain, and to address especially when inflicted upon an individual who may feel unsafe to address the practice as it occurs. Furthermore, unrecognized aspects of an organization’s culture may have different effects on men and women.
- It is the authors’ hope that objective documentation of the gender disparity identified in speaker introductions at IMGR will provide validation to women who have experienced it.
Abstract
Background: Gender bias has been identified as one of the drivers of gender disparity in academic medicine. Bias may be reinforced by gender subordinating language or differential use of formality in forms of address. Professional titles may influence the perceived expertise and authority of the referenced individual. The objective of this study is to examine how professional titles were used in the same and mixed-gender speaker introductions at Internal Medicine Grand Rounds (IMGR). Methods: A retrospective observational study of video-archived speaker introductions at consecutive IMGR was conducted at two different locations (Arizona, Minnesota) of an academic medical center. Introducers and speakers at IMGR were physician and scientist peers holding MD, PhD, or MD/PhD degrees. The primary outcome was whether or not a speaker’s professional title was used during the first form of address during speaker introductions at IMGR. As secondary outcomes, we evaluated whether or not the speaker’s professional title was used in any form of address during the introduction. Results: Three hundred twenty-one forms of address were analyzed. Female introducers were more likely to use professional titles when introducing any speaker during the first form of address compared with male introducers (96.2% [102/106] vs. 65.6% [141/215]; p < 0.001). Female dyads utilized formal titles during the first form of address 97.8% (45/46) compared with male dyads who utilized a formal title 72.4% (110/152) of the time (p = 0.007). In mixed-gender dyads, where the introducer was female and speaker male, formal titles were used 95.0% (57/60) of the time. Male introducers of female speakers utilized professional titles 49.2% (31/63) of the time (p < 0.001). Conclusion: In this study, women introduced by men at IMGR were less likely to be addressed by professional title than were men introduced by men. Differential formality in speaker introductions may amplify isolation, marginalization, and professional discomfiture expressed by women faculty in academic medicine.APA Style Reference
Files, J. A., Mayer, A. P., Ko, M. G., Friedrich, P., Jenkins, M., Bryan, M. J., ... & Duston, T. (2017). Speaker introductions at internal medicine grand rounds: forms of address reveal gender bias. Journal of women's health, 26(5), 413-419. https://doi.org/10.1089/jwh.2016.6044
You may also be interested in
- Unequal effects of the COVID-19 pandemic on scientists (Myers et al., 2019)
- Against Eminence (Vazire, 2017)
- Scientific Eminence: Where Are the Women? (Eagly & Miller, 2016)
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺
- Examining Gender Bias in Student Evaluations of Teaching for Graduate Teaching Assistants (Khazan et al., 2019) ⌺
- Science faculty’s subtle gender biases favour male students (Moss-Racusin et al., 2012) ⌺
Redesign open science for Asia, Africa and Latin America (Onie, 2020)⌺
Main Takeaways:
- Research is relatively new in many countries in Asia, Africa and Latin America. Across these regions, young scientists are working to build practices for open science from the ground up. The aim is that scientific communities will incorporate these principles as they grow. But these communities’ needs differ from those that are part of mature research systems. So, rather than shifting and shaping established systems, scientists are endeavouring to design new ones.
- Financial and career incentives to publish (or disadvantages from not publishing) are common government policies in countries such as Indonesia, China and Brazil where research culture is still being shaped. They aim to increase publication quantity to ‘catch up’ with other countries, but inadvertently encourage poor research practices.
- Lower-income countries cannot waste resources on funding untrustworthy research. Policies should therefore be designed to improve transparency, relevance and scientific rigour, rather than just to increase output — especially if governments want to use research to inform decision-making. Governments must also provide the training, resources and motivation needed for people to take these changes to heart.
- Crucially, the team will include researchers from many different types of university, not just the largest ones. Going forward, the institute will monitor whether the repository improves research quality. Other countries will face different issues. But a commonality will be that all stakeholders — not just the rich or prestigious ones — should be involved in finding a solution.
- Most universities in Asia, Africa and Latin America were set up for education. Many are ill-equipped to perform research and lack the proper infrastructure.
- Sustainable changes require education. Universities should train researchers not just in field-specific theories but also in how to improve scientific practice. This training should cover how the pitfalls of modern academia (e.g. prestige and academic metrics) have contributed to publication bias. It must address the consequences of succumbing to these pressures for the quality, replicability and trustworthiness of research. And it should honestly highlight disagreements about whether and when these practices actually work (debates about when pre-registration of research is and is not useful, for instance). Researchers must learn about these topics as they begin their research careers, even as undergraduates, rather than having to modify existing practices later.
- Training in good scientific practices will set scientists up to think more critically and to adopt practices that increase the credibility of their work.
- Training will also enable researchers to add their diverse voices to continuing debates about open science, including active consideration of how science can benefit society, locally and globally. This shift towards open research might require a reworking of the overall training package, reducing the number of field-specific courses to avoid an overwhelming workload.
- Journals should take an active role in reducing under-representation, without compromising rigour. Ask authors to explicitly describe the populations they study upfront, and not to generalize their findings beyond this sample without good reason. Open reviews could reduce potential bias against samples from outside Western countries. Established journals should make efforts to communicate their standards to scientists in developing research cultures, and could also host special issues focused on understanding under-represented populations.
- A lack of funding and travel restrictions in many parts of Asia, Africa and Latin America reduce these opportunities for international collaboration, networking and travel, leading to researchers becoming more isolated. Such problems need to be acknowledged explicitly and confronted.
- Metrics and policies should be in place only if they are useful to science’s goal: knowledge accumulation for the greater societal good. Constant monitoring and introspection are therefore essential. Some of the best initiatives to improve science today might not be relevant in the future.
Quote
“If young research cultures can guard against harmful practices becoming ingrained, they have the opportunity to lay down a new type of strategy for open research. This could avoid the pressures that can sometimes warp research in the Western world and ultimately produce work that is credible and beneficial to society. The goal is not to replicate what is done in North America, Europe and Australia — rather, it is to do better.” (p.37)
Abstract
Researchers in many countries need custom-built systems to do robust and transparent science.APA Style Reference
Onie, S.(2020). Redesign open science for Asia, Africa and Latin America. Nature, 587, 35-37. https://doi.org/10.1038/d41586-020-03052-3
You may also be interested in
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺
- Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008)
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
- The Relation Between Family Socioeconomic Status and Academic Achievement in China: A Meta-analysis (Liu et al., 2020)
- Socioeconomic Status and Academic Outcomes in Developing Countries: A Meta-Analysis (Kim et al., 2019)
- Is science only for the rich? (Lee, 2016) ⌺
Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
Main Takeaways:
- The Collaborative Replications and Education Project (CREP) allows undergraduates to participate in high-quality direct replication, using existing resources and by providing structure for research projects.
- CREP samples seminal papers in 9 sub-disciplines published 3 years prior to the current year. Alumni students can then rate papers based on time and level of interest.
- CREP teaches good scientific practices utilizing direct replications and open science methods.
- CREP informs the original authors of the study selections and asks for materials and guidance for replication.
- The skills acquired from CREP can be applied to non-academic careers. For instance, teaching students the ability to evaluate scientific claims.
- CREP provides a forum and a community: for replication results to be presented and the institutionalization of replications, thereby contributing to science.
- Students are invited to contribute to authorship, even if they do not take on lead authorship roles.
- CREP recognizes that, unaided, most student projects are not adequately powered and thus do not lead to publication.
- Working with CREP allows students not only to test whether a seminal finding replicates but also gives them a route to publication.
Quote
“CREP offers a supportive entry point for faculty…new to open science and large-scale collaboration…helps with fidelity and quality checks…eliminates need for instructors to vet every hypothesis and design for student research projects…not be experts in a topic…do not need to learn new programs…documentable experience blending teaching, scholarship, and close mentoring.” (p. 4).
Abstract
The Collaborative Replications and Education Project (CREP; http://osf.io/wfc6u) is a framework for undergraduate students to participate in the production of high-quality direct replications. Staffed by volunteers (including the seven authors of this paper) and incorporated into coursework, CREP helps produce high-quality data using existing resources and provides structure for research projects from conceptualization to dissemination. Most notably, student research generated through CREP make an impact: data from these projects are available for meta-analyses, some of which are published with student authors.APA Style Reference
Wagge, J. R., Brandt, M. J., Lazarevic, L. B., Legate, N., Christopherson, C., Wiggins, B., & Grahe, J. E. (2019). Publishing research with undergraduate students via replication work: The collaborative replications and education project. Frontiers in psychology, 10, 247. https://doi.org/10.3389/fpsyg.2019.00247
You may also be interested in
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
CJEP Will Offer Open Science Badges (Pexman, 2017)
Main Takeaways:
- This article introduces three badges: open data, open material, and pre-registration badges to the Canadian Journal of Experimental Psychology.
- Open data badge: the data is digitally shareable and made publicly available to reproduce results.
- Open materials badge: all materials that are necessary to reproduce reported results are digitally shareable with descriptions of non-digital materials being provided in order to replicate the study.
- Pre-registration badge: researchers provide an analysis plan that includes a planned sample size, motivated research questions or hypotheses, outcome and predictor variables, including controls, covariates and independent variables. Results must be fully disclosed and distinguished from other results that were not pre-registered.
- Pre-registered + analysis plan badge: researchers pre-register the study together with an analysis plan, and the results are reported according to that pre-registered plan.
Quote
“Indeed, in most cases, authors who wish to apply for badges will do so only after the editorial decision has been made. I understand that there are many reasons why it may not be possible to share data or materials, or to preregister a study, and so I certainly do not expect all authors to apply for badges. Nonetheless, I hope that many authors will devote the time required to make their data, materials, or research plans publicly available; these efforts are an important step toward improving our science.” (p.1).
Abstract
This is a view on open science badges in the Canadian Journal of Psychology by Professor Penny Pexman. It describes the badges and their importance to open science. The badges are used as a mechanism to state that the author is following good research practices.APA Style Reference
Pexman, P. M. (2017). CJEP will offer open science badges. Canadian Journal of Experimental Psychology= Revue Canadienne de Psychologie Experimentale, 71(1), 1-1. https://doi.org/10.1037/cep0000128
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
Main Takeaways:
- Researchers are not likely to share data and materials unless there are incentives such as badges. The inclusion of badges states that the journal values transparency and that the author has met the transparency standards for research by signalling to the reader that they have provided accessible data, materials, or pre-registered their study.
- The present study investigated the influence of adopting badges by comparing data and material sharing rates before badges were adopted (i.e. 2012-2013) and after badges were adopted (2014-May 2015) in Psychological Science.
- Method: “We used the population of empirical articles with studies based on experiment or observation (N = 2,478) published in 2012, 2013, 2014, and January through May 2015 issues of one journal that started awarding badges” (p.3). Variables included were open data or open material badge, availability statement of data and material, and whether data or materials are available at a publicly accessible location.
- Results: There was an increase in the reporting of open data after badges were introduced. However, reporting openness does not guarantee openness: when badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned.
- Results: Open materials sharing also increased, but not to the same extent.
- After Psychological Science adopted badges, reported data sharing increased roughly ten-fold, to about 40%. Without badges, even the small percentage of articles that reported sharing grossly overstated actual sharing.
- Actual data sharing was higher when a badge was earned than when it was not earned.
- Effects on sharing research materials were similar to those for data but weaker, with badges producing roughly three times more sharing.
Quote
“However, actual evidence suggests that this very simple intervention is sufficient to overcome some barriers to sharing data and materials. Badges signal a valued behavior, and the specifications for earning the badges offer simple guides for enacting that behavior. Moreover, the mere fact that the journal engages authors with the possibility of promoting transparency by earning a badge may spur authors to act on their scientific values. Whatever the mechanism, the present results suggest that offering badges can increase sharing by up to an order of magnitude or more. With high return coupled with comparatively little cost, risk, or bureaucratic requirements, what’s not to like?” (p.13).
Abstract
Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.APA Style Reference
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L. S., ... & Errington, T. M. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS biology, 14(5), e1002456. https://doi.org/10.1371/journal.pbio.1002456
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
Main Takeaways:
- Truth is undermined by misconduct, fraud, failure to replicate, rise in the number of retractions, and the public media.
- Fraudulent behaviour does not decrease the trust in science.
- Fraudulent behaviour is a result of the fraudulent scientist, not untrustworthy science.
- Reports indicate that the main concerns driving misconduct are that failure to publish will prevent academic appointment and tenure and jeopardize the funding of laboratories.
- Educating the public about the high standards of science and scientists will not reduce the outrage concerning fraudulent research.
Quote
“When then will these leaders of the scientific community finally direct their talents and energy to the culprit per se, research misconduct, and its perpetrators” (p.41).
Abstract
This is a response to the paper by Jamieson et al. (2019) on signalling trustworthiness in science. It argues that signalling trustworthiness to the public and the scientific community is not a substitute for directly confronting misconduct and fraudulent behaviour.APA Style Reference
Kornfeld, D. S., & Titus, S. L. (2020). Signaling the trustworthiness of science should not be a substitute for direct action against research misconduct. Proceedings of the National Academy of Sciences of the United States of America, 117(1), 41. https://doi.org/10.1073/pnas.1917490116
You may also be interested in
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
- Stop ignoring misconduct (Kornfeld & Titus, 2016)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Check for publication integrity before misconduct (Grey et al., 2020)
Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
Main Takeaways:
- Funders should make research ethics a condition of support. Institutions should provide education and investigate misconduct fairly, quickly and transparently, while protecting whistle-blowers. Journals should act quickly to correct the record.
- Scientists and outlets that publish their work should not only honor science’s integrity-protecting norms but also clearly signal when, and how, they have done so (e.g. statistical checks, plagiarism checks, badges, checklists).
- These aforementioned methods should uncover and increase awareness of biases that undermine the ability to fairly interpret the authors’ findings.
- “These indicators of trustworthiness clearly signal that the scientific community is safeguarding science’s norms and institutionalizing practices that protect its integrity as a way of knowing.” (p.42).
Abstract
This is a response to the commentary by Kornfeld and Titus (2020). It contains information about the importance of research ethics for funders, how institutions should protect whistleblowers and provide education to prevent misconduct and how scientists and outlets can provide evidence they honour scientific integrity.APA Style Reference
Jamieson, K. H., McNutt, M., Kiermer, V., & Sever, R. (2020). Reply to Kornfeld and Titus: No distraction from misconduct. Proceedings of the National Academy of Sciences of the United States of America, 117(1), 42. https://doi.org/10.1073/pnas.1918001116
You may also be interested in
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Stop ignoring misconduct (Kornfeld & Titus, 2016)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Check for publication integrity before misconduct (Grey et al., 2020)
Stop ignoring misconduct (Kornfeld & Titus, 2016)
Main Takeaways:
- The history of science shows that irreproducibility is not a product of our times. These problems result from inadequate research practices and fraud, yet current initiatives to improve science ignore fraudulent behaviour.
- Efforts to reduce irreproducibility are a wasted opportunity if dishonesty is not given serious attention.
- The traits behind these unethical practices often develop long before people enter science. We need to consider the reasons for misconduct: some researchers are perfectionists who are unable to cope with failure.
- Funders should craft policies to ensure mentors are advisers, teachers, and role models, while limiting the number of trainees per mentor by discipline.
- Established scientists would be less likely to commit misconduct if they were more concerned about being detected and punished.
- Whistle-blowers need to come forward and be protected. One method is to provide research integrity officers in the university who will protect them from retaliation.
- Research funds should be given only when current certification about research integrity and honesty is provided by the institution. If misconduct occurs, institutions that failed to establish and execute these policies to assure integrity will be held accountable.
Quote
“Government officials should be prepared to pursue repayments. The threat of such penalties should have a chilling effect on investigators contemplating research misconduct, and motivate institutions to establish and implement policies that reflect their commitment to institutional integrity.” (p.30)
Abstract
This is an editorial by Kornfeld and Titus (2016) who discusses that misconduct needs to be taken seriously and discussed. It contains solutions to resolve matters concerning research integrity for both the scientist and research institute.APA Style Reference
Kornfeld, D. S., & Titus, S. L. (2016). Stop ignoring misconduct. Nature, 537(7618), 29-30.https://doi.org/10.1038/537029a
You may also be interested in
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
- Check for publication integrity before misconduct (Grey et al., 2020)
The Statistical Crisis in Science (Gelman & Loken, 2014)
Main Takeaways:
- The authors argue that there is a statistical crisis in science because results are data-dependent: analytical decisions (a "garden of forking paths") are so consequential that they can produce different results depending on the choices researchers make.
- The authors explain that the p-value (short for probability value) is the probability of obtaining an effect at least as extreme as the one found in the sample, assuming the null hypothesis is true. Gelman and Loken define it as: “a way of measuring the extent to which a data set provides evidence against a so-called null hypothesis.”
- Some ‘common’ practices (e.g., creating rules for data exclusion, for example) can flip analyses from non-significant to significant (and vice-versa). Such practices are particularly problematic when effect sizes or sample sizes are small, or when there are substantial measurement errors and variability.
- The ‘garden of forking paths’ (also described as researcher degrees of freedom, or data-dependent analysis) highlights that multiple comparisons can be a problem even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. This is because different choices about combining variables, inclusion and exclusion of cases, transformations of variables, tests for interactions in the absence of main effects, and other steps could have been made with different data; these decisions are often implicit and unreported (a small simulation sketch follows this list).
- The Way Forward? Researchers should be made aware of the choices involved in data analysis (pre-registration is practical but cannot be a general solution). Make a sharper distinction between exploratory and confirmatory data analysis, recognizing the benefits and limitations of each; researchers can, for instance, run an exploratory experiment followed by a confirmatory experiment with its own pre-registered protocol. The authors also argue that statistically significant p-values should not be taken at face value even when the comparison is consistent with existing theory. Researchers need to be aware of data dredging (the misuse of data analysis to find patterns that can be presented as statistically significant, which dramatically increases the risk of false positives) and should use both confidence intervals and p-values to avoid being fooled by noise.
- At the aggregate level, because the vast majority of papers are not published in high-impact journals without a significant p < .05 result (most journals, and the academic system in general, value ‘novel’ positive results rather than replications or corrections of the published literature), data-dependent results may be widespread. This is also known as perverse incentives.
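A minimal simulation sketch (not from the article; the three analysis "paths" are assumptions chosen for illustration) of the forking-paths problem: with purely null data, reporting whichever of a few reasonable-looking analyses happens to reach p < .05 pushes the false-positive rate noticeably above the nominal 5%, even without deliberate p-hacking.

```python
# Minimal sketch (not from Gelman & Loken, 2014): one data-dependent choice --
# report whichever of three defensible analyses gives the smallest p-value --
# inflates the false-positive rate above 5% even though the data are pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 5000, 40
false_positives = 0

for _ in range(n_sims):
    group = rng.integers(0, 2, size=n)       # condition assignment, no true effect
    y = rng.normal(size=n)                   # outcome, pure noise
    covariate = rng.integers(0, 2, size=n)   # e.g. a demographic split, unrelated to y

    p_values = [
        # Path 1: simple comparison of the two conditions.
        stats.ttest_ind(y[group == 0], y[group == 1]).pvalue,
        # Path 2: same comparison after excluding "outliers" beyond 2 SD.
        stats.ttest_ind(y[(np.abs(y) < 2) & (group == 0)],
                        y[(np.abs(y) < 2) & (group == 1)]).pvalue,
        # Path 3: subgroup analysis within one level of the covariate.
        stats.ttest_ind(y[(covariate == 1) & (group == 0)],
                        y[(covariate == 1) & (group == 1)]).pvalue,
    ]
    if min(p_values) < 0.05:                 # report whichever path "worked"
        false_positives += 1

print(f"False-positive rate across forking paths: {false_positives / n_sims:.1%}")
```

Because each path is individually defensible, no single analysis looks like fishing, yet the overall false-positive rate climbs; this is the sense in which the problem arises even when there is no “fishing expedition”.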
Abstract
Data-dependent analysis— a "garden of forking paths" — explains why many statistically significant comparisons don't hold up.APA Style Reference
Gelman, A., & Loken, E. (2014). The statistical crisis in science: data-dependent analysis--a" garden of forking paths"--explains why many statistically significant comparisons don't hold up. American scientist, 102(6), 460-466. [gated, ungated]
You may also be interested in
- How scientists can stop fooling themselves over statistics (Bishop, 2020b)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
Main Takeaways:
- The h-index is used to evaluate scientists for hiring, promotion, and funding decisions. The metric is driven by the number of publications and citations, i.e. productivity. But papers can also be cited in critiques, because of faults in methodology or failures to replicate. Does this mean citations are a good measure? No! (cf. Goodhart’s Law).
- Academia relies on short-term contracts that exploit researchers’ labour without committing long-term resources, and there is fierce competition for limited funding.
- Publications aim for newsworthy results, leading to false positives and less integration with the literature. The pressure for positive findings can lead to unethical behaviour.
- Questionable research practices are seen as unethical because they distort data to support the researchers’ hypotheses, whether intentionally or unintentionally. There is too much faith that scientists will self-correct.
- Scientists need to be open about their results. Many scientists subscribe to the norm of communality (common ownership of scientific results and methods).
- Some data sharing happens, but most scientists do not share. Scientists are assumed to self-regulate, but this assumption is erroneous.
- Incentives need to change and focus on quality, reproducibility, data sharing, and impact on society.
- Pre-registration can help with publication biases and questionable research practice. A study should be published irrespective of findings.
- A criticism of pre-registration is that it will increase workload, since methodology and data collection must be evaluated for adherence to the pre-registered plan. It is argued, however, that pre-registration would save time by preventing manuscripts from being rejected over methodological issues or null results.
- Pre-registration could backfire if editors require revisions to protocols once the study is complete, when changes may be impossible.
- Pre-registration may address integrity issues before and during data collection.
- Need to change the culture so scientists don’t need to prioritise their own research over scientific inquiry or credibility.
Quote
“The success of science is often attributed to its objectivity: surely science is an impartial, transparent, and dispassionate method for obtaining the truth? In fact, there is growing concern that several aspects of typical scientific practice conflict with these principles and that the integrity of the scientific enterprise has been deeply compromised.” (p.1)
Abstract
It is becoming increasingly clear that science has sailed into troubled waters. Recent revelations about cases of serious research fraud and widespread ‘questionable research practices’ have initiated a period of critical self-reflection in the scientific community and there is growing concern that several common research practices fall far short of the principles of robust scientific inquiry. At a recent symposium, ‘Improving Scientific Practice: Dealing with the Human Factors’ held at The University of Amsterdam, the notion of the objective, infallible, and dispassionate scientist was firmly challenged. The symposium was guided by the acknowledgement that scientists are only human, and thus subject to the desires, needs, biases, and limitations inherent to the human condition. In this article, five post-graduate students from University College London describe the issues addressed at the symposium and evaluate proposed solutions to the scientific integrity crisis.
APA Style Reference
Hardwicke, T. E., et al. (2014). Only Human: Scientists, Systems, and Suspect Statistics.
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al, 2018)
- The Nine Circles of Scientific Hell (Neuroskeptic, 2012)
- Don’t let transparency damage science (Lewandowsky & Bishop, 2016)
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Open Data in Qualitative Research (Chauvette et al., 2019)
- How scientists can stop fooling themselves (Bishop, 2020b)
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
- Stop ignoring misconduct (Kornfeld & Titus, 2016)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System (Mahoney, 1977)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
Main Takeaways:
- Researchers are faced with many different decisions when analysing their data, such as whether to test more participants, how to exclude outliers, whether to use covariates or not. These “researcher degrees of freedom” can be exploited to run different variations of the analysis until a statistically significant result is found.
- The combination of different researcher degrees of freedom makes it increasingly likely to find a false-positive result (i.e., a statistical fluke); a simulation sketch follows this list.
- The authors use two real experiments and computer simulations to show how undisclosed flexibility in data analysis and the selective reporting of results makes it “unacceptably easy” to find significant results, even for hypotheses that are unlikely, or necessarily incorrect.
- It is recommended that authors should: 1) determine data collection rules in advance of running the experiment; 2) collect at least 20 observations per cell; 3) report all variables and experimental conditions (including failed manipulations); and 4) if outliers are removed or covariates are included, authors should show how these actions change the results.
- Reviewers should: 1) ensure that the rules for authors above are followed; 2) be tolerant towards imperfections of the study; 3) ask authors to show that the results don’t depend on specific analytical decisions; and 4) ask for direct replications when the authors’ justification is not compelling enough.
- While other solutions are possible, such as correcting the alpha level, using Bayesian statistics, doing conceptual replications, and posting data and materials online, the authors consider them to be less practical and effective.
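The “unacceptably easy” claim can be illustrated with a short simulation in the spirit of the paper. The sketch below (Python) is my own illustration under assumed parameters, not the authors’ original simulation code: combining just two researcher degrees of freedom (a second correlated dependent variable and optional stopping) pushes the false-positive rate for a truly null effect well above the nominal 5%.

```python
# Sketch: two researcher degrees of freedom combined under a true null effect.
# Parameters (n, correlation, stopping rule) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2011)
n_sims, alpha = 5_000, 0.05
n_initial, n_extra, dv_corr = 20, 10, 0.5
cov = np.array([[1.0, dv_corr], [dv_corr, 1.0]])   # two correlated dependent variables

def any_significant(a, b):
    """True if either DV, or their average, gives p < alpha for the group difference."""
    p_values = (
        stats.ttest_ind(a[:, 0], b[:, 0]).pvalue,
        stats.ttest_ind(a[:, 1], b[:, 1]).pvalue,
        stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue,
    )
    return min(p_values) < alpha

false_positives = 0
for _ in range(n_sims):
    a = rng.multivariate_normal([0, 0], cov, size=n_initial)   # no true group difference
    b = rng.multivariate_normal([0, 0], cov, size=n_initial)
    if any_significant(a, b):
        false_positives += 1
        continue
    # Optional stopping: not significant yet, so collect more data and test again.
    a = np.vstack([a, rng.multivariate_normal([0, 0], cov, size=n_extra)])
    b = np.vstack([b, rng.multivariate_normal([0, 0], cov, size=n_extra)])
    false_positives += any_significant(a, b)

print(f"Nominal alpha: {alpha:.2f}; simulated false-positive rate: {false_positives / n_sims:.3f}")
```

Each individual t-test is nominally valid at the .05 level; it is the undisclosed combination of analysis choices, not any single test, that inflates the error rate.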
Abstract
In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
APA Style Reference
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366. https://doi.org/10.1177/0956797611417632 [ungated]
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System (Mahoney, 1977)
Main Takeaways:
- Cognitive biases (e.g. confirmation bias) are prevalent in scientific publication. A piece of research is threatened by human decision making (i.e. by the journal editor and reviewer).
- The present study investigated whether confirmation bias is a problem for current review practices and how confirmatory bias can be reduced.
- To what extent do editors and referees weigh various components in their evaluations? The ideal publication review system should focus on methodological quality and relevance over data outcomes and interpretation, yet writing style and conclusions can influence the decisions made by editors and peer reviewers.
- Method: five groups of referees read manuscripts that had data that was consistent or inconsistent with the reviewer’s theoretical perspective.
- Method con’t: Reviewers had to evaluate manuscripts based on relevance and methodology.
- Method con’t: two final groups of reviewers received mixed findings, partly supporting and partly contradicting the reviewer’s theoretical perspective.
- Results: There was poor inter-rater reliability. Reviewers were more likely to show confirmation bias for manuscripts in favour of their theoretical perspective and were more severe for manuscripts that contradict their perspective.
- Referees should be asked to evaluate relevance and methodology of an experiment without seeing its results or interpretations (cf. registered reports).
- Referees show little agreement on topics, thus they should be trained to produce better and unprejudiced consensus.
- Peer review is perceived as an objective measure but ironically is prone to human biases.
Abstract
Confirmatory bias is the tendency to emphasize and believe experiences which support one's views and to ignore or discredit those which do not. The effects of this tendency have been repeatedly documented in clinical research. However, its ramifications for the behavior of scientists have yet to be adequately explored. For example, although publication is a critical element in determining the contribution and impact of scientific findings, little research attention has been devoted to the variables operative in journal review policies. In the present study, 75 journal reviewers were asked to referee manuscripts which described identical experimental procedures but which reported positive, negative, mixed, or no results. In addition to showing poor interrater agreement, reviewers were strongly biased against manuscripts which reported results contrary to their theoretical perspective. The implications of these findings for epistemology and the peer review system are briefly addressed.
APA Style Reference
Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive therapy and research, 1(2), 161-175. https://doi.org/10.1007/BF01173636
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial (van Rooyen et al., 1999)
- The Peer Reviewers’ Openness Initiative: incentivising open research practices through peer review (Morey et al., 2016)
Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial (van Rooyen et al., 1999)
Main Takeaways:
- The British Medical Journal wants to improve peer review, but there was no evidence on whether anonymous peer review is better than other forms of peer review.
- Open peer review (i.e. reviewer signing their review) was argued to lead to better reviews, thus increasing credibility and accountability.
- The article investigated whether the quality of reviews in open review was the same as traditional review.
- Method: When both reviews were received, the reviews and the manuscript were passed to a responsible editor who was asked to rate the quality of reviews, using a validated review quality instrument.
- A second editor was randomly chosen from the other 12 editors to measure review quality independently.
- Method con’t: The corresponding author of each manuscript was sent anonymous copies of the two reviews and was told a decision on the manuscript had not been reached.
- Method con’t: The corresponding author was asked to assess the quality of the review, using a review quality instrument.
- Results: Reviewers asked to be identified were 12% more likely to decline to review than reviewers who remained anonymous (35% vs 23%).
- Results: There was no difference between anonymous and identified reviewers in terms of quality of reviews, in recommendation of reviewers, or time taken to review the papers.
- Results: The editors’ quality scores for reviews were higher than those of the authors.
- There was no difference observed in terms of the quality of and time taken to produce the review of the manuscript.
- Authors rated reviews that recommended the publication of the manuscript higher than those reviews that recommended rejection of the manuscript.
- “Editors...did not seem to be influenced by a reviewer's opinion of the merit of a paper when they assessed the quality of the review” (p.26)
Abstract
To examine the effect on peer review of asking reviewers to have their identity revealed to the authors of the paper. Randomised trial. Consecutive eligible papers were sent to two reviewers who were randomised to have their identity revealed to the authors or to remain anonymous. Editors and authors were blind to the intervention. The quality of the reviews was independently rated by two editors and the corresponding author using a validated instrument. Additional outcomes were the time taken to complete the review and the recommendation regarding publication. A questionnaire survey was undertaken of the authors of a cohort of manuscripts submitted for publication to find out their views on open peer review. Two editors' assessments were obtained for 113 out of 125 manuscripts, and the corresponding author's assessment was obtained for 105. Reviewers randomised to be asked to be identified were 12% (95% confidence interval 0.2% to 24%) more likely to decline to review than reviewers randomised to remain anonymous (35% v 23%). There was no significant difference in quality (scored on a scale of 1 to 5) between anonymous reviewers (3.06 (SD 0.72)) and identified reviewers (3.09 (0.68)) (P = 0.68, 95% confidence interval for difference - 0.19 to 0.12), and no significant difference in the recommendation regarding publication or time taken to review the paper. The editors' quality score for reviews (3.05 (SD 0.70)) was significantly higher than that of authors (2.90 (0.87)) (P < 0.005, 95% confidence interval for difference - 0.26 to - 0.03). Most authors were in favour of open peer review. Asking reviewers to consent to being identified to the author had no important effect on the quality of the review, the recommendation regarding publication, or the time taken to review, but it significantly increased the likelihood of reviewers declining to review.
APA Style Reference
Van Rooyen, S., Godlee, F., Evans, S., Black, N., & Smith, R. (1999). Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ, 318(7175), 23-27. https://doi.org/10.1136/bmj.318.7175.23
You may also be interested in
- Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System (Mahoney, 1977)
- The Peer Reviewers’ Openness Initiative: incentivising open research practices through peer review (Morey et al., 2016)
Early co-authorship with top scientist predicts success in academic careers (Li et al., 2019)
Main Takeaways:
- Academic impact is complex and is linked to citation counts. Junior researchers’ output can gain a competitive advantage through visibility.
- The present study asks whether a single event of interaction with ‘top scientists’ may alter junior researchers’ futures in academia.
- Hypothesis: the more co-authorship with ‘top scientists’, the greater the junior researcher’s competitive advantage.
- Method: publication and citation data from selected journals in four disciplines, indexed since 1970, were analysed for specific authors and institutions.
- A paper’s prestige score is the average prestige score of its authors’ institutions plus the average prestige score of the researchers’ papers.
- Results: co-authorship with top scientists provides a competitive advantage compared to peers with comparable early-career profiles but without top co-authors. Junior researchers affiliated with less prestigious institutions appear to benefit the most.
Abstract
We examined the long-term impact of coauthorship with established, highly-cited scientists on the careers of junior researchers in four scientific disciplines. Here, using matched pair analysis, we find that junior researchers who coauthor work with top scientists enjoy a persistent competitive advantage throughout the rest of their careers, compared to peers with similar early career profiles but without top coauthors. Such early coauthorship predicts a higher probability of repeatedly coauthoring work with top-cited scientists, and, ultimately, a higher probability of becoming one. Junior researchers affiliated with less prestigious institutions show the most benefits from coauthorship with a top scientist. As a consequence, we argue that such institutions may hold vast amounts of untapped potential, which may be realised by improving access to top scientists.
APA Style Reference
Li, W., Aste, T., Caccioli, F., & Livan, G. (2019). Early coauthorship with top scientists predicts success in academic careers. Nature communications, 10(1), 1-9. https://doi.org/10.1038/s41467-019-13130-4 [ungated]
You may also be interested in
- Prestige drives epistemic inequality in the diffusion of scientific ideas (Morgan et al., 2018)
- Open Science Isn’t Always Open to All Scientists (Bahlai et al., 2019)
- The Matthew effect in science funding (Bol et al., 2018)
The Peer Reviewers’ Openness Initiative: incentivising open research practices through peer review (Morey et al., 2016)
Main Takeaways:
- Openness and transparency are crucial to science. “Scientific progress is accelerated as more data are available for verification, theory-building and meta-analysis, and experimental materials are available for easier replications and extension studies.” (p.2)
- Openness is an ethical obligation that provides further advantages and is being introduced as a policy change. Open practices are not difficult to learn, but implementing them may delay publication by a few days.
- The relationship between reviewers and authors is important for the scientific process, especially when a missing figure or statistical result requires the author to clarify.
- The author must justify to the reviewer if any of the following is not included: a link to the data, the materials, a document detailing how to interpret any files or code, or an explanation of how to run the software described in the manuscript. If no valid reason is given (e.g. legal, ethical, or practical constraints), reviewers should provide only a short review noting the lack of openness and the failure to justify it.
- The Peer Reviewers’ Openness Initiative will ease the review load, as the reviewer can withhold a comprehensive review if the openness requirement is not met.
- Open research is not a matter of policy, but a matter of scientific value and quality of product.
- Open practices are not yet standardised and are driven by convention. Many authors lack training in open practices, and scientists need to learn new skills and knowledge.
- Senior researchers can help students curate data and research materials. Open data allows the reviewer the option to check the analysis.
- The initiative is targeted at reviewers, not action editors. Researchers who value open research practices should join the Initiative to help promote open research.
Quote
“As a group, reviewers share the power to ensure that articles meet minimum scientific quality standards. What is needed is an affirmation that those minimum scientific quality standards include open practices. By acknowledging that open practices should be considered by reviewers alongside other research norms, reviewers can collectively bring about a radical positive change in the culture of science.” (p.3)
Abstract
Openness is one of the central values of science. Open scientific practices such as sharing data, materials and analysis scripts alongside published articles have many benefits, including easier replication and extension studies, increased availability of data for theory-building and meta-analysis, and increased possibility of review and collaboration even after a paper has been published. Although modern information technology makes sharing easier than ever before, uptake of open practices had been slow. We suggest this might be in part due to a social dilemma arising from misaligned incentives and propose a specific, concrete mechanism—reviewers withholding comprehensive review—to achieve the goal of creating the expectation of open practices as a matter of scientific principle.
APA Style Reference
Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., ... & Vanpaemel, W. (2016). The Peer Reviewers' Openness Initiative: incentivizing open research practices through peer review. Royal Society Open Science, 3(1), 150547. https://doi.org/10.1098/rsos.150547
You may also be interested in
- Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System (Mahoney, 1977)
- Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial (van Rooyen et al., 1999)
A 21 Word Solution (Simmons et al., 2012)◈
Main Takeaways:
- The authors suggest a “21-word solution” to encourage researchers to disclose their analytical choices. If researchers did not engage in questionable research practices (e.g., dropping conditions/ variables, p-hacking, optional data stopping), they should explicitly say so in their paper.
- Researchers should report how they determined their sample size in advance, and be transparent in disclosing that they did not drop any variables or conditions.
- We cannot trust our colleagues to run and report studies properly if some people believe it is okay to drop conditions and variables and others do not believe this is good scientific practice.
- Researchers shouldn’t wait for journals and other colleagues to catch up if they want to achieve transparency in science. Rather, they should take the initiative themselves.
- Scientific journals should ask researchers to disclose data collection and analysis decisions truthfully, but this doesn’t mean that they are responsible for policing researchers.
- The 21-word solution can be easily included in your manuscript, even if this is done in the Supplemental materials to reduce word count. The ‘red tape’ of this transparency statement is arguably negligible compared to the thousands of idiosyncratic rules in APA’s Publication Manual.
- False positives need to be scrutinised, as many p-hacking choices are encouraged.
- Reviewers and readers should ask whether a study really measured only the one or two dependent variables that are reported.
- Disclosure by itself does not prevent p-hacking or reduce the probability of false positives.
- Papers should include these proposed 21 words to improve their credibility.
Quote
“We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.” (p.1)
Abstract
APA Style Reference
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012, October 14). A 21 word solution. Available at SSRN: https://ssrn.com/abstract=2160588 or http://dx.doi.org/10.2139/ssrn.2160588
You may also be interested in
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
Main Takeaways:
- Quality uncertainty threatens people’s confidence in findings and their willingness to build on them. Vazire argues that the lack of transparency in science has led to quality uncertainty and threatens to erode trust in science, and that greater transparency in scientific reporting will reduce quality uncertainty and restore that trust.
- In the scientific market, the source of quality uncertainty is that the authors know much more about what went into the articles than the potential consumers (e.g. the raw data, the original design and analysis plan, the exploratory analyses and final analysis). Put simply, the more information that is hidden, the larger the quality uncertainty.
- As a result of low levels of transparency in scientific publication, journal editors, reviewers and readers cannot differentiate between lemons and high-quality findings. Some, but not all, of the information is presented in the method and results sections. However, vital information is kept private (e.g. the raw data, the original design and analysis plan, the exploratory analyses and final analysis), preventing consumers of the manuscript from being certain about its quality.
- The cost of this lack of transparency is a science built on low-quality findings and shaky foundations, driving out rigorous work. By the time low-quality findings fail to stand up, it is too late: high-quality research has already been driven out.
- The motto of the Royal Society is to “take no one’s word”: we cannot rely on a few experts to evaluate the findings of a specific study and then ask everyone else to trust the authors completely. To reduce quality uncertainty, research must be transparent enough for readers to judge the quality of the manuscript for themselves.
- Increased transparency will give consumers the information needed to detect many errors in an article, while making authors more accountable for their mistakes, thus encouraging greater care in how studies are designed, analysed, and written up. This will not catch researchers who are willing to commit fraud, but it will address unintentional misrepresentations, thus rebuilding trust in science.
- When journals choose to maximise citation impact instead of producing reliable, robust, and reproducible science, the consumer is given a shoddy rather than a reliable product, and journals neglect their duties. To make journals more accountable, we must tie their reputations not to impact factors but to the quality of their articles and their policies on open science and transparency.
Quote
“In any market, consumers must evaluate the quality of products and decide their willingness to pay based on their evaluation. In science, consumers of new scientific findings must likewise evaluate the strength of the findings and decide their willingness to put stock in them. In both kinds of markets, the inability to make informed and accurate evaluations of quality (i.e., quality uncertainty) leads to a lower and lower willingness to put stock in any product – a lack of trust in the market itself. When there are asymmetries in the information that the seller and the buyer have, the buyers cannot be certain about the quality of the products, leading to quality uncertainty.” (p.1).
Abstract
When consumers of science (readers and reviewers) lack relevant details about the study design, data, and analyses, they cannot adequately evaluate the strength of a scientific study. Lack of transparency is common in science, and is encouraged by journals that place more emphasis on the aesthetic appeal of a manuscript than the robustness of its scientific claims. In doing this, journals are implicitly encouraging authors to do whatever it takes to obtain eye-catching results. To achieve this, researchers can use common research practices that beautify results at the expense of the robustness of those results (e.g., p-hacking). The problem is not engaging in these practices, but failing to disclose them. A car whose carburetor is duct-taped to the rest of the car might work perfectly fine, but the buyer has a right to know about the duct-taping. Without high levels of transparency in scientific publications, consumers of scientific manuscripts are in a similar position as buyers of used cars – they cannot reliably tell the difference between lemons and high quality findings. This phenomenon – quality uncertainty – has been shown to erode trust in economic markets, such as the used car market. The same problem threatens to erode trust in science. The solution is to increase transparency and give consumers of scientific research the information they need to accurately evaluate research. Transparency would also encourage researchers to be more careful in how they conduct their studies and write up their results. To make this happen, we must tie journals’ reputations to their practices regarding transparency. Reviewers hold a great deal of power to make this happen, by demanding the transparency needed to rigorously evaluate scientific manuscripts. The public expects transparency from science, and appropriately so – we should be held to a higher standard than used car salespeople.
APA Style Reference
Vazire, S. (2017). Quality Uncertainty Erodes Trust in Science. Collabra: Psychology, 3(1), 1. DOI: http://doi.org/10.1525/collabra.74
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- The Nine Circles of Scientific Hell (Neuroskeptic, 2012)
- Don’t let transparency damage science (Lewandowsky & Bishop, 2016)
- How scientists can stop fooling themselves (Bishop, 2020b)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
Rein in the four horsemen of irreproducibility (Bishop, 2019)
Main Takeaways:
- Publication bias, low statistical power, p-hacking, and HARKing (hypothesising after the results are known) are threats to research reproducibility that make it difficult to find meaningful results.
- Publication bias harms patients. The tendency to not publish negative results misleads readers and biases meta-analyses.
- Low statistical power also misleads readers: when the sample size is small, there is a low probability of detecting an effect even if it exists (a simulation sketch follows this list).
- Time and resources are wasted on such underpowered studies.
- P-hacking occurs when researchers conduct many analyses, but report only those that are significant.
- HARKing is so widespread that researchers may come to accept it as good practice. Authors should be free to do exploratory analyses, but not to use p-values outside of the context in which they were calculated.
- These four problems are older than most of the junior researchers grappling with them, but new developments may finally help bring them under control.
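As a rough illustration of the low-power point above (my own sketch, not from Bishop's article; the assumed true effect is Cohen's d = 0.3), the following simulation estimates how often a two-sample t-test detects a real effect at various sample sizes:

```python
# Sketch: estimated power of a two-sample t-test for an assumed effect of d = 0.3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2019)
true_effect_d, alpha, n_sims = 0.3, 0.05, 5_000

for n_per_group in (10, 20, 50, 100, 200):
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, size=n_per_group)
        treated = rng.normal(true_effect_d, 1.0, size=n_per_group)  # a real effect exists
        hits += stats.ttest_ind(treated, control).pvalue < alpha
    print(f"n = {n_per_group:>3} per group -> estimated power = {hits / n_sims:.2f}")
```

With 20 participants per group the effect is detected only a small fraction of the time, which is the situation in which published significant results are most likely to be misleading.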
Abstract
Dorothy Bishop describes how threats to reproducibility, recognized but unaddressed for decades, might finally be brought under control.
APA Style Reference
Bishop, D. (2019). Rein in the four horsemen of irreproducibility. Nature, 568(7753), 435-436. http://doi.org/10.1038/d41586-019-01307-2
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- The Nine Circles of Scientific Hell (Neuroskeptic, 2012)
- Don’t let transparency damage science (Lewandowsky & Bishop, 2016)
- How scientists can stop fooling themselves (Bishop, 2020b)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
Main Takeaways:
- Students and academics with little knowledge of open science may not easily find and make use of resources.
- Transparency and robustness may not guarantee increased rigour.
- Researchers should plan data collection and analysis, be aware of assumptions of statistical models and understanding of statistical tools.
- Credibility of scientific claims depend on replicability.
- Open access removes barriers to access and distributes research.
- The gold route refers to articles made publicly available by the publisher, while the green route refers to self-archiving, whereby the authors themselves make the work publicly available (e.g. preprints).
- Open access articles are cited 36% to 600% more than non-open-access work, and they receive more coverage and discussion in non-scientific settings.
- Researchers need to consider how they share their data. Is it findable, accessible, interoperable and reusable (FAIR)?
- All steps of data analysis should be recorded in open-source programs (e.g. R or Python) or placed in a reproducible syntax file (a minimal script sketch follows this list).
- Pre-registration is an open science practice that: protects people from biases; encourages transparency about analytic decision-making; supports rigorous scientific research; enables more replicable and reproducible work.
- Open science increases confidence and replicability of scientific results.
- Direct replication duplicates the necessary elements in order to assess whether the original findings are reproducible, whereas conceptual replication changes one component of the original procedure, such as the sample or the measure, to assess whether the original results still hold.
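As a concrete illustration of the reproducible-syntax-file recommendation above, here is a minimal sketch of a single-command analysis script (Python). The file names, column names, and directory layout are hypothetical placeholders, not anything prescribed by the article:

```python
# Sketch of a reproducible analysis script: re-running this one file
# regenerates every reported number from the raw data, and every
# exclusion is documented as code rather than as a manual step.
# All paths and column names are hypothetical placeholders.
import os

import pandas as pd
from scipy import stats

RAW_DATA = "data/raw/experiment1.csv"        # raw data, never edited by hand
OUTPUT = "results/experiment1_summary.txt"   # regenerated on every run

def main():
    df = pd.read_csv(RAW_DATA)

    # Exclusions live here, in version-controlled code.
    df = df[df["attention_check_passed"] == 1]

    control = df.loc[df["condition"] == "control", "score"]
    treatment = df.loc[df["condition"] == "treatment", "score"]
    result = stats.ttest_ind(treatment, control)

    os.makedirs(os.path.dirname(OUTPUT), exist_ok=True)
    with open(OUTPUT, "w") as out:
        out.write(f"n control = {len(control)}, n treatment = {len(treatment)}\n")
        out.write(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}\n")

if __name__ == "__main__":
    main()
```

Sharing a script like this alongside the data (in a FAIR repository) lets any reader re-run the analysis and check that the reported numbers follow from the raw data.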
Quote
“We hope that this paper will provide researchers interested in open science an accessible entry point to the practices most applicable to their needs. For all of the steps presented in this annotated reading list, any time taken by researchers to understand the issues and develop better practices will be rewarded in orders of magnitude. On an individual level, time and effort are ultimately saved, errors are reduced, and one’s own research is improved through a greater adherence to openness and transparency. On a field-wide level, the more researchers invest in adopting these practices, the closer the field will come toward adhering to scientific norms and the values it claims to espouse.” (p.245)
Abstract
The open science movement is rapidly changing the scientific landscape. Because exact definitions are often lacking and reforms are constantly evolving, accessible guides to open science are needed. This paper provides an introduction to open science and related reforms in the form of an annotated reading list of seven peer-reviewed articles, following the format of Etz, Gronau, Dablander, Edelsbrunner, and Baribault (2018). Written for researchers and students – particularly in psychological science – it highlights and introduces seven topics: understanding open science; open access; open data, materials, and code; reproducible analyses; preregistration and registered reports; replication research; and teaching open science. For each topic, we provide a detailed summary of one particularly informative and actionable article and suggest several further resources. Supporting a broader understanding of open science issues, this overview should enable researchers to engage with, improve, and implement current open, transparent, reproducible, replicable, and cumulative scientific practices.
APA Style Reference
Crüwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J. C., ... & Schulte-Mecklenbeck, M. (2019). Seven Easy Steps to Open Science. Zeitschrift für Psychologie. https://doi.org/10.1027/2151-2604/a000387
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- How scientists can stop fooling themselves (Bishop, 2020b)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Many hands make tight work (Silberzahn & Uhlmann, 2015)
Main Takeaways:
- It is often assumed that re-running an analysis produces the same outcome. In an analysis run by a single team, the researchers take on several roles: inventor (creating ideas and hypotheses), analyst (scrutinising the data to support hypotheses), and devil’s advocate (using different approaches to expose weaknesses in the findings).
- A crowdsourcing approach can be a useful addition to research: several teams work with the same dataset, initially keeping their hypotheses and results to themselves.
- All researchers then discuss the results via email and add notes to their individual reports in light of others’ work, expressing doubt or confidence about their approach. Teams present their findings in a draft manuscript, which all participants are invited to comment on and modify.
- Researchers should not take any single analysis too seriously, as different analyses can produce a broad range of effect sizes.
- Crowdsourced analysis will not be the optimal solution for every research problem, e.g. because it is resource intensive.
- Decisions should be made with care regarding which hypotheses to test, how data are collected, and which variables to collect. Researchers will disagree about findings, making it difficult to present a manuscript with a clear conclusion.
- Crowdsourcing research can allow us to evaluate whether analytical approaches and decisions drive findings. This would allow us to discuss the analytical approaches before we commit to a specific strategy.
- Crowdsourcing reduces the incentives for novel and groundbreaking results and can reveal several scientific possibilities.
Quote
“Under the current system, strong storylines win out over messy results. Worse, once a finding has been published in a journal, it becomes difficult to challenge. Ideas become entrenched too quickly, and uprooting them is more disruptive than it ought to be. The crowdsourcing approach gives space to dissenting opinions. Scientists around the world are hungry for more-reliable ways to discover knowledge and eager to forge new kinds of collaborations to do so.” (p.191).
Abstract
Crowdsourcing research can balance discussions, validate findings and better inform policy, say Raphael Silberzahn and Eric L. Uhlmann.
APA Style Reference
Silberzahn, R., & Uhlmann, E. L. (2015). Crowdsourced research: Many hands make tight work. Nature News, 526(7572), 189. https://doi.org/10.1038/526189a
You may also be interested in
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
A user’s guide to inflated and manipulated impact factor (Ioannidis & Thombs, 2019)
Main Takeaways:
- A widely misused metric is the journal impact factor, which is meant to reflect the importance of the publications in a specific journal.
- The promotion and funding of an individual often depends on the impact factor (cf. Goodhart’s Law). Through the Declaration on Research Assessment, over 14,000 researchers agree: the impact factor should not be used as the measure of an individual article’s quality.
- There is a belief that a higher impact factor leads to more and better articles being submitted and published. If this is the case, a journal’s ratings may improve, as its impact factor increases.
- Volume of submissions may increase, as many scientists naively decide where to send their paper based on journal impact factor. Volume may dissociate from quality.
- Inappropriate uses of the impact factor are unlikely to stop (e.g. self-citations and citations to other recent articles added without justification), especially when a large number of papers are cited without being counted in the calculation.
- In addition, certain manuscripts (e.g. review articles and papers with questionable scientific value) will get more citations than other research articles.
- Papers should be submitted to target journals based on the quality, scientific rigour, and the relevance of the journal, not impact factor.
Quote
“Authors should pick target journals based on relevance and scientific rigour and quality, not spurious impact factors. Inspecting inflation measures is more informative for choosing a journal than JIF, because prominent inflation may herald spurious editorial practices and thus poor quality. Authors who submit to journals with high‐impact inflation may become members of bubbles. They even run the risk of having their work published in journals that are eventually formally discredited if Clarivate decides to make a more serious effort to curtail spurious gaming.” (p.5).
Abstract
This is a view on the impact factor by Professor John P. A. Ioannidis and Dr Brett D. Thombs. It discusses how the impact factor is misused by journals and reviewers, and provides solutions to overcome the use of this metric. In addition, journals should be judged not on their impact factor but on their relevance, scientific rigour, and quality.
APA Style Reference
Ioannidis, J. P., & Thombs, B. D. (2019). A user’s guide to inflated and manipulated impact factors. European journal of clinical investigation, 49(9), e13151. https://doi.org/10.1111/eci.13151
You may also be interested in
- Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al, 2018)
- The Matthew effect in science funding (Bol et al., 2018)
Promoting an open research culture (Nosek et al., 2015)
Main Takeaways:
- The incentive system focuses on innovation, as opposed to replication, openness, transparency, and reproducibility.
- There are no universal scientific policies and procedures for aligning individual and communal incentives.
- We should reward researchers for the time and effort spent in open practices.
- Citations should also extend to data, code, and research materials; regular and rigorous citation of these materials should be recognised as an original intellectual contribution.
- Reproducibility increases confidence in the results and allows scholars to learn more about data interpretation.
- The transparency guidelines are used to improve explicitness about the research process, while reducing vague or incomplete reporting of methodology.
- Pre-registration of studies facilitates the discovery of research, allowing the study to be recorded in a public registry.
- Four levels are used to encourage open science policy:
- Level 1 is designed to pose little or no barrier to adopting open science practices (e.g. code sharing), limiting the impact on the efficiency and workflow of the journal.
- Level 2 sets stronger authorial expectations than Level 1 while avoiding added resource costs for the editors or publishers who adopt the standard. At Level 2, journals would require code to be deposited in a trusted repository (e.g. OSF), and reviewers would need to check that the link appears in the manuscript and that the code is accessible in the repository.
- Level 3 is the strongest standard but poses some barriers to implementation. For instance, authors must provide their code for the review process, and editors must be able to reproduce the reported analyses prior to publication.
- These higher level guidelines should reduce the time spent on communication with the authors and reviewers, improve standards of reporting, increase detectability of errors prior to publication and ensure that publication-related data is accessible for a long time.
Quote
“The journal article is central to the research communication process. Guidelines for authors define what aspects of the research process should be made available to the community to evaluate, critique, reuse, and extend. Scientists recognize the value of transparency, openness, and reproducibility. Improvement of journal policies can help those values become more evident in daily practice and ultimately improve the public trust in science, and science itself.” (p.1425).
Abstract
Author guidelines for journals could help to promote transparency, openness, and reproducibility.
APA Style Reference
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., ... & Contestabile, M. (2015). Promoting an open research culture. Science, 348(6242), 1422-1425. http://doi.org/10.1126/science.aab2374
You may also be interested in
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
Promoting Transparency in Social Science Research (Miguel et al., 2014)
Main Takeaways:
- The incentives, norms, institutions, and a dysfunctional reward structure make it difficult to improve research design in the social sciences.
- Since social science journals do not require adherence to reporting standards (e.g. data sharing), researchers feel motivated to analyse and present data in a more publishable way, e.g. by selecting a subset of positive results.
- Such practices result in a distorted body of evidence with too few null results having direct consequences on policies and eventually citizens.
- This article surveys recent progress towards research transparency in the social sciences and provides standards and rules to realign scholarly incentives with good scientific practices based on three pillars: Disclosure, Preregistration, and Open data and materials.
- Disclosure is about the systematic reporting of all measures, manipulations, data exclusions, and sample sizes.
- Preregistration helps to reduce bias and increase credibility by pre-specifying, e.g., statistical models, dependent variables, and covariates.
- Open data and materials allow researchers to test alternative approaches on the data, reproduce results, identify misreported or fraudulent results, and reuse or adapt materials for replication or for their own research.
- Limitation: One might argue that preregistration counteracts exploratory analysis. Counterargument: Preregistration should just free an analysis from being reported as formal hypothesis testing.
- Further work is needed, e.g., it is unclear how to preregister studies based on existing data which is a common approach in the social sciences.
Quote
“Scientific inquiry requires imaginative exploration. Many important findings originated as unexpected discoveries. But findings from such inductive analysis are necessarily more tentative because of the greater flexibility of methods and tests and, hence, the greater opportunity for the outcome to obtain by chance. The purpose of prespecification is not to disparage exploratory analysis but to free it from the tradition of being portrayed as formal hypothesis testing. New practices need to be implemented in a way that does not stifle creativity or create excess burden. Yet we believe that such concerns are outweighed by the benefits that a shift in transparency norms will have for overall scientific progress, the credibility of the social science research enterprise, and the quality of evidence that we as a community provide to policy-makers” (p.31).
Abstract
Social scientists should adopt higher transparency standards to improve the quality and credibility of research.
APA Style Reference
Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K. M., Gerber, A., ... & Laitin, D. (2014). Promoting transparency in social science research. Science, 343(6166), 30-31. http://doi.org/10.1126/science.1245317
You may also be interested in
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Promoting an open research culture (Nosek et al., 2015)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
Rebuilding Ivory Tower: A bottom-up experiment in aligning research with societal needs (Hart & Silka, 2020) ◈
Main Takeaways:
- Scientists are trained to conduct good science, develop interesting research questions, be impartial to data, sceptical about conclusions and open to criticisms from our peers.
- We are taught good science is a reward in itself for improving our world.
- We need strong collaborations with diverse stakeholders in the public and private sectors, non-governmental organisations and civil society in order to identify and solve (sustainability/wicked) problems.
- “So it turned out that social scientists as well as natural scientists had keen interest in a project aimed at bringing together their expertise and forming bonds with individuals and groups outside academia to solve local problems...also to identify best practices for interdisciplinarity and stakeholder engagement”. (pp. 80-81).
- A shared culture, comprising a common set of beliefs and values and supported by organizational strategy and structure, is needed to streamline the commitment to excellence innate to many academics.
- We try to create an atmosphere of learning from successes and failures. There is no sure-fire formula to match research with societal needs.
- Older faculty are retiring and are being replaced by younger researchers who can move the initiative forward thanks to their training as interdisciplinary researchers.
- Universities should use bottom-up (inner interest of academics to improve the world) and top-down (university programs; ideas from senior leaders) strategies to become more useful partners to society.
- Put simply, “Although no single recipe will work in all contexts, it is our hope that the ingredients we’ve identified may prove useful to other universities in their own quests to help solve society’s greatest problems". (p.85).
Quote
“Two fundamental commitments [have emerged]: 1) In addition to the traditional focus on the biophysical components underpinning a problem, a much greater emphasis is needed on the human dimensions, including the complex interactions between society and nature; and 2) productive collaborations must be built between the university and diverse stakeholders to develop a sufficient understanding of sustainability problems and viable strategies for solving them.”
Abstract
Academic scientists can transcend publish-or-perish incentives to help produce real-world solutions. Here’s how one group did it.
APA Style Reference
Hart, D. D., & Silka, L. (2020). Rebuilding the ivory tower: A bottom-up experiment in aligning research with societal needs. Issues in Science and Technology, 36(3), 64-70. https://issues.org/aligning-research-with-societal-needs/ [accessed 14/08/2020]
You may also be interested in
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Open Knowledge Institutions (Montgomery et al. 2018)
- Professors, We Need You! (Kristof, 2014)
Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
Main Takeaways:
- Science is said to be in a crisis due to unreliable findings, poor research quality and integrity, low statistical power, and questionable publication practices caused by the pressure to publish.
- Fanelli questions this “science in crisis” narrative by critically examining the evidence for the existence of these problems.
- Fraud and questionable research practices exist, but they are likely not common enough to seriously distort the scientific literature.
- The pressure to publish has not been conclusively linked to scientific bias or misconduct.
- Low power and replicability may differ between subfields and methodologies, and may be influenced by the magnitude of the true effect size, and the prior probability of the hypothesis being true.
- There is little evidence to suggest that misconduct or questionable research practices have increased in recent years.
- The “science in crisis” narrative is not well supported by the evidence and may be counterproductive, as it encourages values that can be used to discredit science. A narrative of “new opportunities” or “revolution” may be more empowering to scientists.
Quote
“Science always was and always will be a struggle to produce knowledge for the benefit of all of humanity against the cognitive and moral limitations of individual human beings, including the limitations of scientists themselves.” (p.2630)
Abstract
Efforts to improve the reproducibility and integrity of science are typically justified by a narrative of crisis, according to which most published results are unreliable due to growing problems with research and publication practices. This article provides an overview of recent evidence suggesting that this narrative is mistaken, and argues that a narrative of epochal changes and empowerment of scientists would be more accurate, inspiring, and compelling.
APA Style Reference
Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to?. Proceedings of the National Academy of Sciences, 115(11), 2628-2631. https://doi.org/10.1073/pnas.1708272114
You may also be interested in
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (Wagge et al., 2019)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
Main Takeaways:
- Researchers who find an error may feel pressure from senior scientists and their institution not to reveal it.
- Retraction produces fear in a scientist, as it is associated with shame.
- Errors can be reduced with open science practices.
- Raw data can never be made completely open due to confidentiality, but we can remove identifiable information so that other researchers can reproduce what was done.
- Stigma needs to be removed concerning error detection.
- Making an analysis program open does not mean it is error-free. A reproducible result simply indicates that when the same data are analysed, the same result is obtained, even if it is incorrect.
- Researchers whose error is noticed and respond with denial, anger or silence tend to damage their reputation for integrity. Resolving such issues via the journal that published the original article may be a better approach, though this process seldom proceeds smoothly.
- Problems with findings may stem from methodological concerns rather than errors in calculations or scripts, such as conducting a study without a control group, with insufficient power, with unreliable measures, or with a major confound.
- Methodological errors may be due to ignorance rather than bad faith, including honest errors in the data, analysis, or method that compromise the conclusions drawn.
- Replication is important, as confidence in the robustness of a finding cannot depend on a single study. When there is a failure to replicate, we should uncover why this happened (e.g. contextual factors or research expertise).
- We should not say original researchers are incompetent, frauds, etc., but we should also not say that critics had malevolent motives and lack expertise. We need to be impartial.
- We should avoid bias and identify publications that are ignored, as positive findings produce more citations than null findings.
- Investigating misconduct is important but challenging. It is a difficult endeavour and requires evidence that takes time to accumulate.
- Academic institutions take an accusation of misconduct against a staff member seriously but it takes a long time. We should consider whether people could have vested interests against this academic.
- We should not mock or abuse other scientists who make honest errors, as this would encourage poor research practices and people may be less likely to be open about these errors.
Quote
“Criticism is the bedrock of the scientific method. It should not be personal: If one has to point to problems with someone’s data, methods, or conclusions, this should be done without implying that the person is stupid or dishonest. This is important, because the alternative is that many people will avoid engaging in robust debate because of fears of interpersonal conflict—a recipe for scientific stasis. If wrong ideas or results are not challenged, we let down future generations who will try to build on a research base that is not a solid foundation. Worse still, when the research findings have practical applications in clinical or policy areas, we may allow wrongheaded interventions or policies to damage the well-being of individuals or society. As open science becomes increasingly the norm, we will find that everyone is fallible. The reputations of scientists will depend not on whether there are flaws in their research, but on how they respond when those flaws are noted.” (p.6)
Abstract
This is a view from Professor Dorothy Bishop on the fallibility of science and on responding to one's own errors and to errors made by others. It discusses how open science should be the norm even though being open and honest about one's own errors is not. It reminds us that we should not mock or be hurtful to others over honest mistakes, and that misconduct is a serious issue requiring support for both the researcher being accused and the individual making the accusation.
APA Style Reference
Bishop, D. V. M. (2018). Fallibility in science: responding to errors in the work of oneself and others. Advances in Methods and Practices in Psychological Science, 1(3), 432-438. https://doi.org/10.1177/2515245918776632
You may also be interested in
- Don’t let transparency damage science (Lewandowsky & Bishop, 2016)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
- Stop ignoring misconduct (Kornfeld & Titus, 2016)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
- Scientists’ Reputations are Based on Getting it Right, not being Right (Ebersole et al., 2016)
- Check for publication integrity before misconduct (Grey et al., 2020)
Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
Main Takeaways:
- Publication bias, small sample sizes, and p-hacking exaggerate effect sizes in the literature, contributing to the replication crisis.
- Lindsay proposes seven steps to improve transparency and replicability:
- 1. Tell the truth. Be honest when advocating for your research: if the idea was inspired by the data, say so. Report effect sizes with 95% confidence intervals around them.
- 2. Assess your understanding of inferential statistical tools. Researchers need greater statistical sophistication to test hypotheses about populations based on their samples; reward the quality and accuracy of methods, not the quantity and flashiness of results.
- 3. Consider standardizing aspects of your approach to conducting hypothesis-testing research. Create a detailed research plan specifying a priori hypotheses, sample size planning, data exclusion rules, analyses, transformations, covariates, etc. Be transparent and register the research plan (cf. Pre-registration and Registered Reports); a power-analysis sketch for sample size planning appears after this list.
- 4. Consider developing a lab manual. Include standardised procedures in data exclusion, data transformations, data-cleaning, authorship, file naming conventions, etc.
- 5. Make your materials, data, and analysis scripts transparent. They should be Findable, Accessible, Interoperable and Reusable (FAIR).
- 6. Address constraints on the generality of your findings. Under what conditions should your results replicate, and under what conditions should they not? A failure to replicate could be due to procedural differences, even if the original work did not indicate that such differences modulate the effect.
- 7. Consider collaborative approaches to conducting research.
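Step 3's sample-size planning can be made concrete with an a priori power analysis. The sketch below is illustrative rather than taken from Lindsay (2020): it assumes a two-group comparison, a smallest effect size of interest of d = 0.5, alpha = .05, and a target power of .80, and uses the statsmodels package as one of several tools that can solve for the required sample size.

```python
# A minimal sketch of a priori sample-size planning for a two-group design.
# The effect size, alpha, and power values are illustrative assumptions,
# not recommendations from Lindsay (2020).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # smallest effect of interest (Cohen's d)
                                    alpha=0.05,        # two-sided significance level
                                    power=0.80,        # target statistical power
                                    alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64 per group
```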
Quote
“The aim of the methodological reform movement is not to restrict psychological research to procedures that meet some fixed criterion of replicability. Replicability is not in itself the goal of science. Rather, the central aim of methodological reform is to make research reports more transparent, so that readers can gain an accurate understanding of how the data were obtained and analyzed and can therefore better gauge how much confidence to place in the findings. A second aim is to discourage practices that contribute to effect-size exaggeration and false discoveries of non-existent phenomena. As per Vazire’s analogy, the call is not for car dealerships to sell nothing but new Ferraris, but rather for dealers to be forthcoming about the weaknesses of what they have on the lot. The grand aim of science is to develop better, more accurate, and more useful understandings of reality. Methodological reform cannot in and of itself deliver on that goal, but it can help.” (p.19).
Abstract
Psychological scientists strive to advance understanding of how and why we animals do and think and feel as we do. This is difficult, in part because flukes of chance and measurement error obscure researchers’ perceptions. Many psychologists use inferential statistical tests to peer through the murk of chance and discern relationships between variables. Those tests are powerful tools, but they must be wielded with skill. Moreover, research reports must convey to readers a detailed and accurate understanding of how the data were obtained and analyzed. Research psychologists often fall short in those regards. This paper attempts to motivate and explain ways to enhance the transparency and replicability of psychological science. Specifically, I speak to how publication bias and p hacking contribute to effect-size exaggeration in the published literature, and how effect-size exaggeration contributes, in turn, to replication failures. Then I present seven steps toward addressing these problems: Telling the truth; upgrading statistical knowledge; standardizing aspects of research practices; documenting lab procedures in a lab manual; making materials, data, and analysis scripts transparent; addressing constraints on generality; and collaborating.
APA Style Reference
Lindsay, D. S. (2020). Seven steps toward transparency and replicability in psychological science. Canadian Psychology/Psychologie canadienne. Advance online publication. https://doi.org/10.1037/cap0000222 [ungated]
You may also be interested in
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- A 21 Word Solution (Simmons et al., 2012)◈
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Many hands make tight work (Silberzahn & Uhlmann, 2015)
- Promoting an open research culture (Nosek et al., 2015)
- Promoting Transparency in Social Science Research (Miguel et al., 2014)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
- Constraints on Generality (COG): A Proposed Addition to All Empirical Papers (Simons et al., 2017)
- Most people are not WEIRD (Henrich et al., 2010)
- How scientists can stop fooling themselves (Bishop, 2020b)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Trust Your Science? Open Your Data and Code (Stodden, 2011)◈
- Publishing Research With Undergraduate Students via Replication Work: The Collaborative Replications and Education Project (CREP; Wagge et al., 2019)
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- A consensus-based transparency checklist (Aczel et al., 2020)
- Tell it like it is (Anon, 2020)
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Scientific inbreeding and same-team replication: Type D personality as an example (Ioannidis, 2012)
Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them (Flake & Fried, 2019)◈
Main Takeaways:
- Questionable measurement practices (QMPs) undermine internal and external validity, both of the statistical conclusions and construct of interest.
- The authors focus on lack of transparency in reporting how measures are developed, used, and adapted, as well as reporting key psychometric information.
- Often, information about the decisions researchers make is lacking, particularly when measures are created/adapted on the fly (e.g. changing response type, changing response style or options, changing item wording or content).
- Increasing the transparency of measurement development and use facilitates thorough and accurate evaluation of the validity of results. The authors offer a set of questions that researchers and consumers of research can use to identify and avoid QMPs.
Quote
“The increased awareness and emphasis on QRPs, such as p-hacking, have been an important contribution to improving psychological science. We echo those concerns, but also see a grave need for broadening our scrutiny of current practices to include QMPs (Fried & Flake, 2018). Recalling our example of depression at the outset, even if we increase the sample size of our depression trials, adequately power our studies, pre-register our analytic strategies, and stop p-hacking — we can still be left wondering if we were ever measuring depression at all.” (p.22)
Abstract
In this paper, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.
APA Style Reference
Flake, J. K., & Fried, E. I. (2019). Measurement schmeasurement: Questionable measurement practices and how to avoid them. https://psyarxiv.com/hs7wm/
You may also be interested in
A consensus-based transparency checklist (Aczel et al., 2020)
Main Takeaways:
- This manuscript provides a checklist of research transparency practices researchers can use to evaluate the transparency in their study, or to help improve transparency at each stage of the research process.
- Transparency is required to evaluate and reproduce findings, and also for research synthesis and meta analysis from the raw data.
- There is a lack of transparency in the literature, but we should not assume an intention to be deceptive or misleading. Rather, human reasoning is prone to biases (e.g. confirmation bias and motivated reasoning) and few journals ask about statistical and methodological practices and transparency.
- Journals can support open practices by offering badges, adopting the Transparency and Openness Promotion (TOP) guidelines, and promoting the availability of all research items, including data, materials, and code.
- The consensus-based transparency checklist can be submitted with the manuscript to provide critical information about the process to evaluate the robustness of a finding.
- The checklist can be modified by deleting, adding, or rewording items; items were retained when they had a high level of acceptability and consensus, with no strong counterargument against any single item.
- Researchers can explain their choices at the end of each section of the 36-item checklist. A shortened 12-item version is also available to reduce demands on researchers’ time and facilitate broader adoption of transparent reporting.
Abstract
We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.
APA Style Reference
Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., ... & Ioannidis, J. P. (2020). A consensus-based transparency checklist. Nature human behaviour, 4(1), 4-6. https://doi.org/10.1038/s41562-019-0772-6
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Tell it like it is (Anon, 2020)
- Is pre-registration worthwhile? (Szollosi et al., 2020)
Tell it like it is (Anon, 2020)
Main Takeaways:
- A manuscript provides an account of how a research question(s) is/are addressed, reports findings, and explains how findings support or contradict hypotheses.
- Current research culture is defined by a pressure to present research projects as conclusive narratives that leave no room for ambiguity.
- The pressure to produce clean narratives is a threat to validity and runs counter to the reality of what science looks like.
- Clean narratives often report only outcomes to confirm original predictions or exclude research findings that provide messy results.
- These questionable research practices create a distorted picture of research that prevents cumulative knowledge.
- Pre-registration has little value if not heeded or transparently reported.
- It sometimes becomes evident during peer review that a pre-registered analysis is inappropriate or suboptimal. Authors should indicate deviations from their original plan and explain why they made them.
- To ensure transparency, unless a preregistered analysis plan is unquestionably flawed, authors should also report the results of their preregistered analyses.
- In multi-study research papers authors should report all work they executed, irrespective of outcomes.
- All research papers must include a limitation section that explains study shortcomings and explicitly acknowledges alternative interpretations of the findings.
Quote
“No research project is perfect; there are always limitations that also need to be transparently reported. In 2019, we made it a requirement that all our research papers include a limitations section, in which authors explain methodological and other shortcomings and explicitly acknowledge alternative interpretations of their findings… Science is messy, and the results of research rarely conform fully to plan or expectation. ‘Clean’ narratives are an artefact of inappropriate pressures and the culture they have generated. We strongly support authors in their efforts to be transparent about what they did and what they found, and we commit to publishing work that is robust, transparent and appropriately presented, even if it does not yield ‘clean’ narratives” p.1
Abstract
Every research paper tells a story, but the pressure to provide ‘clean’ narratives is harmful for the scientific endeavour.
APA Style Reference
Anon (2020). Tell it like it is. Nature Human Behaviour, 4, 1. https://doi.org/10.1038/s41562-020-0818-9
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- A consensus-based transparency checklist (Aczel et al., 2020)
- Is pre-registration worthwhile? (Szollosi et al., 2020)
Is pre-registration worthwhile? (Szollosi et al., 2020)
Main Takeaways:
- Pre-registration should remain an option for improving research: it aims to solve statistical problems and pushes people to think more deeply about theories, methods, and analyses. But requiring, rewarding, or promoting it is not worthwhile, and mandating pre-registration could harm progress in the field.
- Scientific inference is the process of developing better theories. Statistical models are simplified mathematical abstractions of scientific problems, simplifications that aid scientific inference by allowing abstraction.
- The diagnosticity of statistical tests depends on how well statistical models map onto theories, and improving statistical techniques does little to improve theories when that mapping is weak.
- Models are useful to the extent that the theory is accurately matched to the model. Many statistical models used in psychology (e.g. the general linear model) are poor approximations of the theory.
- Bad theories can be pre-registered with predictions barely better than randomly picking an outcome. Pre-registration does not improve theories but should allow researchers to think more deeply on how to improve theories through better planning, more precise operationalisation of constructs, and clear motivation for statistical planning.
- We should improve theories when encountering difficulties with pre-registration or when pre-registered predictions are wrong. There is no problem with post-hoc scientific inference when the theories are strong.
- Any improvement depends on a good understanding of how to improve a theory, and pre-registration provides no understanding. Pre-registration encourages thinking, but it is unclear whether the thinking is better or worse.
- Poor operationalisation, imprecise measurement, weak connection between theory and statistical method should take precedence over problems of statistical inference.
Abstract
Proponents of preregistration argue that, among other benefits, it improves the diagnosticity of statistical tests. In the strong version of this argument, preregistration does this by solving statistical problems, such as family-wise error rates. In the weak version, it nudges people to think more deeply about their theories, methods, and analyses. We argue against both: the diagnosticity of statistical tests depends entirely on how well statistical models map onto underlying theories, and so improving statistical techniques does little to improve theories when the mapping is weak. There is also little reason to expect that preregistration will spontaneously help researchers to develop better theories (and, hence, better methods and analyses).
APA Style Reference
Szollosi, A., Kellen, D., Navarro, D. J., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C. (2020). Is preregistration worthwhile? Trends in Cognitive Sciences, 24(2), 94. https://doi.org/10.1016/j.tics.2019.11.009
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- A consensus-based transparency checklist (Aczel et al., 2020)
- Tell it like it is (Anon, 2020)
- Arrested theory development: The misguided distinction between exploratory and confirmatory research (Szollosi & Donkin, 2019)
- From pre-registration to publication: a non-technical primer for conducting meta-analysis to synthesize correlation data (Quintana, 2015)
- Pre-registration is Hard, And Worthwhile (Nosek et al., 2019)
- Easy preregistration will benefit any research (Mellor & Nosek, 2018)
- Preregistration of Modeling Exercises May Not Be Useful (MacEachern & Van Zandt, 2019)
Arrested theory development: The misguided distinction between exploratory and confirmatory research (Szollosi & Donkin, 2019)◈
Main Takeaways:
- The article describes the current philosophy of science and explains how theory development and theory assessment should work under this framework. It then explains why proposed methodological solutions to the “replication crisis” (e.g. direct replications and pre-registration) are unlikely to eliminate the factors that interrupt theory development, and concludes with how we can move towards developing good theories and explanations.
- The aim of science is to develop good explanations. To bring about good explanations is to detect and correct flaws in our existing theories.
- A theory can be criticised by argument, meaning the rejection of explanations that are bad according to the criteria of a good theory: “(1) explain what they are supposed to explain, (2) consistent with other good theories, and (3) cannot easily be adapted to explain anything” (p.4). This takes the form of an argument that a theory cannot account for some existing observation. Also, the theory can be criticised based on how easily it can be adapted to explain several unobserved data patterns. The theory can also be criticised by experimental testing, making a theory problematic by increasing the set of observations that a theory is meant to explain but is unable to do so.
- For science to progress, accountability in theory change is important: each adaptation of a theory should make it more inflexible, thereby increasing the potential for it to be made problematic. In turn, this makes the current theory problematic, thus requiring new theories.
- The distinction between exploratory and confirmatory research is meaningless: what matters is not when theories are changed but how easy it was, or would have been, to make those changes. Focusing on the exploratory-confirmatory distinction treats the inflexibility of a theory's predictions as important while ignoring the flexibility of the theory itself. Hypotheses only matter for experimental studies when a theory is invariant; “experimental testing of a flexible theory will not move scientific progress, even if the predictions of the theory were pre-registered or directly replicated.” (p.10).
Quote
“The key property of a good explanation is that it is hard to vary (Deutsch, 2011). More specifically, a theory can be regarded as good if it satisfies the following criteria, proposed by Deutsch (2016): good theories (1) explain what they are supposed to explain, (2) are consistent with other good theories, and (3) cannot easily be adapted to explain anything. These criteria aim to ensure that a theory is constrained by all of our existing knowledge (existing observations and other good theories), without the benefit of flexibility to tailor the explanation to any possible pattern of observation. In other words, the conjectures that comprise a theory must be inflexible while still allowing the theory to account for its explicanda. This property of good theories constrains the way in which that theory can be changed. A good theory will resist most changes, because the explanation for any change must be consistent with the retained inflexible conjectures of that theory without making the theory inconsistent with existing observations.” (p.4).
Abstract
Science progresses by finding and correcting problems in theories. Good theories are those that help facilitate this process by being hard-to-vary: they explain what they are supposed to explain, they are consistent with other good theories, and they are not easily adaptable to explain anything. Here we argue that, rather than a lack of distinction between exploratory and confirmatory research, an abundance of flexible theories is a better explanation for current replicability problems of psychology. We also explain why popular methods-oriented solutions fail to address the real problem of flexibility. Instead, we propose that a greater emphasis on theory criticism by argument would improve replicability.
APA Style Reference
Szollosi, A., & Donkin, C. (2019). Arrested theory development: The misguided distinction between exploratory and confirmatory research. https://doi.org/10.31234/osf.io/suzej
You may also be interested in
From pre-registration to publication: a non-technical primer for conducting meta-analysis to synthesize correlation data (Quintana, 2015)
Main Takeaways:
- This review will discuss how to conduct a meta-analysis following PRISMA guidelines.
- Pre-register the meta-analysis protocol, as it allows the researchers to formulate the study rationale for a specific research question. In addition, pre-registration avoids bias by providing evidence of a priori analysis intentions, thus in turn, reducing p-hacking.
- Although few journals currently require registration of meta-analyses, pre-registration is important for submission. Because meta-analyses are often used to guide treatment practice and health policy, their pre-registration is arguably even more important than the pre-registration of clinical trials.
- Although most journals do not explicitly state that pre-registration is a requirement, submission of a PRISMA checklist, which includes a protocol and study registration item, is required.
- Although many databases are available, it is the researcher's responsibility to choose the sources most suitable for their research area. Many scientists run the same search terms in two or more databases to cover more sources. Researchers can also search the reference lists of eligible studies for other eligible studies (i.e. snowballing).
- It is important to note the number of studies returned by the specified search terms, how many of these studies were discarded, and the reasons for discarding them. The search terms and strategy should be specific enough for a reader to reproduce the search, including the date range of studies and the date the search was conducted.
- Traditionally it has been difficult to access the gray literature (i.e. research that has not been formally published), but it is becoming more accessible as libraries post dissertations in their online repositories. Regardless of whether gray-literature studies are included, an explicit and detailed search strategy needs to be reported in the study protocol and method section.
- Two effect models are generally used in a meta-analysis: fixed effect and random effects. The choice between them centres on "how much of the variation of studies can be attributed to variation in the true effect sizes" (p. 5), i.e. on assumptions of study homogeneity: observed variation arises from random error and from true between-study heterogeneity. A minimal pooling sketch appears after this list.
- Forest plots visualise the effect sizes and confidence intervals of the included studies, together with the summary effect size. A funnel plot is a visual tool for investigating potential publication bias (i.e. significant findings being published while non-significant results are not) in meta-analyses. Although funnel plots offer a useful visualisation of potential publication bias, asymmetry may also reflect other sources of bias such as study quality, location bias, and study size.
- However, visual inspection of funnel plots is a subjective assessment of potential publication bias. Two approaches provide more objective measures of potential bias: the trim-and-fill method and analyses of moderating variables.
- The final step of the meta-analysis is data interpretation and write-up. The PRISMA guidelines provide a checklist of all the items that should be included when reporting a meta-analysis.
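To make the fixed- versus random-effects distinction concrete, the sketch below pools a set of made-up correlations using the Fisher-z transformation and the DerSimonian-Laird random-effects estimator, one common approach among several; in practice dedicated tools such as the metafor package in R are typically used, and the numbers here are purely illustrative, not data from Quintana (2015).

```python
# A minimal sketch of a random-effects meta-analysis of correlations
# (Fisher-z transformation + DerSimonian-Laird estimator of tau^2).
# The correlations and sample sizes are hypothetical.
import numpy as np

r = np.array([0.21, 0.35, 0.10, 0.42])    # hypothetical study correlations
n = np.array([60, 120, 45, 200])          # hypothetical study sample sizes

z = np.arctanh(r)                         # Fisher z-transform of r
v = 1.0 / (n - 3)                         # sampling variance of z
w = 1.0 / v                               # inverse-variance (fixed-effect) weights

z_fixed = np.sum(w * z) / np.sum(w)       # fixed-effect pooled estimate
Q = np.sum(w * (z - z_fixed) ** 2)        # Cochran's Q heterogeneity statistic
df = len(r) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)             # DerSimonian-Laird between-study variance

w_re = 1.0 / (v + tau2)                   # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)    # random-effects pooled estimate (z scale)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.tanh([z_re - 1.96 * se_re, z_re + 1.96 * se_re])  # back-transform CI to r

print(f"Pooled r = {np.tanh(z_re):.3f}, 95% CI [{lo:.3f}, {hi:.3f}], "
      f"Q = {Q:.2f} (df = {df}), tau^2 = {tau2:.4f}")
```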
Quote
“Up to 63% of psychological scientists anonymously admit to questionable research practices (John et al., 2012). These practices include removing data points and analysing data, failing to report all measures analyzed, and HARKing. Such behavior has likely contributed to the low rates of successful replication observed in psychology (Open Science Collaboration, 2015). The pre-registration of clinical trial protocols has become standard. In contrast, less than 10% of meta-analyses refer to a study protocol, let alone make the protocol publicly available (Moher et al., 2007). Thus, meta-analysis pre-registration would markedly improve the transparency of meta-analyses and the confidence of reported findings.” (p.8)
Abstract
Starting from the view that progress in science consists of the improvement of our theories, in the current paper we ask two questions: what makes a theory good, and how much do the current method-oriented solutions to the replication crisis contribute to the development of good theories? Based on contemporary philosophy of science, we argue that good theories are hard-to-vary: they (1) explain what they are supposed to explain, (2) are consistent with other good theories, and (3) cannot easily be adapted to explain anything. Theories can be improved by identifying problems in them either by argument or by experimental test, and then correcting these problems by changing the theory. Importantly, such changes and the resultant theory should only be assessed based on whether they are hard-to-vary. An assessment of the current state of the behavioral sciences reveals that theory development is arrested by the lack of consideration for how easy it is to change theories to account for unexpected observations. Further, most of the current method-oriented solutions are unlikely to contribute much to the development of good theories, because they do not work towards eliminating this problem. Instead, they reward only temporary inflexibility in theories, and promote the assessment of theory change based on whether the theory was changed before (confirmatory) or after (exploratory) an experimental test, but not whether that change yields a hard-to-vary theory. Finally, we argue that these methodological solutions would become irrelevant if we turned our focus to the explicit aim of developing theories that are hard-to-vary.
APA Style Reference
Quintana, D. S. (2015). From pre-registration to publication: a non-technical primer for conducting a meta-analysis to synthesize correlational data. Frontiers in psychology, 6, 1549. https://doi.org/10.3389/fpsyg.2015.01549
You may also be interested in
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- Pre-registration is Hard, And Worthwhile (Nosek et al., 2019)
- Easy preregistration will benefit any research (Mellor & Nosek, 2018)
- Preregistration of Modeling Exercises May Not Be Useful (MacEachern & Van Zandt, 2019)
- On the reproducibility of meta-analyses: six practical recommendations (Lakens et al., 2014)
Pre-registration is Hard, And Worthwhile (Nosek et al., 2019)
Main Takeaways:
- Pre-registration allows us to distinguish clearly between exploratory and confirmatory analyses.
- Pre-registration makes uncertainty transparent: it records how many statistical tests were planned and conducted, so the familywise error rate can be corrected (see the sketch after this list).
- Pre-registration reduces influence of publication bias and pre-registration is a skill that needs experience to be improved.
- Pre-registration promotes intellectual humility and better calibration of scientific claims.
- It allows us to provide information on how methodology is implemented, how hypotheses are tested, the exclusion rules, how variables are combined and what to use concerning the statistical model, covariates and characteristics.
- Pre-registration converts general sense into precise and explicit plans that predict what has not yet occurred and decide what will be done.
- It requires deciding in advance when data collection will stop, what steps are needed to assess the questions of interest, and what the outcomes will be.
- Having a plan is better than having no plan, and sharing plans in advance is better than not sharing them.
- With practice, planning will improve and the benefits will increase, both for oneself and for consumers of the research.
- Deviations make it harder to interpret with confidence how what occurred relates to what was planned.
- Transparency is important and all deviations should be reported; this is difficult because of narrative coherence, reviewer expectations, and word limits.
- To maximise the credibility of reported findings, where possible update the pre-registration to record deviations before observing the data, and mention all planned analyses, explaining why any planned analysis was not reported.
- Use supplements to share information in full, not to hide inconvenient information or analyses.
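Because a preregistration fixes the number of planned tests, the familywise error rate mentioned above can be controlled transparently. A minimal sketch, assuming four hypothetical p-values from preregistered tests and using the Holm correction (one of several valid procedures) via statsmodels:

```python
# A minimal sketch of familywise error control over a preregistered set of tests.
# The p-values are hypothetical placeholders.
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.049, 0.003, 0.21]    # p-values from the preregistered analyses
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method='holm')

for p, p_adj, rej in zip(p_values, p_adjusted, reject):
    print(f"p = {p:.3f} -> Holm-adjusted p = {p_adj:.3f}, reject H0: {rej}")
```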
Abstract
Preregistration clarifies the distinction between planned and unplanned research by reducing unnoticed flexibility. This improves credibility of findings and calibration of uncertainty. However, making decisions before conducting analyses requires practice. During report writing, respecting both what was planned and what actually happened requires good judgment and humility in making claims.
APA Style Reference
Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., ... & Vazire, S. (2019). Preregistration is hard, and worthwhile. Trends in Cognitive Sciences, 23(10), 815-818. https://doi.org/10.1016/j.tics.2019.07.009 [ungated]
You may also be interested in
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- From pre-registration to publication: a non-technical primer for conducting meta-analysis to synthesize correlation data (Quintana, 2015)
- Easy preregistration will benefit any research (Mellor & Nosek, 2018)
- Preregistration of Modeling Exercises May Not Be Useful (MacEachern & Van Zandt, 2019)
Preregistration of Modeling Exercises May Not Be Useful (MacEachern & Van Zandt, 2019)
Main Takeaways:
- This commentary focuses on modelling and data analysis, and on how each round of analysis builds a richer understanding of the data and of the processes that give rise to them.
- Powerful software and improved graphical capabilities allow us to explore many more features of the data.
- The ease with which data can be transformed and cleaned, and with which a model can be fit, may lead to overfitting.
- Model development is intrinsically exploratory and creative.
- The present article disagrees with pre- and post-registration of models. In highly exploratory settings, it is especially difficult to pre-register a model and analysis.
- Modelling depends on the modeller’s perspective and data collected. Each author performs exploratory analysis and may settle on the same transformation for the response variable.
- When individual models are combined via Bayesian model averaging, the overall model provides a better description of the entire dataset than any single model on its own (see the sketch after this list).
- Reality is too complicated and covariates are sparse enough that it would be a challenge to identify the right model. Models are tools. Different models are used differently for distinct ends.
- Model construction and development depend on analysing and re-analysing a dataset to determine which of its properties are crucial to understand a phenomenon and/or make predictions.
- Confirmatory modelling implies that there is a truth to be discovered among competing models, but in practice there tends to be model favouritism, determined by which models the researcher has invested time in developing, how the modeller views the world, ease of implementation, and so on.
- No single model is true in the strictest sense: some aspects of the data will be captured, while others will not.
- Bayesian methods need to be used, as datasets grow. If preregistration of analyses is required, Bayesian analysts may need to pay particular attention to the impact of the prior distribution on features of the analysis such as the Bayes factor, and the analyst must adopt techniques that can automatically provide robustness to the analysis.
- Underfitting of the data is as problematic as overfitting. Pre-registration of model development may lessen the engagement of analysts with the data, contributing to less creative and fewer exploratory analyses.
- Psychology departments should devote more resources to quantitative training, including explicit content on under- and over-modelling, and should partner with statistics departments to improve modelling skills.
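The Bayesian model averaging point above can be illustrated with the common BIC approximation to posterior model probabilities. The sketch below is not how MacEachern and Van Zandt implement averaging; it simply assumes three hypothetical candidate models with equal prior probabilities and made-up BIC values and predictions.

```python
# A minimal sketch of model averaging via BIC weights (an approximation to
# posterior model probabilities under equal prior model probabilities).
# BIC values and per-model predictions are hypothetical.
import numpy as np

bic = np.array([1012.4, 1009.8, 1015.1])    # BICs of three candidate models
delta = bic - bic.min()                     # BIC differences from the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                    # approximate P(model | data)

predictions = np.array([0.62, 0.55, 0.71])  # each model's prediction of some quantity
averaged = float(np.sum(weights * predictions))  # model-averaged prediction

print("Model weights:", np.round(weights, 3))
print(f"Model-averaged prediction: {averaged:.3f}")
```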
Abstract
This is a commentary on Lee et al.’s (2019) article encouraging preregistration of model development, fitting, and evaluation. While we are in general agreement with Lee et al.’s characterization of the modeling process, we disagree on whether preregistration of this process will move the scientific enterprise forward. We emphasize the subjective and exploratory nature of model development, and point out that “under-modeling” of data (relying on black-box approaches applied to data without data exploration) is as big a problem as “over-modeling” (fitting noise, resulting in models that generalize poorly). We also note the potential long-run negative impact of preregistration on future generations of cognitive scientists. It is our opinion that preregistration of model development will lead to less, and to less creative, exploratory analysis (i.e., to more under-modeling), and that Lee et al.’s primary goals can be achieved by requiring publication of raw data and code. We conclude our commentary with suggestions on how to move forward.
APA Style Reference
MacEachern, S. N., & Van Zandt, T. (2019). Preregistration of modeling exercises may not be useful. Computational Brain & Behavior, 2(3-4), 179-182. https://doi.org/10.1007/s42113-019-00038-x
You may also be interested in
- Is pre-registration worthwhile? (Szollosi et al., 2020)
- From pre-registration to publication: a non-technical primer for conducting meta-analysis to synthesize correlation data (Quintana, 2015)
- Easy preregistration will benefit any research (Mellor & Nosek, 2018)
- Pre-registration is Hard, And Worthwhile (Nosek et al., 2019)
Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al, 2018)
Main Takeaways:
- Academic work is usually quantified by the quantity of publications. However, this is not a reliable measure.
- An alternative measure is the journal impact factor: the average number of citations received by a journal's research articles published over the preceding two years. This is an imperfect measure that does not capture the ethos of an academic institution.
- Impact factor provides information about citation influence for a few papers but is less informative about an individual publication and the authors involved in the publication (cf. Goodhart’s Law - a valid measurement becomes useless when it becomes an optimisation target).
- Promotion decisions reward the quantity of publications, which encourages questionable research practices, whereas reproducible research does not receive comparable support.
- The incentive structure in academia is problematic, as the high impact factor is taken to be similar to high societal impact; this is not the case!
- A high impact factor leads to more funding, more citations, and then further funding (cf. the Matthew Effect), whereas the opposite is observed for papers in low-impact-factor journals, even though papers with high societal impact often seem to appear in such journals.
- We need to provide a more inclusive evaluation scheme that allows researchers and research to focus more on open science practices.
- We need to consider societal and broader impact for promotions.
Abstract
The negative consequences of relying too heavily on metrics to assess research quality are well known, potentially fostering practices harmful to scientific research such as p-hacking, salami science, or selective reporting. The "flourish or perish" culture defined by these metrics in turn drives the system of career advancement in academia, a system that empirical evidence has shown to be problematic and which fails to adequately take societal and broader impact into account. To address this systemic problem, the authors outline six principles for assessing scientists for hiring, promotion, and tenure.
APA Style Reference
Naudet, F., Ioannidis, J., Miedema, F., Cristea, I. A., Goodman, S. N., & Moher, D. (2018). Six principles for assessing scientists for hiring, promotion, and tenure. Impact of Social Sciences Blog. http://eprints.lse.ac.uk/90753/
You may also be interested in
- The Nine Circles of Scientific Hell (Neuroskeptic, 2012)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A user’s guide to inflated and manipulated impact factor (Ioannidis & Thombs, 2019)
- Publication metrics and success on the academic job market (Van Dijk et al., 2014)
- Rewarding Research Transparency (Gernsbacher, 2018)
- Faculty promotion must assess reproducibility (Flier, 2017) ⌺
Sample size and the fallacies of classical inference (Friston, 2013)
Main Takeaways:
- The purpose of this target article was to make authors and reviewers think about their response to questions on sample size and effect size for their data.
- It is important to get as much data as possible (i.e. to have a large sample) to reduce false positives.
- Concerns about trivial effect sizes can be addressed by reporting confidence intervals.
- The best studies use a large number of subjects and report the results in terms of confidence intervals or protected inference.
- Large sample sizes will increase the efficiency of model comparison. Increasing sample size will allow us to do more model comparisons, that are otherwise not possible in small sample sizes.
- “a trivial (standardised) effect size does not mean a very small effect — it is only small in relation to the random fluctuations that attend its expression or measurement. In other words, a miniscule effect is not trivial if it is expressed reliably.” (p.504).
Quote
“the proportion of true positives, in relation to the total number of significant tests, increases with sensitivity (i.e., the positive predictive value increases with sensitivity). This simply reflects the fact that the number of false negatives is fixed and the number of true positives increases with sensitivity...if we now assume that increasing sample size will increase sensitivity, increases in sample size should therefore increase the portion of true positives. However...if trivial effect sizes predominate, the PPV measures the proportion of significant results that are trivial. This means that increasing sample size is a bad thing and will increase the probability of declaring trivial effects significant (on average).” (p. 504).
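The positive-predictive-value argument in the quote can be written out directly: under the standard definition, PPV = power x prior / (power x prior + alpha x (1 - prior)), so increasing sensitivity raises the proportion of significant results that are true positives. A minimal sketch, with an assumed prior probability of a true effect of .2:

```python
# A minimal sketch of positive predictive value (PPV) as a function of power.
# The prior probability of a true (non-null) effect is a hypothetical assumption.
def ppv(power, alpha=0.05, prior=0.2):
    """Proportion of significant results that reflect true effects."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

for power in (0.3, 0.5, 0.8, 0.95):
    print(f"power = {power:.2f} -> PPV = {ppv(power):.2f}")
```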
Abstract
I would like to thank Michael Ingre, Martin Lindquist and their co-authors for their thoughtful responses to my ironic Comments and Controversies piece. I was of two minds about whether to accept the invitation to reply — largely because I was convinced by most of their observations. I concluded that I should say this explicitly, taking the opportunity to consolidate points of consensus and highlight outstanding issues.
APA Style Reference
Friston, K. (2013). Sample size and the fallacies of classical inference. Neuroimage, 81, 503-504. https://doi.org/10.1016/j.neuroimage.2013.02.057
You may also be interested in
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
Main Takeaways:
- It is sometimes argued that small, underpowered studies provide better evidence because they are less likely to report small and trivial effect sizes; yet underpowered studies are less likely to find a true effect in the data, and this failure does not come without consequences.
- When true effects frequently go undetected, the significant findings that are reported are more likely to be type I errors.
- Findings from small, low-powered studies provide weaker evidence than those from large, high-powered studies, because poor statistical power inflates the proportion of false positives among significant results, even when effect sizes are large.
- Researchers should report at least one additional statistic (e.g. a t value, effect size, or confidence interval) alongside p-values, as this protects against interpreting meaningless effects; a minimal sketch follows below.
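As the final takeaway suggests, reporting a confidence interval alongside the p-value makes the imprecision of small samples visible. A minimal sketch using the standard Fisher-z approximation and the n = 16 versus n = 100 comparison from the abstract:

```python
# A minimal sketch of 95% confidence intervals for a correlation of r = .50
# at n = 16 versus n = 100, using the Fisher-z approximation.
import numpy as np

def r_confidence_interval(r, n, z_crit=1.96):
    z = np.arctanh(r)                 # Fisher z-transform
    se = 1.0 / np.sqrt(n - 3)         # standard error on the z scale
    return np.tanh([z - z_crit * se, z + z_crit * se])

for n in (16, 100):
    lo, hi = r_confidence_interval(0.50, n)
    print(f"n = {n:3d}: r = .50, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With n = 16 the interval is extremely wide, which is exactly the imprecision Ingre highlights; with n = 100 it is much narrower.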
Quote
“From a strictly scientific point of view, you can never have too much precision, and consequently, never too many subjects or too much statistical power (unless a researcher is doing something wrong when reporting and interpreting data). The limiting factors are cost (time, resources and money) and potential harm for the subjects involved in the study. The real question you need to ask is how much cost and harm you can afford to get as good answer as possible.” (p.498)
Abstract
It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small and trivial effect sizes (Friston, 2012). But larger studies are actually better at protecting against inferences from trivial effect sizes, if researchers just make use of effect sizes and confidence intervals. Poor statistical power also comes at a cost of inflated proportion of false positive findings, less power to “confirm” true effects and bias in reported (inflated) effect sizes. Small studies (n = 16) lack the precision to reliably distinguish small and medium to large effect sizes (r < .50) from random noise (α = .05) that larger studies (n = 100) do with high level of confidence (r = .50, p = .00000012). The present paper introduces the arguments needed for researchers to refute the claim that small low-powered studies have a higher degree of scientific evidence than large high-powered studies.
APA Style Reference
Ingre, M. (2013). Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012). Neuroimage, 81, 496-498. https://doi.org/10.1016/j.neuroimage.2013.03.030
You may also be interested in
- Sample size and the fallacies of classical inference (Friston, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
Main Takeaways:
- This commentary discusses the concerns and premise of the Ten ironic rules for non-statistical reviewers by Professor Karl Friston (2012).
- “working in a highly collaborative environment has taught us that both experts and non-experts alike can have good and bad ideas about statistics (as well as every other field) and that the idea of sharp boundaries between domains is inaccurate and counterproductive.” (p.499).
- It is more difficult to interpret significant results in small samples, as small sample sizes prevent sensitivity analyses from being conducted and specific assumptions from being checked.
- Under-sampled studies do not allow us to ask questions about confounding variables such as age and gender. Increasing sample size allows us to detect small effects.
- One argument in favour of small sample sizes arises when important non-statistical or ethical issues must be considered, such as the lives of animals or side effects.
- Hypothesis testing cannot discriminate between important, but subtle, effects and trivial effects.
Quote
“In summary, sample size discussions, both prior to conducting a study and post-hoc in peer review, should depend on a number of contextual factors and especially specifics of the hypotheses under question. A small sample size is perfectly capable of differentiating gross brain morphometry between, say, children and adults. However, thousands of participants may be necessary to detect subtle longitudinal trends associated with human brain activation patterns in disease. That is, it is information content that is important, of which number of study participants is only a proxy” (p.501).
Abstract
The article “Ten ironic rules for non-statistical reviewers” (Friston, 2012) shares some commonly heard frustrations about the peer-review process that all researchers can identify with. Though we found the article amusing, we have some concerns about its description of a number of statistical issues. In this commentary we address these issues, as well as the premise of the article.
APA Style Reference
Lindquist, M. A., Caffo, B., & Crainiceanu, C. (2013). Ironing out the statistical wrinkles in “ten ironic rules”. Neuroimage, 81, 499-502. https://doi.org/10.1016/j.neuroimage.2013.02.056
You may also be interested in
- Sample size and the fallacies of classical inference (Friston, 2013)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
Ten ironic rules for non-statistical reviewers (Friston, 2012)
Main Takeaways:
- The article is satirical: it ironically addresses reviewers who lack adequate statistical expertise but nevertheless want to provide a statistical critique during peer review in order to reject a manuscript.
- Handling editors are happy to decline a paper and are placed under pressure to maintain a high rejection rate.
- All journals need to maximise rejection rates in order to increase the quality of submission and impact factor. There are ten rules to follow.
Quote
“We have reviewed some general and pragmatic approaches to critiquing the scientific work of others. The emphasis here has been on how to ensure a paper is rejected and enable editors to maintain an appropriately high standard, in terms of papers that are accepted for publication. Remember, as a reviewer, you are the only instrument of selective pressure that ensures scientific reports are as good as they can be. This is particularly true of prestige publications like Science and Nature, where special efforts to subvert a paper are sometimes called for.” (p.1303)
Abstract
As an expert reviewer, it is sometimes necessary to ensure a paper is rejected. This can sometimes be achieved by highlighting improper statistical practice. This technical note provides guidance on how to critique the statistical analysis of neuroimaging studies to maximise the chance that the paper will be declined. We will review a series of critiques that can be applied universally to any neuroimaging paper and consider responses to potential rebuttals that reviewers might encounter from authors or editors.APA Style Reference
Friston, K. (2012). Ten ironic rules for non-statistical reviewers. Neuroimage, 61(4), 1300-1310. https://doi.org/10.1016/j.neuroimage.2012.04.018
You may also be interested in
- Sample size and the fallacies of classical inference (Friston, 2013)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
Using OSF to Share Data: A Step-by-Step Guide (Soderberg, 2018)
Main Takeaways:
- Materials should be shared in findable, accessible, interoperable and reusable (FAIR) forms. Researchers should evaluate repositories to decide where and how to share their data.
- A repository should provide unique and persistent identifiers, so data can be cited.
- Data should be publicly searchable, with licenses clarifying how the data may be reused.
- Rich meta-data descriptions should be provided to allow data to be understandable and reusable.
- The Open Science Framework (OSF) is a free and open-source web tool that helps researchers collaboratively manage, store and share the research process and the files related to their research.
- Step 1: create an account on https://osf.io
- Step 2: Sign in to your account. Enter your name and password, or log in through your institution.
- Step 3: Create a project. Press the green button to create a new project.
- Step 4: Add Collaborators to the project.
- Click on Contributors and press +Add green button. Search for contributors by name and click on the green + button.
- If a collaborator does not come up in search, add them to the project by clicking add as an unregistered contributor link.
- Step 5: Upload files that are below the maximum storage of 5GB. To upload files to OSF Storage, go to your main project page and click on “OSF Storage” in the Files section. Click on the green “Upload” button and then select the files you wish to upload.
- Step 6: Add a description of the project, so that you and other users know which files relate to the project.
- Step 7: Add a license. Reuse is one of the main purposes of data sharing, and other researchers need to know how they are allowed to reuse your work.
- Step 8: Add components. Data, analysis scripts and study materials should be placed in separate components of the project.
- Step 9: Share your project with reviewers. You may want or need to give reviewers access to the contents of your project before you make it public.
- Step 10: Make a project public. To make a project public, press the “make public” button in the top right corner of the project page. Anyone will be able to view and download all files.
- Step 11: Reference open science files in your work. Include the links in the manuscript, lab website or the published article to make the data accessible and useful.
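As a programmatic complement to Steps 10 and 11, the sketch below checks that a public project resolves and retrieves its title for citation. It is a minimal illustration, not part of Soderberg's tutorial: it assumes the public OSF API v2 at https://api.osf.io/v2/ and its JSON:API response layout (worth verifying against the current API documentation), and the project id "abc12" is a placeholder.
```python
# Minimal sketch: fetch public metadata for an OSF project (node) by its GUID.
# Assumption: OSF API v2 base URL and JSON:API layout; "abc12" is a placeholder id.
import requests

def get_osf_project_title(guid: str) -> str:
    """Return the title of a public OSF project, raising if it does not resolve."""
    resp = requests.get(f"https://api.osf.io/v2/nodes/{guid}/", timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]["title"]

if __name__ == "__main__":
    print(get_osf_project_title("abc12"))  # placeholder GUID from your project URL
```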
Abstract
Sharing data, materials, and analysis scripts with reviewers and readers is valued in psychological science. To facilitate this sharing, files should be stored in a stable location, referenced with unique identifiers, and cited in published work associated with them. This Tutorial provides a step-by-step guide to using OSF to meet the needs for sharing psychological data.APA Style Reference
Soderberg, C. K. (2018). Using OSF to share data: A step-by-step guide. Advances in methods and practices in psychological science, 1(1), 115-120. https://doi.org/10.1177/2515245918757689
You may also be interested in
- Trust Your Science? Open Your Data and Code (Stodden, 2011)
- Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society (Abele-Brehm et al., 2019)
- Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results (Wicherts et al., 2011)
On supporting early-career black scholars (Roberson, 2020) ⌺
Main Takeaways:
- Non-Black researchers need to take immediate action to support early-career Black scholars.
- “Maybe you were in a seminar where a Black doctoral student pushed back against a racist disciplinary norm, and you silently agreed and followed up with them afterwards to let them know that you support them.”
- Silence in public signals to Black scholars that they are not welcome in these spaces.
- We must challenge white supremacy in academia. Speaking up about this is much more costly for Black scholars, who face an onslaught of racist micro- and macro-aggressions on a daily basis. The burden should not fall on their shoulders.
- We should be proactive in our outreach. We should invite early-career Black scholars, if they have expertise to improve a research project. Our careers and science will benefit from this help.
- “Do not just encourage [Black scholars] to apply, provide material support to promote our successful applications; share funded grants with [Black scholars], work with [Black scholars] on developing compelling aims pages, and write [Black scholars] a persuasive letter of support. Supporting [Black scholars] on manuscripts and funding opportunities can mitigate some of the barriers in science that often stunt Black success.”
- Inviting Black scholars to speak will increase their credibility as experts and expand the audience’s familiarity with their scholarship. Manels (all-male panels) are increasingly being prohibited, but we also need to eliminate all-white speaker panels.
- Educate yourself on rising Black scholars in your field, learn from early-career Black researchers, investigate journals that publish their scholarships, be familiar with the Black community’s professional societies, affinity groups and diversify your following list on Twitter.
- Incorporate Black scholars’ work into your syllabi. This is necessary to eliminate structural racism, but it requires action from the individuals who hold the most power. These steps will help Black trainees and early-career scholars thrive.
- This will remove barriers and promote a more inclusive environment!
Quote
“Do not just encourage [Black scholars] to apply, provide material support to promote our successful applications; share funded grants with [Black scholars], work with [Black scholars] on developing compelling aims pages, and write [Black scholars] a persuasive letter of support. Supporting [Black scholars] on manuscripts and funding opportunities can mitigate some of the barriers in science that often stunt Black success.”
Abstract
Professor Mya Roberson provides a detailed commentary about the struggles that Black people encounter in academia and starting steps to eliminate structural racism.APA Style Reference
Roberson, M. L. (2020). On supporting early-career Black scholars. Nature Human Behaviour, 1-1. https://doi.org/10.1038/s41562-020-0926-6
You may also be interested in
On the persistence of low power in psychological science (Vankov et al., 2014)
Main Takeaways:
- Surveys of the literature have consistently shown that psychological studies have low statistical power and this problem has seen little, if any, improvement over the last few decades. Vankov et al. examine two arguments of why this may be the case.
- The first possible reason is that researchers may fail to appreciate the importance of statistical power, since null hypothesis significance testing is a hybrid of two statistical theories: Fisher’s, and Neyman and Pearson’s. While researchers readily adhere to the 5% Type I error rate, they pay little attention to the Type II error rate. Both error rates need to be considered when evaluating whether a result is likely to be true.
- A second possible reason is that scientists are humans and respond to incentives, such as the prestige of publishing a transformative study in a highly-regarded journal. However, producing such works is a high-risk strategy; a safer option may be to “salami-slice” works into multiple publications to increase the chance of producing publishable outputs.
- To examine the merit of the first reason, Vankov et al. contacted authors of published papers and asked them for their sample size rationale. One third of the contacted authors were found to hold beliefs that would typically act to reduce statistical power.
- There is a need for structural change, where editors and journals enforce rigorous requirements for statistical power. Journals introducing registered reports may also place greater emphasis on statistical power and robust designs.
Abstract
A comment by Dr Ivan Vankov, Professors Jeffrey Bowers and Marcus Munafo on the persistence of low power in psychological sciences. They discuss issues concerning false negatives, the importance of highly-regarded journals and that power is an issue to be discussed. They state that we need structural changes in journals in order to avoid the replicability crisis.APA Style Reference
Vankov, I., Bowers, J., & Munafò, M. R. (2014). Article commentary: On the persistence of low power in psychological science. Quarterly journal of experimental psychology, 67(5), 1037-1040. https://doi.org/10.1080/17470218.2014.885986
You may also be interested in
- Is science really facing a reproducibility crisis, and do we need it to? (Fanelli, 2018)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
Publication Decisions and their possible effects on inferences drawn from tests of significance or vice versa (Sterling, 1959)
Main Takeaways:
- Rejecting the null hypothesis carries a risk of a Type I error (a false positive).
- “The experimenter who uses...tests of significance to evaluate observed differences usually reports that he has tested Ho by finding the probability of the experimental results on the assumption that Ho is true, and he does (or does not) ascribe some effect to experimental treatments”. (p.30).
- Depending on their confidence in the methodology and data collection, readers may accept or reject the null hypothesis.
- Acceptance or rejection of the null hypothesis is conventionally decided at p < .05.
- When a fixed level of significance is used as a criterion for publishing in professional journals, it may result in embarrassing and surprising results.
Quote
“What credence can then be given to inferences drawn from statistical tests of Ho if the reader is not aware of all experimental outcomes of a kind? Perhaps even more pertinent is the question: Can the reader justify adopting the same level of significance as does the author of a published study?” (p.33)
Abstract
There is some evidence that in fields where statistical tests of significance are commonly used, research which yields non-significant results is not published. Such research being unknown to other investigators may be repeated independently until eventually by chance a significant result occurs - an "error of the first kind" - and is published. Significant results published in these fields are seldom verified by independent replication. The possibility thus arises that the literature of such a field consists in substantial part of false conclusions resulting from errors of the first kind in statistical tests of significance.APA Style Reference
Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. Journal of the American statistical association, 54(285), 30-34. https://doi.org/10.1080/01621459.1959.10501497
You may also be interested in
- Negative results are disappearing from most disciplines and countries (Fanelli, 2011)
- Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture (Matosin et al., 2014)
Replicability as a Publication Criterion (Lubin, 1957)
Main Takeaways:
- How can publication lag be reduced? Researchers should perform replications to ensure the results are repeated.
- Replicability and generalisability should be used as criteria to judge the rigour of these articles.
- If results are replicated, there is no need to discuss other trivial factors. However, if it is not replicated, you should discuss contextual factors such as the time of day.
Abstract
A commentary by Dr Ardie Lubin on replicability being perceived as a criterion of publication. Replications are perceived as fundamental but not enough to publish. However, replication studies are important to conduct in order to remove any trivial variables that may explain the findings.APA Style Reference
Lubin, A. (1957). Replicability as a publication criterion. American Psychologist, 12(8), 519-520. https://doi.org/10.1037/h0039746
You may also be interested in
- Negative results are disappearing from most disciplines and countries (Fanelli, 2011)
- Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture (Matosin et al., 2014)
The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
Main Takeaways:
- Computers make it easy to analyse data. Modern software allows simple calculations, enabling researchers to monitor data while collecting it.
- The ease of data analysis has made issues such as optional stopping and exclusion of outliers (i.e. p-hacking) easier to engage in.
- The current study measured whether p values just below .05 were more over-represented in 2005 than in 1965.
- Method: P values were collected from all articles published in 1965 and in 2005 in two prominent psychology journals.
- Method: P values that were reported only as p < .05 or p < .01 were recalculated to provide a more exact p value (see the sketch after this list).
- Method: If there was a lack of information to determine the exact p value, the data for this specific p value was excluded from the analysis.
- Method: The distribution of p values used for the manuscript was restricted to values between .01 and .10; any values outside of this range were excluded.
- Results: The frequency of p values at or just below .05 was greater than expected compared to p frequencies in other ranges.
- Results: Although p values just below .05 were over-represented in both 1965 and 2005, the spike at p < .05 was greater in 2005 than in 1965.
- Results: In addition, p values close to but over .05 were more likely to be rounded down (e.g. p = .053 becomes p < .05) or incorrectly reported as significant in 2005 than in 1965.
- As a result of shifting research climates, there are changes in how statistical analyses are executed.
- Any values above .05 should be interpreted as non-significant, including trends such as .051.
- Suboptimal research practices are easier to engage in, as calculations have become easier to compute.
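As a concrete illustration of the recalculation step referenced in the Method bullets above, the sketch below recomputes exact p values from reported test statistics; the statistics and degrees of freedom are invented for illustration and are not values from the paper.
```python
# Recompute exact p values from reported test statistics (illustrative values only).
from scipy import stats

def exact_p_from_t(t, df):
    """Two-tailed p value for a reported t statistic."""
    return 2 * stats.t.sf(abs(t), df)

def exact_p_from_F(F, df1, df2):
    """P value for a reported F statistic."""
    return stats.f.sf(F, df1, df2)

# A result reported as "t(28) = 2.05, p < .05" is in fact p of about .0498...
print(exact_p_from_t(2.05, 28))
# ...whereas "F(1, 40) = 4.02, p < .05" is in fact p of about .052, i.e. above .05.
print(exact_p_from_F(4.02, 1, 40))
```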
Quote
“The use of confidence intervals, along with effect sizes, as well as registered reporting and mandatory methods disclosure, might decrease the emphasis placed on p values. This would, in turn, also encourage the use of optimal research practices. In the absence of additional, complementary statistics or registered reports, the use of p values as an isolated method for determining statistical significance remains vulnerable to human fallibility.” (p.2309)
Abstract
Null hypothesis significance testing uses the seemingly arbitrary probability of .05 as a means of objectively determining whether a tested effect is reliable. Within recent psychological articles, research has found an overrepresentation of p values around this cut-off. The present study examined whether this overrepresentation is a product of recent pressure to publish or whether it has existed throughout psychological research. Articles published in 1965 and 2005 from two prominent psychology journals were examined. Like previous research, the frequency of p values at and just below .05 was greater than expected compared to p frequencies in other ranges. While this overrepresentation was found for values published in both 1965 and 2005, it was much greater in 2005. Additionally, p values close to but over .05 were more likely to be rounded down to, or incorrectly reported as, significant in 2005 than in 1965. Modern statistical software and an increased pressure to publish may explain this pattern. The problem may be alleviated by reduced reliance on p values and increased reporting of confidence intervals and effect sizes.APA Style Reference
Leggett, N. C., Thomas, N. A., Loetscher, T., & Nicholls, M. E. (2013). The life of p: “Just significant” results are on the rise. Quarterly journal of experimental psychology, 66(12), 2303. https://doi.org/10.1080/17470218.2013.863371
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
- Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012) (Ingre, 2013)
- Ironing out the statistical wrinkles in “ten ironic rules” (Lindquist et al., 2013)
- Ten ironic rules for non-statistical reviewers (Friston, 2012)
Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
Main Takeaways:
- The present study investigated whether psychologists support concrete changes to data collection, reporting and publication processes. If not, what are their reasons?
- Method: 1292 psychologists from 42 countries were surveyed to assess whether each of Simmons et al.’s (2011) requirements and guidelines should be followed as a measure of good practice and whether these guidelines should be placed as mandatory conditions for publication in psychological journals.
- Results: 98% of psychologists are open to change and agreed that at least one requirement should be placed as a condition for publication, especially that “researchers must report all experimental conditions run in a study, including failed manipulations” (p.641).
- Results: The reasons for not including a condition were: it was too rigorous; the condition did not agree with the argument; or the condition was not appropriate for all studies.
- Psychologists are open to change for reporting and conducting research, and agree with the guidelines. However, some requirements are too rigid and questionable.
Quote
“Researchers and editorial staff alike must also ensure that standards are enforceable so as to avoid punishing honest researchers. The psychological community should capitalize on the current openness to change in order to develop and implement appropriate changes and thus improve the quality of published psychological research.” (p. 641).
Abstract
Psychologists must change the way they conduct and report their research—this notion has been the topic of much debate in recent years. One article recently published in Psychological Science proposing six requirements for researchers concerning data collection and reporting practices as well as four guidelines for reviewers aimed at improving the publication process has recently received much attention (Simmons, Nelson, & Simonsohn, 2011). We surveyed 1,292 psychologists to address two questions: Do psychologists support these concrete changes to data collection, reporting, and publication practices, and if not, what are their reasons? Respondents also indicated the percentage of print and online journal space that should be dedicated to novel studies and direct replications as well as the percentage of published psychological research that they believed would be confirmed if direct replications were conducted. We found that psychologists are generally open to change. Five requirements for researchers and three guidelines for reviewers were supported as standards of good practice, whereas one requirement was even supported as a publication condition. Psychologists appear to be less in favor of mandatory conditions of publication than standards of good practice. We conclude that the proposal made by Simmons, Nelson & Simonsohn (2011) is a starting point for such standards.APA Style Reference
Fuchs, H. M., Jenny, M., & Fiedler, S. (2012). Psychologists are open to change, yet wary of rules. Perspectives on Psychological Science, 7(6), 639-642.
You may also be interested in
- The Nine Circles of Scientific Hell (Neuroskeptic, 2012)
- Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al, 2018)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Signalling the trustworthiness of science (Jamieson et al., 2020)
Experimental power comes from powerful theories – the real problem in null hypothesis testing (Ashton, 2013)
Main Takeaways:
- Although null hypothesis testing is a powerful tool for decision making, it is no longer performed in the way it was originally conceived.
- A power analysis for a specific, pre-specified effect size must be carried out before the experiment is conducted, wherein the null hypothesis is compared against an alternative hypothesis based on this effect size (see the sketch after this list). This may not be possible when the effect size is instead estimated from the data, with no standard to compare against.
- Although the advice to increase sample size and statistical power is sound, it may make any hypothesis in neuroscience virtually untestable. For instance, if we fail to find an effect at 10%, we can always increase power to detect a 1% or 0.1% effect. In turn, poorly testable and hard-to-refute hypotheses become difficult to restrain, and thus more prevalent in the literature.
- “The only way to resolve this dilemma while retaining the advantages of traditional null hypothesis testing is to be specific about the theoretical predictions that our experiments are designed to test” (p.1).
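A minimal sketch of the kind of a priori power analysis referenced above, using statsmodels; the medium effect size (Cohen's d = 0.5), the 80% power target, and the group size of 20 are illustrative conventions rather than values taken from Ashton's commentary.
```python
# A priori power analysis for a two-sided independent-samples t test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (d = 0.5)
# with 80% power at alpha = .05.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(round(n_per_group))  # roughly 64 participants per group

# Conversely, the power an already-fixed design has for that same effect size.
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(achieved_power)      # roughly 0.33-0.34 with only 20 per group
```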
Quote
“The solution to the problem is to increase discipline not only in analysis and experimental design but also in relating experiments to explanatory theory. Much current practice instead seems to be an open-ended search for associations, reminiscent of old-style inductionism while superficially following the conventions of hypothetico-deductivism.” (p.1).
Abstract
A commentary by John C. Ashton who discusses the paper written by Professor Kate Button on small sample sizes. Ashton argues that power analyses and effects sizes should be used to estimate the alternative hypothesis.APA Style Reference
Ashton, J. C. (2013). Experimental power comes from powerful theories—the real problem in null hypothesis testing. Nature Reviews Neuroscience, 14(8), 585-585. https://doi.org/10.1038/nrn3475-c2
You may also be interested in
- Negative results are disappearing from most disciplines and countries (Fanelli, 2011)
- Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture (Matosin et al., 2014)
Negative results are disappearing from most disciplines and countries (Fanelli, 2011)
Main Takeaways:
- There are concerns that growing competition for funding and citations might distort science. One concern is that the publication system disfavours negative findings, which distorts the scientific literature.
- The study investigated whether positive results have increased in the recent scientific literature.
- Method: Over 4,600 papers published across all disciplines between 1990 and 2007 were analysed, coding whether each paper declared that it had tested a hypothesis and whether it reported support for that hypothesis.
- Method: The year of publication and country were also coded, along with whether the reported evidence was positive or negative.
- Results: The frequency of positive findings increased by over 22% between 1990 and 2007. This increase was stronger in the social and some biomedical disciplines.
- Results: The United States published fewer positive results than Asian countries, but more than European countries.
- Negative results decreased in frequency across disciplines due to publication bias.
- The authors seem to suggest that science is now closer to truth today than 20 years ago.
- There is an editorial bias that favours the United States that enables them to publish as many or more negative results than any other country, not fewer. The United States has a stronger bias against negative findings than Europe.
Quote
“However, even if in the long run truth will prevail, in the short term resources go wasted in pursuing exaggerated or completely false findings (Ioannidis 2006). Moreover, this self-correcting principle will not work efficiently in fields where theoretical predictions are less accurate, methodologies less codified, and true replications rare. Such conditions increase the rate of both false positives and false negatives, and a research system that suppresses the latter will suffer the most severe distortions.” (p.900)
Abstract
Concerns that the growing competition for funding and citations might distort science are frequently discussed, but have not been verified directly. Of the hypothesized problems, perhaps the most worrying is a worsening of positive-outcome bias. A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data. This study analysed over 4,600 papers published in all disciplines between 1990 and 2007, measuring the frequency of papers that, having declared to have ‘‘tested’’ a hypothesis, reported a positive support for it. The overall frequency of positive supports has grown by over 22% between 1990 and 2007, with significant differences between disciplines and countries. The increase was stronger in the social and some biomedical disciplines. The United States had published, over the years, significantly fewer positive results than Asian countries (and particularly Japan) but more than European countries (and in particular the United Kingdom). Methodological artefacts cannot explain away these patterns, which support the hypotheses that research is becoming less pioneering and/or that the objectivity with which results are produced and published is decreasing.APA Style Reference
Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891-904. https://doi.org/10.1007/s11192-011-0494-7
You may also be interested in
- A farewell to Bonferroni: the problems of low statistical power and publication bias (Nakagawa, 2004)
- Experimental power comes from powerful theories – the real problem in null hypothesis testing (Ashton, 2013)
- Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture (Matosin et al., 2014)
Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture (Matosin et al., 2014)
Main Takeaways:
- There is pressure on scientists to choose investigative avenues that result in high-impact knowledge (as opposed to hypothesis-driven).
- Negative results are not valued as much as positive results: positive results are prized, while negative results are undervalued.
- Scientific principles are constantly under reconsideration, and there are events in which new evidence refutes old hypotheses (cf. paradigm shifts).
- Negative findings are seen as an inconvenient truth.
- Science is a collaborative endeavour and we should value and report negative findings
- When time is money, the current heuristic of judging research output on impact and citations can lead to waste of funds & time.
- It is commonplace for researchers to face resistance when presenting their work at scientific conferences, so why is a negative finding viewed as such a bad thing? What’s more, a negative result is often seen as philosophical rather than practical or “real”.
- If negative questions are rephrased as positive questions, does that mean a negative finding is a positive finding?
- Negative findings are seen as taboo and unworthy of publication in social sciences, but, for example, for clinical research, negative results are of absolute relevance and importance.
- Negative results are treated as unworthy of attention, and are thus placed in a file drawer and seen as less important.
Quote
“It means that the direction of scientific research should not be determined by the pressure to win the ‘significance lottery’, but rather systematic, hypothesis-driven attempts to fill holes in our knowledge. At the core, it is our duty as scientists to both: (1) publish all data, no matter what the outcome, because a negative finding is still an important finding; and (2) have a hypothesis to explain the finding. If the experiment has been performed to plan, the data has not been manipulated or pulled out of context and there is compiled evidence of a negative result, then it is our duty to provide an explanation as to why we are seeing what we are seeing. Only by truly rethinking the current scientific culture, which clearly favours positive findings, will negative results be esteemed for their entire value. Only then can we work towards an improved scientific paradigm.” (p.173)
Abstract
“What gets us into trouble is not what we don’t know, it’s what we know for sure that just ain’t so.” – Mark Twain. Science is often romanticised as a flawless system of knowledge building, where scientists work together to systematically find answers. In reality, this is not always the case. Dissemination of results are straightforward when the findings are positive, but what happens when you obtain results that support the null hypothesis, or do not fit with the current scientific thinking? In this Editorial, we discuss the issues surrounding publication bias and the difficulty in communicating negative results. Negative findings are a valuable component of the scientific literature because they force us to critically evaluate and validate our current thinking, and fundamentally move us towards unabridged science.APA Style Reference
Matosin, N., Frank, E., Engel, M., Lum, J. S., & Newell, K. A. (2014). Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture. Disease Models & Mechanisms, 7(2), 171. https://doi.org/10.1242/dmm.015123
You may also be interested in
- Experimental power comes from powerful theories – the real problem in null hypothesis testing (Ashton, 2013)
- Negative results are disappearing from most disciplines and countries (Fanelli, 2011)
A farewell to Bonferroni: the problems of low statistical power and publication bias (Nakagawa, 2004)
Main Takeaways:
- There are several effect size measures: Cohen’s d and Pearson’s r. The former assesses the mean difference and the latter evaluates the strength of the relationship.
- Bonferroni correction tries to reduce false positives when multiple tests or comparisons are performed.
- Reviewers may demand a Bonferroni correction to remove irrelevant variables and reduce the number of false positives but it can still lead to publication bias.
- The scientific community should discourage Bonferroni or the idea that reviewers should demand a Bonferroni correction.
- These problems stem from a focus on statistical significance (i.e. p values) in journals instead of practical or biological significance (i.e. effect sizes). Researchers should be reporting effect sizes and the confidence intervals around these effect sizes.
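To make the contrast above concrete, the sketch below applies a Bonferroni correction to a set of p values and then reports an effect size with its confidence interval, the practice Nakagawa recommends. The p values, correlation and sample size are made up for illustration and are not taken from the paper.
```python
# Bonferroni-corrected testing versus reporting an effect size with its CI.
# All numbers are invented for illustration.
from math import atanh, tanh, sqrt
from statsmodels.stats.multitest import multipletests

pvals = [0.003, 0.021, 0.048, 0.260]  # four tests from one hypothetical study
reject, p_adj, _, alpha_bonf = multipletests(pvals, alpha=0.05, method='bonferroni')
print(alpha_bonf)                         # per-test threshold: 0.05 / 4 = 0.0125
print(list(reject))                       # only the first test survives the correction
print([round(p, 3) for p in p_adj])       # Bonferroni-adjusted p values

def r_confint(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson correlation via the Fisher z transformation."""
    z, se = atanh(r), 1 / sqrt(n - 3)
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

# The alternative emphasis: how large is the effect, and how precisely is it estimated?
print(r_confint(0.30, 40))  # a wide interval, roughly (-0.01, 0.56)
```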
Abstract
Professor Shinichi Nakagawa provides a commentary on low statistical power and the need to discourage Bonferroni corrections. In addition, we should rely on effect sizes and their confidence intervals to determine the value of science findings.APA Style Reference
Nakagawa, S. (2004). A farewell to Bonferroni: the problems of low statistical power and publication bias. Behavioral ecology, 15(6), 1044-1045. https://doi.org/10.1093/beheco/arh107
You may also be interested in
- Experimental power comes from powerful theories – the real problem in null hypothesis testing (Ashton, 2013)
- Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture (Matosin et al., 2014)
The File-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta analysis (Rosenberg, 2005)
Main Takeaways:
- There is a file-drawer problem in which studies are not published if they observe no significant effects. One measure to assess the potential impact of such unpublished studies is a fail-safe number: the number of non-significant, unpublished or missing studies that would be needed to reduce an overall significant result to non-significance. If the fail-safe number is large compared to the number of observed studies, one can be confident in the summary conclusions.
- Rosenthal’s (1979) method calculates a fail-safe number from the significance of the mean Z-score across studies, whereas Orwin (1983) calculates a fail-safe number based on an effect size that measures the standardised mean difference between intervention and control; Orwin’s method gives the number of additional studies needed to reduce an observed mean effect size to a desired minimal effect size.
- However, these approaches have several problems: “The first is that they are both explicitly unweighted. One of the primary attributes of contemporary meta-analysis is weighting; studies with large sample size or small variance are given higher weight than those with small sample sizes or large variance. Neither method accounts for the weight of the observed or the hypothesized unpublished studies. A second problem with Rosenthal’s method is that the method of adding Z-scores is not normally the method by which one combines studies in a meta-analysis; most modern meta-analyses are based on the combination of effect sizes, not simply significance values (Rosenberg et al. 2000). Rosenthal’s calculation is therefore not precisely applicable to the actual significance obtained from a meta-analysis. Orwin’s method is not based on significance testing; the choice of a desired minimal effect size to test the observed mean against seems unstable without a corresponding measure of variance” (pp.464-465).
- The article proposes a general, weighted fail-safe calculation framework applicable to both fixed- and random-effects models.
- The original fail-safe calculations are based on the fixed-effect model, but the authors estimate a fail-safe number for random-effects model meta-analysis.
- Whereas the fixed-effect calculation only requires the number of null-effect studies needed to change a significant outcome, the random-effects calculation involves a sum-of-squares term, which requires assuming that the added studies have effects of precisely zero. This assumption could be partially avoided by simulating missing studies with a desired variance.
Quote
“One needs to remember that a fail-safe calculation is neither a method of identifying publication bias nor a method of accounting for publication bias that does exist. It is simply a procedure by which one can estimate whether publication biases (if they exist) may be safely ignored...While perhaps not as elegant as some of these methods, a fail-safe number is much simpler to calculate. Hopefully, the approach presented here will allow us to better estimate the potential for unpublished or missing studies to alter our conclusions; a low fail-safe number should certainly encourage researchers to pursue the more complicated publication bias methodologies.” (p.467).
Abstract
Quantitative literature reviews such as meta-analysis are becoming common in evolutionary biology but may be strongly affected by publication biases. Using fail-safe numbers is a quick way to estimate whether publication bias is likely to be a problem for a specific study. However, previously suggested fail-safe calculations are unweighted and are not based on the framework in which most meta-analyses are performed. A general, weighted fail-safe calculation, grounded in the meta-analysis framework, applicable to both fixed- and random-effects models, is proposed. Recent meta-analyses published in Evolution are used for illustration.APA Style Reference
Rosenberg, M. S. (2005). The file‐drawer problem revisited: a general weighted method for calculating fail‐safe numbers in meta‐analysis. Evolution, 59(2), 464-468. https://doi.org/10.1111/j.0014-3820.2005.tb01004.x
You may also be interested in
The “File Drawer Problem” and Tolerance for Null Results (Rosenthal, 1979)
Main Takeaways:
- The extreme view of the file drawer problem is that journals are filled with the 5% of studies that are false positives, while the file drawers hold the 95% of studies with non-significant results.
- Researchers can calculate the number of filed studies with null findings that would be needed before the overall combined result becomes non-significant (the tolerance for null results).
- A conservative convention is to set Z = .00 when exact p levels are not reported for non-significant findings, and Z = 1.645 when a result is reported only as p < .05 (see the worked example after this list).
- “A small number of studies that are not very significant, even when their combined p is significant, may well be misleading in that only a few studies filed away could change the combined significant result to a nonsignificant one.” (p.640).
- Currently, there are no firm guidelines that can be given as to what constitute an unlikely number of unretrieved or unpublished studies.
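A worked example of the fail-safe calculation referenced above. The five Z scores are invented for illustration; the formula combines the k observed Z scores by the Stouffer method and asks how many filed studies averaging Z = 0 would pull the combined result below Z = 1.645 (one-tailed p = .05), i.e. X = (ΣZ)² / 2.706 − k.
```python
# Rosenthal's (1979) fail-safe N: number of filed null studies (Z = 0) needed
# before the combined one-tailed result drops below significance (Z = 1.645).
from math import sqrt

def fail_safe_n(z_scores, z_crit=1.645):
    k, sum_z = len(z_scores), sum(z_scores)
    combined = sum_z / sqrt(k)        # Stouffer combined Z of the observed studies
    x = (sum_z / z_crit) ** 2 - k     # solves sum_z / sqrt(k + X) = z_crit for X
    return combined, max(0.0, x)

# Five hypothetical published studies with modest one-tailed Z scores:
combined, x = fail_safe_n([2.1, 1.8, 1.3, 2.4, 1.9])
print(round(combined, 2))  # about 4.25 -- clearly significant when combined
print(round(x))            # about 28 filed null studies needed to overturn it
```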
Quote
“[...] more and more reviewers of research literature are estimating average effect sizes and combined ps of the studies they summarize. It would be very helpful to readers if for each combined p they presented, reviewers also gave the tolerance for future null results associated with their overall significance level.” (p.640)
Abstract
For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the "file drawer problem" is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results. Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed.APA Style Reference
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological bulletin, 86(3), 638–641. https://doi.org/10.1037/0033-2909.86.3.638. [ungated]
You may also be interested in
At random: Sense and Nonsense (McNemar, 1960)
Main Takeaways:
- 1) To what extent have biologists entered into psychology? A random sample of 100 American Psychological Association (APA) members indicated that only 1% could be defined as fifth columnists for biology, and a random sample of 100 titles in Psychological Abstracts indicated that only 4% had a biological slant. This suggests that psychology has not been taken over by biology.
- 2) To what extent have statisticians taken over of psychology? Using the aforementioned samples, 1% of the APA members call themselves statisticians and 5% of abstracts deal primarily with statistics.
- The development of a science depends largely on the invention of measuring instruments (e.g. The Thurstone and the Likert scaling techniques). However, “it does not require much imagination to predict that other instruments will lead to more and more unimaginative research. It is so easy to say: For your dissertation, why don't you apply the ZANY to such and such groups?” (p.296).
- It is also important to look at the arsenal of statistics. The application of chi-square did not spread into psychology until the 1930s. However, the chi-square test was being misused, and its frequent misuse contributed to some astoundingly fallacious significance levels.
- In the late 1930s, the analysis of variance (ANOVA) invaded psychology, and its supporters argued that it would rescue psychological research. Also during this period, researchers seemed to think that the more complex the design the better, although such complexity introduces additional difficulty in interpreting the data.
- Too many users of the ANOVA regard the “reaching of a mediocre level of significance as more important than any descriptive specification of the underlying averages” (p.297). In addition, it was argued that significance testing is necessary but not sufficient for the development of a science.
- In order to critically evaluate the literature and to plan their own research, it is important that psychologists have a sound understanding of all commonly used statistical techniques. Part of the teaching task is to maintain enthusiasm, to sell but not oversell statistics. The research problem should come first; then, at the design stage, the “available tools should be scrutinised but with the ever present though[t] that there is merit in simplicity” (p.299).
- Anonymous reviewer’s comment: This paper may not be the most informative in terms of how much readers would get out of it. Much of it is very idiosyncratic, written almost in a "stream of consciousness" kind of way with little organisation. While there are definitely some good arguments in there, they get a bit lost in the rest of the comments that were mostly motivated by issues in the 1930s- 1960s. A similar article from the same era with more clear messaging is: Bozarth, J. D., & Roberts, R. R. (1972). Signifying significant significance. American Psychologist, 27(8), 774–775. https://doi.org/10.1037/h0038034 and, with respect to misusing statistical tests: Gigerenzer, G. (2004). Mindless statistics. The Journal of Socio-Economics, 33(5), 587-606. https://doi.org/10.1016/j.socec.2004.09.033
Quote
“Our reviewer, after noting that 9 of the 23 authors of Foundations were not psychologists, said that the problem of learning seemed to be the only one that a psychologist could call his own. He went on to point out that learning was too much concerned with statistical interpretation of empirical data. Then he concluded that "psychology as a science is now bankrupt" and should be turned over to two groups of receivers: the biologists (broadly defined) and the statisticians.” (p.295)
Abstract
A commentary by Dr Quinn McNemar who discusses sense and nonsense data, the difficulties of statistical teaching and the advancements of research design and statistics.APA Style Reference
McNemar, Q. (1960). At random: Sense and nonsense. American Psychologist, 15(5), 295–300. https://doi.org/10.1037/h0049193
You may also be interested in
Replication Report: Interpretation of levels of significance by psychological researchers (Beauchamp & May, 1964)
Main Takeaways:
- Importance: one of the first papers dealing with replication in psychology.
- Study: Psychology lecturers were more cautious than psychology graduate students when making confidence judgments about research findings for the p value.
- Also, psychology lecturers and students were more confident of p values based on a sample of 100 than on a sample of 10.
- Methods: Subjects had to state their degree of belief in research findings as a function of the associated p levels, based on sample sizes of 100 and 10. Subjects rated each of the 12 p levels (.001 to .90) for each sample size on a six-point scale from 0 (i.e. complete absence of confidence or belief) to 5 (i.e. extreme confidence or belief).
- Results: “Effects due to sample size (S) and p levels (P) were significant in both studies (p < .005) and the groups effect was significant in the replication (p < .025). In addition, the S X P interaction was significant in the replication (p < .005), indicating that differences in confidence related to sample sizes varied across p levels.” (p.272).
- Discussion: There was no significant cliff effect found in intervals following p < .05, .01, or any other p value.
Abstract
A commentary by Drs Kenneth Beauchamp and Richard May who investigated confidence judgments about research findings for p value.APA Style Reference
Beauchamp, K. L., & May, R. B. (1964). Replication Report: Interpretation of Levels of Significance by Psychological Researchers. Psychological Reports, 14(1), 272-272. https://doi.org/10.2466/pr0.1964.14.1.272 [ungated]
You may also be interested in
Further evidence for the Cliff Effect in the Interpretation of Levels of Significance (Rosenthal & Gaito, 1964)
Main Takeaways:
- Importance: one of the first papers dealing with replication in psychology.
- There was a non-monotonic decrease of confidence as p values increased.
- Eleven graduate student subjects showed a greater degree of confidence at the .05 level than at the .03 level. This indicates that the .05 level has rather special characteristics.
Abstract
A commentary by Drs Robert Rosenthal and John Gaito who investigated confidence judgments about research findings and confidence in the levels of significance.APA Style Reference
Rosenthal, R., & Gaito, J. (1964). Further Evidence for the Cliff Effect in the Interpretation of Levels of Significance. Psychological Reports, 15(2), 570-570. https://doi.org/10.2466/pr0.1964.15.2.570
You may also be interested in
Ten Simple Rules for Effective Statistical Practice (Kass et al., 2016)
Main Takeaways:
- The 10 simple rules are: (1) statistical methods should enable data to answer scientific questions; (2) signals always come with noise; (3) plan ahead, really ahead; (4) worry about data quality; (5) statistical analysis is more than a set of computations; (6) keep it simple; (7) provide assessments of variability; (8) check your assumptions; (9) when possible, replicate; and (10) make your analysis reproducible.
Abstract
A commentary by Dr Robert Kass providing 10 rules about effective statistical practices and how to improve statistical practices.APA Style Reference
Kass, R. E., Caffo, B. S., Davidian, M., Meng, X. L., Yu, B., & Reid, N. (2016). Ten Simple Rules for Effective Statistical Practice. Plos Computational Biology, 12(6), e1004961-e1004961. https://doi.org/10.1371/journal.pcbi.1004961
You may also be interested in
Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
Main Takeaways:
- Registered Reports allow peer review to focus on the quality and rigour of the experimental design rather than on ground-breaking results. This should reduce questionable research practices such as selective reporting, post-hoc hypothesising, and low statistical power.
- Registered Reports are reviewed and revised prior to data collection.
- A Cortex editorial sub-team triages submissions within one week, deciding whether to reject the manuscript, invite a revision to meet the necessary standards, or send it out for in-depth Stage 1 review.
- It takes approximately 8-10 weeks for a Stage 1 Registered Report to move from “initial review” to “in-principle acceptance”. This also includes 1-3 rounds of peer reviews.
- Once the study is completed, it takes 4 weeks for a paper to move from Stage 2 review to final editorial decision.
- Registered reports are not a one-shot cure for reproducibility problems in science and pose no threat to exploratory analyses.
Abstract
This is a view on registered reports in Cortex by Professor Chris Chambers and colleagues. It contains information on Registered Reports and the length of duration for submission and review. They discuss the editorial process and that a registered report is not a threat to exploratory research and is not a panacea to cure reproducibility problems.APA Style Reference
Chambers, C. D., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K. (2015). Registered reports: realigning incentives in scientific publishing. Cortex, 66, A1-A2.
You may also be interested in
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered reports : a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
Raise standards for preclinical cancer research (Begley & Ellis, 2012)
Main Takeaways:
- Clinical trials in oncology have the highest failure rates of any therapeutic area.
- This low success rate is neither sustainable nor acceptable.
- Drug development heavily depends on the published literature.
- Clinical endpoints in oncology are defined in terms of patient survival, rather than intermediate endpoints (e.g. cholesterol levels for statins).
- It takes years before the clinical applicability of a preclinical observation is known. The preclinical observation needs to withstand the challenges and rigorous nature of a clinical trial (e.g. blinding).
- Claims in a preclinical study are often taken at face value.
- The issue of irreproducible data has been discussed and has received greater attention because of its costs for drug development.
- When findings are mixed, researchers need to contact the original authors, exchange reagents and repeat experiments under the authors’ direction.
- In studies whose findings could be reproduced, the authors had paid close attention to controls, reagents, investigator bias and describing the complete dataset.
- Researchers need commitment and change of prevalent cultures to increase the robustness of published preclinical cancer research.
- Researchers need to consider negative preclinical data and report all findings, irrespective of the outcome.
- Funding agencies, reviewers and journal editors should agree negative data is as informative as positive data.
- There are transparent opportunities for trainees, technicians and colleagues to discuss and report troubling or unethical behaviours without fearing adverse consequences. These should be reinforced and made easier and more general.
- There needs to be a greater dialogue between physicians, scientists, patient advocates and patients: scientists need to learn about clinical reality, whereas physicians need better knowledge of challenges and limitations of preclinical studies. And both groups would benefit from improved understanding of patients’ concerns.
- Institutions and committees should give more credit for teaching and mentoring; relying solely on publications for promotion or grant funding can be misleading and does not recognise the valuable contributions of great mentors, educators and administrators.
- The academic system and peer-review process encourage the publication of erroneous, selective or irreproducible data.
Abstract
Glenn Begley and Lee M. Ellis propose how methods, publications and incentives must change if patients are to benefit.APA Style Reference
Begley, C. G., & Ellis, L. M. (2012). Raise standards for preclinical cancer research. Nature, 483(7391), 531-533. https://doi.org/10.1038/483531a
You may also be interested in
The cumulative effect of reporting and citation biases on the apparent efficacy of treatment: the case of depression (deVries et al., 2018)
Main Takeaways:
- The authors analysed the cumulative influence of biases on efficacy and discussed remedies, using the evidence base for two effective treatments for depression: antidepressants and psychotherapy.
- “Trials that faithfully report non-significant results will yield accurate effect size estimates, but results interpretation can still be positively biased, which may affect apparent efficacy.” (p.2453)
- Spin occurs when a treatment is concluded to be effective even though the results on the primary outcome were non-significant (e.g. concluding that treatment X was more effective than placebo, when the correct conclusion is that treatment X was not more effective than placebo).
- Positive trials are more likely to be published (cf. publication bias) and significant outcomes are more likely to be included in a published trial, while negative outcomes are changed or removed. Put simply, when negative outcomes are reported, they may be reported in an overly positive manner that turns a negative outcome into an apparently positive one (i.e. spin).
- Negative trials with either positive or mixed abstracts (e.g. concluding that the treatment was effective for one outcome but not another) were cited more often than those with negative abstracts. These findings indicate that the effects of different biases accumulate to hide non-significant results from view.
- Peer reviewers have an important role to ensure that important negative studies are cited and that the abstract accurately reports trial results. The peer reviewer can assess the study’s actual results, as opposed to their conclusions, and can conduct independent literature searches, since the authors’ reference list may have studies that disproportionately produce a number of positive findings.
Quote
“Close examination of registries by independent researchers may be necessary for registration to be a truly effective deterrent to study publication and outcome reporting bias. An alternative (or addition) to registration could be publication of study protocols or ‘registered reports’, in which journals accept a study for publication based on the introduction and methods, before the results are known. Widespread adoption of this format might also help to prevent spin, by reducing the pressure that researchers might feel to ‘oversell’ their results to get published. Hence, adoption of registered reports might also reduce citation bias by reducing the tendency for positive studies to be published in higher impact journals.” (p.2455)
Abstract
Dr deVries and colleagues discuss the importance of a spin on clinical trials, citation biases for positive trials and the benefits of registered reports and pre-registration.APA Style Reference
De Vries, Y. A., Roest, A. M., de Jonge, P., Cuijpers, P., Munafò, M. R., & Bastiaansen, J. A. (2018). The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: the case of depression. Psychological medicine, 48(15), 2453-2455. https://doi.org/10.1017/S0033291718001873
You may also be interested in
Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time (Kaplan & Irvin, 2015)
Main Takeaways:
- The article investigates whether null results have increased over time in the National Heart, Lung, and Blood Institute.
- Method: All large randomised controlled trials between 1970-2012 were identified.
- Method: Two independent searches were conducted to increase the probability of capturing all relevant trials: one by a study author and one through grant databases covering 1970-2012.
- Results: 17 of 30 trials (57%) published prior to 2000 showed a significant benefit of the intervention on the primary outcome, compared with only 2 of 25 (8%) trials published after 2000 (a quick chi-square check of this comparison follows this list).
- Results: Industry co-sponsorship was not linked to benefit, but pre-registration was linked to null findings.
- Results: Pre-registration in ClinicalTrials.gov was strongly associated with the trend toward null findings.
- The probability of finding a treatment benefit decreased, as opposed to increased, as studies became more precise.
- Prior to 2000, file drawer problems were more prominent, leading to over-reported positive findings.
- There is a need for stricter reporting standards and greater rigour to curb the over-reporting of positive outcomes.
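The headline comparison above (17 of 30 pre-2000 trials positive versus 2 of 25 post-2000 trials) can be checked with a standard chi-square test of independence. The short sketch below is an illustrative check in Python with SciPy, not the authors’ code; with the default Yates continuity correction it reproduces the statistic reported in the abstract (χ2 ≈ 12.2, df = 1, p ≈ 0.0005).

from scipy.stats import chi2_contingency

# 2x2 table of trials: rows = pre-2000 / post-2000, columns = positive / not positive
table = [[17, 30 - 17],
         [2, 25 - 2]]

# chi2_contingency applies the Yates continuity correction to 2x2 tables by default
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")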
Quote
“All post 2000 trials reported total mortality while total mortality was only reported in about 80% of the pre-2000 trials and many of the early trials were not powered to detect changes in mortality. The effects on total mortality were null for both pooled analyses of trials that were registered or not registered prior to publication (see data in online supplement) In addition, prior to 2000 and the implementation of Clinicaltrials.gov, investigators had the opportunity to change the p level or the directionality of their hypothesis post hoc. Further, they could create composite variables by adding variables together in a way that favored their hypothesis. Preregistration in ClinicalTrials.gov essentially eliminated this possibility.” (p.9).
Abstract
We explore whether the number of null results in large National Heart Lung, and Blood Institute (NHLBI) funded trials has increased over time. We identified all large NHLBI supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they registered in clinicaltrials.gov prior to publication, used active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether the study reported a positive, negative, or null result on the primary outcome variable and for total mortality.17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome in comparison to only 2 among the 25 (8%) trials published after 2000 (χ2=12.2,df= 1, p=0.0005). There has been no change in the proportion of trials that compared treatment to placebo versus active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinical trials.gov was strongly associated with the trend toward null findings.The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings.APA Style Reference
Kaplan, R. M., & Irvin, V. L. (2015). Likelihood of null effects of large NHLBI clinical trials has increased over time. PloS one, 10(8), e0132382. https://doi.org/10.1371/journal.pone.0132382
You may also be interested in
Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
Main Takeaways:
- There is increasing evidence that scientific misconduct compromises the credibility of science.
- There are different definitions and classifications for scientific misconduct: fabrication, falsification, plagiarism. Each of them is seen as fraud.
- Publication pressure is a potential risk factor for scientific misconduct, but this relationship had not previously been studied.
- The present study addresses the relationship among publication pressure, self-reported fraud and questionable research practices.
- Method: Researchers received a survey assessing scientific misconduct together with a publication pressure questionnaire.
- Method: 315 respondents provided information on gender, age, type of specialty, years working as a scientist, appointment status, main professional activity, and Hirsch index.
- Results: 15% of respondents admitted that they had fabricated, falsified, plagiarised, or manipulated data.
- Results: Fraud was more common among younger scientists working in a university hospital.
- Results: 72% rated publication pressure as too high. Publication pressure was related to scientific misconduct severity score.
- Discussion: Publication pressure is a source of psychological stress, and this stress may increase the number of errors made in scientific research.
- The data are better suited to identifying potential determinants of self-reported misconduct than to measuring the prevalence of misconduct.
Abstract
There is increasing evidence that scientific misconduct is more common than previously thought. Strong emphasis on scientific productivity may increase the sense of publication pressure. We administered a nationwide survey to Flemish biomedical scientists on whether they had engaged in scientific misconduct and whether they had experienced publication pressure. A total of 315 scientists participated in the survey; 15% of the respondents admitted they had fabricated, falsified, plagiarized, or manipulated data in the past 3 years. Fraud was more common among younger scientists working in a university hospital. Furthermore, 72% rated publication pressure as “too high.” Publication pressure was strongly and significantly associated with a composite scientific misconduct severity score.APA Style Reference
Tijdink, J. K., Verbeke, R., & Smulders, Y. M. (2014). Publication pressure and scientific misconduct in medical scientists. Journal of Empirical Research on Human Research Ethics, 9(5), 64-71. https://doi.org/10.1177/1556264614552421 [ungated]
You may also be interested in
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
- Stop ignoring misconduct (Kornfeld & Titus, 2016)
- Scientists’ Reputations are Based on Getting it Right, not being Right (Ebersole et al., 2016)
- Check for publication integrity before misconduct (Grey et al., 2020)
Using science and psychology to improve the dissemination and evaluation of scientific work (Buttliere, 2014)
Main Takeaways:
- Buttliere advocates that the best way to optimise open science tools would be increasing their utility and lowering their costs and risks by centralizing existing individual and group efforts. This centralized platform should be easy to use, have a sophisticated public discussion space, and offer impact metrics that use the associated data.
- In order to be competitive, we have to publish in high impact journals. Competition can drive science and human progress, but when it fosters questionable research practices, a lack of open data, and the file drawer problem, it becomes counterproductive.
- Researchers invest hours to set up their profile, learn the interface and build up their network.
- Individuals post a paper, dataset, general comment, or new protocol, and it shows up in the system’s newsfeed.
- Other researchers interact with the post and the system notifies the original poster and displays the content from the same source.
- If a question a researcher proposes is not found in the discussion of a paper or the subfield, the system could provide a list of experts to answer the question.
- To increase impact and reduce questionable research practices, we need individuals to engage with prosocial activities.
- Reviews could either be done pre-publication, with feedback provided privately, or be made public and serve as a discussion thread of comments.
- To help science, people should adopt the new system.
Abstract
Here I outline some of what science can tell us about the problems in psychological publishing and how to best address those problems. First, the motivation behind questionable research practices is examined (the desire to get ahead or, at least, not fall behind). Next, behavior modification strategies are discussed, pointing out that reward works better than punishment. Humans are utility seekers and the implementation of current change initiatives is hindered by high initial buy-in costs and insufficient expected utility. Open science tools interested in improving science should team up, to increase utility while lowering the cost and risk associated with engagement. The best way to realign individual and group motives will probably be to create one, centralized, easy to use, platform, with a profile, a feed of targeted science stories based upon previous system interaction, a sophisticated (public) discussion section, and impact metrics which use the associated data. These measures encourage high quality review and other prosocial activities while inhibiting self-serving behavior. Some advantages of centrally digitizing communications are outlined, including ways the data could be used to improve the peer review process. Most generally, it seems that decisions about change design and implementation should be theory and data driven.APA Style Reference
Buttliere, B. T. (2014). Using science and psychology to improve the dissemination and evaluation of scientific work. Frontiers in computational neuroscience, 8, 82. https://doi.org/10.3389/fncom.2014.00082
You may also be interested in
Bias against research on gender bias (Cislak et al., 2018) ⌺
Main Takeaways:
- Scientific inquiries often disregard the moderating roles of sex or gender. Moreover, some findings apply only to male participants, producing biased knowledge.
- Findings related to men may be irrelevant and harmful to women.
- Studies on gender bias are often met with lower appreciation in the scientific community compared to studies on race bias.
- The present study investigated whether research on gender bias is prone to biased evaluation resulting in fewer and less prestigious publications and fewer funding opportunities.
- The present study compared articles for gender bias and race bias in impact factor and grant support.
- Method: 1485 articles published in 520 journals were assigned a numerical value based on type of bias. Two peer review criteria were used: impact factor and whether the article was supported by funding or not.
- Results: Articles on gender bias are funded less often and published in journals with lower Impact factor than articles on similar instances of social discrimination.
- Discussion: Results suggest that bias against gender bias research is not merit based but reflects the topic's lower prestige and appreciation due to a generalised gender bias.
- Another potential explanation for the observed difference in grant funding is the relative difference in availability of participant samples. Recruiting racially diverse samples may be more difficult, time-consuming and costly, while recruiting gender-diverse samples does not have similar issues.
- It is less plausible, however, that differences in participant samples affect researcher’s decisions of the outlet for their work. It may be that researchers are aware of bias against gender bias research and consider their own work less suitable for more prestigious journals.
- Research on gender bias is more often reviewed by male researchers than research on race bias.
- Rejection by more prestigious journals shows a subtle bias in the perceived quality of studies evidencing gender discrimination.
Quote
“This discussion is primarily important in order for gender bias to be properly acknowledged within the scientific community and to pursue further examination of this powerful source of inequality that severely affects many women in the world.” (p. 200)
Abstract
The bias against women in academia is a documented phenomenon that has had detrimental consequences, not only for women, but also for the quality of science. First, gender bias in academia affects female scientists, resulting in their underrepresentation in academic institutions, particularly in higher ranks. The second type of gender bias in science relates to some findings applying only to male participants, which produces biased knowledge. Here, we identify a third potentially powerful source of gender bias in academia: the bias against research on gender bias. In a bibliometric investigation covering a broad range of social sciences, we analyzed published articles on gender bias and race bias and established that articles on gender bias are funded less often and published in journals with a lower Impact Factor than articles on comparable instances of social discrimination. This result suggests the possibility of an underappreciation of the phenomenon of gender bias and related research within the academic community. Addressing this meta-bias is crucial for the further examination of gender inequality, which severely affects many women across the world.APA Style Reference
Cislak, A., Formanowicz, M., & Saguy, T. (2018). Bias against research on gender bias. Scientometrics, 115(1), 189-200. https://doi.org/10.1007/s11192-018-2667-0
You may also be interested in
- Surviving (thriving) in academia: feminist support networks and women ECRs (Macoun & Miller, 2014)
- Global gender disparities in science (Lariviere et al., 2013)
- The Pandemic and Gender Inequality in Academia (Kim & Patterson, Jr, 2020)◈
- Gender in the editorial boards of scientific journals: A study on the current state of the art (Ghasemi et al., 2020) ◈
- Something’s Got to Give (Flaherty, 2020) ◈
When it is fine to fail/ Irreproducibility is not a sign of failure, but an inspiration for fresh ideas (Anon, 2020)
Main Takeaways:
- The past decade has seen a growing recognition that results must be independently replicated before they can be accepted as true.
- It is argued that a focus on reproducibility is necessary in the physical sciences as well, although it should be viewed through slightly different lenses.
- Questions in biomedicine and in the social sciences do not reduce as cleanly to the determination of a fundamental constant of nature as questions in physical sciences. As a result, attempts to reproduce results may include many sources of variability, which are hard to control for.
- Experimental results of replications may question long-held theories or point to the existence of another theory altogether.
- It is important to be cautious about assuming something is inherently wrong when researchers cannot reproduce a result when adhering to the best agreed standards.
- When attempting to reproduce previous results, it helps to build trust and confidence in the research process. Researchers from different domains must talk and share the experiences of reproducibility.
Quote
“Irreproducibility should not automatically be seen as a sign of failure. It can also be an indication that it’s time to rethink our assumptions.” (p.192)
Abstract
The history of metrology holds valuable lessons for initiatives to reproduce results.APA Style Reference
Anon (2020). It is fine to fail/Irreproducibility is not a sign of failure, but an inspiration for fresh ideas. Nature, 578, 191-192. https://doi.org/10.1038/d41586-020-00380-2
You may also be interested in
Signalling the trustworthiness of science (Jamieson et al., 2020)
Main Takeaways:
- Authors argue that trust in science increases when scientists abide by the scientific norms.
- Scientists reinforce trust in science when they promote “the use and value of evidence, transparent reporting, self-correction, replication, a culture of critique, and controls for bias”.
- There are already a number of practical ways scientists and scientific outlets have at their disposal to signal the trustworthiness of science: article badging, checklists, a more extensive withdrawal ontology, identity verification, better forward linking, and greater transparency.
- The research community has started to thwart human biases and increase trustworthiness of scholarly work.
- Scientists, policy makers and the public sometimes base their decisions on inappropriate grounds such as irrational biases, non-scientific beliefs and misdirection by conflicted stakeholders and malicious actors.
- It is important to communicate the value of scientific practices more explicitly and transparently to correct misconceptions about science.
- Scientific advances are built on previous work with new technological revolutions, new areas of research. As a result of these new approaches, interpretations can be corrected and advanced.
- Central to this progress of science is a culture of critique, replication and independent validation of results, and self correction.
- Science discourages group-think, countermands human biases, rewards a dispassionate stance toward the subject, institutionalises organised scepticism, and fosters competition for scientists to replicate and challenge each other’s work.
- To validate and build on the results of others, it is important to archive data and analysis plans in publicly available repositories.
- Retraction statements should make known the issues that led to the retraction and who was responsible for the paper’s shortcomings. If an official investigation commences, this can help the blame be narrowed as opposed to generalised to all authors (cf. CRediT, as it allows us to identify the contributor who caused the issue, without blaming all the authors).
- We should also use a neutral term that encourages vigilance without disincentivizing disclosure such as relevant interest or relevant relationships, as opposed to conflict of interest, to indicate that not all ties are necessarily corrupt.
- To complement peer review, badges, checklists, plagiarism and image-manipulation checks, independent statistical review, and verification that authors comply with community-endorsed reporting and archiving standards are used to signal the trustworthiness of findings.
- Authors organize their thinking in their helpful Table 1, where they describe 3 dimensions (competence, integrity, and benevolence) that communicate the level of trust warranted by an individual study, as well as their associated norms and examples of violation, while fleshing out the role of stakeholders.
Quote
“Science enjoys a relatively high level of public trust. To sustain this valued commodity, in our increasingly polarized age, scientists and the custodians of science would do well to signal to other researchers and to the public and policy makers the ways in which they are safeguarding science’s norms and improving the practices that protect its integrity as a way of knowing...beyond this peer-to-peer communication, the research community and its institutions also can signal to the public and policy makers that the scientific community itself actively protects the trustworthiness of its work.” (p.19235)
Abstract
Trust in science increases when scientists and the outlets certifying their work honor science’s norms. Scientists often fail to signal to other scientists and, perhaps more importantly, the public that these norms are being upheld. They could do so as they generate, certify, and react to each other’s findings: for example, by promoting the use and value of evidence, transparent reporting, self-correction, replication, a culture of critique, and controls for bias. A number of approaches for authors and journals would lead to more effective signals of trustworthiness at the article level. These include article badging, checklists, a more extensive withdrawal ontology, identity verification, better forward linking, and greater transparency.APA Style Reference
Jamieson, K. H., McNutt, M., Kiermer, V., & Sever, R. (2019). Signaling the trustworthiness of science. Proceedings of the National Academy of Sciences, 116(39), 19231-19236.https://doi.org/10.1073/pnas.1913039116
You may also be interested in
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- Psychologists Are Open to Change, yet Wary of Rules (Fuchs et al., 2012)
- CJEP Will Offer Open Science Badges (Pexman, 2017)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency (Kidwell et al., 2016)
- Quality Uncertainty Erodes Trust in Science (Vazire, 2017)
- Signalling the trustworthiness of science should not be a substitute for direct action against research misconduct (Kornfeld & Titus, 2020)
- Reply to Kornfeld and Titus: No distraction from misconduct (Jamieson et al., 2020)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
On the reproducibility of meta-analyses: six practical recommendations (Lakens et al., 2014)
Main Takeaways:
- Researchers on different sides of a scientific argument reach different conclusions in their meta-analyses of the same literature. The article offers six recommendations that will increase the openness and reproducibility of meta-analyses.
Abstract
Meta-analyses play an important role in cumulative science by combining information across multiple studies and attempting to provide effect size estimates corrected for publication bias. Research on the reproducibility of meta-analyses reveals that errors are common, and the percentage of effect size calculations that cannot be reproduced is much higher than is desirable. Furthermore, the flexibility in inclusion criteria when performing a meta-analysis, combined with the many conflicting conclusions drawn by meta-analyses of the same set of studies performed by different researchers, has led some people to doubt whether meta-analyses can provide objective conclusions. The present article highlights the need to improve the reproducibility of meta-analyses to facilitate the identification of errors, allow researchers to examine the impact of subjective choices such as inclusion criteria, and update the meta-analysis after several years. Reproducibility can be improved by applying standardized reporting guidelines and sharing all meta-analytic data underlying the meta-analysis, including Quote from articles to specify how effect sizes were calculated. Pre-registration of the research protocol (which can be peer-reviewed using novel ‘registered report’ formats) can be used to distinguish a-priori analysis plans from data-driven choices, and reduce the amount of criticism after the results are known. The recommendations put forward in this article aim to improve the reproducibility of meta-analyses. In addition, they have the benefit of “future-proofing” meta-analyses by allowing the shared data to be re-analyzed as new theoretical viewpoints emerge or as novel statistical techniques are developed. Adoption of these practices will lead to increased credibility of meta-analytic conclusions, and facilitate cumulative scientific knowledge.APA Style Reference
Lakens, D., Hilgard, J., & Staaks, J. (2016). On the reproducibility of meta-analyses: Six practical recommendations. BMC psychology, 4(1), 24. https://doi.org/10.1186/s40359-016-0126-3
You may also be interested in
Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications (Simonsohn et al., 2015) ◈
Main Takeaways:
- To convert a scientific hypothesis into a testable prediction, researchers make several data-analytic decisions. However, these decisions can be affected by implicit motivations such as conflicts of interest or the desire to tell a publishable story. This article introduces Specification-Curve Analysis to reduce these problems. The approach reports results for all specifications “that (1) are consistent with the underlying theory, (2) are expected to be statistically valid, and (3) are not redundant with other specifications in the set.” (p.2).
- Without specification-curve analysis, researchers selectively report only a few specifications in their papers, often chosen on arbitrary grounds. Specification-curve analysis aims to reduce the influence of arbitrary analytical decisions while preserving the influence of non-arbitrary ones.
- Competent researchers will disagree about whether a given data analysis is an appropriate test of the hypothesis of interest and/or statistically valid for the data at hand. Specification-curve analysis will not end debates about which analysis should be conducted, but it will facilitate them (cf. crowdsourcing; Tierney et al., 2020, in press).
- There are three main steps in Specification-Curve Analysis: 1. Define the set of reasonable specifications to estimate. 2. Estimate all specifications and report the results in a descriptive specification curve. 3. Conduct joint statistical tests using an inferential specification curve. The set of specifications can be produced by enumerating all data-analytic decisions needed to map the scientific hypothesis or construct of interest onto a statistical hypothesis, enumerating all reasonable alternative ways a researcher could make those decisions, generating all combinations of those decisions, and removing invalid or redundant combinations (a minimal sketch of the first two steps follows this list).
- Different researchers can draw different conclusions from the same data either because of theoretically justified or statistically valid analytic choices, or because of arbitrary decisions about how shared views are operationalised. Specification-curve analysis helps reach consensus on the latter; resolving the former requires more or different theory or training, not data analysis.
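The enumerate-and-estimate logic of steps 1 and 2 can be prototyped in a few lines. The sketch below is a minimal illustration in Python with pandas and statsmodels, using entirely hypothetical analytic decisions (two outcome operationalisations, three covariate sets, an optional exclusion rule) rather than anything from the original paper; the inferential step (joint tests under the null) is omitted.

import itertools
import pandas as pd
import statsmodels.formula.api as smf

def specification_curve(df):
    # Hypothetical analytic decisions; each key is one decision, each list one set of reasonable options.
    decisions = {
        "outcome": ["y1", "y2"],                          # alternative operationalisations of the outcome
        "covariates": ["", " + age", " + age + gender"],  # alternative control sets
        "exclusion": [None, ("rt", 200)],                 # optionally drop rows with rt < 200
    }
    results = []
    # Step 1: enumerate every combination of reasonable analytic decisions.
    for outcome, covs, excl in itertools.product(*decisions.values()):
        data = df if excl is None else df[df[excl[0]] >= excl[1]]
        # Step 2: estimate each specification; 'treatment' is the (hypothetical) predictor of interest.
        fit = smf.ols(f"{outcome} ~ treatment{covs}", data=data).fit()
        results.append({"outcome": outcome, "covariates": covs or "none",
                        "exclusion": excl, "estimate": fit.params["treatment"],
                        "p": fit.pvalues["treatment"]})
    # Sorting the estimates gives the descriptive specification curve.
    return pd.DataFrame(results).sort_values("estimate").reset_index(drop=True)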
Quote
“The Specification-Curve Analysis, (i) provides a step-by-step guide to generate the set or reasonable specifications, (ii) aids in the identification of the source of variation in results across specifications via a descriptive specification curve... (iii) and provides a formal joint significance test for the family of alternative specifications, derived from expected distributions under the null...If different valid analyses lead to different conclusions, traditional pre-analysis plans lead researchers to blindly pre-commit to one vs the other conclusion by pre-committing to one vs another valid analysis, while Specification-Curve allows learning what the conclusion hinges on.” (pp.5-6).
Abstract
Empirical results often hinge on data analytic decisions that are simultaneously defensible, arbitrary, and motivated. To mitigate this problem we introduce Specification-Curve Analysis, which consists of three steps: (i) identifying the set of theoretically justified, statistically valid, and non-redundant analytic specifications, (ii) displaying alternative results graphically, allowing the identification of decisions producing different results, and (iii) conducting statistical tests to determine whether as a whole results are inconsistent with the null hypothesis. We illustrate its use by applying it to three published findings. One proves robust, one weak, one not robust at all.APA Style Reference
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2020). Specification curve analysis. Nature Human Behaviour, 1-7.https://doi.org/10.1038/s41562-020-0912-z [ungated]
You may also be interested in
Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
Main Takeaways:
- We value novel and eye-catching findings over genuine findings, thus increasing questionable research practices.
- Editorial decisions are one cause of questionable research practices, as they make decisions based on results.
- Science undergraduates are taught that hypotheses and analysis plans should be specified before the data are collected, ensuring the observer is independent of the observation.
- Cortex provides registered reports to allow null results and encourage replication.
- Registered reports are manuscripts submitted before the experiment begins. This includes the introduction, hypotheses, procedures, analysis pipeline, power analysis, and pilot data, if possible.
- Following peer review, the article is rejected or accepted in principle for publication, irrespective of the obtained results.
- Authors have to submit a finalised manuscript for re-review, share raw data, and laboratory logs.
- Pending quality checks and a sensible interpretation of findings, the manuscript is, in essence, accepted.
- Registered reports are immune to publication bias and require authors to adhere to the pre-approved methodology and analysis pipeline, which prevents questionable research practices.
- A priori power analysis is required, and the criteria for a registered report are seen as providing the highest truth value.
- Registered reports do not exclude exploratory analyses but must be distinguished from the planned analyses.
- Not all modes of scientific investigation fit registered reports but most will.
Abstract
This is an editorial by Chris Chambers who encouraged Registered Reports in Cortex as a viable initiative to reduce questionable research practices, its benefits, limitations and what information to include in a registered report.APA Style Reference
Chambers, C. D. (2013). Registered reports: a new publishing initiative at Cortex. Cortex, 49(3), 609-610. https://doi.org/10.1016/j.cortex.2012.12.016 [ungated]
You may also be interested in
- Registered Reports: A step change in scientific publishing (Chambers, 2014)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered reports : a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
Prestige drives epistemic inequality in the diffusion of scientific ideas (Morgan et al., 2018) ⌺
Main Takeaways:
- Before this study, there was no clear evidence on whether epistemic inequality is driven by non-meritocratic social mechanisms.
- It remains unknown how an idea spreads in the scientific community.
- If the origin does shape its scientific discourse, what is the relationship between the intrinsic fitness of the idea and its structural advantage by the prestige of origin?
- The present study takes a different approach to define how faculty hiring drives epistemic inequality and can determine which researchers are situated in which institutions and the origin of the idea.
- Method: Data on 5032 tenured or tenure-track faculty were collected from faculty hiring networks, in which nodes represent universities and a connection indicates that a person received their PhD at one university and held a tenure-track position at another.
- A self-loop indicates individuals who received their PhD and held a faculty position at the same institution.
- Small departments have high placement power, while large departments have power. Elite institutions have a structural advantage.
- Faculty hiring may not contribute to the spread of every research idea, but it does contribute to the spread of some; it is thus a plausible mechanism for the diffusion of ideas in academia.
- The spread of ideas originating from universities of varying prestige was investigated (a toy spread simulation follows this list).
- Results: Research from prestigious institutions spreads more quickly and completely than work of similar quality originating from less prestigious institutions.
- Higher quality research from less prestigious universities has similar success as lower-quality research in more prestigious universities.
- Even when the assessment of an idea’s quality is objective, idea dissemination in academia is not meritocratic.
- Researchers at prestigious institutions benefit from structural advantage allowing ideas to be more easily spread throughout the network of institutions and impact discourse of science.
- Lower quality ideas are overshadowed by comparable ideas from more prestigious institutions, high-quality ideas circulate widely, irrespective of origin.
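The epidemic-model intuition can be illustrated with a toy simulation: an SI-style cascade on a small synthetic hiring network, comparing how far an idea reaches when seeded at a well-connected (high-prestige) university versus a poorly connected one. This is a minimal sketch assuming Python with networkx and synthetic data, not the authors’ data or exact model.

import random
import networkx as nx

def simulate_spread(g, seed_node, transmissibility=0.3, rng=None):
    # Simple SI-style cascade: an adopted university transmits the idea along each
    # outgoing hiring edge with a fixed probability.
    rng = rng or random.Random(1)
    adopted = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in g.successors(u):
                if v not in adopted and rng.random() < transmissibility:
                    adopted.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(adopted)

# Synthetic "hiring network": edges point from PhD-granting to hiring university.
g = nx.gnp_random_graph(50, 0.05, seed=2, directed=True)
g.add_edges_from((0, v) for v in range(1, 15))  # node 0 places faculty widely (high prestige)

print("reach from high-prestige seed:", simulate_spread(g, 0))
print("reach from low-prestige seed: ", simulate_spread(g, 9))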
Abstract
The spread of ideas in the scientific community is often viewed as a competition, in which good ideas spread further because of greater intrinsic fitness, and publication venue and citation counts correlate with importance and impact. However, relatively little is known about how structural factors influence the spread of ideas, and specifically how where an idea originates might influence how it spreads. Here, we investigate the role of faculty hiring networks, which embody the set of researcher transitions from doctoral to faculty institutions, in shaping the spread of ideas in computer science, and the importance of where in the network an idea originates. We consider comprehensive data on the hiring events of 5032 faculty at all 205 Phd.-granting departments of computer science in the U.S. and Canada, and on the timing and titles of 200,476 associated publications. Analyzing five popular research topics, we show empirically that faculty hiring can and does facilitate the spread of ideas in science. Having established such a mechanism, we then analyze its potential consequences using epidemic models to simulate the generic spread of research ideas and quantify the impact of where an idea originates on its long-term diffusion across the network. We find that research from prestigious institutions spreads more quickly and completely than work of similar quality originating from less prestigious institutions. Our analyses establish the theoretical trade-offs between university prestige and the quality of ideas necessary for efficient circulation. Our results establish faculty hiring as an underlying mechanism that drives the persistent epistemic advantage observed for elite institutions, and provide a theoretical lower bound for the impact of structural inequality in shaping the spread of ideas in science.APA Style Reference
Morgan, A. C., Economou, D. J., Way, S. F., & Clauset, A. (2018). Prestige drives epistemic inequality in the diffusion of scientific ideas. EPJ Data Science, 7(1), 40. https://doi.org/10.1140/epjds/s13688-018-0166-4
You may also be interested in
- Early co-authorship with top scientist predicts success in academic careers (Li et al., 2019)
- Open Science Isn’t Always Open to All Scientists (Bahlai et al., 2019)
- Scientists’ Reputations are Based on Getting it Right, not being Right (Ebersole et al., 2016)
- The Matthew effect in science funding (Bol et al., 2018)
Open Science Isn’t Always Open to All Scientists (Bahlai et al., 2019) ⌺
Main Takeaways:
- Open science focuses on accountability and transparency, invites anyone to observe, contribute and create.
- Open science rests on the conviction that research is performed in dialogue with society. Although open practices are becoming mainstream, an increasing sense of competition rewards scientists who discover ideas and publish findings first: “... science traditionally has rewarded only scientists who are the first to discover ideas and publish findings, there is resistance to move from ‘closed’ practices…”
- The broad term of Open Science and resulting vague scope is stalling the progress of the open science movement. We are now often caught up in detailed checklists about whether a project is “open” or not, rather than “focusing on the core goal of accountability and transparency.” All or nothing checklists reduce “the accessibility of science and may reify existing inequalities within this profession.”
- Open science makes science accessible to everyone but there are systemic barriers (e.g. financial and social) that make open science more accessible to some not others such as career stage, power imbalance, employment stability, financial circumstance, country of origin and cultural context.
- These barriers prevent scientists from pursuing further and should not be used to deny further participation, including receiving grant funding or job applications.
- “To truly achieve open science’s transformative vision, it must be universally accessible, so that all people have access to the dialogue of science. Accessible in this context means usable by all, with particular emphasis on communities often not served by scientific products.”
- Open science practices are not equally accessible to all scientists. Paywalls make research inaccessible, but Open Access processing fees may prevent scientists from sharing their work, as not all institutions/individuals have the resources to overcome these barriers.
- If open access is paid out of our personal funds, instead of grant or institution funding sources, it is an unsustainable solution for many scholars that do not have access to these funds.
- “Yet open tools, code, or data sets are often not valued the same as “normal” academic products, and therefore those who spend their limited time and resources on these products suffer a cost in how they are evaluated for current and future jobs.”
- Preprints and signed peer reviews may exacerbate inherent biases. “.. forcing transparency in practices that have traditionally operated in a “black box” may exacerbate inherent biases against women and people of color, especially women of color.”
- Making data available is seen as high risk as someone can publish analyses with your data before you can. Even “a small risk particularly affects members of the scientific community with fewer resources…”
Quote
“Power imbalance can play a large role in an individual’s ability to convince their research group to use open science practices and as a result may cause them to not engage in these practices until they have stable employment or are in a senior position.”
Abstract
Current efforts to make research more accessible and transparent can reinforce inequality within STEM professions.APA Style Reference
Bahlai, C., Bartlett, L. J., Burgio, K. R., Fournier, A., Keiser, C. N., Poisot, T., & Whitney, K. S. (2019). Open science isn’t always open to all scientists. American Scientist, 107(2), 78-82. https://doi.org/10.1511/2019.107.2.78
You may also be interested in
- Early co-authorship with top scientist predicts success in academic careers (Li et al., 2019)
- Prestige drives epistemic inequality in the diffusion of scientific ideas (Morgan et al., 2018)
- On supporting early-career black scholars (Roberson, 2020)
- The Matthew effect in science funding (Bol et al., 2018)
Surviving (thriving) in academia: feminist support networks and women ECRs (Macoun & Miller, 2014) ⌺
Main Takeaways:
- This paper examines how peer support networks may affect the experiences of early-career scholars.
- Women who participated in the Feminist Reading Group (FRG) are actively intellectually engaged in theorising their own experiences.
- The group performs functions typical of reading groups, creating an informal space concerned with furthering disciplinary knowledge and developing academic skills.
- FRG members created a community of belonging among themselves, in which personal support, knowledge, and cultural and social capital were provided.
- Participants share resources and information about institutional processes and gain the confidence to navigate complex and hostile spaces of the University.
- School’s official spaces are seen as gendered and not reflective of our research interests or intellectual backgrounds.
- Participants state that FRG allowed them to continue their studies in times of difficulty.
- FRG provides opportunities to broaden exposure to other fields and improve critical thinking skills.
- The FRG promotes learning essential academic skills: women are able to learn from others’ experience with writing and publishing, and to develop presentation and analytical skills without fear of seeming to be an inadequate researcher.
- Academic work can be isolating and early career researchers frequently report feeling unsettled, anxious and experiencing self-doubt.
- FRG re-dresses this opacity and operates as an information sharing network for participants to learn about how things work at the University and in the department.
- Women graduates receive less mentoring and less involvement in professional and social networking than their male peers.
- Participation in the FRG also stimulated other academic activities, with members encouraging each other to attend conferences and present papers.
- Most participants were white, straight, cis-gendered and middle class. The group was whiter than our department as a whole.
- FRG provides participants with an opportunity to understand individual experiences of exclusion, exploitation, self-doubt, discrimination as shared and fundamentally political in character.
- Our backgrounds and experiences are not homogeneous, most participants in the reading group are racially and socio-economically privileged.
Abstract
In this paper, we reflect upon our experiences and those of our peers as doctoral students and early career researchers in an Australian Political Science department. We seek to explain and understand the diverse ways that participating in an unofficial Feminist Reading Group in our department affected our experiences. We contend that informal peer support networks like reading groups do more than is conventionally assumed, and may provide important avenues for sustaining feminist research in times of austerity, as well as supporting and enabling women and emerging feminist scholars in academia. Participating in the group created a community of belonging and resistance, providing women with personal validation, information and material support, as well as intellectual and political resources to understand and resist our position within the often hostile spaces of the University. While these experiences are specific to our context, time and location, they signal that peer networks may offer critical political resources for responding to the ways that women’s bodies and concerns are marginalised in increasingly competitive and corporatised university environments.APA Style Reference
Macoun, A., & Miller, D. (2014). Surviving (thriving) in academia: Feminist support networks and women ECRs. Journal of Gender Studies, 23(3), 287-301. https://doi.org/10.1080/09589236.2014.909718
You may also be interested in
- Global gender disparities in science (Lariviere et al., 2013)
- The Pandemic and Gender Inequality in Academia (Kim & Patterson, Jr, 2020)
- Gender in the editorial boards of scientific journals: A study on the current state of the art (Ghasemi et al., 2020)
- Bias against research on gender bias (Cislak et al., 2018)
- Something’s Got to Give (Flaherty, 2020)
Global gender disparities in science (Lariviere et al., 2013) ⌺
Main Takeaways:
- Gender inequality is still rife in science.
- There are gender inequalities in hiring, earnings, funding, satisfactions and patenting.
- Men publish more papers than women. There is no consensus whether gender differences are a result of bias, childbearing or other variables.
- The present state of quantitative knowledge of gender disparities in science was shaped by anecdotal reports and studies which are localized, monodisciplinary and dated. These studies take little account of changes in scholarly practices.
- The present study presents a cross-disciplinary bibliographic research to investigate (i) the relationship between gender and academic output, (ii) the extent of collaboration and (iii) the scientific impact of all articles published between 2008 and 2012 and indexed in the Thomson Reuters Web of Science databases
- Citation disadvantage is highlighted by the fact that women’s publication portfolios are more domestic than male colleagues and profit less from extra citations that international collaborations accrue.
- Men dominate scientific production in nearly every country (the extent of this domination varies by region).
- Women account for fewer than 30% of fractionalised authorships, while men account for more than 70% (a small example of fractionalised counting follows this list).
- Women are underrepresented when it comes to first authorships.
- For every article with a female first author, there are nearly two (1.93) articles first-authored by men.
- Female authorship is more prevalent in countries with lower scientific output.
- Female collaborations are more domestically oriented than collaborations of males from the same country.
- The present study analysed prominent author positions (sole, first- and last-authorship). When a woman was in any of these roles, the paper attracted fewer citations than when a man was in one of these roles.
- Academic pipeline from junior to senior faculty leaks female scientists. Thus it is likely that many of the trends we observed can be explained by the under-representation of women among the elders of science.
- Barriers to women in science remain widespread worldwide, despite more than a decade of policies aimed at levelling the playing field. For a country to be scientifically competitive, it needs to maximise its human intellectual capital.
- Collaboration is one of the main drivers of research output and scientific impact. Programmes fostering international collaboration for female researchers might help to level the playing field.
- No country can afford to neglect the intellectual contributions of half of its population.
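Fractionalised (or fractional) authorship counting credits each of a paper’s k authors with 1/k of that paper, so gender shares are sums of partial credits rather than counts of whole papers. The sketch below is a toy illustration in Python with made-up records, not the study’s bibliometric corpus.

from collections import defaultdict

# Hypothetical papers: each is a list of (author, gender) pairs.
papers = [
    [("A", "F"), ("B", "M")],
    [("C", "M"), ("D", "M"), ("E", "F")],
    [("F", "F")],
]

credit = defaultdict(float)
for authors in papers:
    share = 1 / len(authors)          # each author receives an equal fraction of the paper
    for _, gender in authors:
        credit[gender] += share

total = sum(credit.values())
for gender, c in sorted(credit.items()):
    print(f"{gender}: {c:.2f} fractionalised authorships ({c / total:.0%})")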
Abstract
Cassidy R. Sugimoto and colleagues present a bibliometric analysis confirming that gender imbalances persist in research output worldwide.APA Style Reference
Larivière, V., Ni, C., Gingras, Y., Cronin, B., & Sugimoto, C. R. (2013). Bibliometrics: Global gender disparities in science. Nature News, 504(7479), 211. https://doi.org/10.1038/504211a
You may also be interested in
- Surviving (thriving) in academia: feminist support networks and women ECRs (Macoun & Miller, 2014)
- The Pandemic and Gender Inequality in Academia (Kim & Patterson, Jr, 2020)◈
- Gender in the editorial boards of scientific journals: A study on the current state of the art (Ghasemi et al., 2020)
- Bias against research on gender bias (Cislak et al., 2018)
- Something’s Got to Give (Flaherty, 2020)
The Pandemic and Gender Inequality in Academia (Kim & Patterson, Jr, 2020)◈ ⌺
Main Takeaways:
- The COVID-19 pandemic worsened existing gender inequalities across society.
- Investigating the influence of the current pandemic on research productivity is complicated by an academic publication pipeline whose lag is best measured in months, if not years.
- If the pandemic disproportionately influences the productivity of female faculty, the effects on research productivity may not fully materialise for years and evaluation and promotion of female scholars could adversely be affected by gender-related inequalities woven into the system years before.
- The present study determined the proportion of work- and family-related tweets sent by male and female academics using subject-specific keywords.
- The pandemic caused the gender-related differences in professional tweeting to increase by 239%. The lockdown increased the gap between male and female faculty member’s propensity to tweet about family and care-giving.
- Women bear more of the care-giving burden: although both men and women experienced an increase in family-related tweets, the patterns uncovered reveal that female careers are more severely taxed by these commitments.
- Method: Our sample was narrowed to tenure-track or tenured faculty based in the United States, producing approximately 3000 handles.
- Method: We first identified all tweets related to career-promoting and family-related activities, and began with terms (e.g. publication, new paper, child care and home school).
- Each tweet was coded as work-related, family-related, or neither, and a more extensive set of keywords was then used to classify the entire corpus.
- Most papers and articles are shared on Twitter via URL; a tweet was classified as work-related if the shared URL indicated a file type, publication venue, or data repository service (these classifications feed the difference-in-differences comparison sketched after this list).
- Results: Faculty members of both genders were affected by the pandemic, the gap in work-related tweets between male and female academics roughly tripled following the work-from-home.
- Variation in effects between junior and senior faculty indicates this relationship is not driven by an intrinsic gender difference. This effect is produced by gendered differences in adapting a work/life balance to the pandemic.
- Female academics who reach full professor have overcome existing barriers to gender equality in academia.
- Parenting obligations overshadow all other factors in limiting research productivity.
- Despite increased efforts to address these deep-rooted inequalities, the cracks in the pipeline continue to loom large.
- Gender imbalances are less pronounced among the ranks of junior faculty, efforts to explain biases in early career trajectories would have the greatest long-term influence on the pipeline of female academics.
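The difference-in-differences logic described in the abstract boils down to an interaction between a gender indicator and a post-work-from-home indicator at the tweet level. The sketch below is a minimal illustration in Python with pandas and statsmodels on synthetic data; the column names and the simulated effect are hypothetical, not the authors’ code or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic tweet-level data (hypothetical columns, not the study's corpus).
rng = np.random.default_rng(0)
n = 2000
tweets = pd.DataFrame({
    "female": rng.integers(0, 2, n),        # 1 = female faculty member
    "post": rng.integers(0, 2, n),          # 1 = after the shift to work-from-home
    "author_id": rng.integers(0, 100, n),   # account identifier for clustered errors
})
# Simulate a smaller share of work-related tweets for women after work-from-home.
prob = 0.30 - 0.08 * tweets["female"] * tweets["post"]
tweets["work_related"] = rng.binomial(1, prob)

# Difference-in-differences: the female:post interaction is the quantity of interest.
did = smf.ols("work_related ~ female * post", data=tweets).fit(
    cov_type="cluster", cov_kwds={"groups": tweets["author_id"]})
print(did.params["female:post"])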
Quote
“With gender imbalances less pronounced among the ranks of junior faculty, efforts to account for biases in early career trajectories would have the greatest long-term impact on the pipeline of female academics. Moreover, as female role-models can positively influence young women’s propensities to enter male-dominated fields (Bonneau and Kanthak, 2018; Breda et al., 2020), administrators’ success or failure here could have downstream impacts on female representation in the academy for the next generation.” (p.15)
Abstract
Does the pandemic exacerbate gender inequality in academia? The temporal lag in publication pipeline complicates the effort to determine the extent to which women’s productivity is disproportionately affected by the COVID-19 crisis. We provide real-time evidence by analyzing 1.8 million tweets from approximately 3,000 political scientists, leveraging their use of social media for career advancement. Using automated text analysis and difference-in-differences estimation, we find that while faculty members of both genders were affected by the pandemic, the gap in work-related tweets between male and female academics roughly tripled following work-from-home. We further argue that these effects are likely driven by the increased familial obligations placed on women, as demonstrated by the increase in family-related tweets and the more pronounced effects among junior academics. Our causal evidence on work-family trade-off provides an opportunity for proactive efforts to address gender disparities that may otherwise take years to manifest.APA Style Reference
Kim, E., & Patterson, S. (2020). The Pandemic and Gender Inequality in Academia. Available at SSRN 3666587. http://dx.doi.org/10.2139/ssrn.3666587
You may also be interested in
- Surviving (thriving) in academia: feminist support networks and women ECRs (Macoun & Miller, 2014)
- Global gender disparities in science (Lariviere et al., 2013)
- Gender in the editorial boards of scientific journals: A study on the current state of the art (Ghasemi et al., 2020) ◈
- Bias against research on gender bias (Cislak et al., 2018)
- Something’s Got to Give (Flaherty, 2020) ◈
Gender in the editorial boards of scientific journals: A study on the current state of the art (Ghasemi et al., 2020) ◈ ⌺
Main Takeaways:
- There are many studies on gender in academia, but gender in the membership of editorial boards of scientific journals has only recently attracted researchers’ attention, and there is little literature on it. Editorial boards set journal policy and determine what is accepted for publication and what is not.
- Acceptance or rejection of articles influences the academic careers of authors, whether full professors or PhD students. Gender in editorial boards has attracted attention from several researchers, although studies tend to focus on journals from a specific field of knowledge.
- Works dealing with women and academia are addressed, and those focusing on editorial boards are reviewed. Male professors and male authors in journals dominate relative to their female counterparts.
- Women’s receipt of professional awards, prizes and funding increased in the past two decades. Men continue to win a higher proportion of awards and funding for scholarly research than expected based on the nomination pool.
- Stereotypes about women’s abilities, harsh self-assessment of scientific ability by women than by men; academic and professional climates dissatisfying to women and unconscious bias contribute to achieving fewer awards and funds.
- Female board representation has improved over time and is consistent across countries, and gendered subdisciplines attract higher female board representation. Inequities persist at the highest level: women are under-represented as editors and on the boards of higher-ranked journals. Three factors are associated with women’s under-representation on editorial boards: discipline, journal prestige, and editor gender.
- The literature of the last 15 years shows that this under-representation hinders women’s ability to attain scholarly recognition and advancement and carries the risk of narrowing the nature and scope of research in the field. The reviewed studies all show a worrying trend of under-representation of women and agree on its negative consequences for the advancement of science.
Abstract
Gender issues have been studied in a broad range of fields and in many areas of society, including social relations, politics, labour, and also academia. However, gender in the membership of editorial boards of scientific journals is a topic that only recently has started to attract the attention of researchers, and there is little literature on this subject as of today. The objective of this work is to present a study of the current state of editorial boards with regard to gender. The methodology is based on a literature review of gender issues in academia, and more specifically in the incipient field of gender in editorial boards. The main findings of this work, according to the reviewed bibliography, are that women are underrepresented in academic institutions, that this underrepresentation is increasingly marked in higher rank positions in academia and in editorial boards, and that this carries the risk of narrowing the nature and scope of the research in some fields of knowledge.APA Style Reference
Ghasemi, N. M., Perramon Tornil, X., & Simó Guzmán, P. (2019, March). Gender in the editorial boards of scientific journals: a study on the current state of the art. In Congrés Dones Ciència i Tecnologia 2019: Terrassa, 6 i 7 de març de 2019. http://hdl.handle.net/2117/134267
You may also be interested in
- Surviving (thriving) in academia: feminist support networks and women ECRs (Macoun & Miller, 2014)
- Global gender disparities in science (Lariviere et al., 2013)
- The Pandemic and Gender Inequality in Academia (Kim & Patterson, Jr, 2020)◈
- Bias against research on gender bias (Cislak et al., 2018)
- Something’s Got to Give (Flaherty, 2020) ◈
Something’s Got to Give (Flaherty, 2020) ◈ ⌺
Main Takeaways:
- Women's journal submission rates fell as their caring responsibilities increased due to COVID-19, based on data from an ongoing study of article submissions to preprint databases, whose preliminary results were published in Nature Index.
- Submissions were up since COVID-19, but the share of submissions made by women was down.
- Submissions by women as first authors (often junior scholars) were especially down, with some indication that they were shifting to middle authors.
- Female first-author submissions to medRxiv, for example, dropped from 36% in December to 20% in April 2020.
- Senior-author submissions by women decreased 6% over the same period, while male senior-author submissions rose 5%.
- Other researchers have found COVID-19 related papers in medicine and economics have fewer female authors than expected.
- At one journal, male authors outnumbered female authors by more than three to one.
- It was recommended by Melina R. Kibbe, editor of JAMA Surgery, that we should pause the tenure clock during the pandemic. However, critics of this approach have argued this can actually hurt, not help, women and under-represented minorities, as it can delay career progression and decrease lifetime earnings.
- The status quo is such that men win the COVID-19 game, whereas women, in general, lose. We need to allow part-time work. Different work shifts should be available to those who need them. And agencies should extend grant end dates and allow for increased funding carryover from year to year.
Quote
“In any case, Power said, the challenge “needs more thinking about and a bigger public conversation, because this situation is not going away fast.” That conversation is long overdue, she added, in that “women and carers are supposed to just fit into a system designed for people without caring responsibilities. There is a saying working mothers have: ‘You have to work like you don’t have children and parents like you don’t have a job.’ And that was before COVID-19.”” (p.10).
Abstract
Women's journal submission rates fell as their caring responsibilities jumped due to COVID-19. Without meaningful interventions, the trend is likely to continue.APA Style Reference
Flaherty, C. (2020, August 20). Something's Got to Give. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2020/08/20/womens-journal-submission-rates-continue-fall
You may also be interested in
- Surviving (thriving) in academia: feminist support networks and women ECRs (Macoun & Miller, 2014)
- Global gender disparities in science (Lariviere et al., 2013)
- The Pandemic and Gender Inequality in Academia (Kim & Patterson, Jr, 2020)◈
- Bias against research on gender bias (Cislak et al., 2018)
- Gender in the editorial boards of scientific journals: A study on the current state of the art (Ghasemi et al., 2020) ◈
Publication metrics and success on the academic job market (Van Dijk et al., 2014)
Main Takeaways:
- The number of applicants seeking academic positions vastly outnumbers the available faculty positions. To date, there has not been a quantitative analysis of which characteristics lead researchers towards becoming a principal investigator (PI). Based on their empirical results, the authors argue that 'success in academia' is predictable, depending on the number of publications, the impact factors (IF) of the journals in which they appear, and the ratio between citations and IF. In addition, the scientist's gender and the rank of their university are important predictors, suggesting that non-publication features play a statistically significant role in the academic hiring process (a minimal illustrative sketch follows this list).
- Method: The authors quantified more than 200 different metrics of publication output for authors who became Principal Investigators and those who did not.
- Method: Whether or not a scientist becomes a Principal Investigator depends on the publication record, considering only the first few years of publication and the effect of each publication feature independent of other confounding variables.
- Results: Authors with more first-author publications and more papers in high impact journals are more likely to have higher h index and take less time to become principal investigators.
- Results: The actual number of citations is less predictive of becoming a Principal Investigator than the journal impact factor.
- Results: Authors with more first- or second-author publications are more likely to become Principal Investigators. However, publications with many co-authors are given less credit.
- Results: More middle author publications add little value in becoming a Principal Investigator, unless they are published in high impact journals.
- Results: Authors who take longer than seven years to become a Principal Investigator have more citations per paper than authors who become Principal Investigators more quickly.
- Results: Men are over-represented as Principal Investigators; after correcting for all other publication- and non-publication-derived features, being male remains positively predictive of becoming a Principal Investigator.
- The journal's impact factor is given more weight than a publication's actual quality. The number of citations a publication receives is correlated with the impact factor of the journal.
- The authors found that citations/impact factor is the fourth most predictive feature after impact factor, number of publications and gender.
- Authors who become Principal Investigators without high-impact-factor publications have a two-fold higher first-author publication rate than authors who do not become Principal Investigators, indicating that more first-author publications per year can compensate for a lack of high-impact-factor publications.
- The set of Principal Investigators is enriched for scientists who attended higher-ranked universities. University rank is linked to many other features, yet it predicts becoming a Principal Investigator independently of other publication features.
- Scientists from higher-ranked institutions become Principal Investigators before those from lower-ranked institutions.
- The authors suggest that better universities attract better people and produce more Principal Investigators.
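A minimal sketch of the kind of prediction task described above, assuming entirely invented data and feature codings. This is not the authors' actual model (which was trained on over 25,000 PubMed scientists and is available at www.pipredictor.com); it only illustrates how a logistic regression can relate PI status to publication and non-publication features.

```python
# Minimal sketch with invented data (not the authors' model): a logistic
# regression relating PI status to publication and non-publication features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(3, n),        # first-author publications in the early career
    rng.gamma(2.0, 2.5, n),   # mean impact factor (IF) of the journals published in
    rng.gamma(1.5, 1.0, n),   # citations relative to the journal IF (citations/IF)
    rng.integers(0, 2, n),    # gender (hypothetical 0/1 coding)
    rng.integers(1, 200, n),  # rank of the scientist's university
])
# Hypothetical outcome: the chance of becoming a PI rises with publications and IF
logit = -3 + 0.4 * X[:, 0] + 0.25 * X[:, 1] + 0.3 * X[:, 2] + 0.5 * X[:, 3] - 0.005 * X[:, 4]
became_pi = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, became_pi, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))
print(dict(zip(
    ["first_author_pubs", "mean_IF", "citations_per_IF", "gender", "university_rank"],
    model.coef_[0].round(2).tolist(),
)))
```

The point is only that such a model assigns a weight to each feature; which features actually dominate is the paper's empirical question.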
Quote
“Our results suggest that currently, journal impact factor and academic pedigree are rewarded over the quality of publications, which may dis-incentivize rapid communication of findings, collaboration and interdisciplinary science.” (p.517)
Abstract
The number of applicants vastly outnumbers the available academic faculty positions. What makes a successful academic job market candidate is the subject of much current discussion. Yet, so far there has been no quantitative analysis of who becomes a principal investigator (PI). We here use a machine-learning approach to predict who becomes a PI, based on data from over 25,000 scientists in PubMed. We show that success in academia is predictable. It depends on the number of publications, the impact factor (IF) of the journals in which those papers are published, and the number of papers that receive more citations than average for the journal in which they were published (citations/IF). However, both the scientist’s gender and the rank of their university are also of importance, suggesting that non-publication features play a statistically significant role in the academic hiring process. Our model (www.pipredictor.com) allows anyone to calculate their likelihood of becoming a PI.APA Style Reference
Van Dijk, D., Manor, O., & Carey, L. B. (2014). Publication metrics and success on the academic job market. Current Biology, 24(11), R516-R517. https://doi.org/10.1016/j.cub.2014.04.039
You may also be interested in
- Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al, 2018)
- Faculty promotion must assess reproducibility (Flier, 2017) ⌺
- Rewarding Research Transparency (Gernsbacher, 2018)
- The Matthew effect in science funding (Bol et al., 2018)
Scientists’ Reputations are Based on Getting it Right, not being Right (Ebersole et al., 2016)
Main Takeaways:
- What happens if my finding does not replicate?
- The success of replications depends on the methodology used.
- Many researchers argue that scientists should be evaluated only for things that they control (e.g. hypotheses, design, implementation, analysis and reporting).
- Scientists produce ideas and insights that drive the discovery of the results, but results are determined by reality.
- Exciting, innovative results are perceived as better than boring, incremental results.
- However, certain and reproducible results are better than uncertain and irreproducible results.
- It is ideal to have innovative and certain results. However, people prefer reproducible but boring findings over exciting but irreproducible results.
- The authors ask whether we should chase the next exciting findings or should we work to achieve greater certainty via replication and other strategies?
- How scientists respond to others' replications of their work, and whether they pursue their own replications, is closely tied to their reputation.
- If a failed self-replication was reported, or if the research failed to replicate but was pursued with follow-up research, the reputation of the scientist whose work was being replicated increased.
- A second survey compared researchers with the general population; the same pattern of findings was observed.
Abstract
Replication is vital for increasing precision and accuracy of scientific claims. However, when replications “succeed” or “fail,” they could have reputational consequences for the claim’s originators. Surveys of United States adults (N = 4,786), undergraduates (N = 428), and researchers (N = 313) showed that reputational assessments of scientists were based more on how they pursue knowledge and respond to replication evidence, not whether the initial results were true. When comparing one scientist that produced boring but certain results with another that produced exciting but uncertain results, opinion favored the former despite researchers’ belief in more rewards for the latter. Considering idealized views of scientific practices offers an opportunity to address incentives to reward both innovation and verification.APA Style Reference
Ebersole, C. R., Axt, J. R., & Nosek, B. A. (2016). Scientists’ reputations are based on getting it right, not being right. PLoS biology, 14(5), e1002460.
You may also be interested in
- Publication Pressure and Scientific Misconduct in Medical Scientists (Tijdink et al., 2014)
- Prestige drives epistemic inequality in the diffusion of scientific ideas (Morgan et al., 2018)
- Fallibility in Science: Responding to Errors in the Work of Oneself and Others (Bishop, 2018)
Rewarding Research Transparency (Gernsbacher, 2018)
Main Takeaways:
- Reproducibility of results is the active ingredient of any science, including cognitive science.
- To ensure reproducibility, cognitive scientists are increasingly taking steps towards research transparency, such as pre-registering their studies' goals and analytic plans. These steps take time and might not be rewarded.
- The author suggests ways to better reward research transparency in three phases: when hiring researchers for academic jobs, when evaluating researchers for academic promotion and tenure, and when selecting researchers for society and national awards.
Abstract
Cognitive scientists are increasingly enthusiastic about research transparency. However, their enthusiasm could be tempered if the research reward system fails to acknowledge and compensate these efforts. This article suggests ways to reward greater research transparency during academic job searches, academic promotion and tenure evaluations, and society and national award selections.APA Style Reference
Gernsbacher, M. A. (2018). Rewarding research transparency. Trends in cognitive sciences, 22(11), 953-956. https://doi.org/10.1016/j.tics.2018.07.002
You may also be interested in
- Six principles for assessing scientists for hiring, promotion, and tenure (Naudet et al, 2018)
- Publication metrics and success on the academic job market (Van Dijk et al., 2014)
Registered Reports: A step change in scientific publishing (Chambers, 2014)
Main Takeaways:
- Registered reports foster clarity and replication before the experiment is conducted.
- Study protocols are reviewed before the experiments are conducted.
- Readers feel more confident that work is replicable with initial study predictions and analysis plans that were independently reviewed.
- Registered reports are a departure from traditional peer review.
- Low power, high rate of cherry picking, post-hoc hypothesising, lack of data sharing, journal culture marked by publication bias, and few replication studies, have contributed to the reproducibility crisis.
- Registered reports allow us to publish positive, negative, or null findings, thus producing a true picture of the literature.
- Publication bias is avoided: when deciding whether a manuscript is worthy of publication, editors and reviewers are driven by the quality of the methods rather than the results.
- Registered reports are not so much an innovation as a restoration and reinvention of publication and peer-review mechanisms.
- Registered reports allow creativity, flexibility and reporting of unexpected findings.
Quote
“Ultimately, it is up to all of us to determine the future of any reform, and if the community continues to support Registered Reports then that future looks promising. Each field that adopts this initiative will be helping to create a scientific literature that is free from publication bias, that celebrates transparency, that welcomes replication as well as novelty, and in which the reported science will be more reproducible.” (p. 3)
Abstract
Professor Chris Chambers, Registered Reports Editor of the Elsevier journal Cortex and one of the concept’s founders, on how the initiative combats publication bias.APA Style Reference
Chambers, C. (2014). Registered reports: A step change in scientific publishing. Reviewers’ Update. November, 13, 2014. https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/registered-reports-a-step-change-in-scientific-publishing
You may also be interested in
- Registered Reports: A new publishing initiative at Cortex (Chambers, 2013)
- Registered Reports: Realigning incentives in scientific publishing (Chambers et al., 2015)
- Registered reports: a method to increase the credibility of published results (Nosek & Lakens, 2014)
- Registered reports (Jamieson et al., 2019)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- On the persistence of low power in psychological science (Vankov et al., 2014)
Fast Lane to Slow Science (Frith, 2020)
Main Takeaways:
- Fast Science is bad for scientists and bad for science.
- Slow science may help us to make faster progress, but how can we slow down? People hardly have time to read the original studies, and there is little chance to cultivate broader interests, which impairs researchers' mental health and well-being.
- We lose talented people, resulting in decreased diversity.
- Fast science cuts corners and contributes to the reproducibility crisis.
- We could set up a working group – a small conference where practical ideas could be discussed.
- We must look differently at timescales and consider bigger aims of science. Researchers need to be reminded that we contribute to a human effort that transcends an individual’s lifetime.
- We work for the sake of truth and for the benefit of society, which has reason to believe that science continuously improves our models of the world.
- A farsighted vision is important to create and test big theories, irrespective of obstacles.
- We should reconsider how funders view the length of grant proposals and the intervals between evaluations.
- Early career researchers believe they need to amass publications and grants. Established researchers assume that grants need to be maintained for their teams and facilities.
- Researchers need to be encouraged and rewarded for long-term projects that depend on collaborations and may not have a short-term pay-off.
- We must teach students about the history of science, its noble goals, how it moves forward through failure and success through collaboration and competition.
- Researchers should actively model thinking pauses.
- We need to inform researchers about regret and make them aware that, in time, they may feel similarly. Quality, as opposed to quantity, should be the grounds for awarding grants, for hiring, and for promotions and awards.
- Quality feels too subjective and tainted by bias that stems from being part of, or wishing to be part of, high-status networks.
- How then do we assess quality? Authors themselves can be good judges of their own work.
- The best papers have something new and fascinating to say within a well-argued theoretical framework, are concise, and use simple language.
- Collaborations are visible and replace the lone genius stereotype.
- New solutions to big problems can be found more readily when researchers of diverse skills and different viewpoints interact. This is not difficult; first, we need to achieve common ground and a shared language.
- There is a need for vigilance to measure reliability and discriminate fact from fake, and for engaging with those who bring different perspectives and make us aware of flaws in our theories and experiments. Why not develop a system that lists contributions in the manner of film credits?
- We need to restrict the number of grants anyone holds at any one time and limit the number of papers published per year.
- Funders, institutions and publishers could then regulate this initially voluntary triage to a prearranged number.
- New models of science communication overcome some problems of traditional journal articles and provide answers to tricky problems of credits.
- Doing less is better, but we need to develop tools to measure quality. It would be exciting to set a goal and have a contest between those who continue in the fast lane and those who decide to switch lanes.
Abstract
Fast Science is bad for scientists and bad for science. Slow Science may actually help us to make faster progress, but how can we slow down? Here, I offer preliminary suggestions for how we can transition to a healthier and more sustainable research culture.APA Style Reference
Frith, U. (2020). Fast lane to slow science. Trends in cognitive sciences, 24(1), 1-2. https://doi.org/10.1016/j.tics.2019.10.007
You may also be interested in
Lessons for psychology laboratories from industrial laboratories (Gomez et al., 2017)
Main Takeaways:
- This proposal does not discuss outright fraud; it is aimed at well-intentioned researchers who want to produce the best possible scientific work.
- How can we increase the quality of the data in psychology and cognitive neuroscience laboratories?
- Behavioural and social scientists are not less ethical than scientists from other disciplines, but the noisiness of data obtained from human behaviour contributes to these fields' problems.
- It is a research ethics imperative to reduce sources of noise in our data by implementing data quality systems.
- Academic laboratories do not have external controls and scientists rarely get trained in quality systems.
- Industrial laboratories have a very different culture as quality systems are widely used.
- High-quality standards are imperative for industrial activities, as there are external forces that cannot be ignored.
- Junior graduate students mess up many times before they adopt their own quality habits; the development of formal, explicit and enforceable quality policies would be beneficial for everyone involved, and the benefits would quickly outweigh the costs of developing and enforcing these systems.
- Quality systems would reduce the waste of resources on failed studies, facilitate the adoption of open science practices, and improve the signal-to-noise ratio in the data.
- The quality-system needs of a group doing survey-based studies might be different from the needs of a group collecting neurophysiological data.
- Quality Assessment should be the responsibility of senior members of the team, as this process is strategic, pre-planned and has a long-term time frame.
- A more stringent quality system would be to have an external group perform a quality verification audit.
- A laboratory could be audited by a buddy laboratory from either the same or different institution.
- Research could have verification badges the same way that some of the open science initiatives provide forms of certification for different levels of openness.
Abstract
In the past decade there has been a lot of attention to the quality of the evidence in experimental psychology and in other social and medical sciences. Some have described the current climate as a ‘crisis of confidence’. We focus on a specific question: how can we increase the quality of the data in psychology and cognitive neuroscience laboratories. Again, the challenges of the field are related to many different issues, but we believe that increasing the quality of the data collection process and the quality of the data per se will be a significant step in the right direction. We suggest that the adoption of quality control systems which parallel the methods used in industrial laboratories might be a way to improve the quality of data. We recommend that administrators incentivize the use of quality systems in academic laboratories.APA Style Reference
Gomez, P., Anderson, A. R., & Baciero, A. (2017). Lessons for psychology laboratories from industrial laboratories. Research Ethics, 13(3-4), 155-160. https://doi.org/10.1177/1747016117693827
You may also be interested in
Let’s Publish Fewer Papers (Nelson et al., 2012)
Main Takeaways:
- Authors agree with Nosek and Bar-Anan’s (2012) “Scientific Utopia: I. Opening Scientific Communication” that there is no longer a need for page limits, long lags between acceptance and publication, and prohibitive journal subscription fees, but worry that when all findings are made available, (a) it is harder to discriminate the true findings from the false findings; and (b) there will be more false findings.
- We know authors file away less successful papers (the file-drawer problem), leading to publication bias, but we also need to focus on the 'cluttered office' effect.
- In an office full of papers, it is hard to tell good manuscripts from bad papers. Not all researchers should receive equal consideration: “What is less often pictured is the paper that landed in the file drawer, not because of the vagaries of the publication process but because it reports a study that was ill-conceived, poorly run, or generally uninteresting.”
- The consequence is that the less established researcher is unlikely to be noticed and praised. Researchers seeking top jobs would better to comment on a paper by a famous and eminent researcher. Authors write: “When every paper is available, it becomes increasingly burdensome to find the good papers, and even harder to find the diamond in the rough—the paper that is not by a famous person, not from a famous school, and not in a popular research area.”
- Advancement, as described in Nosek and Bar-Anan's (2012) proposal, depends on the value of the papers being rescued from the file drawer.
- For every good idea we have, we need to consider many bad ideas.
- It is a good idea to drop bad ideas before they mature into bad papers. Bad papers are easy to write but difficult to publish.
- However, it is now easier to publish papers, which makes it easier to introduce more bad manuscripts into the literature.
- Some published papers are false-positives. An occasional false-positive manuscript is bad but lots of false positives are catastrophic to science and the field.
- False positives are hard to identify and correct.
- False positives produce severe costs on the scientific community, which is felt more by the field than the individual researcher.
- We reward researchers heavily for having new and exciting ideas, and less so for being accurate (cf. Ebersole et al., 2016).
- Researchers are trained to defeat the review process and conquer the publisher.
- Researchers are rewarded for the quantity of papers and less for the truth value of our shared knowledge.
- In a system that focuses on one paper per year, researchers can publish a paper on an effect that can be reliably obtained.
- The researcher would be able to pursue their own work with improved clarity and focus, as there is only one paper to write per year.
- It would also be easier to evaluate two candidates who differ in quality but are matched in quantity.
Abstract
This commentary is written by Professor Leif Nelson, Professor Joseph Simmons and Professor Uri Simonsohn about the importance of publishing fewer papers.APA Style Reference
Nelson, L. D., Simmons, J. P., & Simonsohn, U. (2012). Let's publish fewer papers. Psychological Inquiry, 23(3), 291-293. https://doi.org/10.1080/1047840X.2012.705245
You may also be interested in
Informal Teaching Advice (Bloom, 2020)◈
Main Takeaways:
- To be an excellent teacher, you need 12 points:
- 1. Enthusiasm. Act enthusiastic, as if there is no place in the world you would rather be and no material you enjoy more. This will make your audience interested and want to learn more.
- 2. Be confident. Even if you have practiced this talk 100 times, always act as if the talk has gone smashingly.
- 3. Mix it up. Throw in some movies, demos and so on to cure boredom and make the talk more interesting.
- 4. Bring in other people (e.g. guest lectures, interviews and debates). This will introduce variety to your course.
- 5. Be modest in goals for each class. Do not cram too much material in any single session.
- 6. Be yourself. Everyone has strengths. Use your strengths to your advantage and align them with the way you teach.
- 7. Teaching prep can leech away all the time. Don’t let it.
- 8. If you say well-timed “Great question. I don’t know but I’ll find out for next class”, it is perceived as charming and makes everyone feel good.
- 9. Use specific students as examples in arbitrary ways.
- 10. If a student asks a stupid question, don't say it is stupid. At a minimum, respond with how interesting it is, no matter how off-topic.
- 11. Use concrete examples from your own life. They do not necessarily have to be true.
- 12. If you suffer from anxiety, self-medicate before teaching but do not get addicted.
Abstract
This commentary is written by Professor Paul Bloom about how to make your teaching more engaging with your students.APA Style Reference
Bloom, P. (2020). Informal Teaching advice. https://www.dropbox.com/s/glm1agnxtz5tbww/informal-teaching-advice.pdf?dl=0
You may also be interested in
The Matthew effect in science funding (Bol et al., 2018)
Main Takeaways:
- Why is academic success so unequally distributed across scientists? One explanation is the Matthew effect (i.e. a scientist’s past success positively influences success in the future). For example, if one of two equally bright scientists is given an award, the award winning scholar will have a more successful career than the other equally bright scientist who did not receive an award. Put simply, the Matthew effect undermines meritocracy, by allowing an initially fortunate scientist to self-perpetuate, whereas an equally talented but less fortunate counterpart remains underappreciated.
- “First, we address the causal inference problem using a regression-discontinuity approach. Second, we systematically study the Matthew effect in science funding... third, we identify a participation mechanism driving the Matthew effect whereby early stage failure inhibits participation in further competition through discouragement and lack of resources.” (p.4887).
- Method: A single granting program, the Innovation Research Incentives Scheme, is the primary funding source for young Dutch scientists. This was used to assess the Matthew effect, as it provided a dataset containing all review scores and funding decisions of grant proposals (a minimal sketch of the underlying regression-discontinuity idea follows this list).
- Method: “We isolate the effects of recent PhDs winning an early career “Veni” grant by comparing the subsequent funding success of nonwinners with evaluation scores just below the threshold to winners with scores just above it.” (p.4888).
- Results: A scientist with an early career award is 2.5 times more likely to win a mid-career award than those who did not obtain an early-career award. This effect was not due to superior proposal quality or scientific ability but early funding itself.
- Results: Winning an early-career grant explains 40% of differences in earning between the best and worst applicant and raises long-term prospects of becoming a professor by 47%.
- The funding of early-career researchers shows a Matthew effect: candidates who won prior awards are evaluated more positively than non-winners, and scientists who were successful in obtaining grants select themselves into applicant pools for subsequent grants at higher rates than unsuccessful researchers.
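A minimal sketch, with simulated data, of the regression-discontinuity logic described above: applicants just below and just above the funding threshold are compared on a later outcome, and the jump at the cutoff is attributed to winning the early grant. All numbers below are invented for illustration and are not the study's estimates.

```python
# Minimal regression-discontinuity sketch with invented data (not the authors' analysis).
import numpy as np

rng = np.random.default_rng(1)
n = 2000
score = rng.uniform(-1, 1, n)               # review score, centred so the funding threshold is 0
won_early_grant = (score >= 0).astype(float)
# Hypothetical later funding (in EUR 1,000): smooth in the score, plus a jump at the cutoff
later_funding = 100 + 30 * score + 180 * won_early_grant + rng.normal(0, 40, n)

# Local linear regression within a bandwidth of the cutoff, with a separate slope on each side
bandwidth = 0.5
m = np.abs(score) <= bandwidth
X = np.column_stack([
    np.ones(m.sum()),                        # intercept
    won_early_grant[m],                      # treatment indicator (the discontinuity of interest)
    score[m],                                # running variable
    score[m] * won_early_grant[m],           # lets the slope differ above the cutoff
])
beta, *_ = np.linalg.lstsq(X, later_funding[m], rcond=None)
print(f"Estimated jump at the threshold: about {beta[1]:.0f} (in EUR 1,000)")
```

The design choice is that applicants close to the cutoff are assumed comparable, so any jump in later outcomes at the threshold reflects the grant itself rather than differences in ability.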
Quote
“Recent studies have documented rising inequality among scientists across the academic world (38, 39). Not only do our findings suggest that positive feedback in funding may be a key mechanism through which money is increasingly concentrated in the hands of a few extremely successful scholars, but also that the origins of emergent distinction in scientists’ careers may be of an arbitrary nature.” (p.4880)
Abstract
A classic thesis is that scientific achievement exhibits a “Matthew effect”: Scientists who have previously been successful are more likely to succeed again, producing increasing distinction. We investigate to what extent the Matthew effect drives the allocation of research funds. To this end, we assembled a dataset containing all review scores and funding decisions of grant proposals submitted by recent PhDs in a €2 billion granting program. Analyses of review scores reveal that early funding success introduces a growing rift, with winners just above the funding threshold accumulating more than twice as much research funding (€180,000) during the following eight years as nonwinners just below it. We find no evidence that winners’ improved funding chances in subsequent competitions are due to achievements enabled by the preceding grant, which suggests that early funding itself is an asset for acquiring later funding. Surprisingly, however, the emergent funding gap is partly created by applicants, who, after failing to win one grant, apply for another grant less often.APA Style Reference
Bol, T., de Vaan, M., & van de Rijt, A. (2018). The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115(19), 4887-4890. https://doi.org/10.1073/pnas.1719557115
You may also be interested in
- Early co-authorship with top scientist predicts success in academic careers (Li et al., 2019)
- Prestige drives epistemic inequality in the diffusion of scientific ideas (Morgan et al., 2018)
- Open Science Isn’t Always Open to All Scientists (Bahlai et al., 2019)
- A user’s guide to inflated and manipulated impact factor (Ioannidis & Thombs, 2019)
- Publication metrics and success on the academic job market (Van Dijk et al., 2014)
Minimising Mistakes in Psychological Science (Rouder et al., 2018)
Main Takeaways:
- The article discusses a few practices to improve the reliability of scientific labs by focusing on what technologies and elements minimise common, ordinary errors.
- Common, everyday and ordinary mistakes (e.g. reporting a figure based on incorrect data) can be detrimental to science and everyone has made these mistakes.
- We need to consider practices in high-risk fields where mistakes can have devastating consequences (e.g. healthcare and the military). Organisations research this type of risk management and how to reduce such risks through the theory of high-reliability organisations and its principles. Should our labs be high-reliability organisations? Yes. Although mistakes in the lab do not have life-or-death consequences, they can produce knowledge waste and can threaten our reputations. The principles of a high-reliability organisation transfer well to the academic lab setting.
- The five elements the authors focus on are listed in the abstract below.
Quote
“We have been practicing open science for about two years. It is our view that there are some not-so-obvious benefits that have improved our work as follows: There are many little decisions that people must make in performing research. To the extent that these little decisions tend to go in a preferred direction, they may be thought of as subtle biases. These decisions are often made quickly, sometimes without much thought, and sometimes without awareness that a decision has been made. Being open has changed our awareness of these little decisions. Lab members bring them to the forefront early in the research process where they may be critically examined. One example is that a student brought up outlier detection very early in the process knowing that not only would she have to report her approach, but that others could try different approaches with the same data. Addressing these decisions head on, transparently, and early in the process is an example of how practicing open science improves our own science.” (p.9).
Abstract
Developing and implementing best practices in organizing a lab is challenging, especially in the face of new cultural norms such as the open-science movement. Part of this challenge in today’s landscape is using new technologies such as cloud storage and computer automation. Here we discuss a few practices designed to increase the reliability of scientific labs by focusing on what technologies and elements minimize common, ordinary mistakes. We borrow principles from the Theory of High-Reliability Organizations which has been used to characterize operational practices in high-risk environments such as aviation and healthcare. From these principles, we focus on five elements: 1. implementing a lab culture focused on learning from mistakes; 2. using computer automation in data and meta-data collection wherever possible; 3. standardizing organization strategies; 4. using coded rather than menu-driven analyses; 5. developing expanded documents that record how analyses were performed.APA Style Reference
Rouder, J. N., Haaf, J. M., & Snyder, H. K. (2019). Minimizing mistakes in psychological science. Advances in Methods and Practices in Psychological Science, 2(1), 3-11. https://doi.org/10.1177/2515245918801915
You may also be interested in
Open Science at Liberal Arts Colleges (Lane et al., 2020)◈
Main Takeaways:
- Authors offer suggestions on how open science can be fertile when promoted among the faculty that works with undergraduates in classrooms and in labs – i.e., faculty at Small Liberal Arts Colleges (SLACS). Authors also discuss how to use open science to encourage a transformation of the institutional culture and the development of professionals around open science practices.
- SLACs' primary focus is the exceptionality of its undergraduate education (i.e., the integration of teaching and research defines the SLAC experience by meaningfully incorporating students into research, which is central to the institutional mission).
- The authors discuss that faculty engaging in open science practices may be hesitant to discuss the replicability crisis because they worry students will lose trust in the field, or because they do not feel knowledgeable or qualified enough about open science to teach it. However, the authors argue, SLACs have small class sizes and an interactive educational approach that facilitates productive and robust discussion about open science, thus encouraging critical thinking and a well-rounded education.
- For example, open science can be included in statistics and advanced methods, as pre-registration can be used for students to plan their research question, hypothesis, methods and data analytic procedures before data collection begins.
- Open science should be studied as part of the liberal arts experience or as general education requirements, as transferable skills can be taught (e.g. framing questions, thinking critically, working collaboratively, grappling with data and communicating clearly).
- Students can be taught an explicit and transparent account of how discoveries are made, focusing primarily on the process itself, as opposed to the outcome.
- Pre-registration can be used as a checkpoint during the research process to ensure the students understand their project prior to data collection, especially when SLACS have limited participant pools.
- Open science will help the faculty at SLACs, as it allows the faculty not to compete in terms of quantity but instead to focus primarily on the research process, thus allowing for well-designed and robust studies and lines of research.
- Sharing data and materials encourages more productive collaborations with other researchers and with current and future generations of undergraduate students, as it allows labs to systematically track, organise and share materials and data. This encourages good practices for future students, who will have ready access to materials used in studies conducted several years earlier, saving the research mentor time and energy that would otherwise be spent tracking down analyses, datasets and questionnaires.
- SIPS: Inclusion is a primary focus of the Society for the Improvement of Psychological Science (SIPS), which works to ensure that non-PhD-granting institutions have a strong voice in its governance. Most projects at SIPS are reviewed with regard to diversity, including the type and size of the institution.
Quote
“Sharing materials and data increases the trustworthiness of a research project and makes it possible for others to replicate our work. Sharing data requires greater accountability by researchers, who must demonstrate that they have handled the data properly, used data analysis tools adeptly, and did not overlook potential alternative explanations for their findings. The scrutiny that accompanies open science can begin to feel a little like inviting other researchers to look at how you’ve organized your bedroom closet.” (p.10).
Abstract
Adopting and sustaining open science practices is accompanied by particular opportunities and challenges for faculty at small liberal arts colleges (SLACs). Their predominantly undergraduate student body, small size, limited resources, substantial teaching responsibilities, and focus on intensive faculty-student interactions make it difficult to normalize open science at SLACs. However, given the unique synergy between teaching and research at SLACs, many of these practices are well-suited for work with undergraduate psychology students. In addition, the opportunities for collaboration afforded by the open science community may be especially attractive for those doing research at SLACs. In this paper, we offer suggestions for how open science can further grow and flourish among faculty who work closely with undergraduates, both in classrooms and in labs. We also discuss how to encourage professional development and transform institutional culture around open science practices. Most importantly, this paper serves as an invitation to SLAC psychology faculty to participate in the open science community.APA Style Reference
Lane, K. A., Le, B., Woodzicka, J. A., Detweiler-Bedell, J., & Detweiler-Bedell, B. (2020, August 23). Open Science at Liberal Arts Colleges. https://doi.org/10.31234/osf.io/437c8
You may also be interested in
How to prove that your therapy is effective, even when it is not: a guideline (Cuijpers & Cristea, 2016)
Main Takeaways:
- Treatment guidelines use randomised trials to advise professionals to use specific interventions and not others; policymakers and health insurance companies use this evidence to decide whether or not a specific intervention should be adopted and implemented.
- “If this were your starting position, how could you make sure that the randomised trial you do actually results in positive outcomes that your therapy is indeed effective?” (p.1).
- In fact, one important method to optimise the chance that the results of a trial are favourable is anything that increases participants' expectations and hope: the placebo effect means that merely expecting a therapy to work can improve outcomes.
- Advertise your trial in the media as innovative, unique and the best among the available interventions.
- “Another thing that you have to learn when you want to optimise the effects found for your therapy is that randomised trials have ‘weak spots’, also called ‘risk of bias’.” (p.3)
- Consider the randomisation of participants: randomisation is central to the trial, and if participants are not properly randomised to groups, effects could be due to baseline differences between groups rather than the intervention.
- There are two important aspects of randomisation: sequence generation (random numbers should be genuinely generated, for instance by coin toss) and allocation concealment (without it, the researchers or assistants conducting the trial can steer participants who are likely to respond well into the intervention group instead of the control group).
- Use non-blinded raters for the clinical assessment of outcomes, so that their expectations can influence the outcomes of the trial.
- Also, there is the issue of attrition: individuals who do not respond to the intervention, or who experience side effects, may reason that the treatment does not help them (or even harms them), so why continue?
- Ignore attrition in the analyses of outcomes and look exclusively at completers; analyse only the participants who continued.
- The therapy will then appear to have better outcomes, because patients who completed the therapy typically do better than those who dropped out. The correct alternative is to include all randomised participants in the final analyses (see the numerical sketch after this list).
- Include multiple outcome measures, then look at which outcome produced the best result and report only that one, ignoring the other measures.
- Published articles can omit the trial registration number, so readers are not prompted to dig up the available protocol and check for selective outcome reporting.
- During presentations you can claim that the therapy works better than the existing one and that user reports are positive, even though this is not examined in your manuscript.
- If you find null effects for this specific intervention, do not publish the findings. If you think this is unethical towards the participants and the funder, just remind yourself that many other researchers do it, so it is an 'acceptable' strategy!
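A minimal numerical sketch, with invented numbers, of why a completers-only analysis flatters a therapy compared with an intention-to-treat analysis that keeps all randomised participants:

```python
# Invented numbers: completers-only vs. intention-to-treat (ITT) response rates.
# Assume 100 patients are randomised to the therapy and 40 of them drop out.
randomised = 100
completers = 60
improved_among_completers = 36   # completers tend to do better
improved_among_dropouts = 8      # dropouts are often non-responders

completers_only_rate = improved_among_completers / completers
itt_rate = (improved_among_completers + improved_among_dropouts) / randomised

print(f"Completers-only response rate: {completers_only_rate:.0%}")   # 60%
print(f"Intention-to-treat response rate: {itt_rate:.0%}")            # 44%
```

With these assumed numbers, simply dropping the non-completers turns a 44% response rate into a 60% one, which is exactly the kind of flattering distortion the article warns about.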
Quote
“Research on the effects on therapies is no exception to this predicament. Many published research findings were found not to be true when other researchers tried to replicate these finding. When you want to show that your therapy is effective, you can simply wait until a trial is conducted and published that does find positive outcomes. And then you can still claim that your therapy is effective and evidence-based.” (p.6)
Abstract
Suppose you are the developer of a new therapy for a mental health problem or you have several years of experience working with such a therapy, and you would like to prove that it is effective. Randomised trials have become the gold standard to prove that interventions are effective, and they are used by treatment guidelines and policy makers to decide whether or not to adopt, implement or fund a therapy. You would want to do such a randomised trial to get your therapy disseminated, but in reality your clinical experience already showed you that the therapy works. How could you do a trial in order to optimise the chance of finding a positive effect? Methods that can help include a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (risk of bias), small sample sizes and waiting list control groups (but not comparisons with existing interventions). And if all that fails one can always not publish the outcomes and wait for positive trials. Several methods are available to help you show that your therapy is effective, even when it is not.APA Style Reference
Cuijpers, P., & Cristea, I. A. (2016). How to prove that your therapy is effective, even when it is not: a guideline. Epidemiology and Psychiatric Sciences, 25(5), 428-435. https://doi.org/10.1017/S2045796015000864
You may also be interested in
- Most psychotherapies do not really work, but those that might work should be assessed in biased studies (Ioannidis, 2016)
- A guideline for whom? (Furukawa, 2016)
Many Analysts, One Data Set: Making Transparent How Variations in Analytical Choices Affect Results (Silberzahn et al., 2019)
Main Takeaways:
- The article investigates what happens if scientific results are highly dependent on subjective decisions at the analysis stage. It also addresses the current lack of knowledge about how much diversity in analytic choices there can be when different researchers analyse the same data, and whether these variations lead to different conclusions (a minimal illustration follows this list).
- The study reports the influence of analytical decisions on research findings obtained by 29 teams that analysed the same dataset to answer the same research question.
- The project had several key stages:
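A minimal sketch of the underlying issue, using simulated data rather than the study's dataset: two defensible logistic-regression specifications of the same question can return different odds ratios depending on which covariates a team adjusts for. The variable names, codings, and effect sizes below are invented for illustration.

```python
# Invented data: how covariate choice can move the estimated odds ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
skin_tone = rng.integers(0, 2, n)                  # 1 = darker skin tone (hypothetical coding)
position = rng.binomial(1, 0.3 + 0.3 * skin_tone)  # hypothetical confounder correlated with skin tone
league = rng.integers(0, 3, n)                     # another plausible covariate
true_logit = -3 + 0.3 * skin_tone + 0.8 * position + 0.1 * league
red_card = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

def skin_tone_odds_ratio(covariates):
    """Logistic regression of red cards on skin tone plus the chosen covariates."""
    X = sm.add_constant(np.column_stack([skin_tone] + covariates))
    fit = sm.Logit(red_card, X).fit(disp=0)
    return float(np.exp(fit.params[1]))            # odds ratio for the skin-tone term

print("Skin tone only:               OR =", round(skin_tone_odds_ratio([]), 2))
print("Adjusted for position/league: OR =", round(skin_tone_odds_ratio([position, league]), 2))
```

Both specifications are defensible, yet they give noticeably different odds ratios, which is the pattern the 29 teams produced in the real dataset.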
Quote
“The observed results from analyzing a complex data set can be highly contingent on justifiable, but subjective, analytic decisions. Uncertainty in interpreting research results is therefore not just a function of statistical power or the use of questionable research practices; it is also a function of the many reasonable decisions that researchers must make in order to conduct the research. This does not mean that analyzing data and drawing research conclusions is a subjective enterprise with no connection to reality. It does mean that many subjective decisions are part of the research process and can affect the outcomes. The best defense against subjectivity in science is to expose it. Transparency in data, methods, and process gives the rest of the community the opportunity to see the decisions, question them, offer alternatives, and test these alternatives in further research.” (p.354)
Abstract
Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.APA Style Reference
Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E., ... & Carlsson, R. (2018). Many analysts, one data set: Making transparent how variations in analytic choices affect results. Advances in Methods and Practices in Psychological Science, 1(3), 337-356. https://doi.org/10.1177/2515245917747646
You may also be interested in
- How scientists can stop fooling themselves (Bishop, 2020b)
- The Statistical Crisis in Science (Gelman & Loken, 2014)
- Only Human: Scientists, Systems, and Suspect Statistics A review of: Improving Scientific Practice: Dealing With The Human Factors, University of Amsterdam, Amsterdam, September 11, 2014 (Hardwicke et al., 2014)
- A 21 Word Solution (Simmons et al., 2012)
- Rein in the four horsemen of irreproducibility (Bishop, 2019)
- Seven Steps Toward Transparency and Replicability in Psychological Science (Lindsay, 2020)
- The life of p: “Just significant” results are on the rise (Leggett et al., 2013)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant (Simmons et al., 2011)
- Seven Easy Steps to Open Science: An Annotated Reading List (Crüwell et al., 2019)
Is There a Positive Correlation between Socioeconomic Status and Academic Achievement? (Quagliata, 2008) ◈ ⌺
Main Takeaways:
- Poverty rates have been increasing together with a debate on socio-economic status. Parental income is an indicator of socio-economic status reflecting a potential for social and economic resources.
- Parental education is a component of socio-economic status.
- Learning should take place in a meaningful context so that at-risk students can immediately apply what they have learned and connect it to their own lives and individual experiences.
- Many dropouts result not only from low SES backgrounds but also from mismatched learning styles.
- SES affects children's academic achievement. It is beneficial to determine the type of home environment so that educators can best support children at school.
- Learning environment must be structured to achieve the highest level of internal motivation from all students.
- School success is greatly determined by a family's socio-economic status. American society may be failing to provide educational opportunities for every student and citizen irrespective of socio-economic background.
- Many poor students come to school without the social and economic benefits available to most middle- and high-SES students. Schools should provide sufficient resources for optimal academic achievement irrespective of socio-economic status.
- The educational system risks producing an intergenerational cycle of school failure, short-changing the entire future of American society as a result of family socio-economic status.
- Method: 31 surveys were handed out and 13 were returned. Some of the answers include health/nutrition; level of IQ; motivation or lack of motivation of teacher; amount of parental support; class size; quality of instruction/teaching resources; support available in home; school; student disabilities; language; education in culture; style of learning exposure to style; gender; peer influence; natural ability; attendance; family loss of tragic event; pregnancy full term; expectations and teacher/student relationship were also considered.
- Method: Every teacher felt that the environment contributed most when considering academic achievement.
- Method: Additional variables for socio-economic status were included: attitude; self-confidence; need to please; desire to do better; love of learning; acceptance; economics in the home; stability of family; siblings; age of parent(s); age of student maturity; family involvement; importance placed on learning; cognitive level; family history; neighbourhood; modelling of good work; ethics; pride; choices made; resources available; parental achievement; attending pre-k; home literacy; received early intervention; good nutrition; health; high IQ; oral language development; self-care skills; family life; class dynamics; personality and mood on any given day tells a specific teacher what they can or cannot do on a given day.
- Results: The higher the socio-economic status, the higher the academic achievement (a minimal numerical illustration follows this list).
- Little literature is currently available on specific students from low socio-economic status homes who nonetheless show high academic achievement.
- Income, education and occupation are responsible for low academic achievement in many low SES families.
- Low socio-economic status often means parents spend less time with children and have lower education levels; students from families of higher economic status tend to have parents who read to and with them, who are more apt to talk to them about the world, and who offer them more cultural experiences. Much of students' struggle with reading stems from low SES and from parents who themselves struggle with reading.
- If a family does not have a good educational background or materials to use to work with their child, the child may suffer as a result of their environment.
- If education is not valued in the home, students will not value education; the expectation of higher education is greater in higher social classes.
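A minimal sketch, with invented numbers, of what a positive correlation between a composite SES index and an achievement score looks like:

```python
# Invented data: a positive Pearson correlation between a socio-economic status
# (SES) index and an academic achievement score.
import numpy as np

rng = np.random.default_rng(3)
n = 200
ses_index = rng.normal(0, 1, n)                          # hypothetical composite of income, education, occupation
achievement = 70 + 5 * ses_index + rng.normal(0, 8, n)   # test score rises with SES, plus noise

r = np.corrcoef(ses_index, achievement)[0, 1]
print(f"Pearson r = {r:.2f}  (positive: higher SES goes with higher achievement)")
```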
Abstract
In this literature review, family environments of low socioeconomic status (SES) students were examined and a comparison was made in learning styles between low and high achievers. Socioeconomic factors such as family income, education, and occupation play a major role in the academic achievement of all students. There is a positive correlation between SES and academic achievement. The conclusions of this review have implications for all educators as well as the entire future of American society.APA Style Reference
Quagliata, T. (2008). Is there a positive correlation between socioeconomic status and academic achievement?. Paper: Education masters (p. 78). https://fisherpub.sjfc.edu/cgi/viewcontent.cgi?article=1077&context=education_ETD_masters
You may also be interested in
- Education and Socio-economic status (APA, 2017b)
- Ethnic and Racial minorities and socio-economic status (APA, 2017)
- Women and Socio-economic status (APA, 2010)
- Disability and Socio-economic status (APA, 2010)
- Lesbian, Gay, Bisexual and Transgender Persons & Socioeconomic Status (APA, 2010)
- #bropenscience is broken science (Whitaker & Guest, 2020) ⌺