Department of Psychology 2018 - 2019 Colloquium Series

Friday, November 2, 2018, 3:00 pm EDT (GMT -04:00)

Speaker: Dr. Ulrich Schimmack, University of Toronto

Title: How Credible Is Psychological Science? A Meta-Psychological Perspective

Location: PAS 2083

Reception to follow in PAS 3005

Bio:

Uli’s first line of research concerns the scientific understanding of happiness. The ultimate goal is to develop a causal theory of happiness that can be used to predict the impact of personal and societal changes on happiness (cf. Kahneman, Schwarz, & Diener, 1999, “Well-Being: The Foundations of Hedonic Psychology”). More recently, he has advanced our understanding of replicability and robustness in psychological science. A strong, and sometimes controversial, critic of current methodological practices in the field, he has been instrumental in pushing the dialogue on open science, replication, and statistical power, and has proposed several methodological innovations for assessing the credibility of past research.

Abstract: 

It has been known for decades that psychology journals publish almost exclusively positive results, that is, rejections of the null hypothesis with p-values below .05 (Sterling, 1959). This is not necessarily a problem if (a) the null hypothesis is rarely true and (b) studies are carried out with high statistical power. However, it has also been known for decades that many psychological studies have low to modest power to produce significant results (Cohen, 1962). Thus, the high success rate in psychology journals can only be explained by publication bias (Sterling et al., 1995). Despite decades of warnings by methodologists, psychologists have compensated for low power by using questionable research practices (QRPs; John et al., 2012) to publish significant results. QRPs increase researchers’ success rates by inflating effect sizes and the risk of Type I errors. Since 2011, some psychologists have begun to criticize the use of QRPs and to examine the replicability of published results. The most influential finding comes from the Open Science Collaboration (OSC) replication project: only 50% of cognitive findings and 25% of social findings could be replicated. The main problem is that actual replication studies are costly and time-consuming.

In my talk, I introduce z-curve (Brunner & Schimmack, 2018), a novel statistical method that predicts the outcome of replication studies from the test statistics reported in original articles. The method estimates mean power for a set of significant results while correcting for the inflation introduced by selection for significance. Simulation studies show that it works well even when population effect sizes and their distribution are unknown. I then illustrate how this method can be used to examine the credibility of psychological research without costly, and sometimes impossible, actual replication studies.

The results show that experimental social psychology does have a replicability crisis, while cognitive psychology produces more replicable results. The main reason for this difference is that experimental social psychologists prefer power-hungry between-subjects designs, while cognitive psychologists often use more economical within-subjects designs with many repeated trials. I also show that there have been relatively few changes in actual research practices in response to the so-called replication crisis. In theory, the solution to the replication crisis in social psychology is simple: conduct fewer studies with more power. In practice, however, the focus on flawed quantitative indicators (i.e., publishing as many significant results as possible) is a barrier to change. The ability to estimate researchers’ power to produce significant results might help to change this.
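To make the selection-correction idea concrete, here is a minimal sketch (in Python, using numpy and scipy) of how mean power might be recovered from a set of significant test statistics. It assumes a single homogeneous noncentrality parameter and a simple truncated-normal maximum-likelihood fit; the actual z-curve method instead fits a finite mixture to accommodate heterogeneous effect sizes, so this illustrates the principle rather than the published algorithm, and all names (true_mu, neg_loglik, etc.) are hypothetical.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    CRIT = norm.ppf(0.975)  # two-sided alpha = .05 -> z = 1.96

    # Simulate a literature with selection for significance (assumed setup):
    # every study has the same true noncentrality, so true power is known.
    rng = np.random.default_rng(0)
    true_mu = 2.0                    # true power = 1 - Phi(1.96 - 2.0), about .52
    z_all = rng.normal(true_mu, 1.0, 10_000)
    z_sig = z_all[z_all > CRIT]      # only significant results get "published"

    # Naive estimate: treat the mean published z-score as the noncentrality.
    # Selection inflates it, so this overestimates power.
    naive_power = norm.sf(CRIT - z_sig.mean())

    def neg_loglik(mu):
        # Truncated-normal likelihood of z given z > CRIT:
        # phi(z - mu) / (1 - Phi(CRIT - mu))
        return -(norm.logpdf(z_sig - mu).sum()
                 - z_sig.size * norm.logsf(CRIT - mu))

    # Correct for selection by maximizing the truncated likelihood.
    mu_hat = minimize_scalar(neg_loglik, bounds=(0.0, 6.0), method="bounded").x
    corrected_power = norm.sf(CRIT - mu_hat)

    print(f"true power      ~ {norm.sf(CRIT - true_mu):.2f}")
    print(f"naive estimate  ~ {naive_power:.2f}")   # inflated by selection
    print(f"corrected (MLE) ~ {corrected_power:.2f}")

Running the sketch reproduces the pattern described in the abstract: the naive estimate computed from the published (significant) results is inflated well above the true power (roughly .79 versus .52 in this simulation), while the truncation-corrected estimate recovers the true value.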