These days, the results of large-scale replication efforts are more eagerly anticipated than the next season of Game of Thrones (see e.g. Sanjay’s live reporting of a multi-site ego depletion study).
Okay, this may be an exaggeration (how could anything be more eagerly anticipated than the next season of Game of Thrones?). But it might be safe to say that, even if replication hasn’t gone fully mainstream yet, it has become more common and more accepted (and maybe even cherished) in many fields of psychology. Think of it as moving from Musique concrète to not quite Rock or Hip Hop, but to the style of music that you wouldn’t mind listening to if it came up on the radio.
First and foremost, this is amazing and an important step toward the maturation of psychology as a science. However, given that resources are scarce and that replications are costly, it might not always be the most efficient step to clarify the scientific record.
For example, the design of some studies could be so fatally flawed that even if the results were replicable, it would not inform us about anything. In other cases, the original authors might have sliced and diced their data in so many ways that nobody who took Stats 101 and paid some attention would take the findings too seriously. Or maybe the authors overlooked that something was wrong with their data which renders the results uninterpretable. In short, the replication value may simply be too low.
Now, there are people out there trying to spot these issues in other people’s work. They write critical commentaries, re-examine the statistical evidence behind published studies, and double-check the data of others (if they can get their hands on it, that is). This external scrutiny—sometimes affectionately called data-thuggery—is essential for science and has gone underappreciated. However, it does have one limitation by design: Somebody who criticises somebody else’s research doesn’t have all the information about how the study was run and about what happened between the lines of the gloss-of-confidence journal article that eventually got published.
Only the original authors and other people who directly worked on the study have full access to that type of information—so they’re the ones who might be able to spot substantial problems that are not visible to others. But they are also the ones who might have least interest in making these problems public, given what our current research culture looks like: We value confident assertions of certainty over disclosures of uncertainty, and admitting mistakes seems about as cool as stabbing yourself in the foot with a table knife. That is a huge cultural issue, and it might also stand in the way of the supposedly self-correcting nature of science.
Enter the Loss-of-Confidence Project, a project on which I (the albatross-of-confidence) have been working together with Tal Yarkoni (Wizard-of-Oz-of-confidence) and Christopher Chabris (Chriscross-of-confidence) to change something about the flaws of our research culture. We’re collecting statements of psychologists who have lost confidence in one of their own findings. Eventually, we plan to put these statements together into a journal article (with every submitter becoming a co-author).
Part of the rationale behind the project is that while it might be very hard to individually stand up and publicly declare “Look, I was wrong about this” (the fact that this is so hard makes it all the more admirable that some people have done it solo: for example, see Dana Carney’s position on power pose, including details about her seminal study on the topic, and Will Gervais’ post-publication peer review of his own study published in Science), it might be somewhat easier to stand up together and declare “Look, we were probably wrong about these things. It happens, it’s part of how science goes in real life. The important thing is that we want to collectively get it right in the long run.” Of course, that works better the more submissions we get! So if you have a loss-of-confidence statement to share, please consider contributing to the project (and don’t hesitate to contact me in case you have any questions).
Quite obviously, loss-of-confidence statements are not the only way to try to rectify the scientific record as an author.
Check out Rebecca Willén’s list of publications: She has added disclosure statements (like a boss-of-confidence) highlighting where the reporting in her studies was or was not transparent and adding any omitted information that might be necessary to evaluate the evidence.
And if you have a large file-drawer of “failed” studies that might affect how your published studies can be interpreted, you could submit a file-drawer report to the new journal Meta-Psychology, which explicitly encourages these types of submissions (and if that file-drawer made you lose confidence in one of your own findings, why not also submit a loss-of-confidence statement?). (COI: I’m on the Advisory Board of Meta-Psychology. Also, as you have probably noticed by now, I’m running the Loss-of-Confidence Project.)
There are many different ways to help clean up the scientific record, and so there are many different ways in which you can contribute and make a difference!