Things are bad and yet I’m mildly optimistic about the future of psychology

Psychology is fucked. I’m not going to reiterate the whole mess because it has already been aptly summarized in Sanjay Srivastava’s “Everything is fucked: The syllabus”.[1]
Quite a few people seem to be rather pessimistic about the future of psychology. This includes both old farts[2] who are painfully aware that calls for improved methodology[3] have been ignored before, as well as early-career researchers who watch peers ascend Mount Tenure with the help of a lot of hot air and only a little solid science.

But hey, I’m still somewhat optimistic about the future of psychology; here’s why:[4]

THE PAST

Sometimes, it helps to take a more historical perspective to realize that we have come a long way: starting from an Austrian dude with a white beard who sort of built his whole theory of the development of the human mind on a single boy who was scared of horses, and who didn’t seem to be overly interested in a rigorous test of his own hypotheses, to, well, at least nowadays psychologists acknowledge that stuff should be tested empirically. Notice that I don’t want to imply that Freud was the founding father of psychology.[5] However, he is of – strictly historical – importance to my own subfield, personality psychology. Comparing the way Freud worked to the way we conduct our research today makes it obvious that things have changed for the better. Sure, personality psychology might be more boring and flairless nowadays, but really all I care about is that it is accurate.
You don’t even have to go back in time that far: Sometimes, I have to read journal articles from the 80s.[6] Sure, not all journal articles nowadays are the epitome of honest and correct usage of statistics, but really you don’t stumble across “significant at the p < .20 level” frequently these days. And if you’re lucky, you will even get a confidence interval or an effect size estimate!
And you don’t even have to look at psychology. A Short History of Nearly Everything used to be my favorite book when I was in high school, and later, as a grad student, reading about the blunder years of other disciplines that grew up fine nonetheless[7] gave me great hope that psychology is not lost.

This giraffe predates Sigmund Freud, yet it doesn’t wear a beard. (Photo: Count de Montizon)

THE PRESENT

Psychologists are starting to try to replicate their own as well as other researchers’ work – and often fail, which is great for science because this is how we learn things.[8]
We now have Registered Reports, in which peer review happens before the results are known – such a simple yet brilliant idea to prevent undesirable results from simply disappearing into the file drawer.
To date, 367 people have signed the Peer Reviewers’ Openness Initiative and will now request that data, stimuli, and materials be made public whenever possible (it can get complicated though), and 114 people have signed the Commitment to Research Transparency, which calls for reproducible scripts and open data for all analyses but also states that the grading of a PhD thesis has to be independent of statistical significance[9] or successful publication.
The psychology department of the Ludwig-Maximilians-Universität Munich explicitly embraced replicability and transparency in its job ad for a social psychology professor. That’s by no means the norm yet, and I’m not sure whether this particular case worked out, but one can always dream.
The publication landscape is changing, too.
People are starting to upload preprints of their articles, which is a long overdue step in the right direction.
Collabra is a new journal with a community-centered model to make Open Access affordable to everyone.
Old journals are changing, too: Psychological Science now requires a data availability statement with each submission. The Journal of Research in Personality requires a research disclosure statement and invites replications.[10]

Additionally, media attention has been drawn to failed replications, sloppy research, and overhyped claims such as power pose, the whole infamous pizzagate[11] story, and the weak evidence behind brain training games. Now you might disagree about that, but I take it as a positive sign that parts of the media are falling out of love with catchy one-shot studies, because I feel like that whole love affair has probably been damaging psychology by rewarding all the wrong behaviors.[12]

And last but not least, we are using the internet now. A lot of the bad habits of psychologists – incomplete method sections, unreported failed experiments, data secrecy – are legacy bugs of the pre-internet era. A lot of the pressing problems of psychology are now discussed more openly thanks to social media. Imagine a grad student trapped in a lab, stubbornly trying to find evidence for that one effect, filing away one failed experiment after the other. What would that person have done 20 years ago? How would they ever have learned that this is not a weakness of their own lab, but a problem endemic to a system that only allows for the publication of polished, too-good-to-be-true results?[13] Nowadays, I’d hope that they would start an anonymous blog and bitch about these issues in public. Science benefits from less secrecy and less veneer.

Students be like: They are doing what with their data? (Photo: user:32408, pixabay.com)

THE FUTURE

Sometimes I get all depressed when I hear senior or mid-career people stating that the current scientific paradigm in psychology is splendid; that each failed replication only tells us that there is another hidden moderator we can discover in an exciting new experiment performed on 70 psychology undergrads; that early-career researchers who are dissatisfied are just lazy or envious or lack flair; and that people who care about statistics are destructive iconoclasts.[14]
This is where we have to go back to the historical perspective.
While I would love to see a complete overhaul of psychology within just one generation of scientists, maybe it will take a bit longer. Social change supposedly happens when cohorts get replaced.[15]
Most people who are now professors were scientifically socialized under very different norms, and I can see how it is hard to accept that things have changed, especially if your whole identity is built around successes that are now under attack.[16] But really what matters in the long run – and I guess we all agree that science will go on after we are all dead – is that the upcoming generation of researchers is informed about past mistakes and learns how to do proper science. Which is why you should probably go out and get active: teach your grad students about the awesome new developments of the last years; talk to your undergraduates about the replication crisis.
Bewildered students who are unable to grasp why psychologists haven’t been pre-registering their hypotheses and sharing their data all along are what keeps me optimistic about the future.

Footnotes

1 but cf. more recent real-world fuck ups: For example, the retraction of a paper by William Hart, supposedly because a grad student faked data, but the whole story turns out to be way more messy: the study was ridiculous in the first place, and either Hart is exceptionally unlucky in his choice of grad students or he could have known about the issue earlier. The order of events has remained a mystery. And, of course, the instant classic fuck up of the grad student who never said no: Brian Wansink recalls fond memories of a grad student who went on “deep data dives” under his guidance that resulted in four publications; three brilliant data detectives in turn dive into these papers and find about 150 numerical inconsistencies; Wansink promises to clean up the mess (but really, how much hope is there when people are not even able to consistently calculate a percentage change from two numbers?). But who cares, it’s not like we are doing chemistry here.
2 Self-designation introduced by Roberts (2016).
3 cf. Paul Meehl, the Nostradamus of psychology.
4 Alternative explanation: I’m just an optimistic person. But I’ve noticed that heritability estimates don’t really make for entertaining blog posts.
5 It was, of course, Wilhelm Wundt, and I’m not only saying this because I am pretty sure that the University of Leipzig would revoke my degrees if I claimed otherwise.
6 Maybe the main reason why I care about good science is that sloppy studies make literature search even more tedious than it would be anyway.
7 to varying degrees, obviously. But hey, did you know that plate tectonics became accepted among geologists as late as the 1960s?
8 For example, that some effects only work under very strict boundary conditions, such as “effect occurs only in this one lab, and probably only at that one point in time.”
9 Really, this seems to be a no-brainer, but then again, some people seem to mistake the ability to find p < .05 for scientific skill.
10 There are more examples but these two come to my mind because of their awesome editors. There are also journals that take a, uhm, more incremental approach to open and replicable science. For example, I think it’s great that the recent editorial of the Journal of Personality and Social Psychology: Attitudes and Social Cognition concludes that underpowered studies are a problem, but somehow I feel like the journal (or the subfield?) is lagging a few years behind in the whole discussion about replicable science.
11 Not the story about the pedophile ring, the story about the psychological study that took place at an all-you-can-eat pizza buffet.
12 Anne is skeptical about this point because she doubts that this is indicative of actual rethinking as compared to a new kind of sexiness: debunking of previously sexy findings. Julia is probably unable to give an unbiased opinion on this as she happens to be the first author of a very sexy debunking paper. Now please excuse me while I will give yet another interview about the non-existent effects of birth order on personality.
13 In case you know, tell me! I’d really like to know what it was like to get a PhD in psychology back then.
14 What an amazing name for a rock band.
15 cf. Ryder, N. B. (1965). The cohort as a concept in the study of social change. American Sociological Review, 30, 843–861.
16 I have all the more respect for the seniors who are not like that but instead update their opinions, cf. Kahneman’s comment here, or even actively push others to update their opinions, for example, the old farts I met at SIPS.

9 thoughts on “Things are bad and yet I’m mildly optimistic about the future of psychology”

  1. “But really what matters in the long run – and I guess we all agree that science will go on after we are all dead – is that the upcoming generation of researchers is informed about past mistakes and learns how to do proper science. Which is why you should probably go out and get active: teach your grad students about the awesome new developments of the last years; talk to your undergraduates about the replication crisis.”

    Yes! I wonder if it would be a useful idea to actually examine if this is the case. More specifically, whether universities are actually teaching new students about how to do proper science.

    (I did a little N=1 examination myself, by looking at the curriculum of my old university, where 5 years ago I was not taught anything about matters like pre-registration, power, effect sizes, the difference between confirmatory and exploratory analyses, publication bias, how analytic flexibility can make anything “significant”, etc. I looked at the courses being taught now, and some of the references they posted, and it did not look to me like anything has changed.)

    I think examining this could be really useful, and I wonder if some researchers are (planning on) doing this…

    1. Great question, my sample size is also very limited (N = 1). Here in Leipzig, the crisis is mentioned and more or less elaborated in at least three courses now (methods, personality psych, and a tiny bit in social psych) which was not the case when I got my BSc here.

      I wonder whether the meta-science people are running a study on this? Sounds like a great topic for SIPS.

    2. Here’s a replication of Julia’s N=1 study: one of the essays for honours students here is to read Bem’s infamous article, followed by Ritchie et al.’s failed replications of those ESP studies, and to pick which they believe and argue for why. Good way to illuminate ways of evaluating evidence in published papers, methinks.

  2. One of my favorite pieces of graffiti on the Berlin Wall states:

    “Alles wird besser, aber nichts wird gut.”
    [Literally: everything gets better, but nothing gets good]

    It seems fitting in this context.

  3. Great story – funny too. 😉

    I will forward it to my research master class; I teach them about pre-registration, power, effect sizes, the difference between confirmatory and exploratory analyses, publication bias, how analytic flexibility can make anything “significant”, etc.
    However, courses like this are incidental. More importantly, even at universities where courses like this are taught, students are taught other courses as well, where they learn other things…

    I am not as optimistic as you are Julia, but I hope you are right that the new generation will be able to change things for the better.

    1. Thanks a lot! Believe it or not, people like you are what makes me optimistic 😉 Even if students also have to attend more “old-fashioned” courses, informing them about all these issues can make a huge difference (for some). My advisor sneaked “False Positive Psychology” into the reading list of the personality psychology seminar in the very first year of the bachelor’s program, and quite a few students told me that this completely changed their outlook on the whole more “conventional” social psychology lecture.
