Against public engagement

Estimated reading time: 8–10 minutes

To get the boring stuff out of the way: Of course I’m not against scientists engaging with the public. Academic research is a common good — it exists to serve society and our human ambition, and it is (mostly) funded by the people. To me, that translates into a right of the general public to be informed about scientific processes and outputs, and an obligation of scientists to work transparently and inform the public about their findings. In fact, some of my best friends are science communicators. And lofty principles aside, having scientists and laypeople talk to each other can have obvious benefits like a well-educated, curious society and more relevant, effective research.

But with the exception of fluffy cat bellies, few things ever are universally good, meaning desirable without any qualifications. In the hope that the disclaimer above convinces you that I’m not a priggish misanthropist who got her boffin head stuck in the ivory tower, I’m going to try and make the case that recent calls for more public engagement of scientists can be a dangerous thing.

Public engagement as part of open science

First, some background: Two weeks ago, Utrecht University made the news by announcing a new ‘Recognition and Rewards Vision,’ which essentially scraps the Impact Factor (IF) from the university’s hiring and promotion criteria. The general idea is to move away from metrics like the IF and the h-index towards more qualitative and comprehensive evaluations of researchers and to incorporate open-science values in this assessment. Since I’m very much in favour of open science and reasonably convinced that the traditional metrics have poor validity for measuring scientific quality (and caused some perversion of academic research via Goodhart’s law), I’m impressed by the decision. It’s a bold move that I hope will be a strong signal to other institutions as well.[1]

However, to fix a broken system, we have to replace the bad old parts with new ones that are better. What are the new criteria we’ll use to decide which researchers to hire and to promote? Two big factors in Utrecht’s new vision are ‘societal impact’ and ‘public engagement’:

By embracing Open Science as one of its five core principles, Utrecht University aims to accelerate and improve science and scholarship and its societal impact. Open science calls for a full commitment to openness, based on a comprehensive vision regarding the relationship with society. This ongoing transition to Open Science requires us to reconsider the way in which we recognize and reward members of the academic community. It should value teamwork over individualism and calls for an open academic culture that promotes accountability, reproducibility, integrity and transparency, and where sharing (open access, FAIR data and software) and public engagement are normal daily practice. In this transition we closely align ourselves with the national VSNU program as well as developments on the international level.

(source: https://www.uu.nl/sites/default/files/UU-Recognition-and-Rewards-Vision.pdf; emphasis added)

Of course Utrecht is not unique in emphasising these aspects. ‘Open science’ is commonly understood as a call to make the scientific process and scientific outputs not only more sustainable and reliable, but also more accessible to everyone — for example via open access, science communication, and citizen science. As mentioned in UU’s vision document, the Association of Universities in the Netherlands (VSNU) and the Dutch funding organisation NWO have recently put these goals on their agenda as well. It is extremely heartening to see that big players like these are taking open-science values on board and seem committed to paying more than lip service to them. But since I have a killjoy reputation to defend, I worry that putting the new commitment to societal impact and public outreach into practice could become a case of Verschlimmbesserung (an attempted improvement that makes things worse).

Why am I such a miserable hater of public engagement? The simplest reason is this: When I look at the status quo of public engagement among social[2] scientists — in news articles, pop-science books, TED talks, business talks, as expert witnesses or policy consultants — I really don’t want more of it. The cases of exceptionally ‘successful’ public engagement that first come to my mind are dire failures such as the Wansink affair, power posing, implicit bias, and the terrible takes of some highly vocal behavioural scientists at the beginning of the COVID-19 pandemic. More harrowing examples of societal impact in the form of enormous amounts of wasted taxpayers’ money (and sometimes lost lives) can be found in Stuart Ritchie’s brilliant Science Fictions and in the more psychology-centred The Quick Fix. In short, public engagement can be pretty disastrous.[3]

Ok, but there are negative examples for everything. Am I so out of touch that all I can think of are these few bad apples? No, it’s the system that is wrong! Without additional regulations, putting even more weight on public outreach than is already the case could cause severe systemic problems.

Why public engagement can be bad for society

Public engagement: Not even once

An obvious downside of a system in which researchers are incentivised to sell their research to the public is that, well — researchers will try to sell their research to the public. If this incentive becomes even stronger than it already is, more individual researchers (perhaps eventually all researchers) will get pushed to market their work directly to laypeople. This direct researcher-to-public model would shift the interface at which the outputs of ‘science’ as a system reach society: As a somewhat simplistic example, imagine that instead of reporting the outcome of a big Cochrane review, newspapers informed their readers about every single study that will eventually be included in it. To some extent this is already happening of course — in my experience, non-researchers tend to be aware that everything both causes and cures cancer (and aren’t particularly happy about it).

As researchers, we know that science is cumulative and no single study is ever perfectly conclusive. Science works precisely because it is more than a single lab churning out studies: It’s a complex system that, despite consisting of individuals who spend significant amounts of time posting pictures of their pets on Twitter, integrates many sources of information in such ingenious ways that slowly, over time, things tend to converge on what’s true or useful.[4] What’s the perfect time to inform the public about what scientists have learned, and who should tell them? I don’t know. But I think the worst time might be ‘after every single study,’ and the worst messengers might be the respective study’s authors.
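
To see the timing problem in miniature, here is a toy simulation (my own illustrative sketch, with made-up numbers rather than a model of any real literature): each study yields a noisy estimate of a true effect, so headlines based on the latest single study swing around wildly, while a running pooled estimate slowly settles near the truth.

```python
import numpy as np

# Toy sketch: a fixed true effect, 40 studies that each estimate it with
# substantial sampling noise, and a running (unweighted) pooled estimate.
# All numbers are invented for illustration.
rng = np.random.default_rng(7)
true_effect = 0.2
study_estimates = true_effect + rng.normal(scale=0.5, size=40)

for k in (1, 5, 10, 20, 40):
    latest = study_estimates[k - 1]        # what a study-by-study headline would report
    pooled = study_estimates[:k].mean()    # what a cumulative synthesis would report
    print(f"after {k:2d} studies: latest = {latest:+.2f}, pooled = {pooled:+.2f}")
```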

Confusing news readers with (even) more contradictory findings might be the least problematic outcome of this interface shift though.[5] I’m much more concerned about what happens when all research (i.e., including the shoddier half) has more direct influence on policy, healthcare, education, business practice, court decisions, and so on. Of course we want science to impact all of these areas. But again, it should be science as a system that has this impact. If we try to syphon knowledge off its lower-level constituents instead, we effectively throw out the magical cumulative and self-correcting properties that made us so fond of science in the first place. Here, too, I don’t know how to optimise the process. I just think that making all scientists shout more loudly/form cosier bonds with people in power doesn’t sound like the most promising solution.

Why public engagement can be bad for science

By definition, laypeople are ill-equipped to judge the veracity of scientific claims.[6] That’s why evaluating researchers more strongly based on what laypeople think about them, and less strongly based on what their peers think, can quickly turn into a quality problem for science: It rewards attention-grabbing claims over rigorously tested ones.

To get ahead in a system where researchers are exclusively evaluated by each other, individuals have to convince their (expert) peers that their work is of high quality. The benefit of this system is that research is evaluated by those most capable of judging its quality, maximising the chance that mistakes get caught and corrected. Don’t get me wrong though, of course peer review has lots of problems and can be an embarrassingly low bar to pass in practice. But certainly the solution won’t be to lower the bar even further![7] If researchers were evaluated only by laypeople,[8] there would be little reason to expect research outputs to be rigorous and reliable — in this much-too-simple scenario, there is no force pushing in that direction.

Of course neither of these two extremes reflects reality. But you get the point: less weight on expert evaluation means that scientific claims are tested less severely. I worry that even when expert evaluation still plays a big role, giving laypeople’s evaluations more weight could allow scientists to ‘escape’ their peers’ severe judgment by building up status in the ill-informed public. In this scenario, a researcher would (by luck or social finesse) build up so much clout in the public sphere that some of it would carry over into the scientific sphere, increasing their status among fellow researchers sufficiently for their low-quality work to go unnoticed. I keep thinking that Brian Wansink‘s term as executive director of the Center for Nutrition Policy and Promotion at the US Agriculture Department must have opened more doors for him within academia and made other scientists less inclined to doubt the quality of his work.[9]
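
To make the selection argument concrete, here is a minimal toy simulation of the two evaluation regimes (again my own sketch with invented parameters, not anything from the vision documents): expert evaluations noisily track a latent ‘rigor,’ lay evaluations noisily track an independent ‘flashiness,’ and the average rigor of the top decile that gets ‘hired’ drops as the lay weight grows.

```python
import numpy as np

# Toy model with invented parameters:
# - rigor: latent research quality, which expert evaluations track through noise
# - flash: attention-grabbing-ness, independent of rigor, which lay evaluations track
rng = np.random.default_rng(42)
n = 100_000
rigor = rng.normal(size=n)
flash = rng.normal(size=n)

expert_eval = rigor + rng.normal(scale=1.0, size=n)  # noisy but quality-tracking
lay_eval = flash + rng.normal(scale=1.0, size=n)     # noisy and quality-blind

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    score = (1 - w) * expert_eval + w * lay_eval     # blended evaluation
    hired = score >= np.quantile(score, 0.9)         # top decile gets hired/promoted
    print(f"lay weight {w:.2f}: mean rigor of those selected = {rigor[hired].mean():+.2f}")
```

Under these toy assumptions, the selected decile’s average rigor falls steadily towards zero as the lay weight approaches one: the point being that the selection pressure does the damage, no individual dishonesty required.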

I have a hunch that this phenomenon may have played a role in psychology’s replication crisis. Some parts of the discipline[10] have had a very close relationship with the public for a long time. That’s quite understandable of course; it’s not surprising that people are interested in how their minds and brains work. But I think it’s possible that psychologists became so dependent on public acclaim over time that it prevented the field from becoming more ‘scientific’, because moving towards more sophisticated enquiry would have come at the cost of catchy headlines, easily digestible self-help advice, and the tuition money brought in by enormous numbers of eager undergrads.[11] I admit that this is a fairly underdeveloped, hand-wavy claim though, and that psychology’s success in newspapers, talk shows, and on bookshelves may have played no causal role in the discipline’s screw-ups after all.

Vox populi vox Dei

Before we go, there are two additional cans of worms I don’t want to open but don’t want to leave unmentioned either. First, scientific research sits on a spectrum from very basic to very applied, and one end is substantially harder to market to the public than the other. At the risk of straw-manning my hypothetical opponents, I find it difficult to make sense of a blanket call for more public engagement and societal impact in light of this. Second, some may want the interaction between science and the public to go both ways, meaning that laypeople should have a greater say in what researchers study (and/or how they study it). Although I do think that we have to ensure that science stays societally relevant and that we often should make more use of the perspectives of study participants, patients, and citizens, I see a danger in this proposal too. To some extent, science is a gamble on what might become important in the future, and giving taxpayers too much unchecked influence could shift the balance too far towards the present. Put differently, I worry that such a system could run into problems similar to those of letting voters decide on complex political issues via referenda. But now we’re getting very political, so I’ll stop poking around in the hornets’ nest and wrap up.

Let’s be careful what we wish for

Perhaps I’m making an elephant out of a mosquito. Nobody has the intention of building a direct scientist-to-public model! But we live in exciting times: Many big institutions are starting to recognise problems with the existing incentive structure in academia and seem willing to make changes. What I fear is that some of these changes could give us exactly what we asked for, but not what we wanted. As always, we should first consider the long-term, systemic risks of any concrete change. That’s a really boring platitude to end on though, so here’s another idea: Perhaps we could agree on aiming for better public engagement rather than just more of it. So let’s think about that instead — which scientists should talk to the public; when, where, and how should they do it; and which structures, regulations, or institutions could help optimise knowledge transfer? It’s probably going to require a bunch of specialists. Instead of turning more researchers into salespeople, maybe we should treat the buffer zone between science and society as a system of its own and invest in more specialised science ‘distillers’ and better science journalism.

Footnotes

1 That it will gain (U)trechtion, if you will.
2 This isn’t meant as a bait-and-switch; I’m focussing on the social sciences here since my home discipline is psychology and I know a lot less about the quality of public engagement among e.g. STEM scientists.
3 Often for everyone involved.
4 Fingers crossed!
5 Although this might in turn lead to decreased trust in science/scientists, which could be quite problematic — but maybe I’m wrong on this one.
6 Being good at it would make them experts and not laypeople.
7 The question of how to fix peer review goes far beyond this post, but I think the most promising ideas include increased reviewer diversity and competence, better incentives, and more transparency (allowing uncertainty to propagate).
8 I don’t think anyone is planning to let laypeople evaluate scientists directly, but this is what taking a scientist’s success in the public sphere into account would boil down to.
9 Although the opposite argument could be made as well: More visibility in the public sphere might attract more scrutiny (because it draws the attention of more researchers and because critics can gain more status by overthrowing a famous figure). And it’s not implausible to think that this is actually why the problems with Wansink’s research were discovered eventually. At the end of the day, both of these effects may be at play, and we need to figure out how to strike the best balance between them.
10 Looking at you, social psych.
11 As always, #notallpsychologists — e.g., cognitive psychology and mathematical psychology seem to have taken a different path.