Econ Envy

Earlier this year, I went through an academic existential crisis in which I questioned whether the field of research I’m in (personality psychology) is the right one for me.[1] So, I decided to peek over the proverbial edge of the plate and ended up attending the Summer School on Socio-Economic Inequality in Bonn.

There are obvious opportunity costs attached to spending a whole week on subjects that will be of little relevance for your future career (optimal tax theory, anyone?) instead of spending the time the way you are supposed to (feeling bad about not getting enough done working on your dissertation). Still, I found the summer school very rewarding, and I am not just saying this because the locations of the social events were much nicer and posher (duh) than those of the psych events that I have attended so far.

Spending some time among economists was a great chance to catch some glimpses of an academic culture that in some ways is very different from the one that I have experienced in psych. There’s that phenomenon that if you spend a lot of time in your own field, you can get so entrenched that you start to assume that the social and cultural norms of your field are set in stone and maybe even necessary preconditions for having a functioning scientific community in the first place. Go and get some fresh air and you may start to see that things could work very differently as well.

So, here’s a list of things that I observed at the summer school that were quite distinct from what I have seen at psych events.

1. Fewer publications

Personality psychologist Brent Roberts gave a talk at the summer school, and one of the econ grad students remarked in awe that he must be brilliant because he was publishing a lot, like as many as four papers per year. Now, the last few years might have made me cynical, but I neither think that four publications per year qualify as “a lot” nor that they indicate any sort of brilliance—a lot of psychologists are churning out more papers than that, and the last time I checked, many of them weren’t exactly brilliant.[2] If one decided to arrange different (sub-)fields of research along a continuum from r-strategists (a high quantity of low-quality publications) to K-strategists (a low quantity of high-quality publications), many psych subfields would probably end up closer to the r end of the scale than econ.

I don’t want to accuse anybody else of cutting corners—I feel that way about my own publications! If I had spent some more time and effort on them, they would most likely be better papers. At the same time, I never felt like the research culture in psych encouraged taking things slowly—quite the contrary, I have seen plenty of “academic advice” along the lines of “it’s never going to be perfect anyway, so just submit it already.”

Meanwhile, parts of economics foster a preprint culture in which manuscripts that are already in more-than-great shape (according to my psych standards) get a longer incubation time in which they are shared with others to solicit critical feedback. There’s also that thing called “the job market paper”: an original piece of research to demonstrate one’s abilities which does not need to be published in a journal. Maybe I’m being naive here, but I do find the idea exciting that job committees might actually read a longer piece of your work to judge whether you can do good research instead of simply counting the number of papers or defaulting to other heuristics (“This was published in Science, so it’s probably good, right?”).

2. Math & models

The talks I heard followed a certain pattern: they started with the usual narrative of substantive points about the subject matter, followed by a derivation of a statistical model. This involved math, lots of math, and math beyond statistics.

Now, there’s probably a case to be made that parts of economics went astray in their hunt for impressive-looking mathematics;[3] and ramping up math certainly is no panacea in an empirical field. It can go wrong. But that’s not a reason not to learn math: to be able to judge whether math is just being used for obfuscation and to make the author look smarter, you’ve got to learn math first.

There is also a case to be made that economists’ models of how humans work are mostly wrong. But here’s the deal: The models have been formalized, which makes it possible for them to be wrong. In contrast, many theories in the more social parts of psychology that I have encountered during my studies are soft as pudding, which means that no data could possibly shatter them. They are basically just collections of verbal statements that all more or less align with common sense.
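To make the difference concrete, here’s a toy illustration of my own (not something from the summer school, just the textbook case, so take it as a sketch). In the standard exponential discounting model, a reward x received after a delay of t periods is valued as

V(x, t) = \delta^{t} x, \qquad 0 < \delta < 1.

Because adding a common extra delay \Delta just multiplies both sides of any comparison by \delta^{\Delta} > 0,

\delta^{t_1} x_1 > \delta^{t_2} x_2 \;\Longleftrightarrow\; \delta^{t_1 + \Delta} x_1 > \delta^{t_2 + \Delta} x_2,

the model commits itself to time-consistent choices. So if people prefer 100€ today over 110€ tomorrow, but 110€ in 31 days over 100€ in 30 days, the model is wrong, full stop. A purely verbal theory (“people are impatient”) happily accommodates both patterns and can never lose.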

To quote from Paul Smaldino’s kick-ass paper “Models are stupid and we need more of them”:

Verbal models can appear superior to formal models only by employing strategic ambiguity (sensu Eisenberg 1984), giving the illusion of understanding at the cost of actual understanding. That is, by being vague, verbal models simultaneously afford many interpretations from among which any reader can implicitly, perhaps even unconsciously, choose his or her favorite.

3. Thinking hard about causality

At the poster session during the summer school, one faculty member marched up to one of the posters, interrupted the student before he could even properly start his spiel and asked: “Yes, but what’s the causal identification strategy here?” Now this might tell you something about the brashness of economists (in this case, the identification strategy took up a significant part of the poster), but it certainly tells you that economists mean business when they think about causality. A potentially exciting causal claim is only exciting when it’s convincing.

Meanwhile, in psych, I often see a different line of reasoning: “Sure, this is only a correlation, but how else would you explain this pattern?”[4] The favorite causal identification strategy seems to be tentative language, a certain number of somewhat arbitrary control variables, and prayers to Meehl.[5] Sometimes, it seems psychologists would rather cut the colons and bad puns from their titles than be explicit about their causal identification strategy.

Of course, that’s not entirely fair. There are psychological researchers who think hard about causal inference.[6] Plus, psychologists seem to be predominantly trained for experimental research, which is the best causal identification strategy to begin with. If your research relies entirely on randomization, why bother learning other, inferior approaches? But at the same time, many fields of psych are interested in the effects of variables that cannot be randomly assigned easily/ethically/at all; and even if you can randomize some factors, you might be interested in mediating mechanisms, or in how effects are modified by other variables–which again requires causal inference skills. Maybe somewhere out there, there is a psych methods course that takes great care to introduce students to, e.g., the potential outcomes framework and counterfactuals; but I didn’t receive such training, and that gives me a lot of headaches because I’m ill-prepared for my job as a social scientist.
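In case it helps anyone in the same spot, the core of the potential outcomes framework fits into a few lines (a minimal sketch of the standard notation, not anything taught at the summer school). Each unit i has two potential outcomes, Y_i(1) under treatment and Y_i(0) under control, and the individual causal effect is

\tau_i = Y_i(1) - Y_i(0).

Only one of the two outcomes is ever observed for any given unit, so \tau_i itself is unobservable; that is the fundamental problem of causal inference. Randomization makes treatment assignment D independent of the potential outcomes, so a simple difference in means identifies the average treatment effect,

E[Y \mid D = 1] - E[Y \mid D = 0] = E[Y(1)] - E[Y(0)].

Without randomization, you need extra assumptions (e.g., conditional independence given covariates) for that equality to hold, and spelling out and defending such assumptions is exactly what an identification strategy does.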

And, again, there are pitfalls to the focus on causal identification: bad instrumental variables galore! And: have the identification police become so powerful that they are holding empirical economics back? But again, to spot weak causal claims or misapplications, you’ve got to learn the nuts and bolts of causal inference first. My psych courses failed to teach me those, and judging from the psych literature, I suspect it’s the same for many other researchers.
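To make “bad instrumental variables” a bit less hand-wavy, here is the simplest linear, single-instrument version of the logic, sketched from memory rather than taken from any particular talk. With outcome Y, endogenous regressor X, and instrument Z, the IV estimand is

\beta_{IV} = \frac{\operatorname{Cov}(Z, Y)}{\operatorname{Cov}(Z, X)},

and it recovers the causal effect of X on Y only if Z is relevant (\operatorname{Cov}(Z, X) \neq 0) and satisfies the exclusion restriction, i.e., Z moves Y only through X and is unrelated to the omitted causes of Y. A bad instrument is one where that second, untestable assumption fails; and because the formula divides by \operatorname{Cov}(Z, X), a weak instrument additionally inflates the noise.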

[Image] The disciplinary fence. (photo fence: Duong Chung, Unsplash; photo cow: P*ssed again!, Flickr)

4. A more rigorous discussion culture

Economists’ infamous argumentativeness! Heard of that one girl who wanted to give a talk but couldn’t even get through the first slide because she was being bombarded by critical and dismissive questions from a bunch of older econ dudes? There can be drawbacks to a rigorous discussion culture; and I’ve heard lots of complaints about established male econ profs being cocky as hell.[7]

But at the summer school, I was mostly positively impressed by the rigorous discussion culture. Students would ask faculty members hard questions that potentially undermined the conclusions of their talks–not after the talk, but during it. And the faculty members always seemed very willing to take these hard questions seriously. I got the overall impression that both students and faculty were much more willing to consider potentially uncomfortable alternative interpretations of their data if they seemed like a real threat to their conclusions.

For example, Victor Lavy presented results on teachers’ gender bias and its downstream consequences. During the talk, a particular alternative explanation came up: maybe classroom teachers’ differential assessment of girls and boys actually reflects genuine knowledge about the abilities of their students? In other words, maybe classroom teachers aren’t biased; they just genuinely know that the boys in their class are worse than the girls (or vice versa). That possibility was not only brought up by a student, but then ruled out by Lavy with data (he also used a gender bias measure based on the previous classroom that should not be influenced by gender differences in performance in the current classroom). This was the most convincing talk on gender bias I have seen to date, and I think a lot of that has to do with the willingness to seriously confront alternative interpretations.

Conclusion: the grass is always greener on the other side of the disciplinary fence

Now, I could end with a long list of things that I noticed that I didn’t like and that psych as a field is doing better. To be honest, there wasn’t really much I disliked–this was only a week-long summer school, and I felt pretty relaxed most of the time, probably because I was an outsider and didn’t need to perform in a certain way to impress anybody. I’m sure econ has some deep-seated problems (cough measurement cough), but I’m in no position to discuss these.

I also don’t want to make a point that psych should start to imitate econ, which could go wrong in many ways. But after spending a lot of time surrounded by psychologists, one might come to believe that everybody has to publish dozens of papers (how else could we judge their productivity?); that it’s completely unreasonable to expect people to actually learn some math (it’s just too hard! and we’re so busy publishing papers!); that formalized models are completely unsuitable for human behavior (it’s just too complex! also, think of all the math involved!); that causal inference based on observational data is necessarily futile (so why even try to do it properly? just do an experiment already! also the math!); and that discussing alternative interpretations of your data could weaken your conclusions (let’s not give reviewers the wrong ideas). Hanging out with folks from other fields is a great way to see that things can be quite different.

Footnotes

1 The jury is still out.
2 This is not a comment on Brent’s brilliance, which, of course, shouldn’t be questioned (given that I might have to ask him for letters of recommendation in the future. Ahem).
3 At least that’s a claim that physicist Sabine Hossenfelder makes in her book “Lost in Math” (which I enjoyed reading).
4 Checkmate!
5 Again, I am including my own work here.
6 For an example from personality psych, Briley, Livengood, and Derringer have a nice paper on behaviour genetic frameworks for causal reasoning.
7 Not entirely sure whether you couldn’t claim the same for established male psych profs.

7 thoughts on “Econ Envy”

  1. Beautiful. Sometimes I think of psychology’s failure to consider uncomfortable alternative hypotheses as our other methodological crisis.

    1. Thank you! I certainly agree that it’s a massive methodological issue, although it’s probably not unique to psychology. More like a perpetual “humans trying to grasp reality”-crisis.

  2. Just a few comments from one of them old dudes.

Pre-publication is indeed the norm in econ — and it is a good thing — but the specific case of pre-publication called the job market paper is rarely what recruiting committees read. It’s typically down to the CV and rec letters and then maybe a quick cursory reading of what you consider your best work. Later, if you are invited to an onsite, people might read the paper more carefully and they certainly will challenge you from the very beginning.

    I don’t think it is correct to say that the typical pattern is that you derive a statistical model, or at least only empirical econs do that. Typically, I’d argue, models are derived from basic behavioral assumptions and then explored in various ways (including experimental work). Math and models are to my mind a strong point of the econ tribe.

But … as it is, for example, the controversy over the reality of cognitive illusions (see Gigerenzer vs. Kahneman & Tversky in Psychological Review, 1996) is all about this issue. Gigerenzer has demanded falsifiable models for the cognitive mechanisms that underlie the alleged biases. There is a reason why social psychology is the trainwreck that it is while cognitive psychology has done much better.

    I completely agree that hanging out with folks from other fields is a great way to see things can be quite different. Preferably though when you have a halfway secure position ;-).

    1. Awww, I knew that the job market paper thing sounded too nice to be entirely implemented that way. And I really love the stuff where models are derived from basic assumptions — that’s also why cognitive psych was my favorite part of my undergrad (how I ended up in personality psych remains a mystery). Thanks for sharing your experiences!

3. My own experience with models in economics is that the formalization doesn’t help testability too much. Most of the models that you see in theory publications are never tested. Whenever models are tested, it comes in one of two forms. Either only a small portion of the predictions are actually tested, in which case why even have the model at all? Or you see some sort of structural model, which I don’t trust, because it’s too easy to overfit when there isn’t some hold-out set that the researchers can’t access.
