Intuitive Bayes and the preponderance of pessimism about human reasoning

(This is the 4th post on rational behavior of people unfairly judged to be irrational. See 1, 2, and 3.)

Most people believe they are better than average drivers. Is this a cognitive bias? Behavioral economists think so: “illusory superiority.” But a rational 40-year-old who has had no traffic accidents might notice that her car insurance premiums are still extremely high. She may then conclude she is a better-than-average driver, since she’s apparently paying for a lot of other people’s smashups. Are illusory superiority and selective recruitment at work here? Or is this intuitive Bayesianism operating on the available evidence?

Bayesian philosophy is based on using a specific rule set for updating one’s beliefs in light of new evidence. Objective Bayesianism in particular, if applied strictly, would require us to quantify every belief we hold – our prior credence – with a probability between zero and one, and to quantify the weight of each new piece of evidence. That’s a lot of cognizing, and a lot more personal bookkeeping than most of us care to do.
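
To make that bookkeeping concrete, here is a minimal sketch of the kind of running update the driver in the opening example might be doing implicitly. The prior and the two likelihoods are invented purely for illustration:

```python
# A toy Bayesian update for the driver example above; all numbers are made up.

prior_above_avg = 0.5          # starting credence: "I'm an above-average driver"
p_clean_year_if_above = 0.97   # assumed chance of an accident-free year if above average
p_clean_year_if_below = 0.90   # ...and if below average

belief = prior_above_avg
for year in range(20):         # 20 accident-free years of evidence
    numer = p_clean_year_if_above * belief
    denom = numer + p_clean_year_if_below * (1 - belief)
    belief = numer / denom     # Bayes' rule, applied once per new observation

print(f"credence after 20 clean years: {belief:.2f}")   # ~0.82
```

Twenty uneventful years nudge a 50/50 prior up to roughly 0.8 – not certainty, but a perfectly coherent reason to feel above average.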

As I mentioned last time, Daniel Kahneman and others in his field hold that we are terrible intuitive Bayesians. That is, they believe we’re not very good at doing the equivalent of Bayesian reasoning intuitively (“not Bayesian at all,” said Kahneman and Tversky in Subjective probability: A judgment of representativeness, 1972). But beyond the current wave of books and TED talks framing humans as sacks of cognitive bias (often with government-paternalistic overtones), many experts in social psychology have reached the opposite conclusion.

For example,

  • Edwards, W. 1968. “Conservatism in human information processing”. In Formal Representation of Human Judgment.
  • Peterson, C. R. and L. R. Beach. 1967. “Man as an intuitive statistician”. Psychological Bulletin 68.
  • Piaget, Jean. 1975. The Origin of the Idea of Chance in Children.
  • Anderson, J. R. 1990. The Adaptive Character of Thought.

Anderson makes a particularly interesting point. People often have reasonable but wrong understandings of base rates, and official data sources themselves vary wildly on some base rates. So what critics characterize as poor Bayesian reasoning – ignoring base rates, for example – is often in fact the use of incorrect base rates, not a failure to employ base rates at all.
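
To see the difference, here is a small sketch (with made-up numbers, not Anderson’s) of the same Bayes-rule computation fed two different base rates. The reasoning is identical either way; only the input differs:

```python
# Bayes' rule for P(condition | positive signal); illustrative numbers only.

def posterior(base_rate, hit_rate=0.9, false_alarm=0.1):
    p_signal = hit_rate * base_rate + false_alarm * (1 - base_rate)
    return hit_rate * base_rate / p_signal

print(posterior(0.01))  # correct base rate:  ~0.08
print(posterior(0.10))  # mistaken base rate: ~0.50
```

A judge using the second number has made an input error, not a reasoning error.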

Beyond the simple example above of better-than-average-driver belief, many examples have been given (and ignored by those who see bias everywhere) of intuitive Bayesian reasoning that yields rational but incorrect results. These cover not only single judgments, but also people’s revision of belief over time – Bayesian updates.

For math-inclined folk seeking less trivial examples, papers like this one from Benoit and Dubra lay this out in detail: if a fraction x of the population believes, with probability at least q > 1/2, that they rank in, say, the top half of the distribution, then Bayesian rationality immediately implies that xq ≤ 1/2, not that x ≤ 1/2.
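
Here is a toy numerical model (my own illustration, not one from their paper) showing how a majority can rationally believe they are probably in the top half while still satisfying xq ≤ 1/2:

```python
# Skill deciles, accident risk falling with skill, and one Bayesian update on
# the observation "no accident yet." Illustrative numbers only.

skills = range(10)                             # 0 = worst decile, 9 = best decile
prior = 0.1                                    # each decile holds 10% of drivers
p_accident = [(9 - s) / 10 for s in skills]    # worst decile: 0.9, best: 0.0

# Probability a randomly chosen driver has had no accident
p_clean = sum(prior * (1 - p) for p in p_accident)

# Posterior probability of being in the top half, given no accident
p_top_given_clean = sum(prior * (1 - p_accident[s]) for s in skills if s >= 5) / p_clean

x = p_clean             # fraction of drivers with no accident
q = p_top_given_clean   # their shared confidence of being in the top half

print(f"x = {x:.2f}, q = {q:.2f}, x*q = {x*q:.2f}")
# -> x = 0.55, q = 0.73, x*q = 0.40
```

Here 55% of drivers rationally assign about 73% probability to being above the median, and the product 0.40 respects the Benoit/Dubra bound.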

A 2006 paper, Optimal Predictions in Everyday Cognition, by Thomas L. Griffiths and Joshua B. Tenenbaum warrants special attention. It is the best executed study I’ve ever seen in this field, and its findings are astounding – in a good way. They asked subjects to predict the duration or extent of common phenomena such as human lifespans, movie run times, and the box office gross of movies. They then compared the predictions given by participants with calculations from an optimal Bayesian model. They found that, as long as subjects had some everyday experience with the phenomena being predicted (like box office gross, unlike the reign times of Egyptian pharaohs), people predict extremely well.

The results of Griffiths and Tenenbaum showed people to be very competent intuitive Bayesians. Even more interesting, people’s implicit beliefs about data distributions, be they Gaussian (birth weights), Erlang (call-center hold times), or power-law (lengths of poems), were very consistent with real-world statistics, as was hinted at in The Adaptive Character of Thought.
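
Their setup works roughly like this: given that a quantity has already lasted or reached t, the Bayesian-optimal guess for its total extent is the median of the posterior over totals, assuming (as the paper does) that the observed t falls uniformly within the total. Below is a small sketch of that rule; the priors are synthetic stand-ins, not the paper’s data:

```python
import numpy as np

def optimal_prediction(prior_samples, t_observed):
    """Median of the posterior over total extent, given that t_observed has elapsed.
    Assumes t_observed is drawn uniformly from [0, total], so p(total | t) is
    proportional to prior(total) / total for total >= t_observed."""
    survivors = np.sort(prior_samples[prior_samples >= t_observed])
    weights = 1.0 / survivors                  # uniform-sampling likelihood
    cdf = np.cumsum(weights) / weights.sum()
    return survivors[np.searchsorted(cdf, 0.5)]

rng = np.random.default_rng(0)

# Synthetic priors standing in for the real-world data used in the paper:
lifespans = rng.normal(75, 12, 100_000)        # roughly Gaussian, like lifespans
grosses = (rng.pareto(1.5, 100_000) + 1) * 5   # heavy-tailed, like box-office gross

print(optimal_prediction(lifespans, 60))  # Gaussian prior: predict near the mean (~76)
print(optimal_prediction(grosses, 60))    # power-law prior: predict a multiple of 60 (~79)
```

With a Gaussian prior the rule says “predict about the mean”; with a power-law prior it says “predict a multiple of what you’ve seen so far” – and the striking finding was that people’s guesses tracked the appropriate rule for each domain.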

Looking at the popular material judging people to be lousy Bayesians steeped in bias and systematic error, alongside far less popular material like that from Griffiths/Tenenbaum, Benoit/Dubra, and Anderson, makes me think several phenomena are occurring. To start, as noted in previous posts, those dedicated to uncovering bias (e.g., Kahneman, Ariely) strongly prefer confirming evidence over disconfirming evidence. This bias bias manifests itself both as ignoring cases where humans are good Bayesians reaching right conclusions (as in Griffiths/Tenenbaum and Anderson) and as a failure to grant that wrong conclusions don’t necessarily mean bad reasoning (the auto-driver example and the Benoit/Dubra cases).

Further, the pop-science presentation of human bias (Ariely TED talks, e.g.) makes newcomers to the topic feel like they’ve received a privileged view into secret knowledge. This gives the bias meme much stronger legs than the idea that humans are actually amazingly good intuitive Bayesians in most cases. As John Stuart Mill noted 200 years ago, those who despair when others hope are admired as sages while optimists are dismissed as fools. The best, most rigorous analyses in this realm, however, rest strongly with the optimists.

 

