Episode 6 – The Experiment: Superstitious belief



Early in the episode, and earlier in the course, we’ve seen that people are a little too eager to see patterns in noise. As Tom Gilovich said in the interview, if you take a bag of M&Ms and pour it out on the table, it doesn’t look random. You see pockets of color here and there, you see faces in things, and so on. This happens all the time. We see patterns among seemingly random events.

Yes, this is true, and it’s very common. It’s been studied for ages.
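To see how easily random data produces apparent patterns, here’s a minimal sketch in Python (my own illustration, not something from the episode): it flips a fair coin 100 times and measures the longest streak of identical outcomes.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Flip a fair coin 100 times.
flips = [random.choice("HT") for _ in range(100)]

# Find the longest run of identical outcomes.
longest = current = 1
for prev, nxt in zip(flips, flips[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)

print("".join(flips))
print(f"Longest streak: {longest}")
# Streaks of five or more identical flips are typical in 100 tosses, even
# though every flip is independent -- like the pockets of color in the M&Ms.
```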
Something else that happens, which is quite interesting, is that we also detect relationships that don’t exist. What do we mean by that? There’s a really good example from Danny Kahneman and Amos Tversky (surprise, surprise). They looked into the claim that people report arthritis pain when a storm is approaching. It turns out there’s no evidence for that whatsoever. Another one: emergency room nurses claim that there’s a lot more activity when there’s a full moon than during regular days of the month. And sportspeople routinely engage in all sorts of superstitious behavior: they tie their shoes in a particular way, they bounce the ball exactly five times, they wear their lucky socks or lucky shorts before a game or a competition. This happens all the time. These are superstitious beliefs that come from seeing that two things seem to be linked: the lucky socks and how they perform, or the full moon and the activity that evening. But there is no link. There can’t be any link between them, yet people are certain that there is one.

Yes. These superstitions, I think, are related to something called the confirmation bias, which we briefly touched on in episode three when we talked about the interview illusion. Put simply, we tend to notice things that confirm our beliefs, and we don’t notice the things that contradict them. I chatted to Tom Gilovich about this, and here’s what he had to say.

So, what are some of the other kinds of cognitive mechanisms that are operating when we have these beliefs or opinions?

Yes. I think one of the most powerful and
most interesting ones is something that a colleague, a former student here at Cornell,
Scott Lilienfeld, calls “the mother of all biases”: the confirmation bias. That’s
a term that most people are familiar with. They’re familiar with the idea that if we
want to believe something, we’ll go and seek out evidence for it and we won’t seek out
evidence against it. That is really true. It’s a very pronounced tendency to treat information
that’s consistent with what we want to believe in a pretty friendly way and be really hostile
to information that’s consistent with something we don’t want to believe. It’s almost as if, for something we want to believe, we ask ourselves: can I believe this? Is there evidence for this? And there’s evidence for almost anything; even the most outlandish things have some evidence for them. The real question is whether there’s sufficient evidence, yet we don’t tend to ask ourselves: must I believe this? Is there enough evidence here?

So all of that’s true, and most people can relate to it, but it’s even more pronounced than that. Even if you don’t care about
a particular belief, you have no vested interest in it, you tend to look for evidence consistent
with the idea rather than information that’s inconsistent with it, which, of course, if
we want to have a balanced picture, we’ve got to look at both. Suppose I gave you some plants, a bunch of hostas, and said, “You’re a nice guy. Here are some extra hostas from my garden. I think they probably need a lot of water.” You might want to test that claim. How would you test it? Well, if you’re like most people, you’d give them a lot of water and see how they do. What you wouldn’t do is give some a lot of water and some hardly any water at all, and see which do better. You look for evidence for the idea rather than against it.

That’s a very natural tendency. At some level, it makes sense, because it reflects a broader belief that, “Look, if this thing’s true,
there must be some evidence for it, so let me look for some evidence for it.” You’re
doing a very reasonable thing. However, you’re also doing an incomplete thing. You need to look not only for evidence for something but also for evidence against it. So if you believe that cheerful people are more likely to overcome a bout of cancer, you need to look not just at the cheerful people you know who’ve done very well, but also at any dour people you know who’ve recovered. It’s that latter step that we tend not to take.
In this episode, we started by introducing the intuitive scientist. We spoke about how we can take some of the formal claims of science and bring them into the kitchen and into our everyday lives. We also chatted about our tendency to misperceive random events and to see relationships that aren’t there. And we spoke a lot about how we can contest claims: how we can convince ourselves and others that there’s a real effect here, something genuine that we should pay attention to.

Next week, in episode seven, we’re going to build on this. We’re going to talk more generally about finding things out, about testing claims, and about how to change opinions.

 
