What You See Is All There Is

The world doesn’t give us decisions to be made like answering a multiple-choice problem, though sometimes we may believe that’s the case. Nobel laureate in economics Daniel Kahneman and his collaborator, Amos Tversky, tackled the blind side of decision-making in their pioneering work on prospect theory. Their 1979 paper, “Prospect Theory: An Analysis of Decision Under Risk,” illustrated humanity’s collective divergence from neoclassical economic decision-making. The two conducted a series of psychological tests that unpacked the heuristics and biases that influence how we perceive the costs and benefits of a given decision, and how our judgments are often incongruent with various decisions’ objective values.

I was talking with a friend recently about some choices he’ll soon have to make in his career. We spent a good deal of time on the phone and went through several scenarios, and permutations thereof, on how things might play out. Yet when I got off the phone, I couldn’t shake the feeling that something was missing. Though we had swarmed the issue with aggressive and penetrating intellectual consideration, I had this nagging sense that we had overlooked something important.

A few hours later it hit me. Kahneman describes, in his book Thinking, Fast and Slow, the notion of “What You See Is All There Is.” We tend to make decisions independent of outside considerations unless we go through the taxing mental exercise of developing counterfactuals. We simply fail to account for the incredible complexity of our world when making decisions, and tend to rely on a small and not necessarily representative sample of observations when formulating conclusions or courses of action. Our brain can deal with the known knowns, but has a difficult time conceptualizing known unknowns, phenomena that are relevant to the problem at hand but about which our mental reserves hold no information. And it goes without saying that we tend to immediately discount the unknown unknowns, the proverbial “black swans” of our world that Nassim Taleb has brought into so many contemporary intellectual conversations.

I ended up sending my friend a well-thought-out counterfactual argument to nearly everything I had described during our phone call. I had been relying too much on intuitive judgment at the expense of exploring outside possibilities, the known unknowns and the unknown unknowns. The exercise turned an otherwise myopic plan of action into one that was much broader and more thoroughly considered. We turned a yes/no question into a multiple-choice question, and in the process gained a far better sense of what was at stake.

Just remember to zoom out when something feels off: your brain believes that what you see is all there is, even though the truth is anything but.

Unimpossibility

I’ve of late been putting an inordinate amount of time into philosophical texts. My current literary victim is Nassim Nicholas Taleb and his book, The Black Swan. The Wall Street Journal noted that “he writes in a style that owes as much to Stephen Colbert as it does to Michel de Montaigne.” I would have to agree.

Taleb’s black swan, also referred to in academia as a fat tail, is the outlier that reintroduces luck and serendipity into the social sciences. Taleb seeks to explain the high-impact, “impossible” events that defy our expectations of the material world and so take on the garb of a power-law distribution. He also examines humanity’s psychological biases toward such rare events, and in doing so seeks to answer why we’re so susceptible and blind to them.

Broadly, cognitive biases and, more specifically, the confirmation bias prove problematic as we try to understand the world we live in. By means of naïve empiricism, we bipedal thinking things have a tendency to look for instances that confirm the narratives and stories and Platonic understandings of our world. The problem, of course, is that if you look for confirmation you can find it almost anywhere. Taleb argues that instead of deluding ourselves into thinking we’ve just turned up evidence for our correctness, we should rather scrape and claw and unearth those instances where our method or theory or course of action fails. It’s at that juncture that we actually learn something.

I’m going to try something different this time around and throw the question out to you, my readers. When have you found yourself susceptible to confirmation bias? Or, when have you taken the empiricist’s path, avoided the confirmation bias, and sought to falsify your theory rather than confirm it?