Abductive inference

We learn a great deal from anomalous observations, because anomalies encourage us to search for an explanation. When we search for explanations and adopt the best available explanation as a belief, this is known as abductive inference. In my work, I argue that abductive inference is critical to much of cognition, including not only high-level cognitive processes such as causal reasoning and categorization, but also seemingly disparate domains of cognition such as memory, perception, social cognition, and even emotion.

I am currently very excited about finding out how people evaluate competing explanations, a central component of abductive inference. Two recent projects examine how “explanatory virtues” are used heuristically to evaluate hypotheses.

Simplicity

We know from intuition and from earlier psychological research that simpler explanations are usually more satisfying. But in recent work with Andy Jin and Frank Keil, we found that in some situations people actually prefer more complex explanations. As a case study, we looked at how laypeople fit curves to scatterplot data. There are carefully worked-out mathematical theorems that determine how complex a curve one should fit to any given data set. But, far from having a simplicity bias, people actually fit curves that are too complex for the information contained in the data. In a follow-up study, we showed that this occurs because of an illusion of fit, wherein people think that data fit more closely to complex curves than to simpler curves, even when the simpler curves are actually the closer fits.
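To give a rough sense of the formal backdrop (a hypothetical sketch with invented data, not the materials from our studies): when polynomial curves of different degrees are fit to noisy data from a simple underlying trend, the more complex curve always matches the observed points at least as closely, but its apparent advantage disappears on new data from the same process.

    # Hypothetical sketch in Python (not our experimental stimuli): a more
    # complex polynomial fits the observed points more closely, but fresh
    # data reveal that the simpler curve generalizes better.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 20)
    y = 2 * x + rng.normal(scale=0.3, size=x.size)              # truly linear trend + noise
    x_new = np.linspace(0, 1, 200)
    y_new = 2 * x_new + rng.normal(scale=0.3, size=x_new.size)  # new data, same process

    for degree in (1, 8):
        coeffs = np.polyfit(x, y, degree)                        # fit a curve of this complexity
        fit_err = np.mean((np.polyval(coeffs, x) - y) ** 2)      # error on the observed points
        new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)  # error on new data
        print(f"degree {degree}: observed error {fit_err:.3f}, new-data error {new_err:.3f}")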

We argue that this occurs because people possess opponent heuristics for using simplicity and complexity: Whereas they use simplicity to estimate the prior probability of an explanation, they use complexity to estimate its goodness of fit, or likelihood. Both heuristics are likely to be at play whenever two explanations that vary in complexity are compared, and the relative emphasis on one heuristic or the other is likely to be determined by a set of contextual factors that we are investigating.
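One way to see why both heuristics are sensible (a sketch of the normative backdrop as I read it, with made-up numbers): in Bayesian terms, the posterior probability of an explanation is proportional to its prior times its likelihood, so a simplicity-based estimate of the prior and a fit-based estimate of the likelihood each capture one factor, and which explanation wins depends on how the two trade off.

    # Toy illustration with invented numbers: a simple hypothesis with a
    # higher prior vs. a complex hypothesis that fits the data more tightly.
    prior = {"simple": 0.7, "complex": 0.3}        # simplicity ~ prior probability
    likelihood = {"simple": 0.4, "complex": 0.8}   # complexity ~ goodness of fit

    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: unnormalized[h] / total for h in unnormalized}
    print(posterior)  # with these numbers the simpler explanation still edges out the complex one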

For a preliminary report, see our paper in the CogSci 2014 proceedings.

Latent Scope

Explanations vary in their scope, or the range of actual and potential observations that they could explain. Intuitively, explanations that make many true predictions are good, and explanations that make many false predictions are bad. But what about explanations that make latent predictions, where we don’t know whether they are true or false? Such explanations are commonplace in everyday life (e.g., in jury trials, medical diagnosis, or other situations where we do not have access to all the evidence we might want) as well as in scientific and religious discourse.

Prior research (Khemlani, Sussman, & Oppenheimer, 2011) has found that people judge explanations that make latent predictions to be unsatisfying, leading them to make biased inferences. For example, suppose that Disease A causes Symptom X, whereas Disease B causes both Symptoms X and Y. Then if you as a doctor know that the patient has Symptom X but test results are not available for Symptom Y, you are likely to infer that the patient probably has Disease A. In fact, the apparent support for Disease A is illusory: If Diseases A and B occur equally often in the population, then ignorance about Symptom Y is no reason to favor one disease over the other.
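To make the normative point concrete (a worked example with illustrative numbers, not the exact study stimuli): if the two diseases are equally common and each reliably produces Symptom X, then observing X while knowing nothing about Y leaves them exactly tied.

    # Worked example with illustrative numbers (not the study's stimuli).
    # Disease A causes Symptom X; Disease B causes Symptoms X and Y.
    p_A, p_B = 0.5, 0.5    # equal base rates in the population
    p_X_given_A = 1.0      # A reliably produces X
    p_X_given_B = 1.0      # B reliably produces X (and also Y, which is untested)

    # Bayes' rule, conditioning only on the evidence we actually have (X):
    p_A_given_X = (p_A * p_X_given_A) / (p_A * p_X_given_A + p_B * p_X_given_B)
    print(p_A_given_X)     # 0.5 -- ignorance about Y favors neither disease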

In work with Greeshma Rajeev-Kumar and Frank Keil, we show that this bias is a symptom of a broader tendency to go beyond the evidence by guessing whether latent predictions would be verified or falsified if we were able to find out. That is, people evaluate explanations based not only on the actual evidence, but also on inferred evidence that is sometimes generated in a biased manner. We suspect that this is relevant not only to causal reasoning but also to other domains where abductive reasoning occurs, such as categorization.

For a preliminary report, see our other paper in the CogSci 2014 proceedings.