Philosopher explores what it means to be imperfectly rational
Pioneering book on formal epistemology, honored by the American Philosophical Association, explores how flawed reasoners can make better decisions
Life’s daily challenges are full of uncertainty. A meteorologist might tell us there’s a 60% chance of snow today, or an economist might say she’s 40% certain that a stock will skyrocket in value tomorrow. With everything so uncertain, how should we measure those uncertainties so that we make the best choices?
Julia Staffel delves into that question in her recent book, Unsettled Thoughts: A Theory of Degrees of Rationality. In it, Staffel argues that formal mathematical models can help us evaluate the hidden biases that inform our thinking in uncertain situations, and that this understanding can in turn make our decisions more rational.
Staffel, an associate professor of philosophy and an affiliated faculty member in the Institute of Cognitive Science, won an honorable mention in the American Philosophical Association's 2021 Book Prize competition in recognition of this book. The award is given out “in odd years for the best published book by a younger scholar during the previous two years.”
Staffel’s book draws on decades of work by philosophers, psychologists and mathematicians, who have developed and expanded mathematical models that use probability to map the difference between rational and irrational decisions. In philosophy, these models inform normative theories of rationality, which hold that when we make decisions, we should “follow the rules of probability,” Staffel says.
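Those “rules of probability” can be made concrete. The sketch below is our own illustration, not Staffel’s formalism: it checks one of the most basic rules, that a reasoner’s confidence in a claim and in its negation should add up to 100%.

```python
# Illustrative sketch (not from the book): a basic coherence check
# that normative models say rational credences must pass.

def obeys_negation_rule(cred_a: float, cred_not_a: float) -> bool:
    """Credences in a claim and its negation should sum to 1."""
    return abs((cred_a + cred_not_a) - 1.0) < 1e-9

# A coherent reasoner: 60% chance of snow, 40% chance of no snow.
print(obeys_negation_rule(0.6, 0.4))  # True
# An incoherent one: 70% snow and 50% no snow can't both be right.
print(obeys_negation_rule(0.7, 0.5))  # False
```

Real credences rarely come so neatly packaged, which is part of why, as Staffel notes below, perfect compliance with these rules is out of reach for human thinkers.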
However, “it's impossible for human thinkers to be perfectly rational,” Staffel says, and this is “partly because of the limitations of our brains, and of the computational complexity of some of these reasoning tasks. They're just too hard for us.”
The problem with some formal models is that they assume that ideal agents are making perfectly rational decisions; this overlooks the psychological complexities, biases and contradictions of how humans really make choices.
“And then the natural question arises: If we have these models that describe ideal reasoning, how do those models have anything to do with us when we can't really ever be that ideal in our reasoning?” Staffel asks.
This question is the impetus for Staffel’s book, in which she argues that humans can still achieve varying degrees of rationality, thereby providing better frameworks to help get us closer to making the most rational and beneficial choices.
To illustrate this point, Staffel offers a thought experiment: “Suppose you apply for a job. If you get the job, you get all the benefits. (But) if you’re in second place versus in sixth place among the candidates … you don't get anything better than the sixth place candidate.”
In this situation “either you get all the benefits or you get nothing,” a winner-takes-all scenario that idealized formal models often assume, without considering the rational complexity of human thinkers. In these cases, “there's no point in getting closer to being rational because you would only get all the benefits from being fully, perfectly rational,” Staffel explains.
However, this winner-takes-all model of rationality doesn’t match how humans make choices in reality. For Staffel, a better picture of how we can attain degrees of rationality, while still being imperfect, comes from thinking about how we learn a foreign language.
“Suppose you want to be fluent in (a) foreign language. Even if you can't ever be fully fluent, you still get a lot of the benefits of being fully fluent by getting better and better (at speaking it). If I want to go on vacation and live and study (in the country of the foreign language), the closer you get to being fluent, the more benefits you get,” Staffel says.
While we, as humans, might never achieve perfect rationality, Staffel’s analogy of learning a foreign language shows that we can still attain varying degrees of rationality, and that those degrees help us make better decisions. And for Staffel, mathematical models that use probability are the best tools for getting closer to perfect rationality.
“As you become closer to being perfectly rational, you get increasing portions of those benefits,” Staffel says, “and you can demonstrate in a mathematical model that this really pans out.”
“Now we can really say yes, these ideal models do have something to tell us as imperfect reasoners, because we actually have a concrete thing we can say of why it's good to be closer to being rational, even if we can't ever be as perfect as these models described,” Staffel explains.
Staffel thinks her work is especially important for helping academics, such as philosophers, psychologists and cognitive scientists, understand how the human mind evaluates and makes decisions that often deviate from idealistic models of rationality.
“I'm always very interested in studying the empirical results in cognitive science and psychology, because they tell me something about how real reasoners actually don't comply with these ideal models,” Staffel says. With these empirical results, humans could then make “modifications to the (idealistic) models to actually capture and study (the mind) better, and try to answer questions, like, is this a bad error? Is this a trivial error (that the mind is making)?”
Staffel argues that one way these mathematical models can help scientists and ordinary citizens make more rational choices is by getting us used to recognizing patterns of bias and hidden fallacies that inform the decisions we make in uncertain situations.
“It is very difficult to catch yourself making reasoning mistakes when you reason about uncertainty,” Staffel says. “Really the only way to get better at catching those mistakes is to actually start studying some probability (and) the kinds of fallacies that people make. Then you can start spotting situations that trigger those fallacies.”
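One well-studied example of such a mistake is the conjunction fallacy: judging a conjunction (“it will snow and flights will be delayed”) to be more probable than one of its parts (“it will snow”). The sketch below, our own illustration rather than one from Staffel, shows how knowing the rule makes the error easy to flag.

```python
# Our illustration of a classic probabilistic fallacy: a conjunction
# can never be more probable than either of its conjuncts.

def conjunction_fallacy(cred_conjunct: float,
                        cred_conjunction: float) -> bool:
    """Flags the fallacy: confidence in 'A and B' exceeds confidence in 'A'."""
    return cred_conjunction > cred_conjunct

# Intuition often favors the richer, more vivid story...
print(conjunction_fallacy(cred_conjunct=0.3, cred_conjunction=0.4))  # True: fallacy
# ...but probability says it can be at most as probable as its parts.
print(conjunction_fallacy(cred_conjunct=0.3, cred_conjunction=0.2))  # False: coherent
```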
David Boonin, a professor and chair of the Philosophy Department, says that Staffel is working on a “really difficult and important topic about human rationality, in addition to being a surprisingly neglected topic” in philosophy.
“It's easy enough to say (that it’s) best to be perfectly rational and have all your beliefs and probabilities all perfectly coherent,” Boonin says. However, Staffel’s book manages to give a novel and nuanced explanation of “what it means to say one set of beliefs on the whole is at least more rational than another, even if it's not perfectly rational, and why is it better to be more than less (rational) even if you're not going to get all the way there.”
“She's made a lot of substantial progress in ways that are innovative and original to move the discussion forward, and I think it's waking a lot of people (up to) this neglected, important question in this area” of philosophy, Boonin says.
Counting Staffel’s recognition by the APA, faculty in CU Boulder’s Philosophy Department have won more Book Prize awards and honorable mentions than any other department in the country, Boonin says.
“I think it's a really good indication that we're a top-tier research department,” Boonin says, adding that for Staffel, it’s “really excellent evidence that she is not just an outstanding researcher, but has the abilities that we want in our teachers.”
“We want to teach our students how to do philosophy, not just how to read philosophy,” Boonin says. And faculty like Staffel “who are excellent at doing philosophy are particularly well positioned to do that.”
When Staffel heard about the honorable mention, she felt “very flattered and validated.” It was “confirmation that what I was trying to accomplish worked, that people could really see that it was making a valuable contribution, and should be read by people outside of my narrow area of specialty as well.”
CU Boulder faculty previously recognized by the APA’s Book Prize include Robert Pasnau in 2005 (winner), David Boonin in 2005 (honorable mention) and Mitzi Lee in 2007 (honorable mention).