The human brain (like any other organ) has evolved for specific environments and is constrained by its chemistry and historical contingencies. In my last post, I discussed the cognitive blind spots that play a role in the epidemic of over-testing and over-treating in medicine. Human decision making is notoriously ill-equipped to weigh low-probability events, prone to decisions that avoid short-term losses, and moved disproportionately by the illusion of certainty and the hope of possibilities. These systematic errors in judgment are not only built in; they are compounded by our dearth of introspective faculties for critically assessing our own biases. Each of us at some point reaches the limits of our expertise and knowledge, and the misjudgments that lie beyond those limits are undetectable to us. This leads individuals with domain expertise (e.g. physicians) to develop an enhanced illusion of skill and become unrealistically overconfident in their predictions.
A system that can routinely and unobtrusively perform external checks has the potential to mitigate the downstream impacts (e.g. cost, negative outcomes) of these biases. In considering the key features of such a system, a fundamental property would be its ability to accurately and unequivocally elucidate the sample space. In probability theory, the sample space is defined as the set of all possible outcomes of an experiment. The Nobel laureate astrophysicist Subrahmanyan Chandrasekhar stated that a “stochastic process is neither deterministic nor random. It is governed by a set of probabilities. Each event has a probability that depends on the state of the system and also on its previous history.” If you consider a patient to be such a system, then the “current state” is the presentation (chief complaint, vital signs, history, review of systems, physical exam) and the “previous history” is their past medical history. The sample space, together with the probability distribution over it, describes the possible outcomes of a given test or treatment.
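To make the idea concrete, here is a minimal sketch of a diagnostic test's sample space as a discrete probability distribution over its possible outcomes, conditioned on the patient's pre-test probability. The function name and all numbers are illustrative assumptions, not clinical data.

```python
def outcome_distribution(prior_disease_prob, sensitivity, specificity):
    """Enumerate the full sample space of one binary test on one patient:
    every joint (disease status, test result) outcome with its probability."""
    p = prior_disease_prob
    return {
        ("disease", "positive"): p * sensitivity,                 # true positive
        ("disease", "negative"): p * (1 - sensitivity),           # false negative
        ("no disease", "positive"): (1 - p) * (1 - specificity),  # false positive
        ("no disease", "negative"): (1 - p) * specificity,        # true negative
    }

# Hypothetical patient: 2% pre-test probability, a sensitive but nonspecific test.
space = outcome_distribution(prior_disease_prob=0.02,
                             sensitivity=0.95, specificity=0.60)
assert abs(sum(space.values()) - 1.0) < 1e-9  # probabilities over a sample space sum to 1
```

The point of enumerating the space explicitly is that every downstream question (How likely is a false positive? What does a negative result actually rule out?) becomes a lookup rather than a gut feeling.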
Identifying a precise and accurate sample space in medicine has thus far been a pipe dream because the human body is an incredibly complex, nonlinear system of interlocking feedback loops. We have only scratched the surface in modeling the exposome, and therefore our predictions are predictably erroneous. The current gold standard for a sample space is the clinical practice guideline. These guidelines, although conceptually and statistically sound, are woefully underutilized by practitioners of medicine, in part because they lack the nuance to capture an individual patient presentation. A complex, multimorbid patient with added layers of psychosocial complexity is the rule, not the exception, and guidelines overfit or underfit these presentations. At best, a computational system that uses the outputs of clinical calculators and guidelines as a base rate and then adds layers of complexity would identify a statistically more precise sample space and yield quantifiably improved predictions.
Human decision makers tend to treat each decision as an independent event through a process known as narrow framing. Decision makers prone to narrow framing consider each decision in isolation and construct a new preference every time they face a choice. Each presentation of chest pain is therefore a “new” presentation of chest pain, and we have neither the inclination nor the cognitive resources to enforce consistency or coherence across our preferences. Each decision, considered in isolation, is vulnerable to cognitive biases such as the loss aversion and possibility effects discussed in my last post. In contrast, broad (wide) framing means making decisions in aggregate: a presentation of “chest pain” is considered within all the similar presentations of chest pain. Decisions considered in aggregate are less prone to be swayed by our cognitive biases.
A computational system that can present the patient in a broad frame, taking into account the current state and past history and comparing them to an existing database of thousands or millions of similar presentations, would be the ultimate form of wide framing. Every patient encounter would maintain its individuality while also being represented in a broader but more precise frame. Patient presentations would be sorted into more precise sample spaces, thereby leading to more accurate predictions. I would conjecture that such an algorithm would not only outperform traditional clinical practice guidelines but, if strategically built into the electronic health record, would serve as an invaluable resource for point-of-care decision makers. Providers might be more confident in their predictions, and therefore more comfortable in their decision to forgo unnecessary testing or treatment for precisely and accurately characterized low-probability events.
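A toy sketch of this kind of broad framing: rather than judging one chest-pain presentation in isolation, look up all prior encounters matching its features and use their aggregate outcome rate as the prediction. The records and field names here are fabricated for illustration; a real system would match on far richer features.

```python
# Fabricated encounter database; a real one would hold thousands or millions.
past_encounters = [
    {"complaint": "chest pain", "age_band": "40-49", "ecg_normal": True,  "bad_outcome": False},
    {"complaint": "chest pain", "age_band": "40-49", "ecg_normal": True,  "bad_outcome": False},
    {"complaint": "chest pain", "age_band": "40-49", "ecg_normal": True,  "bad_outcome": True},
    {"complaint": "chest pain", "age_band": "40-49", "ecg_normal": False, "bad_outcome": True},
]

def broad_frame_rate(database, **features):
    """Aggregate outcome rate among prior encounters matching the given features."""
    matches = [r for r in database
               if all(r.get(k) == v for k, v in features.items())]
    if not matches:
        return None  # no similar presentations: fall back to a guideline base rate
    return sum(r["bad_outcome"] for r in matches) / len(matches)

# The current patient is framed within the three similar prior presentations.
rate = broad_frame_rate(past_encounters, complaint="chest pain",
                        age_band="40-49", ecg_normal=True)
```

The encounter keeps its individuality (its own feature values) while the prediction is drawn from the aggregate, which is exactly what distinguishes wide framing from judging each case anew.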
Cognitive psychologist Steven Pinker noted, “when people try to assess risks or predict the future, their heads are turned by stereotypes, memorable events, vivid scenarios, and moralistic narratives.” These cognitive features were evolutionarily advantageous but can be disadvantageous in the complex realm of medical decision making. As emergency physicians, we routinely face difficult decisions – decisions that lead to sleepless mornings (after a night shift). Unfortunately, we are forced to make many of these decisions without adequate cognitive support, which often leads to a ‘fly by the seat of the pants’ approach to prediction. The emotional and cognitive burden of these decisions is oftentimes overwhelming, even for the best-trained and most insightful providers. Frictionless, workflow-integrated computational systems that identify more precise sample spaces would be a welcome reprieve for all of us who make predictions in the uncertain and complex world of patient care.