In my last essay, I wrote about the bold (if foolish) springbok who left the safety of the herd and made his way to the watering hole on a sweltering Namibian day. Although he seemed alert and on the lookout for hypothetical lions as he episodically and intently scanned the periphery, he was unaware of the two real lionesses sitting underneath the bush. Those hypothetical lions were known-unknowns, and the decision to make the walk to the empty watering hole was potentially a (mis)quantified risk. Could the springbok have quantified the risk better? For example, should an empty watering hole on a hot day serve as evidence of potential danger? Should that emptiness be even more salient if, on previous visits, the hole was teeming with animals? Analogously, every patient encounter in the emergency department could be considered a walk to the watering hole. Theoretical bad outcomes loom perilously, and actionable, salient information can be missed in an often emotionally charged and fast-paced environment where seemingly minor decisions can have outsized consequences.
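The springbok's question about evidence can be made concrete with Bayes' rule: how much should observing an empty watering hole raise one's belief that a predator is near? The sketch below is purely illustrative; every probability in it is invented for the sake of the example, not drawn from any data.

```python
# A hedged sketch of Bayesian updating for the springbok's dilemma.
# All numbers are assumptions chosen only to illustrate the mechanics.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Update belief in a hypothesis H after observing one piece of evidence E,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothesis H: a predator is lurking at the watering hole.
prior = 0.05                 # assumed baseline chance on any given day
p_empty_given_lion = 0.9     # assumed: predators usually scatter the other animals
p_empty_given_no_lion = 0.2  # assumed: the hole is rarely empty on a hot day otherwise

p_lion = posterior(prior, p_empty_given_lion, p_empty_given_no_lion)
print(round(p_lion, 3))  # → 0.191
```

Under these made-up numbers, the emptiness alone multiplies the estimated risk roughly fourfold, from 5% to about 19%, which is the sense in which an empty watering hole "should serve as evidence" even before any lion is seen.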
Like springboks (and other animals), humans use experience, memory, pattern recognition, and feedback to learn (next essay), explain, and make predictions about their worlds. However, unlike springboks (or any other animal), our models can be quantified and augmented through the methodology of science. We don't have to rely solely on our personal experiences but can fortify our cognitive abilities by accumulating and sharing a cultural repertoire of mental tools, skills, concepts, and categories. We can test our models prospectively in experimental settings and generalize the results to the real world. Theoretically, this yields better explanations and more prescient predictions of the natural world. In medicine, biomedical research generates an evidence base (future essay) through retrospective and prospective data generation and analysis, with the end goal of augmenting diagnosis and prognosis. In theory, evidence generated through the methodologies of science should inform decision making and facilitate risk management in the emergency department. In clinical practice, however, this evidence is incorporated hesitantly at best, and more often haphazardly or negligibly.
The reasons for this are manifold; one framework describes a stepwise leakage of incorporation, starting from awareness and acceptance and proceeding to agreement and adherence. An intermediary step I would like to home in on is the applicability of the evidence in clinical scenarios. Evidence gathered in clinical trials at academic or tertiary care centers, from carefully selected cohorts of patients, may not generalize to the more complex patient population treated in community hospitals, or may not apply to the particular patient being seen. Physicians have to balance risks and benefits for individual patients, not groups. Additionally, emergency physicians often do not have time to ruminate on decisions and consider all possible ramifications, as outcomes are often time-sensitive. We want to know the risks and benefits for the patient we are treating. In the current age of population medicine, the particular patient rarely merits attention: the particular patient is equated to a group, and predictions come to naught when applied to an individual within that group. Therefore, when I see a 19-year-old with a headache and vomiting and am trying to differentiate a non-emergent migraine from a life-threatening diagnosis such as meningitis or subarachnoid hemorrhage, I am in many ways “reduced” to the toolkit of a springbok: using experience, memory, heuristics, and pattern recognition in the context of clinical cues to navigate the known-unknowns of an empty watering hole on a hot sunny day.
The 20th-century physicist Niels Bohr is said to have quipped, “Prediction is very difficult, especially if it’s about the future.” Patient care takes place in an increasingly complex health care system. In healthcare, risk abounds and surrounds not only our models but also our observations and contexts of use. While there is no certainty and no zero-risk option, it is crucial to understand what the risk is, its time frame and magnitude, and whether it applies to the patient being seen. Many new physicians desperately cling to clinical practice guidelines to anchor their predictions, as they lack the experience and the observations to contextualize the individual patient within an appropriate reference class. Invariably, when they lean too heavily on these guidelines, predictions go astray. Ideally, physicians would have diagnostic instruments that are both predictive and simple enough to use in everyday decision making. In the interim, the experienced and attentive springbok will survive, whereas the inexperienced and inattentive one will not sense the predatory lionesses underneath the bush.