If the “burnt out” attrition of emergency physicians, the shortage of emergency nurses, the unfilled emergency medicine residency positions, the prevalence of errors (here and here), the persistence of misdiagnosis, or the news headlines (here, here, and here) are relevant indicators, then the emergency department (ED) could be considered a failed – or at least a failing – niche. Failing its doctors, its nurses, and its patients. There are a number of factors behind this decline, many of which reside outside the sphere and immediate control of the emergency department and instead within the larger healthcare system and society generally. For example, the conceptual design of the ED as a “common” rather than a public good is leading to a tragedy of the commons. Unanticipated developments in society and healthcare have turned a system designed as a safety net for care into the default location for primary and psychiatric care for large segments of the population, causing it to crumble under the strain of volume pressures. The combination of a consumerist culture of immediate gratification with expectations that far surpass what modern medicine can offer has created perpetually dissatisfied patients. However, a prime factor in the crisis in ED care – and the subject of this essay – is the failure of technology to keep up with these developments and adequately support end-user (ED clinician) cognition.
In my estimation, the primary and fundamental goals of the ED are to identify risk – screen – and to minimize the harm from risk – de-risk. Screening means diagnosing not only those who are ill but also those who are at risk of illness; it means recognizing not only the obviously ill but also the atypically ill – the anomaly (future essay). De-risking means minimizing harm not only from the illness but also from the treatment. Although simple in principle, these tasks become significantly more difficult in an environment that is dripping with emotion and strained by cognitive overload. In a domain with significant knowledge gaps, where “noise” is often indistinguishable from “signal,” where attention to salience is at a premium, where the downside of an action or inaction is not immediately obvious, and where decisions are often time-sensitive, errors of cognition – unless scaffolded against – should be expected. In environments such as these, technology should be designed with end-users in mind and a clear purpose at hand. It should generate affordances (next essay) and create fluency. It should be not only retrodictive but also predictive. It should help calibrate the level of action to the confidence in the signal and the potential harm of error. “Lower-level” tasks should be automated, with recognition of the trade-offs inherent in all automation. Lastly, technology solutions should be designed with our evolved cognitive tendencies, psychological preferences, and homeostatic requirements in mind.
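To make the calibration point concrete, here is a minimal sketch – in Python, with entirely hypothetical harm values, probabilities, and function names – of how a decision-support tool might weigh confidence in a signal against the asymmetric harms of acting versus not acting. It illustrates the threshold idea only; it does not describe any deployed system or validated clinical rule.

```python
# Hypothetical sketch: calibrate the decision to act against the confidence in
# the signal (estimated probability of disease) and the asymmetric harms of
# error. All numbers and names below are illustrative placeholders.

def action_threshold(harm_of_treatment: float, harm_of_missed_disease: float) -> float:
    """Probability above which acting has lower expected harm than waiting.

    Acting always incurs the treatment's harm; waiting incurs the disease's
    harm only when the disease is actually present, i.e. with probability p.
    Expected harms are equal when p = harm_of_treatment / harm_of_missed_disease.
    """
    return harm_of_treatment / harm_of_missed_disease

def recommend_action(p_disease: float,
                     harm_of_treatment: float,
                     harm_of_missed_disease: float) -> str:
    """Recommend whichever action carries the lower expected harm."""
    threshold = action_threshold(harm_of_treatment, harm_of_missed_disease)
    return "act (test/treat)" if p_disease >= threshold else "watchful waiting"

if __name__ == "__main__":
    # A low-probability but high-consequence condition still tips toward action:
    # threshold = 1 / 50 = 0.02, so an estimated 5% probability clears it.
    print(recommend_action(p_disease=0.05,
                           harm_of_treatment=1.0,
                           harm_of_missed_disease=50.0))
```

The point of the sketch is simply that the bar for action should fall as the harm of missing the disease rises – which is what it means for a tool to calibrate action to both signal confidence and the potential harm of error.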
However, in reality the key cognitive tools at the disposal of emergency physicians – as currently designed and deployed – are mostly unsuited to the tasks of screening or de-risking. Tools such as the physical exam, the electronic health record, clinical protocols, and radiology studies lack specificity, are not designed for anomaly detection, are riddled with the tyranny of false positives, and are leading to an epidemic of overtreatment (next essay). Rather than decreasing cognitive load, they often impose unnecessary and unfair cognitive demands on their end users. They are essentially neither fit for purpose nor fit for mind. In response – and as a survival adaptation – emergency physicians rely excessively on heuristics and intuition to make decisions, take actions for the sole purpose of satisfying an imposed metric, succumb to decision fatigue, and end up emotionally spent and generally burnt out.