
By Paul Misasi, MS, NREMT-P, for Fingerprints

How to reinforce the foundation of an EMS safety culture

In 1989, British Airways 747 Captain Glen Stewart became the first commercial pilot to face criminal penalties for jeopardizing his passengers' lives.1,2 Faced with bad weather, problems with the autopilot, a stronger than anticipated headwind, two of three crew members suffering from an acute gastrointestinal illness, a last-minute runway change, critical fuel status and a first officer assisting with a complicated approach that he had not been endorsed to fly (for which he was granted an in-flight dispensation), Capt. Stewart was unable to land successfully on the first attempt and narrowly missed a hotel that was obscured by fog. The crew did, however, manage to land the aircraft safely on the second attempt. Capt. Stewart reported the mishap and was subsequently brought up on criminal charges, demoted and fined. The airline managed to save face by vilifying Capt. Stewart, claiming he acted unprofessionally and that his flying skills had been deteriorating for some time. Three years after the incident, he committed suicide.1,2 Responding to the airline's argument that Capt. Stewart endangered the lives of his passengers, a fellow pilot jabbed at the airline, stating, "We do that every time we fly!"

Just like airline pilots, EMS professionals work in complex, high-risk, safety-critical socio-technical systems.3-6 If we come to understand the nature of this complexity, we can view our profession through a new lens and craft better improvements in patient and provider safety. Because the price of failure is paid with the morbidity and mortality of our patients and coworkers, it is imperative that we dig deeper, beyond human attributions, beyond the "bad apple theory," to understand failure and guard against it. The purpose of this article is to help reinforce the foundation of a safety culture with essential concepts and tenets.

63 | NOVEMBER 2013
Juggling at the Sharp End

An angry medical director wrote to the general manager of an ambulance service following a response to a code blue in his jurisdiction. The medical director--who also responded to the call--was upset because the crew on the call had to go back out to the ambulance on three separate occasions to retrieve equipment he believed should have been in the monitor case to begin with. The medical director made it clear to the crew that he was not happy that they were unprepared for the call. He had been very clear about his demand that providers apply capnographic monitoring within one minute of intubating any patient. This was a standard of quality care in his system to which he wanted tight compliance.

Investigation into this incident, using a systems approach rather than the Bad Apple theory, revealed the crew fell victim to latent error and a perfect example of conflicting system priorities: the need to provide high-quality care and the operational reality of their system. Under the medical director's order, the ambulance service had implemented the use of non-invasive ETCO2 monitoring within the preceding week. The filter-line/oxygen cannula devices were to be stocked in the monitor, in addition to the endotracheal filter-line sampling device. On the morning in question, the crew was supposed to go available at 0900 hours, and as most crews do, they arrived early to begin checking their rig. As fate would have it, that particular morning was quite busy,* and of the 13 ambulances deployed in the system at 0845 hours, only two were left. The operations supervisor on duty called the station to see if the crew could go available early because there was a code blue near their location, and the next closest unit to respond was at least 20 minutes away.
The crew, who had not had an opportunity to thoroughly check their equipment and supplies, advised dispatch that they could be available for the call and responded--15 minutes before their shift was to begin. After the call, when the medical director asked why they did not have the proper equipment, they explained the circumstances of being called out before they were done checking their truck, but that they had "eyeballed" the stuff in the monitor: they saw the orange connector of the filter-line and thought they had what they needed when, in fact, it was the non-invasive sampling line and not the endotracheal sampling line.

In his e-mail to the general manager, the medical director stated that there should be redundancy of supplies for critical equipment. This was, in fact, already the ambulance service's policy--there should have been two endotracheal tube filter-lines in the monitor bag. Now it would be easy to hunt down the bad apples who failed to resupply their equipment properly before going home the day before, and make an example of what happens to medics who do not restock their rigs. But this would only have revealed that the medics who used the rig the day before made the exact same mistake as the medics on the call in question: they saw the orange connector tip of the filter-line that was in the bag (the non-invasive sampling line) and assumed it was the endotracheal tube sampling line.

Using a systems approach to understanding errors, it is clear that the system owns this mistake--it was only a symptom of trouble deeper in the system. Where would your organization have stopped looking for answers? Would it have punished the crew who responded without the proper equipment? What would punishment teach this crew and others about showing up to work early? About answering the telephone when the supervisor calls before their start of shift? How would crews feel about running calls with the medical director in the future?
Would your organization have hanged the unit's previous crew to make an example of them for not restocking the ambulance? What about the problem of high demand during the 0830-0900 hour--is this community relegated to a lower standard of care before 9 o'clock? Would the people who establish your service's budget willingly acknowledge their hand in this mistake?

*It should also be noted that the 0830-0900 time period had been identified (for well over a year) as a time of higher demand for which there were insufficient resources in the system, resulting in frequent "late calls." The ambulance service management team had not yet addressed this issue because it could not afford at this time to add any more ambulances to the system, and moving a 12-hour ambulance start time from 0900 to 0830 hours was met with significant opposition from the already resource-strained night shift.
A Safety-Critical Endeavor

A safety-critical business is one that operates in the face of the following challenges:1,3-9

· Time compression: It is important to act swiftly and effectively, and failure to act can be harmful.

· Complex dynamics: Situations change for reasons that are not always clear to those making decisions, and frequently the information available for basing those decisions is incomplete, inaccurate and untimely. In some situations, the necessary information is only made available by way of an intervention.

· Cognitive challenges: A person may be tasked with managing a situation or conditions they have little experience with or exposure to, and existing guidelines (i.e., protocols, policies and procedures) are not--nor can they be--exhaustive, covering every eventuality of field work.

· Dangerous variables: A large number of variables in a situation can be difficult to predict and control, and may simply be uncontrollable and volatile.

· Unruly technology: Specialized equipment can obscure processes and information necessary to make decisions, occupy the limited mental resources necessary for problem solving, shift providers' focus from managing the overall situation to managing devices (a concept known as dynamic fault management), promote the construction of inaccurate mental models and complicate the problem-solving process.

Modern healthcare has been described as "the most complex activity ever undertaken by human beings,"3 with many parallels drawn to the nature of military combat. Viewed in this light, it's a wonder we ever get anything right! Yet as consumers, managers and practitioners, we still expect perfection: perfect knowledge, perfect execution, perfect performance, no excuses. Unfortunately, there is a tremendous price to be paid for our expectation of perfection.13,14

Practical Drift and Goal Conflict

Confounding the issues listed above are conflicts of personal and organizational values.
Operators at the sharp end are told to "make safety your highest priority" and "follow the rules," then in the other ear are told "don't cost us money," "get it done faster," "keep people happy," "do it with less," "we have to (or are being forced to) cut resources," and "make do with what you have." Managers demand employees be safe and provide quality care to their patients, but often measure their performance by expediency. Tacitly, providers may feel an unspoken pressure to trade accuracy and safety for speed and efficiency, a concept known as "practical drift."9,13,14,18 It is truly a testament to the caliber of
prehospital providers whose role it is to resolve ambiguity and carefully achieve the organizational mission in spite of these conflicting priorities. But if we (managers and providers) believe we need only rely on providers with the capacity to overcome system limitations (i.e., the "good apples"), our systems will be brittle.

Because of increasing demand and decreasing resources, managers are faced with a dilemma: maintain the quality (and safety) of services, effectively decreasing their availability, or operationalize whatever margin of safety has been achieved through improved processes, procedures, efficiencies, equipment and technology.5,9 Consider stretcher weight limits as an example: In order to accommodate patients who exceed our stretcher's 500 lb. weight limit, we design stretchers to hold up to 700 lbs.--except now we try to use the new stretcher for an 800 lb. patient, whom we never would have tried to place on the old stretcher. Every decremental step away from safety makes strategic sense--it solves a problem. Unfortunately, the distance we "drift" is only seen in the light of a rare and apparently isolated catastrophic failure (e.g., the 800 lb. patient we dropped off the cot is someone we should not have been trying to move in the first place).

Safety Is Emergent, It Is Not Inherent

Safety is a "dynamic non-event" created by people; it is not an inherent property of the system. It emerges as a product of decisions made by front-line operators who strike a balance between competing goals, achieving the mission and holding together a "pressurized patchwork" of systems, processes and technologies.1,4,6,7,9,11,14,15,19,20 Murphy's Law is commonly known as the phrase "whatever can go wrong will," but in a safety-critical organization like EMS, this law is arguably incorrect, or at least incomplete.7 In EMS, what can go wrong usually goes right because providers make it go right, where the system alone (as designed) would have come up short.
Front-line providers invest themselves in understanding the margins of their abilities, the capabilities of their equipment, the nature of the environment, and their partners and other responders, so they know how far they can allow a situation to safely stretch limits, and intervene when necessary. The occasional human contribution to failure occurs because complex systems* need an overwhelming human contribution for their safety.7

*The term "complex" and the phrase "complex system" do not mean complicated. The concept of complex systems is one that "consists of numerous components or agents that are interrelated in all kinds of ways. They keep changing in interaction with their environment, and their boundaries are fuzzy. It can be hard to find out (or it is ultimately arbitrary) where the system ends and the environment begins."4 Complex systems arise because of and are held together by local interactions only, and are highly dependent on initial conditions. The components of the system do not bear all the traits of the system, but because of the interconnectedness of the componentry and their relationships, "the action of any single agent controls little but influences almost everything."4,5

The Failure of Manhunts

When something goes wrong, though, it is tempting to simply dust for fingerprints and look for the bad apple. This approach to failure generally consists of a manhunt, followed by organizational discipline, license forfeiture and financial penalties; in some cases providers are branded criminally negligent and have even lost their freedom.1,10 We quickly identify an individual's failure, conclude their guilt and convince ourselves it would not have happened if it wasn't for them. We defame them for their complacency, their ignorance, incompetence and, perhaps, for being just plain lazy. Those who espouse this approach, known as the Old View or the Bad Apple theory,1,7 are essentially arguing that their systems are perfectly safe and failure is an aberrancy--that errors, mistakes and mishaps really have nothing in common except for those lazy, self-centered people at the sharp end.
Thus there is nothing more fundamental, complex, time-intensive or expensive that we must invest resources in changing.1,7 This approach is popular, fast and painless (for those handing out the reprimands), and there's an added bonus: It saves system administrators, the public and politicians from having to consider that they may share in the responsibility for the outcome. This approach doubles as a nice personal defense mechanism for providers as well, because we can use it to shield ourselves from realizing that we, too, are vulnerable to error21--e.g., Fred gave an overdose of fentanyl. Fred is an idiot and a lousy paramedic. I'm not a lousy paramedic, therefore that will never happen to me. I'm safe from making that mistake. Cause is something we construct, not something we find.7 It is a social judgment fraught with biases, and where we choose to lay blame ultimately says more about us than it does about the nature of the incident in question.

Processes Produce Results

Processes do not understand goals; they can only produce what they were designed to produce, regardless of what the goal is.20 If a process is not meeting a goal, all the inputs (machines, methods, materials, data, environment, etc.) must be considered, not just the people. People are but a small fraction of the things that influence processes and their outcomes. Variability, including errors, is built into the system, yet we continue to treat errors as if there were some special cause. Research has demonstrated that a significant majority of managers and peers will stop their investigation of an incident once the person "at the wheel" has been identified and punished.14 Perhaps the best and simplest way to determine whether problems or errors are built into the system is to ask yourself the following question: "If we fired and replaced everyone today, could this problem happen again tomorrow?" If your answer is yes, then what you have is not a bad apple problem, and it is time to start asking why.
Improvement Equals Accountability

Another significant incentive for sticking with the Bad Apple theory is that it gets organizations out from under the microscope (or "retrospectoscope") of legal and public scrutiny. When we are
faced with a lynch mob biased by hindsight, we can identify those people who left their fingerprints on the wreckage (literally or figuratively), punish them, dust off our hands and tell the public it will never happen again because someone was "held accountable." Yet few would agree that firing someone for an error guarantees it will never happen again. Punishment does not teach desired behavior, nor will it reveal the frequency with which others engage in the same practice. On the contrary, it simply guarantees that we will never know the real rate at which errors occur, thereby precluding any organizational learning and true system improvement. No organization has ever reprimanded its way to success.22,23 Even B.F. Skinner--the psychologist known for describing operant conditioning--said in retrospect that while using punishment as a tool for behavior modification worked well in rats, the technique was never successful at motivating voluntary behavior improvement in humans.24 What this approach does create, however, is manipulation, reprisal and resentment in people who, when confronted with the evidence of their failures, will avoid, deny and fabricate to escape the pain, shame, judgment, ridicule, blame and embarrassment.24 This is not true accountability; this is backward-looking accountability.

A key precept of a safety culture is understanding that accountability is about trust10--the trust that the public, our stakeholders and our customers place in us to ensure we provide them with the safest possible services. Forward-looking accountability, on the other hand, means an organization and its employees will do everything necessary to actually make improvements, even if it means the organization itself must assume some, if not all, of the responsibility for a bad outcome.10,20

Understanding Facilitates Learning

The core problem with the Bad Apple approach is actually quite succinct: We learn nothing.
We learn nothing about our systems, how to make improvements, how to increase safety and ultimately how to achieve real, forward-looking accountability. Human error must be a starting
point for investigation, not an endpoint--it begs explanation; it is not a conclusion (this includes any folk models we use to disguise "human error," such as "loss of situational awareness," "loss of crew resource management," "failure to...," "they should have...," etc.).25 The key to improving quality and safety is to figure out why the decisions people made and the actions people took made sense to them at the time. The problem lies not in the mind, but in the situation in which the mind found itself.1,7 Explanations of human behavior are only relevant in the context in which it was produced.9 We have to be willing to view the event as an insider, with the information, cues, environment, tools and focus of attention that the providers had at the time. If you ever find yourself asking, "How could they not have known?" then you are viewing the event from a biased, retrospective outsider standpoint and have to work harder at understanding why the actions and decisions of insiders made sense. This can be accomplished using Sakichi Toyoda's "Five Whys" technique: simply asking "Why?" successively of each answer produced by the previous "Why?" question.20

Systems are imperfect, equipment is faulty, humans are fallible.26 People in safety-critical jobs do not come to work to screw up, to make mistakes, deliver overdoses, intubate the esophagus or design lousy processes. What people did or were doing at the time of an incident made sense given their prioritization of competing goals, imperfect knowledge, deficient systems, ambiguous cues, focus of attention, limited tools and faulty technology. Otherwise they would have done something else--after all, their lives and their livelihoods are at stake.1,4,5,7
Engineering Resilience

Organizations that safely navigate high-risk environments--known as high reliability organizations, or HROs (e.g., aircraft carrier operations, nuclear power generation, air travel operations)--are those that are not only open to bad news, but actively look for it and create non-punitive systems for reporting it. HROs do not exhort people to follow the rules; they actively seek to understand and are sensitive to the context of errors and how things actually work "in the streets," and then develop processes that support predictably fallible human beings.7,9,16 The processes they develop are not attempts to "dumb down" the services providers deliver, but to create barriers, redundancy and recovery processes that address the ubiquitous cognitive (mental) and physical vulnerabilities of human beings, so that their customers are more than one human error away from harm. They promote cultures that value safety and adopt managerial practices that balance system and personal accountability in support of their values.

In a culture of safety, a high-reliability organization is a meta-learning organization, actively seeking to learn how it learns about safety in the system, identify latent errors and understand why it has drifted into at-risk behaviors.5,23 Learning is about identifying, acknowledging and influencing the real sources of operational vulnerability and continually expanding our capacity to understand complexity, clarify vision, improve our mental models and create the results we desire.7 Ultimately, it takes a tremendous amount of personal and organizational courage and humility to overcome the momentum of sociopolitical pressures, face the lynch mob, stand up for the virtue of true forward-looking accountability and trade indignation for understanding. Perhaps then we will realize that the set of fingerprints we found on the wreckage is really nothing more than that--a set of fingerprints.
Paul Misasi, MS, NREMT-P, is the deployment/quality improvement manager at Sedgwick County EMS in Wichita, KS. Find the references for this article online at

