
Spontaneous Circulation, now discontinued, focused on advanced ECG interpretation, cardiac pharmacology, hemodynamic assessment and resuscitation, and managing acute coronary syndrome. It was devoted to translating the best evidence-based treatments from critical care, resuscitation, and trauma for bedside use in the emergency department.

Monday, August 3, 2015


Emergency physicians are able to quickly decode the ciphers of ECGs into meaningful clinical data, at least until they are faced with a pediatric ECG. It breaks their pattern recognition, and they are forced to use the slow part of their brain (recommended reading: Daniel Kahneman’s Thinking, Fast and Slow). Even though the pediatric ECG looks vaguely similar to all of the other ECGs seen during a shift, this one turns into a mystery, and confidence drains. It doesn’t look normal compared with an adult tracing, but they are unsure whether it is normal for the child. Even the best ECG readers can resort to folding the tracing underneath and relying on the computer read. But we might make the ECG a little less mysterious if we can decipher the differences in the pediatric heart.


Heart Rate: The heart rate increases during the first month of life because of increasing autonomic drive, and then it gradually decreases over years to a normal adult rate driven by changes in intrinsic sinus node activity. Despite these changes, the cardiac output does not change much. The heart rate is slowing and the left ventricle is enlarging, but the product (stroke volume x heart rate) remains relatively constant.


P Waves: Atrial depolarization, indicated by the P wave, starts at the SA node and proceeds inferiorly and to the left toward the AV node, the same as in adults. This means the P wave should be positive in leads I and aVF. The infant heart is usually aligned more vertically in the chest, so the aVF component may be more prominent. As the child ages, the right atrium enlarges and the P wave widens, reflecting the additional time required to depolarize the larger atrium.


PR Interval: During the child’s first month, an autonomic surge stimulates the AV node, speeding conduction. Once this tapers off, the PR interval settles at about 90 ms, considerably shorter than the adult normal of 120 ms.


QRS Complex: The sequence of ventricular depolarization is the same in children and adults, and the amplitude and morphology of the QRS complex depend on the relative mass of the right and left ventricles, the cardiac axis, the position of the heart in the thorax, and overlying soft tissue. At birth the right ventricle is larger than the left, but they become equal by about 1 month of age. The adult ratio is reached by about 6 months and remains the same for the rest of cardiac growth. The early prominence of the right ventricle leads to prominent R waves in the right precordium and deep S waves in the left precordium. The overall cardiac mass is smaller, though, so the QRS complex is generally narrower (53 ms at term). By adolescence, it has lengthened to 70 ms, most of which can be attributed to larger ventricular mass rather than conduction changes.


Q waves are particularly meaningful in pediatric ECGs. They are normal in the inferior and left lateral precordial leads. Even though they may have a large amplitude, the duration should be less than 20 ms. Deep (>3 mm) and wide (>30 ms) Q waves in leads I and aVL, especially without other normal Q waves, can be a sign of anomalous origin of the left coronary artery. Q waves in the right precordium are always pathological and indicate right ventricular hypertrophy. Deep Q waves in the left lateral precordial leads are seen in left ventricular hypertrophy of various etiologies and should be investigated.


ST Segment: Because the P wave often overlaps with the previous T wave, identification of the ST segment isoelectric line can be difficult. This segment is preferred to the PR interval for establishing the tracing baseline, especially given the short PR interval. ST-segment elevation >1 mm is rare in normal pediatric patients. If seen in the precordial leads, it should occur either where the T wave orientation is in transition or in association with early repolarization, which should be accompanied by an ST angle greater than 20 degrees. Pathologic ischemic changes to the ST segment are more likely to be secondary to congenital or acquired disease, but the age for considering atherosclerotic disease is getting younger because of higher rates of obesity.


T Waves: The right precordial T waves (V1) start out upright, as in adults, but invert at around seven days and remain inverted for six or seven years. In this age range, an upright T wave in these leads can indicate right ventricular hypertrophy, which should prompt further evaluation. If the inversion persists into adulthood, it is labelled the persistent juvenile T wave pattern.


QT Interval: As in adults, exact measurement of the QT interval is difficult. In early ECG machines, each channel was recorded separately even if the resultant printout was on a single sheet of paper in standard 12-lead layout. Lead II was the preferred lead to measure the QT interval in these situations. Modern machines measure all leads simultaneously, and the QT interval can be measured from the earliest onset of the QRS complex to the end of the T wave where it rejoins the baseline. The longest QT interval should be used when there is a discrepancy between the leads. Similar to adults, teenage girls have longer corrected QT intervals, but this has not been confirmed in younger children.


Fast heart rates in young children may cause the P wave to be superimposed on the T wave. Identifying the end of the QT interval in this situation may require extrapolation from the T wave using the PR segment as the isoelectric baseline.


Children also typically have pronounced sinus arrhythmia, which produces RR variability. Because the RR interval is part of the QT heart rate correction, the corrected QT interval changes beat to beat. There is no real agreement on whether the shortest RR interval or an averaged value should be used to establish the QT interval, so just be mindful that the cut-points for normal values will depend on the method used.


The familiar Bazett rate-correction formula for the QT interval (which divides by the square root of the RR interval) overcorrects in children. The Fridericia formula or nomograms are probably more accurate, though less familiar to emergency physicians. Fortunately, the shortcut of calling the QT interval prolonged if it exceeds half of the RR interval has been validated in children. Be aware that automated QT calculations can be inaccurate and should be verified by manual measurement.


A normal corrected QT interval is less than 440 ms, 440-460 ms is borderline, and more than 460 ms is abnormal. Evaluation one minute into recovery after exercise may increase the discriminant ability of the ECG.
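The two correction formulas and the cut-points above can be sketched in a few lines of Python. This is illustrative only; the function names and example values are hypothetical, not drawn from any clinical library:

```python
import math

def qtc_bazett(qt_ms, rr_s):
    # Bazett: QTc = QT / sqrt(RR). Tends to overcorrect at the
    # faster heart rates typical of children.
    return qt_ms / math.sqrt(rr_s)

def qtc_fridericia(qt_ms, rr_s):
    # Fridericia: QTc = QT / RR^(1/3). Probably more accurate in children.
    return qt_ms / rr_s ** (1 / 3)

def classify_qtc(qtc_ms):
    # Cut-points from the text: <440 ms normal, 440-460 ms borderline,
    # >460 ms abnormal.
    if qtc_ms < 440:
        return "normal"
    if qtc_ms <= 460:
        return "borderline"
    return "abnormal"

def half_rr_screen(qt_ms, rr_s):
    # Bedside shortcut validated in children: a QT interval longer than
    # half the preceding RR interval suggests prolongation.
    return qt_ms > rr_s * 1000 / 2

# Hypothetical tracing: QT 360 ms at 100 bpm (RR = 0.6 s).
qt, rr = 360, 0.6
print(round(qtc_bazett(qt, rr)))      # 465 -> "abnormal" by Bazett
print(round(qtc_fridericia(qt, rr)))  # 427 -> "normal" by Fridericia
```

Note how the same hypothetical tracing is flagged as abnormal by Bazett but not by Fridericia, which is exactly the overcorrection problem at fast pediatric heart rates.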

Indications: Outside of the emergency department, many pediatric ECGs are obtained to screen for congenital heart disease. Without the presence of a murmur, screening asymptomatic children has very low sensitivity and specificity. Additionally, studies routinely demonstrate that these screening ECGs don’t change management.


More clinically relevant to EPs is the evaluation of suspected arrhythmias, especially in identifying ventricular pre-excitation and prolonged QT intervals. An ECG has demonstrated value for evaluating presentations of syncope and seizure.


The short PR interval may make it difficult to recognize the ventricular pre-excitation of the QRS complex (delta waves). If the PR interval is less than 100 ms, the absence of Q waves in the left lateral precordial leads and left axis deviation may be useful secondary indicators.
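That rule of thumb can be restated compactly. This is a sketch only; the function and parameter names are hypothetical, and the inputs would come from the actual tracing:

```python
def suggests_preexcitation(pr_ms, q_waves_left_lateral, left_axis_deviation):
    # Secondary screen from the text: a PR interval under 100 ms plus
    # absent left lateral Q waves and left axis deviation raises
    # suspicion for ventricular pre-excitation (delta waves).
    return pr_ms < 100 and not q_waves_left_lateral and left_axis_deviation

# A short PR with missing left lateral Q waves and a leftward axis is suspicious.
print(suggests_preexcitation(90, False, True))   # True
print(suggests_preexcitation(120, True, False))  # False
```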


ECGs are also useful to monitor medication-related changes, such as treatment with pro-kinetic agents, antidepressants, atypical antipsychotics, stimulants, and antiarrhythmic medications. This is most easily accomplished by measuring the QT interval at similar baseline heart rates, say, 60 bpm.

By far the most common indication for a pediatric ECG in the ED is the evaluation of chest pain. These tracings are often unrevealing, but may uncover ventricular ectopy, QT interval prolongation, or ventricular hypertrophy.


Wednesday, July 1, 2015

A man in his 30s comes to your emergency department at 3 a.m. profoundly diaphoretic and reporting severe 10/10 chest pain. He has been at a party all night, and the chest pain started about 30 minutes earlier. He had a previous heart attack, but cannot remember many of the details. He reports no medication or drug use. No doubt this is a concerning presentation, and you immediately order an ECG, blood work, and an aspirin.


While this is in process, you review the electronic medical record, which reveals that the previous “heart attack” was actually an observation stay for a chest pain rule-out. The ECG showed nonspecific ST/T-wave changes, and serial troponin measurements were negative. He had undergone a stress echocardiogram, a good-quality study that demonstrated no inducible ischemia or reproducible symptoms. The patient had a urine drug screen during that previous admission, however, that was positive for cocaine.



With that information, cocaine-associated chest pain is high on your differential, but you have many questions and are not sure how to proceed.


How useful is a urine drug screen for determining if the patient used cocaine?

The urine drug test for cocaine is highly specific (95%) and sensitive (99%). Cocaine itself is eliminated from urine in about 12 hours, but this can be delayed up to 72 hours in chronic or heavy users. The standard ELISA assay, however, does not test for cocaine itself but for the metabolite benzoylecgonine, which is detectable in urine as little as two hours after use and remains detectable for three to four days. Amoxicillin was previously thought to cause false-positives, but recent studies suggest this is unlikely. Despite the similar endings, other -caine drugs such as lidocaine, procaine, and articaine lack the central ecgonine structure and therefore do not register as false-positives.


How certain should you be that the positive urine cocaine test is related to the patient’s current chest pain?

The cardiac effects of cocaine are seen rapidly after use. Most patients who develop myocardial infarction do so within an hour, and usually by three hours, after cocaine use. The cocaine metabolites, however, may cause delayed or waxing-and-waning coronary vasoconstriction, so ischemic chest pain can occur up to four days afterward. If a patient can give you an accurate history of cocaine use that falls within this window, you should have a very high suspicion that the chest pain is cocaine-related.


What is the cocaine doing to the patient?

Cocaine inhibits the reuptake of norepinephrine and dopamine, which causes vasoconstriction and increased cardiac contractility. Cocaine use increases the patient's heart rate, systolic and diastolic blood pressure, and mean arterial pressure.


Heart rate and blood pressure effects are dose-dependent, but cocaine does exhibit a significant tachyphylaxis pattern. The level at which this is reached depends on the individual and his history of cocaine use.


The effects of cocaine also depend on the route of exposure. Snorting or smoking cocaine has fewer blood pressure effects because of compensatory baroreceptor reflexes. If it is injected, though, the baroreceptors are bypassed, and the full hemodynamic effects of norepinephrine and dopamine take hold.


Greater effects are seen in areas of pre-existing atherosclerotic disease, but cocaine-induced coronary vasospasm does not usually produce complete coronary occlusion. Unfortunately, long-term cocaine use accelerates the development of coronary atherosclerosis and coronary aneurysms.


Cocaine can contribute directly to thrombosis by activating platelets and promoting endothelial dysfunction, which can secondarily contribute to the platelet activation. Even without myocardial ischemia, cocaine and its metabolites are directly toxic to myocardial cells. This leads to infiltration of inflammatory cells, myocardial necrosis with elevation of troponins, and fibrosis, which can result in cardiac dysfunction.


Can cocaine cause chest pain from areas outside the heart?

Outside the heart, increased shear force from cocaine-induced tachycardia and hypertension dramatically increases the risk of aortic dissection. Cocaine can also cause acute pulmonary hypertension, leading to chest pain and shortness of breath. Inhalation of cocaine can also produce "crack lung,” which results in bilateral pulmonary infiltrates, hypoxia, hemoptysis, and respiratory failure.


How worried should I be that chest pain is related to myocardial infarction?

The increased heart rate, contractility, and blood pressure associated with cocaine use increase myocardial wall stress and oxygen demand. At the same time, coronary artery vasospasm, with or without thrombosis, reduces coronary blood flow and oxygen delivery. Myocardial ischemia occurs when the oxygen supply is not sufficient to meet demand. If the ischemia is prolonged, necrosis develops, which leads to release of troponins. Several reliable studies have documented the incidence of cocaine-related acute myocardial infarction at one to six percent.


What evaluation is needed for cocaine chest pain patients?

A detailed history, including drug use, is one of the most important parts of chest pain evaluation. Just as with acute coronary syndrome, there is little predictive value in the reported severity, character, quality, or location of the chest pain or in the presence or absence of diaphoresis, shortness of breath, or nausea and vomiting.


The ECG has a poor sensitivity and low predictive value in distinguishing between benign cocaine chest pain and myocardial infarction. This is complicated by the high frequency of benign early repolarization found in patients evaluated for cocaine chest pain because they are typically younger.


Troponins are highly specific for myocardial injury with necrosis, but they do not provide insight into the exact mechanism. The myocardial injury may be secondary to direct myocardial toxicity, vasoconstriction limiting oxygen supply, increased myocardial demand, thrombosis, or plaque rupture. Unless the ECG demonstrates ST-elevation, distinguishing among a type 1 myocardial infarction (NSTEMI), type 2 myocardial infarction, and nonischemic myocardial injury can be very difficult, and with cocaine it is likely multifactorial.


Who should be considered low risk? Is observation appropriate?

High-risk patients can be identified by the presence of ST-segment elevation or depression >1 mm, elevated cardiac troponin, a high Killip score, recurrent chest pain, or hemodynamic instability. Of these patients, 20 to 25 percent will ultimately be diagnosed with a myocardial infarction. Low-risk patients can often be placed in an observation unit or monitored in the emergency department for six to 12 hours, including repeat measurements of cardiac troponin. Stress testing should be arranged for follow-up.
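The stratification described above reduces to a simple any-feature-present check. The thresholds come from the text; the function and parameter names are hypothetical, for illustration only:

```python
def cocaine_chest_pain_risk(st_deviation_mm, troponin_elevated,
                            high_killip_class, recurrent_pain,
                            hemodynamically_unstable):
    # High risk if any feature from the text is present: ST elevation or
    # depression >1 mm, elevated troponin, high Killip score, recurrent
    # chest pain, or hemodynamic instability.
    high_risk = (st_deviation_mm > 1 or troponin_elevated
                 or high_killip_class or recurrent_pain
                 or hemodynamically_unstable)
    return "high" if high_risk else "low"

# 2 mm of ST depression alone makes the patient high risk; with none of
# the features, observation with serial troponins is reasonable.
print(cocaine_chest_pain_risk(2, False, False, False, False))  # high
print(cocaine_chest_pain_risk(0, False, False, False, False))  # low
```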


How do you treat cocaine-associated myocardial infarction?

Unlike in acute coronary syndrome unrelated to cocaine, benzodiazepines are the first-line therapy for cocaine chest pain. Benzodiazepines work by blunting the sympathomimetic effects of cocaine (tachycardia, hypertension), relieving chest pain, and improving cardiac hemodynamics. They have the secondary benefit of relieving the agitation and psychiatric stimulation caused by cocaine. Benzodiazepines alone are often effective in controlling the hypertension and tachycardia, but use nitroglycerin, nitroprusside, or phentolamine if additional blood pressure control is needed. Beta-blockers should be avoided because of the potential for worsening coronary vasoconstriction.


In high-risk patients or those who have a ST-elevation myocardial infarction, antiplatelet and anticoagulation medications should be given, and the patient should undergo cardiac angiography and stenting if a culprit lesion is identified.


The early development of ventricular dysrhythmias can usually be attributed to cocaine's sodium channel blocking effect on the myocardium, and these are responsive to sodium bicarbonate. If the ventricular arrhythmias develop several hours after the last use of cocaine, however, they are much more suspicious for ischemia-related dysrhythmias. In this situation, standard treatment includes amiodarone and lidocaine as the preferred agents.


Our Patient’s Outcome

Our patient’s ECG showed no injury. His urine drug screen was positive for cocaine, and when presented with this information, he admitted using cocaine at the party. He was treated with several doses of intravenous midazolam, which dramatically improved his tachycardia and reduced his blood pressure. He was placed on observation, and serial troponin measurements were negative. Given his previous stress echocardiogram, no further provocative testing was pursued. He was counseled on abstaining from further cocaine use.


Important Points to Know about Cocaine and Chest Pain

- Cocaine blocks the reuptake of norepinephrine and dopamine, leading to increased catecholamines with powerful sympathomimetic effects.

- Cocaine has multiple hemodynamic and cardiac effects including tachycardia, hypertension, coronary artery vasospasm, increased oxygen demand, increased platelet aggregation and thrombosis, aortic dissection, and direct myocardial toxicity.

- The incidence of myocardial infarction in patients presenting to the emergency department with chest pain after having used cocaine is one to six percent.

- Cocaine use should be part of the history of every patient presenting with chest pain.

- Qualitative immunoassay for cocaine and its metabolites is sensitive and specific, but may remain positive for up to four days after use.

- Benzodiazepines are first-line treatment for cocaine-induced chest pain.

- Beta blockade is contraindicated for cocaine-induced myocardial infarction because of the possibility of worsening coronary vasospasm.

- Observation and repeat measurement of troponin level is appropriate for low-risk cocaine chest pain.



Monday, June 1, 2015

In 1977, while I was busy with Star Wars and action figures, Andreas Gruentzig was using his kitchen-made balloon catheter to dilate and open a highly stenotic LAD coronary artery. Fixing atherosclerotic disease of the coronary arteries had previously required open heart bypass surgery, a procedure only 10 years old at the time. He had taken coronary catheterization, which until then had only been used for diagnostics and surgical planning, and became the first to perform transluminal interventional therapy. Unfortunately, it turned out that the coronary artery would often close either immediately or over the following days to months after balloon dilation. The next 40 years would be a struggle to learn how to keep the artery open. And complaining about George Lucas. Lots of complaining.


Plain Old Balloon Angioplasty: Balloon angioplasty, formally known as percutaneous transluminal coronary angioplasty (PTCA), deforms the coronary artery to overcome an acute thrombus or a stenotic atheroma. The forceful enlargement by its nature causes dissection in the vessel intima and distention of the adventitia. The defects in the endothelium are physiologically similar to the plaque rupture seen in acute myocardial infarction, and they expose the highly thrombogenic media to circulating platelets and coagulation factors. A thrombus forms over the site, and occlusion of the artery can occur. The stretched elastic adventitia also has a tendency to recoil.


These two effects could cause abrupt closure of the vessel about five percent of the time with balloon angioplasty, which could cause myocardial infarction and possibly require emergent bypass surgery. Antiplatelet therapy, such as glycoprotein IIb/IIIa inhibitors, helped reduce the rate of thrombus formation, but could not prevent the recoil. Even if acute closure did not occur, neointimal proliferation and vascular remodeling would lead to in-stent restenosis (ISR) more than 50 percent of the time, with recurrence of the patient’s ischemic symptoms for up to six months post-procedure.


Development of Stenting: The answer to these problems seemed to be to use an intracoronary scaffold, which could seal the dissection flap and prevent elastic recoil. The first stents were used in 1986. The first balloon-expandable stent by Palmaz-Schatz became available in 1989. But the initial use of stents was problematic. They were bulky and required the use of multiple catheters, making them a technical challenge to deploy. There was a high failure rate and frequent embolization. These issues were fixed over time with better stents, catheters, and technique. Operators learned to size the stent and balloon to the vessel diameter, ensure that the deployed stent was against the coronary wall along the entire length, avoid edge dissection, and minimize residual stenosis.


The rate of abrupt closure was reduced to three percent when coronary stenting was combined with balloon angioplasty. The scaffolding also proved effective in preventing elastic recoil, and the rates of ISR fell from more than 50 percent to about 30 percent. With this dramatic success, stents were used in more than 90 percent of intervention cases by 1999.


The downside of these early stents, however, was a much higher rate of early in-stent thrombosis (see table) compared with balloon angioplasty. Exposed metal in the coronary artery lumen is extremely thrombogenic and serves as a platform for the aggregation of platelets and thrombus formation. Within two to four weeks, the endothelium of the artery grows over the exposed metal, preventing contact between the platelets and the thrombogenic metal, but until that time there is a risk of thrombus formation and acute myocardial infarction. The thrombus tends to be more extensive and technically more challenging than native artery thrombosis, and therefore the extent of the infarct and the mortality are higher. To counter the early in-stent thrombosis, high levels of anticoagulation and antiplatelet therapies were used, which led to serious bleeding complications.



Drug-Eluting Stents: The device companies first tried to improve the performance of the stents by applying special coatings to make them inert in the body. Gold (thought to be inert), diamond, and phosphorylcholine (simulating the body's cell membrane) were ineffective. Heparin and steroid (anti-inflammatory) coatings showed only the most inconsequential benefit. Even at 30 percent, though, the rate of restenosis was unacceptably high. Manufacturers developed polymer coatings that would allow the stent to function as a drug delivery tool. The drug-eluting stents (DES) introduced in 2003 used sirolimus and paclitaxel, antimitotic chemotherapeutic agents. The coating allowed controlled, long-acting local drug delivery to stop cell hyperplasia and proliferation. In repeated randomized controlled studies (RAVEL, SIRIUS, ISAR-DESIRE), the drug-eluting stents showed a significant reduction of in-stent restenosis compared with bare-metal stents (BMS). They became wildly successful, capturing nearly 90 percent of the market within two years.


Dual-Antiplatelet Therapy: Within a few years, however, it became clear that the drug-eluting stents came with an increased risk of late in-stent thrombosis. The same anti-proliferative properties that helped DES prevent restenosis also delayed the endothelialization of the stent, leaving the thrombogenic stent exposed to blood much longer than bare-metal stents. Various combinations of antiplatelet and anticoagulation therapies were tried, including aspirin, heparin, and warfarin.


Unfortunately, the bleeding rates with these treatments were exceptionally high, and often required emergency coronary artery bypass surgery. This would change with the development of the thienopyridine class of antiplatelet medications. Initially, ticlopidine (Ticlid), and later clopidogrel (Plavix) combined with aspirin (dual-antiplatelet therapy, DAPT) proved a potent inhibitor of the early in-stent thrombosis seen in BMS and late in-stent thrombosis seen in DES without excessive bleeding complications. DAPT was cemented with the positive results of the PCI-CURE trial.


It should not be overlooked that significant patient-related factors affect restenosis and thrombosis risks. Patients with diabetes, decreased left ventricular function, and renal disease have been shown to benefit the most from DES prevention of in-stent restenosis, but they are also at a higher risk of stent thrombosis. DAPT is most critical for these patients. Interruption of DAPT (from nonadherence, bleeding complications, need for surgery, or financial costs), as well as poor clopidogrel response, increases the risk of in-stent thrombosis. The length of DAPT depends on whether the PCI was performed for ACS vs. angina, the type of stent (BMS vs. DES), bleeding risk, and comorbidities.


This is not the end of our quest for permanent artery patency. Work is ongoing on the optimal length of DAPT, image-guided stent deployment, novel antiproliferative drugs, directional drug delivery, biodegradable polymers, bioresorbable scaffolds, pro-healing stents, and many more advances. We look forward to these advances, but know they will bring their own unique set of problems. Just like Episode VII.



Friday, May 1, 2015

Pacemaker and implantable cardioverter defibrillator development has revolutionized the treatment of many kinds of cardiac diseases. The technology advancements have been tremendous, and once-large external batteries are now replaced by gumstick-sized modules as sophisticated as any computer. Unfortunately, the leads that provide sensing, pacing, and defibrillation represent a vulnerable part of the system, and have short- and long-term failure modes. Failure modes can cause a spectrum of issues from minor annoyances to catastrophic failure and death. Failure usually requires replacement of the generator, leads, or both, which subjects the patient to an additional procedure with its own associated complications.


Failure modes include acute perforation, dislodgement, infection, vein thrombosis, migration, conduction failure, insulation damage, and wire externalization.


Silicone Casing Structural Failure: Insulation failure has been seen in several lead designs. The silicone casing around transvenous leads is subjected to conditions that can lead to fracture and structural failure. The proximal length undergoes high muscular stress from the pectoral muscles. It can also be crushed where it lies over the bony thoracic wall, causing lead-to-lead or lead-to-can disruption. The distal segment experiences high intracardiac forces and dynamic flexing. There are also fatigue initiation points where the shocking coil is attached to the silicone. The most common site of failure is just below the tricuspid valve. The failure rate is higher in lead designs with dual shocking coils, which has contributed to recommendations against their use. (Figure 1.)



Figure 1. Structural failure of the pacemaker/ICD lead silicone casing and externalized conductors.


The defibrillation or sensing wires can be externalized when the insulation fails. The externalized conductors remain electrically silent, and no detectable electrical abnormalities may be seen if their ETFE coating remains intact. The coating, however, is not designed to withstand the intravascular or intracardiac environment, and it often becomes damaged quickly, leading to electrical short circuits.


Asymmetric Lead Design: This problem was faced by St. Jude Medical with their Riata silicone-insulated leads, and led to a physician advisory by the Food & Drug Administration. The cross-section of a typical asymmetrical lead design (Riata family) is shown in Figure 2.


Figure 2. Schematic cross-section of a pacemaker/ICD lead. Asymmetric design with redundant defibrillation and sensing wires.


A silicone casing forms the main structural component of the lead. The shocking coil rests on the outside of the silicone, and four chambers are within the silicone. Redundant conductors carry the sensing and dual defibrillation wires in three distributed compression chambers. A central axis chamber facilitates the use of a stylet during insertion and is surrounded by the central pacing multifilar conductor.


Detecting Lead Structural Failure: Silicone structural failure usually occurs four to five years after implantation, and is diagnosed in asymptomatic patients undergoing routine imaging or more commonly after changes in electrical conduction are discovered during interrogation of malfunctioning devices.


Fluoroscopy has been demonstrated to have positive and negative predictive values of 88 percent and 99 percent, respectively, and is the gold standard method for detecting lead failure. Small studies have suggested that chest radiography may be an adequate screening tool, but this has not been widely adopted. Echocardiography cannot reliably visualize a structurally failed lead or externalized conductors, but should be able to demonstrate any intracardiac thrombus that has formed. (Figure 3.)


Complications of Structural Lead Failure: Besides electrical failure, the externalized conductors are highly thrombogenic and can cause thrombus formation. Usually, this can be treated with systemic anticoagulation, though the extent of the clot and secondary effects, such as SVC syndrome, may require explantation of the leads.


Unfortunately, for patients who have these leads implanted, there are no current recommendations for screening intervals or long-term follow-up. Routine removal of the leads is not recommended. If a thrombus formation does occur, systemic anticoagulation should be pursued before surgical or transvenous removal of the leads, which can be difficult and has a high complication rate.


Figure 3. Fluoroscopy image (schematic) of failed pacemaker/ICD lead with externalized wire.




Tuesday, March 31, 2015

Ancient societies figured out that hypothermia was useful for hemorrhage control, but it was Hippocrates who realized that body heat could be a diagnostic tool. He caked his patients in mud, deducing that warmer areas dried first.


Typhoid fever, the plague of Athens in 400 BC and the demise of the Jamestown Colony in the early 1600s, led Robert Boyle to attempt a cure around 1650 by dunking patients in ice-cold brine. This is likely the first application of therapeutic hypothermia, but it failed to lower the 30 to 40 percent mortality rate. One hundred years later, James Currie tried to treat fevers by applying hot, cold, and warm water to the skin and having the patients drink liquids at those temperatures. These innovations were no more successful than the brine, however.


Hydropaths, popular in the early 1800s, were referred to by Sir William Osler as “hermaphrodite practitioners who look upon water as a cure-all.” He recognized, however, the therapeutic effects of using water for compresses and baths. One hydropath taught Osler that a rigid protocol of cold baths for typhoid fever could save lives, and Osler implemented this at Johns Hopkins. He published the protocol in the article “The Cold-Bath Treatment of Typhoid Fever” in 1892, and physicians everywhere saw a drop in mortality.


Physicians had come to believe by the 1930s that cold was incompatible with life. All clinical thermometers of the time were calibrated only to 94°F, and this thermal barrier was so deeply ingrained in medical techniques that subnormal temperatures were combatted at all costs. Electrical heating devices or hot water bottles and warm blankets were considered necessary emergency equipment in every hospital, but this was about to change.


Therapeutic hypothermia’s father was Temple Fay, MD, a neurosurgeon at Temple University. As a medical student, he was unable to come up with a response when his mentor asked why tumors were less common in the extremities. That question ultimately led him to experimental cancer research. He published work in 1937 showing that hypothermia suspended cancer cell growth but that normal temperatures allowed growth to resume.


He treated his first patient with hypothermia in 1938 to prevent cancer cells from multiplying. Chloral hydrate and sodium bromide (sedatives) were given by rectum the night before. Paraldehyde, another sedative, was given immediately before hypothermia induction. The patient was cooled to 32°C for 24 hours.


He described it like this: “The first attempt at general refrigeration was made on November 28, 1938…. I … shut off the heat … and opened the windows [to aid] the cracked ice. … For many reasons, chiefly because of the prejudice on the part of the nurses, we had not dared submerge the entire patient in a bed of cracked ice. … The nurses’ home, interns’ quarters, and [other services] were alive with dubious comment….”


A series of patients were treated, but the nurses detested working on the “refrigeration service,” as they called it. “Frankly, the nurses were scared. They could not get the patients’ temperature with the clinical thermometers. The long-stem laboratory thermometers might break in getting a rectal reading. The ice and ice water were always in the way, even when the patient was turned. The pulse was weak. The breathing was shallow. They couldn’t get the patient’s blood pressure. … The entire project of general refrigeration had snowballed into a vast issue of distortions of truth, and even my friendly colleagues began to look askance, and asked how long this absurd experiment was going to be permitted.”


The program was almost shut down, but Dr. Fay and the hospital engineers made blankets from rubber tubes to carry a cold solution from a special “beer cooler.” Commercially available machine pumps were found useful in this technique, and they also developed electric thermocouples for 24-hour charting of rectal temperatures.


“What we learned after breaking the human thermal barrier on the hypothermic side was that human survival was possible under proper supervision. When total body refrigeration was established above 24[°C], that hypothermic state could be maintained for 10 days (probably longer if required) when temperature levels of 29.4-32.3[°C] were maintained,” he wrote. Dr. Fay also developed refrigeration techniques to reduce pain, and in 1945, he was the first to publish on using hypothermia for cerebral trauma.


During World War II, Germans confiscated one of Dr. Fay’s manuscripts that had been sent to Belgium for publication. German pilots downed in frigid waters would succumb to freezing temperatures, even if rescued quickly. This led to hypothermia recovery experiments on concentration camp victims in one of the most grotesque, unethical distortions of medicine ever. When the German atrocities were discovered, it set back the field by 10 years, but by the 1950s, research in hypothermia expanded and great strides were made.


Bigelow, et al. perfected general hypothermia for intracardiac surgery in 1950, benefiting the brain and the heart. (Ann Surg 1950;132[5]:849.) Rosomoff and Holaday worked out much of the physiology in 1954, finding that therapeutic hypothermia reduced cerebral oxygen consumption, blood flow, and metabolic rate, and demonstrating a direct relationship among body temperature, intracranial pressure, and brain volume. (Am J Physiol 1954;179[1]:85.)


Niazi and Lewis found in the late 1950s that patients’ temperatures could be lowered to as low as 9°C and then rewarmed with complete recovery, findings that would later help patients with accidental hypothermia.


Some 20 years later, a landmark paper by G. Rainey Williams, Jr., MD, and Frank Spencer, MD, from Johns Hopkins reviewed four cases in which hypothermia was used after cardiac arrest. (Ann Surg 1958;148[3]:462.) It is an absolutely fascinating report of patients who suffered cardiac arrest, received open chest cardiac massage to achieve return of spontaneous circulation, and were then treated with hypothermia.


Later, in 1959, Williams and Spencer, now joined by Benson and Yates, published “Use of Hypothermia after Cardiac Arrest.” (Anesth Analg 1959;38:423.) Two of 27 patients failed to achieve ROSC, and six had no coma and were excluded from analysis. Twelve of the remaining 19 received hypothermia; seven did not. One of the seven untreated patients survived, and six of the 12 treated with hypothermia survived (14% vs 50%). Patients were kept at 30-32°C for 34 to 84 hours, with cooling stopped based on the patient’s response. By 1959, induced hypothermia was also used for cardiac surgery and by neurosurgeons for head and spinal cord injuries.


Severe complications became apparent as therapeutic hypothermia became more widespread. Cardiac irritability and ventricular fibrillation when patients’ temperatures were below 30°C became problematic because it was nearly impossible to precisely meet a target temperature with the available equipment. There was also a much higher rate of infection, most significantly decreased clearance of staphylococcal bacteremia, along with vasospasm, increased plasma viscosity, hyperglycemia, cardiac dysfunction, and coagulopathies. These complications made its use risky and difficult to manage without intensive care, and the technique was essentially abandoned.


The few human papers on therapeutic hypothermia published after this time reinforced hypothermia’s problems. Bohn, et al. published a case series in 1986 of 24 children who remained in persistent coma after being resuscitated from drowning. (Crit Care Med 1986;14[6]:529.) They used therapeutic hypothermia to treat elevated intracranial pressure; those treated had a much higher rate of neutropenia and septicemia than the control group.


Cardiac arrest care, however, advanced during this period. Zoll published a study of countershock for ventricular fibrillation (N Engl J Med 1956;254[16]:727), and Kouwenhoven (JAMA 1960;173:1064) and Safar (Anesth Analg 1961;40:609) introduced closed chest massage in 1960. The 1970s and 1980s saw the development of cardiac arrest care systems, including defibrillators in 1979. “Accidental Death and Disability: The Neglected Disease of Modern Society,” published in 1966 by the National Academy of Sciences, spurred the development of EMS systems across the country. Richard Cummins, MD; Joseph Ornato, MD; William Thies, PhD; and Paul Pepe, MD, published their “chain of survival” concept in 1991. (Circulation 1991;83[5]:1832.) CPR and defibrillators were now available outside the hospital and in the field, and widespread early resuscitation from cardiac arrest became a reality.


Disappointment set in again, though. Patients continued to do poorly despite improvements in cardiac arrest care. Only a tiny fraction of patients survived, and those who did suffered profound neurologic sequelae. Becker, et al. summed up the frustration with the paper, “Outcome of CPR in a Large Metropolitan Area — Where Are the Survivors?” (Ann Emerg Med 1991;20[4]:355.)


Sterz, et al. published a 1991 animal study showing that mild hypothermia initiated immediately after ROSC improved neurologic outcomes. (Crit Care Med 1991;19[3]:379.) The 34-36°C temperature target was unique, several degrees warmer than much of the earlier literature. Moreover, they maintained the target temperature for only one hour post-resuscitation and then let the temperature climb passively. Therapeutic hypothermia was back in fashion.


Bernard, et al. from Monash Medical Centre in Australia published the paper, “Clinical Trial of Induced Hypothermia in Comatose Survivors of Out-of-Hospital Cardiac Arrest,” in Annals of Emergency Medicine in 1997. The pilot study prospectively followed 22 comatose resuscitated patients treated at 33°C for 12 hours and compared them with 22 retrospectively matched controls. Mortality was 10 vs 17, and good neurologic outcome (CPC 1 or 2) was achieved in 11 vs 3, favoring hypothermia.


Bernard and his colleagues expanded on their paper in 2002 in the New England Journal of Medicine article, “Treatment of Comatose Survivors of Out-of-Hospital Cardiac Arrest with Induced Hypothermia.” (N Engl J Med 2002;346[8]:557.) They prospectively randomized 77 patients with persistent coma after resuscitated VF, excluding those who were pregnant or in persistent cardiogenic shock despite epinephrine. All received lidocaine. MAP was maintained between 90-100 mm Hg, pO2 above 100 mm Hg, and pCO2 at 40 mm Hg. Patients were cooled to 33°C for 12 hours before being allowed to rewarm passively. Mortality was similar in both groups, but 21 patients treated with hypothermia achieved CPC 1 or 2, compared with nine controls.


Another study, “Mild Therapeutic Hypothermia to Improve the Neurologic Outcome after Cardiac Arrest,” enrolled 237 patients with resuscitated ventricular fibrillation arrest treated with therapeutic hypothermia to 32-34°C for 24 hours and compared them with normothermic patients. (N Engl J Med 2002;346[8]:549.) Median time to starting cooling was 105 minutes post-arrest, but patients did not reach the target temperature until nearly eight hours after arrest. There was a 14 percent absolute risk reduction in mortality and a 16 percent improvement in CPC 1 and 2 scores for patients treated with hypothermia.


Hypothermia was then endorsed by the American Heart Association in 2002 and by the Advanced Life Support Task Force of the International Liaison Committee on Resuscitation in 2003. Its use in post-resuscitation care spread widely and quickly as a standard of care. But other controversies developed as the practice became more widespread. Treatment was expanded to resuscitated rhythms other than ventricular fibrillation and ventricular tachycardia under the assumption that brain ischemia from any source would benefit from hypothermia. Guidelines had adopted 32-34°C, but the optimal temperature target was uncertain.


Work at the Safar Center for Resuscitation Research in Pittsburgh led to the paper by Logue and Callaway, “Comparison of the Effects of Hypothermia at 33°C or 35°C after Cardiac Arrest in Rats.” (Acad Emerg Med 2007;14[4]:293.) They demonstrated that minimal hypothermia at 35°C was as good as cooling to 33°C for mortality and neurologic outcomes, and that both were better than normothermia.


Zeiner, et al. published data showing that fever in the post-cardiac arrest period was associated with adverse neurologic outcomes, leading researchers to postulate that the neurologic benefit had little to do with hypothermia and was more the result of preventing hyperthermia. (Arch Intern Med 2001;161[16]:2007.)


Then came a paper by Nielsen, et al. that enrolled 950 patients in 36 centers who remained unconscious after being resuscitated from out-of-hospital cardiac arrest. (N Engl J Med 2013;369[23]:2197.) Any initial rhythm was allowed, in line with current practice. Patients were randomized to target temperatures of 33°C or 36°C for 28 hours and then rewarmed, with normothermia maintained until 72 hours post-arrest. There was a non-significant two percent mortality difference at the end of the treatment protocol and at 180 days. Patients with a CPC score of 3-5 were 54% vs 52%, which was not statistically significant.


Critics, however, noted that patients in this study had short no-flow times, so the results may not apply to a wider population. The study was designed as a non-inferiority trial powered to find an 11% absolute risk reduction between 36°C and 33°C, which corresponds to a number needed to treat (NNT) of 9. That is asking a lot of any cardiac arrest treatment besides chest compressions and defibrillation.
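The arithmetic behind these effect sizes is straightforward. A minimal Python sketch, using the figures quoted in this post (the function names are ours, purely for illustration):

```python
def absolute_risk_reduction(control_event_rate, treated_event_rate):
    """ARR: the drop in the bad-event (e.g., death) rate with treatment."""
    return control_event_rate - treated_event_rate

def number_needed_to_treat(arr):
    """NNT: patients treated to gain one additional good outcome, 1/ARR."""
    return 1.0 / arr

# 1959 Williams/Spencer series: 6 of 7 untreated patients died vs 6 of 12 cooled.
arr_1959 = absolute_risk_reduction(6 / 7, 6 / 12)
print(f"1959 series ARR: {arr_1959:.0%}, NNT: {number_needed_to_treat(arr_1959):.1f}")
# -> 1959 series ARR: 36%, NNT: 2.8

# The Nielsen trial was powered to detect an 11% ARR between 36°C and 33°C:
print(f"Powered-for NNT: {number_needed_to_treat(0.11):.1f}")
# -> Powered-for NNT: 9.1
```

Seen this way, powering a trial for an 11 percent absolute risk reduction assumes an effect far larger than most modern cardiac arrest interventions achieve.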


The adoption of higher temperatures is widespread but not complete. Maintaining a patient at 33°C demands more intensive critical care than 36°C, though not dramatically so, and the risks and complications are higher, again not dramatically. Hypothermia seems to convey a mortality and neurologic benefit compared with normothermia, but preventing hyperthermia may be the greatest benefit.


Read more about therapeutic hypothermia in our archive.

