Spontaneous Circulation focuses on advanced ECG interpretation, cardiac pharmacology, hemodynamic assessment and resuscitation, and managing acute coronary syndrome. It is devoted to translating the best evidence-based treatments from critical care, resuscitation, and trauma for bedside use in the emergency department.
Wednesday, July 01, 2015
A man in his 30s comes to your emergency department at 3 a.m. profoundly diaphoretic and reporting severe 10/10 chest pain. He has been at a party all night, and the chest pain started about 30 minutes earlier. He had a previous heart attack, but cannot remember many of the details. He reports no medication or drug use. No doubt this is a concerning presentation, and you immediately order an ECG, blood work, and an aspirin.
While this is in process, you review the electronic medical information, which reveals that the previous “heart attack” was actually observation for chest pain rule-out. The ECG showed nonspecific ST/T-wave changes, and serial troponin measurements were negative. He had undergone a stress echocardiogram, which was a good quality study, and demonstrated no inducible ischemia or reproducible symptoms. The patient had a urine drug screen during that previous admission, however, that was positive for cocaine.
With that information, cocaine-associated chest pain is high on your differential, but you have many questions and are not sure how to proceed.
How useful is a urine drug screen for determining if the patient used cocaine?
The urine drug test for cocaine is highly sensitive (99%) and specific (95%). Cocaine itself is eliminated from urine in about 12 hours, although this can be delayed up to 72 hours in chronic or heavy users. The standard ELISA assay, however, does not test for cocaine itself but for its metabolite benzoylecgonine, which is detectable in urine as little as two hours after use and remains detectable for three to four days. Amoxicillin was previously thought to cause false-positives, but recent studies suggest this is unlikely. Despite the similar endings, other -caine drugs such as lidocaine, procaine, and articaine lack the central ecgonine structure and therefore do not register as false-positives.
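The sensitivity and specificity quoted above translate into a post-test probability via Bayes' rule. A minimal sketch of that arithmetic, with illustrative pretest prevalences (the prevalence figures are assumptions for the example, not from the article):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(cocaine use | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Urine cocaine immunoassay figures quoted above: 99% sensitive, 95% specific.
for prevalence in (0.05, 0.20, 0.50):
    print(f"pretest {prevalence:.0%} -> PPV {ppv(0.99, 0.95, prevalence):.0%}")
```

Even at 95 percent specificity, a positive screen in a low-prevalence population leaves real room for false-positives: at a 5 percent pretest probability, only about half of positive screens reflect true use.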
How certain should you be that the positive urine cocaine test is related to the patient’s current chest pain?
The cardiac effects of cocaine are seen rapidly after ingestion. The majority of patients develop myocardial infarction within an hour, usually by three hours after cocaine use. The cocaine metabolites, however, may cause delayed or waxing-waning coronary vasoconstriction, and therefore the ischemic chest pain can occur up to four days afterward. If a patient can give you an accurate history of cocaine ingestion that falls within the window, you should have a very high suspicion that the chest pain symptoms are cocaine-related.
What is the cocaine doing to the patient?
Cocaine inhibits the reuptake of norepinephrine and dopamine, which causes vasoconstriction and increased cardiac contractility. Cocaine use increases the patient's heart rate, systolic and diastolic blood pressure, and mean arterial pressure.
Heart rate and blood pressure effects are dose-dependent, but cocaine does exhibit significant tachyphylaxis; the dose at which this plateau is reached depends on the individual and his history of cocaine use.
The effects of cocaine also depend on the route of exposure. Snorting or smoking cocaine has fewer blood pressure effects because of compensatory baroreceptor reflexes. If it is injected, though, the baroreceptors are bypassed, and the full hemodynamic effects of norepinephrine and dopamine take hold.
Greater effects are seen in areas with pre-existing atherosclerotic disease, although cocaine-induced coronary vasospasm does not usually produce complete coronary occlusion. Unfortunately, long-term cocaine use accelerates the development of coronary atherosclerosis and the formation of coronary aneurysms.
Cocaine contributes directly to thrombosis by activating platelets and promoting endothelial dysfunction, which secondarily contributes to platelet activation. Even without myocardial ischemia, cocaine and its metabolites are directly toxic to myocardial cells. This leads to infiltration of inflammatory cells, myocardial necrosis with elevation of troponin, and fibrosis, which can result in cardiac dysfunction.
Can cocaine cause chest pain from areas outside the heart?
Outside the heart, increased shear force from cocaine-induced tachycardia and hypertension dramatically increases the risk of aortic dissection. Cocaine can also cause acute pulmonary hypertension, leading to chest pain and shortness of breath. Inhalation of cocaine can also produce "crack lung,” which results in bilateral pulmonary infiltrates, hypoxia, hemoptysis, and respiratory failure.
How worried should I be that chest pain is related to myocardial infarction?
The increased heart rate, contractility, and blood pressure associated with cocaine use increase myocardial wall stress and oxygen demand. At the same time, coronary artery vasospasm with or without thrombosis reduces coronary blood flow and oxygen delivery. Myocardial ischemia occurs when the oxygen supply is not sufficient to meet demand. If the ischemia is prolonged, necrosis develops, which leads to release of troponins. Several reliable studies have documented the rate of cocaine-related acute myocardial infarction at one to six percent.
What evaluation is needed for cocaine chest pain patients?
A detailed history, including drug use, is one of the most important parts of the chest pain evaluation. Just as with acute coronary syndrome, there is little predictive value in the reported severity, character, quality, or location of the chest pain, or in the presence or absence of diaphoresis, shortness of breath, or nausea/vomiting.
The ECG has poor sensitivity and low predictive value in distinguishing between benign cocaine chest pain and myocardial infarction. This is complicated by the high frequency of benign early repolarization found in patients evaluated for cocaine chest pain because they are typically younger.
Troponins are highly specific for myocardial injury with necrosis, but they do not provide insight into the exact mechanism. (EMN 2015 Feb 2; http://bit.ly/1KK1Bzu.) The myocardial injury may be secondary to direct myocardial toxicity, vasoconstriction limiting oxygen supply, increased myocardial demand, thrombosis, or plaque rupture. Unless the ECG demonstrates ST-elevation, distinguishing among a type 1 myocardial infarction (NSTEMI), a type 2 myocardial infarction, and nonischemic myocardial injury can be very difficult, and with cocaine the injury is likely multifactorial.
Who should be considered low risk? Is observation appropriate?
High-risk patients can be identified by the presence of ST-segment elevation or depression >1 mm, elevated cardiac troponin, a high Killip score, recurrent chest pain, or hemodynamic instability. Of these patients, 20 to 25 percent will ultimately be diagnosed with a myocardial infarction. Low-risk patients can often be placed in an observation unit or observed in the emergency department and monitored for six to 12 hours, including repeat measurements of cardiac troponin. Stress testing should be arranged for follow-up.
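The stratification described above can be sketched as a simple flag. This is purely an illustration of the logic, under the assumption that any single feature marks the patient high risk; the field names are ours, and this is not a validated clinical decision rule:

```python
from dataclasses import dataclass

@dataclass
class CocaineChestPainFeatures:
    st_deviation_mm: float        # ST elevation or depression on ECG, in mm
    troponin_elevated: bool
    high_killip_score: bool
    recurrent_chest_pain: bool
    hemodynamically_unstable: bool

def is_high_risk(f: CocaineChestPainFeatures) -> bool:
    """Any one high-risk feature from the list above flags the patient."""
    return (
        f.st_deviation_mm > 1.0
        or f.troponin_elevated
        or f.high_killip_score
        or f.recurrent_chest_pain
        or f.hemodynamically_unstable
    )

# A patient with none of the features would be a candidate for 6- to 12-hour
# observation with serial troponin measurements rather than admission.
low_risk = CocaineChestPainFeatures(0.5, False, False, False, False)
print(is_high_risk(low_risk))  # False
```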
How do you treat cocaine-associated myocardial infarction?
Unlike in acute coronary syndrome unrelated to cocaine, benzodiazepines are the first-line therapy for cocaine chest pain. Benzodiazepines blunt the sympathomimetic effects of cocaine (tachycardia, hypertension), relieve chest pain, and improve cardiac hemodynamics. They have the secondary benefit of relieving the agitation and psychiatric stimulant effects of cocaine. Benzodiazepines alone are often effective in controlling the hypertension and tachycardia, but use nitroglycerin, nitroprusside, or phentolamine if additional blood pressure control is needed. Beta-blockers should be avoided because of the potential for worsening coronary vasoconstriction.
In high-risk patients or those who have an ST-elevation myocardial infarction, antiplatelet and anticoagulation medications should be given, and the patient should undergo cardiac angiography and stenting if a culprit lesion is identified.
Ventricular dysrhythmias that develop early can usually be attributed to cocaine's sodium channel-blocking effect on the myocardium and are responsive to sodium bicarbonate. If ventricular arrhythmias develop several hours after the last use of cocaine, however, ischemia-related dysrhythmias are much more likely. In this situation, standard treatment applies, with amiodarone and lidocaine as the preferred agents.
Our Patient’s Outcome
Our patient’s ECG showed no injury. His urine drug screen was positive for cocaine, and when presented with this information, he admitted using cocaine at the party. He was treated with several doses of intravenous midazolam, which dramatically improved his tachycardia and reduced his blood pressure. He was placed on observation, and serial troponin measurements were negative. Given his previous stress echocardiogram, no further provocative testing was pursued. He was counseled on abstaining from further cocaine use.
Important Points to Know about Cocaine and Chest Pain
■ Cocaine blocks the reuptake of norepinephrine and dopamine, leading to increased catecholamines with powerful sympathomimetic effects.
■ Cocaine has multiple hemodynamic and cardiac effects including tachycardia, hypertension, coronary artery vasospasm, increased oxygen demand, increased platelet aggregation and thrombosis, aortic dissection, and direct myocardial toxicity.
■ The incidence of myocardial infarction in patients presenting to the emergency department with chest pain after having used cocaine is one to six percent.
■ Cocaine use should be part of the history of every patient presenting with chest pain.
■ Qualitative immunoassay for cocaine and its metabolites is sensitive and specific, but may remain positive for up to four days after use.
■ Benzodiazepines are first-line treatment for cocaine-induced chest pain.
■ Beta blockade is contraindicated for cocaine-induced myocardial infarction because of the possibility of worsening coronary vasospasm.
■ Observation and repeat measurement of troponin level is appropriate for low-risk cocaine chest pain.
Monday, June 01, 2015
In 1977, while I was busy with Star Wars and action figures, Andreas Gruentzig was using his kitchen-made balloon catheter to dilate and open a highly stenotic LAD coronary artery. Fixing atherosclerotic disease of the coronary arteries had previously required open heart bypass surgery, a procedure only 10 years old at the time. He had taken coronary catheterization, which until then had only been used for diagnostics and surgical planning, and became the first to perform transluminal interventional therapy. Unfortunately, it turned out that the coronary artery would often close either immediately or over the following days to months after balloon dilation. The next 40 years would be a struggle to learn how to keep the artery open. And complaining about George Lucas. Lots of complaining.
Plain Old Balloon Angioplasty: Balloon angioplasty, formally known as percutaneous transluminal coronary angioplasty (PTCA), deforms the coronary artery to overcome an acute thrombus or a stenotic atheroma. The forceful enlargement by its nature causes dissection of the vessel intima and distention of the adventitia. The defects in the endothelium are physiologically similar to the plaque rupture seen in acute myocardial infarction, and expose the highly thrombogenic media to circulating platelets and coagulation factors. A thrombus forms over the site, and occlusion of the artery can occur. The stretched elastic adventitia also has a tendency to recoil.
These two effects caused abrupt closure of the vessel about five percent of the time with balloon angioplasty, sometimes resulting in myocardial infarction and requiring emergent bypass surgery. Antiplatelet therapy, such as glycoprotein IIb/IIIa inhibitors, helped reduce the rate of thrombus formation, but could not prevent the recoil. Even if acute closure did not occur, neointimal proliferation and vascular remodeling would lead to in-stent restenosis (ISR) more than 50 percent of the time, with recurrence of the patient's ischemic symptoms for up to six months post-procedure.
Development of Stenting: The answer to these problems seemed to be to use an intracoronary scaffold, which could seal the dissection flap and prevent elastic recoil. The first stents were used in 1986. The first balloon-expandable stent by Palmaz-Schatz became available in 1989. But the initial use of stents was problematic. They were bulky and required the use of multiple catheters, making them a technical challenge to deploy. There was a high failure rate and frequent embolization. These issues were fixed over time with better stents, catheters, and technique. Operators learned to size the stent and balloon to the vessel diameter, ensure that the deployed stent was against the coronary wall along the entire length, avoid edge dissection, and minimize residual stenosis.
The abrupt closure was reduced to three percent when coronary stenting was combined with balloon angioplasty. The scaffolding also proved effective in preventing elastic recoil, and the rates of ISR fell from more than 50 percent to about 30 percent. With this dramatic success, stents were used in more than 90 percent of intervention cases by 1999.
The downside of these early stents, however, was a much higher rate of early in-stent thrombosis (see table) compared with balloon angioplasty. Exposed metal in the coronary artery lumen is extremely thrombogenic and serves as a platform for the aggregation of platelets and thrombus formation. Within two to four weeks, the endothelium of the artery grows over the exposed metal, preventing contact between platelets and the thrombogenic metal, but until then there is a risk of thrombus formation and acute myocardial infarction. The thrombus tends to be more extensive and technically more challenging to treat than native artery thrombosis, and the extent of the infarct and mortality are therefore higher. To counter early in-stent thrombosis, high levels of anticoagulation and antiplatelet therapy were used, which led to serious bleeding complications.
Drug-Eluting Stents: The device companies first tried to improve stent performance by applying special coatings to make them inert in the body. Gold (thought to be inert), diamond, and phosphorylcholine (simulating the body's cell membrane) were ineffective. Heparin and steroid (anti-inflammatory) coatings showed only inconsequential benefit. Even at 30 percent, though, the restenosis rate was unacceptably high. Manufacturers then developed polymer coatings that allowed the stent to function as a drug delivery tool. The drug-eluting stents (DES) introduced in 2003 used sirolimus and paclitaxel, antimitotic chemotherapeutic agents. The coating allowed controlled, long-acting local drug delivery to stop cell hyperplasia and proliferation. In repeated randomized controlled studies (RAVEL, SIRIUS, ISAR-DESIRE), drug-eluting stents showed significant reduction of in-stent restenosis compared with bare-metal stents (BMS). They became wildly successful, capturing nearly 90 percent of the market within two years.
Dual-Antiplatelet Therapy: Within a few years, however, it became clear that the drug-eluting stents came with an increased risk of late in-stent thrombosis. The same anti-proliferative properties that helped DES prevent restenosis also delayed the endothelialization of the stent, leaving the thrombogenic stent exposed to blood much longer than bare-metal stents. Various combinations of antiplatelet and anticoagulation therapies were tried, including aspirin, heparin, and warfarin.
Unfortunately, the bleeding rates with these treatments were exceptionally high, and in-stent thrombosis often still required emergency coronary artery bypass surgery. This would change with the development of the thienopyridine class of antiplatelet medications. Initially ticlopidine (Ticlid) and later clopidogrel (Plavix), combined with aspirin (dual-antiplatelet therapy, DAPT), proved potent inhibitors of the early in-stent thrombosis seen with BMS and the late in-stent thrombosis seen with DES, without excessive bleeding complications. DAPT's role was cemented by the positive results of the PCI-CURE trial.
It should not be overlooked that significant patient-related factors affect restenosis and thrombosis risks. Patients with diabetes, decreased left ventricular function, and renal disease have been shown to benefit the most from DES prevention of in-stent restenosis, but they are also at higher risk of stent thrombosis. DAPT is most critical for these patients. Interruptions in DAPT (from poor adherence, bleeding complications, need for surgery, or financial cost), as well as poor clopidogrel response, increase the risk of in-stent thrombosis. The length of DAPT depends on whether the PCI was performed for ACS or stable angina, the type of stent (BMS vs. DES), bleeding risk, and comorbidities.
This is not the end of our quest for durable arterial patency. Work is ongoing to define the optimal length of DAPT and to develop image-guided stent deployment, novel antiproliferative drugs, directional drug delivery, biodegradable polymers, bioresorbable scaffolds, pro-healing stents, and more. We look forward to these advances, but know they will bring their own unique set of problems. Just like Episode VII.
Friday, May 01, 2015
Pacemaker and implantable cardioverter defibrillator development has revolutionized the treatment of many kinds of cardiac diseases. The technology advancements have been tremendous, and once-large external batteries are now replaced by gumstick-sized modules as sophisticated as any computer. Unfortunately, the leads that provide sensing, pacing, and defibrillation represent a vulnerable part of the system, and have short- and long-term failure modes. Failure modes can cause a spectrum of issues from minor annoyances to catastrophic failure and death. Failure usually requires replacement of the generator, leads, or both, which subjects the patient to an additional procedure with its own associated complications.
Failure modes include acute perforation, dislodgement, infection, vein thrombosis, migration, conduction failure, insulation damage, and wire externalization.
Silicone Casing Structural Failure: Insulation failure has been seen in several lead designs. The silicone casing around transvenous leads is subjected to conditions that can lead to fracture and structural failure. The proximal length undergoes high muscular stress from the pectoral muscles. It can also be crushed where it passes over the bony thoracic wall, causing lead-to-lead or lead-to-can disruption. The distal segment experiences high intracardiac forces and dynamic flexing. There are also fatigue initiation points where the shocking coil is attached to the silicone. The most common site of failure is just below the tricuspid valve. The failure rate is higher in lead designs with dual shocking coils, which has contributed to recommendations against their use. (Figure 1.)
Figure 1. Structural failure of the pacemaker/ICD lead silicone casing and externalized conductors.
The defibrillation or sensing wires can be externalized when the insulation fails. The externalized conductors remain electrically silent, and no detectable electrical abnormalities may be seen as long as their ETFE coating remains intact. The coating, however, is not designed to withstand the intravascular or intracardiac environment; it often becomes damaged quickly, leading to electrical short circuits.
Asymmetric Lead Design: This problem was faced by St. Jude Medical with their Riata silicone-insulated leads, and led to a physician advisory by the Food & Drug Administration. The cross-section of a typical asymmetrical lead design (Riata family) is shown in Figure 2.
Figure 2. Schematic cross-section of a pacemaker/ICD lead. Asymmetric design with redundant defibrillation and sensing wires.
A silicone casing forms the main structural component of the lead. The shocking coil rests on the outside of the silicone, and four chambers are within it. Redundant conductors carry the sensing and dual defibrillation wires in three distributed compression chambers. A central axis chamber facilitates the use of a stylet during insertion, and is surrounded by the central pacing multifilar conductor.
Detecting Lead Structural Failure: Silicone structural failure usually occurs four to five years after implantation, and is diagnosed either in asymptomatic patients undergoing routine imaging or, more commonly, after changes in electrical conduction are discovered during interrogation of malfunctioning devices.
Fluoroscopy has been demonstrated to have positive and negative predictive values of 88 percent and 99 percent, respectively, and is the gold standard for detecting lead failure. (http://bit.ly/1BokSBa.) Small studies have suggested that chest radiography may be an adequate screening tool, but this has not been widely adopted. Echocardiography cannot reliably visualize a structurally failed lead or externalized conductors, but should be able to demonstrate any intracardiac thrombus that has formed. (Figure 3.)
Complications of Structural Lead Failure: Besides electrical failure, the externalized conductors are highly thrombogenic and can cause thrombus formation. Usually this can be treated with systemic anticoagulation, though the extent of the clot and secondary effects, such as SVC syndrome, may require explantation of the leads.
Unfortunately, for patients who have these leads implanted, there are no current recommendations for screening intervals or long-term follow-up. Routine removal of the leads is not recommended. If a thrombus formation does occur, systemic anticoagulation should be pursued before surgical or transvenous removal of the leads, which can be difficult and has a high complication rate.
Figure 3. Fluoroscopy image (schematic) of failed pacemaker/ICD lead with externalized wire.
Tuesday, March 31, 2015
Ancient societies figured out that hypothermia was useful for hemorrhage control, but it was Hippocrates who realized that body heat could be a diagnostic tool. He caked his patients in mud, observing that the warmer areas dried first.
Typhoid fever, the plague of Athens in 400 BC and the demise of the Jamestown Colony in the early 1600s, led Robert Boyle to attempt a cure around 1650 by dunking patients in ice-cold brine. This is likely the first application of therapeutic hypothermia, but it failed to lower the 30 to 40 percent mortality rate. One hundred years later, James Currie tried to treat fevers by applying hot, cold, and warm water to the body surface and having patients drink liquids at those temperatures. These innovations were no more successful than the brine, however.
Hydropaths, popular in the early 1800s, were referred to by Sir William Osler as “hermaphrodite practitioners who look upon water as a cure-all.” He realized, however, the therapeutic effects of using water for compresses and baths. One hydropath taught Osler that a rigid protocol of cold baths for typhoid fever could save lives, and Osler implemented this at Johns Hopkins. He published this protocol in the article, “The Cold-Bath Treatment of Typhoid Fever” in 1892, and physicians everywhere saw a drop in mortality.
Physicians had come to believe by the 1930s that cold was incompatible with life. All clinical thermometers of the time were calibrated only to 94°F, and this thermal barrier was so deeply ingrained in medical techniques that subnormal temperatures were combatted at all costs. Electrical heating devices or hot water bottles and warm blankets were considered necessary emergency equipment in every hospital, but this was about to change.
Therapeutic hypothermia's father was Temple Fay, MD, a neurosurgeon at Temple University. As a medical student, he was unable to answer when his mentor asked why tumors were less common in the extremities. This ultimately led him to experimental cancer research. He published work in 1937 on how hypothermia suspended cancer cell growth but normal temperatures allowed their growth to resume.
He treated his first patient with hypothermia in 1938 to prevent cancer cells from multiplying. Chloral hydrate and sodium bromide (sedatives) were given by rectum the night before. Paraldehyde, another sedative, was given immediately before hypothermia induction. The patient was cooled to 32°C for 24 hours.
He described it like this: “The first attempt at general refrigeration was made on November 28, 1938…. I … shut off the heat … and opened the windows [to aid] the cracked ice. … For many reasons, chiefly because of the prejudice on the part of the nurses, we had not dared submerge the entire patient in a bed of cracked ice. … The nurses’ home, interns’ quarters, and [other services] were alive with dubious comment….”
A series of patients were treated, but the nurses detested working on the “refrigeration service,” as they called it. “Frankly, the nurses were scared. They could not get the patients’ temperature with the clinical thermometers. The long-stem laboratory thermometers might break in getting a rectal reading. The ice and ice water were always in the way, even when the patient was turned. The pulse was weak. The breathing was shallow. They couldn’t get the patient’s blood pressure. … The entire project of general refrigeration had snowballed into a vast issue of distortions of truth, and even my friendly colleagues began to look askance, and asked how long this absurd experiment was going to be permitted.”
The program was almost shut down, but Dr. Fay and the hospital engineers made blankets from rubber tubes to carry a cold solution from a special “beer cooler.” Commercially available machine pumps were found useful in this technique, and they also developed electric thermocouples for 24-hour charting of rectal temperatures.
“What we learned after breaking the human thermal barrier on the hypothermic side was that human survival was possible under proper supervision. When total body refrigeration was established above 24℃, that hypothermic state could be maintained for 10 days (probably longer if required) when temperature levels of 29.4-32.3℃ were maintained,” he wrote. Dr. Fay also developed refrigeration techniques to reduce pain, and in 1945, was first to publish on using hypothermia for cerebral trauma.
During World War II, Germans confiscated one of Dr. Fay’s manuscripts that had been sent to Belgium for publication. German pilots downed in frigid waters would succumb to freezing temperatures, even if rescued quickly. This led to hypothermia recovery experiments on concentration camp victims in one of the most grotesque, unethical distortions of medicine ever. When the German atrocities were discovered, it set back the field by 10 years, but by the 1950s, research in hypothermia expanded and great strides were made.
Bigelow, et al. quickly perfected general hypothermia for intracardiac surgery in 1950, benefiting the brain and the heart. (Ann Surg. 1950;132:849.) Rosomoff and Holaday worked out much of the physiology in 1954, figuring out that therapeutic hypothermia reduced cerebral oxygen consumption, blood flow, and metabolic rate, demonstrating a direct effect between body temperature, intracranial pressure, and brain volume. (Am J Physiol 1954;179:85.)
Niazi and Lewis found in the late 1950s that patients' temperatures could be lowered to as low as 9°C and then rewarmed with complete recovery. This work would later help patients with accidental hypothermia.
Some 20 years later, a landmark paper by G. Rainey Williams, Jr., MD, and Frank Spencer, MD, from Johns Hopkins reviewed four cases in which hypothermia was used after cardiac arrest. (Ann Surg 1958;148:462; http://1.usa.gov/1BvriDi.) This is an absolutely fascinating report of patients who suffered cardiac arrest, received open chest cardiac massage to achieve return of spontaneous circulation, and were treated with hypothermia, with some early success.
Later, in 1959, Williams and Spencer, now joined by Benson and Yates, published "Use of Hypothermia after Cardiac Arrest." (Anesth Analg 1959;38:423.) Two of 27 patients failed to achieve ROSC, and six had no coma and were excluded from analysis. Twelve of the remaining 19 received hypothermia; seven did not. One of the seven in the untreated group survived, and six of the 12 in the hypothermia group survived (14% vs 50%). They treated the patients at 30-32°C for 34 to 84 hours, stopping based on the patient's response. By 1959, induced hypothermia was also used for cardiac surgery and by neurosurgeons for head and spinal cord injuries.
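The survival fractions quoted above check out arithmetically (1 of 7 untreated vs. 6 of 12 cooled):

```python
# Survival fractions from the 1959 series quoted above.
untreated_survival = 1 / 7       # one of seven untreated patients survived
hypothermia_survival = 6 / 12    # six of 12 hypothermia patients survived
print(f"{untreated_survival:.0%} vs {hypothermia_survival:.0%}")  # 14% vs 50%
```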
Severe complications became apparent as therapeutic hypothermia became more widespread. Cardiac irritability and ventricular fibrillation when patients' temperatures were below 30°C became problematic because it was nearly impossible to precisely meet a target temperature with the available equipment. There were also higher rates of infection (most significantly, decreased clearance of staphylococcal bacteremia), as well as vasospasm, increased plasma viscosity, hyperglycemia, cardiac dysfunction, and coagulopathies. These complications made its use risky and difficult to manage without intensive care, and the technique was essentially abandoned.
The few human papers on therapeutic hypothermia published after this time reinforced hypothermia’s problems. Bohn, et al. published a case series in 1986 of 24 children who remained in persistent coma after being resuscitated from drowning. (Crit Care Med 1986;14:529.) They used therapeutic hypothermia to treat elevated intracranial pressure; those treated had a much higher rate of neutropenia and septicemia than the control group.
Cardiac arrest care, however, advanced during this period. Zoll published a study on countershock for ventricular fibrillation (N Engl J Med 1956;254:727), and Kouwenhoven (JAMA 1960;173:1064) and Safar (Anesth Analg 1961;40:609) introduced closed chest massage in 1960. The 1970s and 1980s saw the development of cardiac arrest care systems, including field defibrillators by 1979. "Accidental Death and Disability: The Neglected Disease of Modern Society" in 1966 by the National Academy of Sciences spurred the development of EMS systems across the country. (http://bit.ly/1BC5XZG.) Richard Cummins, MD; Joseph Ornato, MD; William Thies, PhD; and Paul Pepe, MD published their "chain of survival" concept in 1991. (Circulation 1991;83:1832.) CPR and defibrillators were now available outside the hospital and in the field, and widespread early resuscitation from cardiac arrest became a reality.
Disappointment set in again, though. Patients continued to do poorly despite improvements in cardiac arrest care. Only a tiny fraction survived, and those who did suffered profound neurologic sequelae. Becker, et al. summed up the frustration with the paper, "Outcome of CPR in a Large Metropolitan Area — Where Are the Survivors?" in the Annals of Emergency Medicine. (1991;20:355.)
Sterz, et al. published a 1991 animal study showing that mild hypothermia initiated immediately after ROSC improved neurologic outcomes. (Crit Care Med 1991;19:379.) The temperature target of 34-36°C was unique, several degrees warmer than much of the earlier literature. More than that, they maintained the target temperature for only one hour post-resuscitation and then let the temperature climb passively. Therapeutic hypothermia was back in fashion.
Bernard, et al. from Monash Medical Centre in Australia published the paper, “Clinical Trial of Induced Hypothermia in Comatose Survivors of Out-of-Hospital Cardiac Arrest,” in Annals of Emergency Medicine in 1997. The pilot study prospectively followed 22 comatose resuscitated patients treated at 33°C for 12 hours, and compared them with 22 retrospectively matched controls. Mortality was 10 vs 17 patients, and a good neurologic outcome (CPC 1 or 2) was achieved in 11 vs 3.
Bernard and his colleagues expanded on their work in the 2002 New England Journal of Medicine article, “Treatment of Comatose Survivors of Out-of-Hospital Cardiac Arrest with Induced Hypothermia.” (N Engl J Med 2002;346:557.) They prospectively randomized 77 patients with resuscitated VF and persistent coma, excluding pregnant patients and those with persistent cardiogenic shock despite epinephrine. All received lidocaine. MAP was maintained between 90-100 mm Hg, pO2 above 100 mm Hg, and pCO2 at 40 mm Hg. Patients were cooled to 33°C for 12 hours before being allowed to rewarm passively. Mortality was similar in both groups, but 21 patients treated with hypothermia achieved CPC 1 or 2 compared with 9 controls.
Another study, “Mild Therapeutic Hypothermia to Improve the Neurologic Outcome after Cardiac Arrest,” enrolled 237 patients with resuscitated ventricular fibrillation arrest treated with therapeutic hypothermia to 32-34°C for 24 hours and compared them with normothermic patients. (N Engl J Med 2002;346:549.) Median time to starting cooling was 105 minutes post-arrest, but patients did not reach target temperature until nearly eight hours after the arrest. There was a 14 percent absolute risk reduction in mortality and a 16 percent improvement in CPC 1 and 2 outcomes for patients treated with hypothermia.
Hypothermia was then endorsed by the American Heart Association in 2002 and by the Advanced Life Support Task Force of the International Liaison Committee on Resuscitation in 2003. Its use in post-resuscitation care spread widely and quickly as a standard of care, but new controversies developed as the practice became more widespread. Treatment was expanded to resuscitated rhythms other than ventricular fibrillation and ventricular tachycardia under the assumption that brain ischemia from any source would benefit from hypothermia. Guidelines had adopted 32-34°C, but the optimal temperature target was uncertain.
Work at the Safar Center for Resuscitation Research in Pittsburgh led to the paper by Logue and Callaway, “Comparison of the Effects of Hypothermia at 33°C or 35°C after Cardiac Arrest in Rats.” (Acad Emerg Med 2007;14:293.) They demonstrated that minimal hypothermia at 35°C was as good as cooling to 33°C in mortality and neurologic outcomes, and that both were better than normothermia.
Zeiner, et al. published data showing that fever in the post-cardiac arrest period was associated with adverse neurologic outcomes, leading researchers to postulate that the neurologic benefit had little to do with hypothermia itself and was more the result of preventing hyperthermia. (Arch Intern Med 2001;161:2007.)
Then came a paper by Nielsen, et al. that enrolled 950 patients in 36 centers who remained unconscious after being resuscitated from out-of-hospital cardiac arrest. (N Engl J Med 2013;369:2197.) Any initial rhythm was allowed, in line with current practice. Patients were randomized to temperatures of 33°C or 36°C for 28 hours and then rewarmed, with normothermia maintained until 72 hours post-arrest. There was a nonsignificant two percent mortality difference at the end of the treatment protocol and at 180 days. A CPC score of 3-5 occurred in 54% vs 52% of patients, which was not statistically significant.
Critics, however, noted that patients in this study had short no-flow times, so the results may not be valid for a wider population. The study was designed as a non-inferiority study powered to find an 11% absolute risk reduction between 36°C and 33°C, which would be an NNT of 9. That is asking a lot of any cardiac arrest treatment besides chest compressions and defibrillation.
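For readers who want to check that number, the NNT is simply the reciprocal of the absolute risk reduction. A minimal sketch using the trial figures discussed above (the rounding convention is mine; some authors always round NNT up):

```python
def nnt(absolute_risk_reduction: float) -> float:
    """Number needed to treat: the reciprocal of the absolute risk reduction."""
    if absolute_risk_reduction <= 0:
        raise ValueError("NNT is only meaningful for a positive risk reduction")
    return 1.0 / absolute_risk_reduction

# The Nielsen trial was powered to detect an 11% absolute risk reduction:
print(round(nnt(0.11)))  # -> 9
# For comparison, the 14% mortality reduction reported in 2002 corresponds to:
print(round(nnt(0.14)))  # -> 7
```

The smaller the risk reduction a trial is designed to detect, the larger the NNT it implicitly demands, which is the critics' point.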
The adoption of higher temperatures is widespread but not complete. The critical care effort involved in maintaining a patient at 33°C is greater than at 36°C, though not dramatically so. The risks and complications are higher but, again, not significantly. Hypothermia seems to convey a mortality and neurologic benefit compared with normothermia, but preventing hyperthermia may be the greatest benefit.
Read more about therapeutic hypothermia in our archive.
Monday, March 02, 2015
The vagaries of any list or group are that invariably some members are far more popular than others. Hyperkalemia gets all of the attention when we talk about the cardiac effects of electrolyte abnormalities. It is certainly important (read: life-threatening), and we have multiple life-saving treatments that lend themselves well to testing.
We are well versed in hyperkalemia, though one of its treatments has become controversial (I am looking at you, kayexalate). But other electrolyte abnormalities beyond hyperkalemia also deserve attention.
Hypokalemia: The potassium level in the body is closely regulated, but hypokalemia can still develop by several mechanisms, including gastrointestinal loss, renal potassium wasting, or shifting of potassium into the intracellular space with alkalosis. Characteristic ECG changes are associated with hypokalemia, and they become more prominent as the hypokalemia worsens. The T waves flatten and may disappear. A U wave may develop, seen as a small deflection after the T wave and in the same direction. Its magnitude is usually <0.5 mm, but it is inversely proportional to the heart rate, becoming larger as the rate slows. It is most prominent in V2 and V3. It is important not to mistake the QU interval for a prolonged QT interval.
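Distinguishing the QU interval from a truly prolonged QT matters because the QT is rate-corrected before being called abnormal. A minimal sketch of Bazett's correction, the most commonly used formula (the interval values below are illustrative, not taken from a real tracing):

```python
import math

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Bazett's formula: QTc = QT / sqrt(RR), with the RR interval in seconds."""
    rr_sec = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_sec)

# At 60 bpm the RR interval is 1 second, so the measured QT is unchanged:
print(round(qtc_bazett(440, 60)))   # -> 440
# The same 440-ms measurement at 100 bpm corrects to a clearly prolonged QTc:
print(round(qtc_bazett(440, 100)))  # -> 568
```

If the U wave is inadvertently included in the measurement, the "QT" being corrected is really the QU interval, and the resulting QTc overestimates the true value.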
ECG of patient with hypokalemia and hypomagnesemia.
The myocardium is very sensitive to hypokalemia, with its largest effect being inhibition of the delayed rectifier potassium channels (IKr), reducing the outward potassium current. Even though the cardiac action potential is prolonged, the refractory period remains unchanged, which dramatically increases the chance of afterdepolarizations that can lead to ventricular arrhythmias. These effects may be exacerbated by ischemia or digoxin toxicity. Hypokalemia also increases hyperpolarization in the AV node, which enhances the suppressive effect of acetylcholine on AV conduction (a negative dromotropic effect).
Hypomagnesemia: Hypomagnesemia seldom occurs by itself. Magnesium is an important cofactor for the ATPase pumps in the renal tubule responsible for potassium reabsorption, which is why hypomagnesemia is almost invariably associated with renal potassium loss and hypokalemia, and it is frequently accompanied by hypocalcemia as well. Ninety-five percent of body magnesium is intracellular, so serum magnesium levels are a poor indicator of total body magnesium and usually reflect only acute events. The multiple concurrent electrolyte abnormalities make it difficult to document isolated ECG effects of low magnesium. Nevertheless, the most common ECG effects are global T wave inversions and a prolonged QT interval.
The most significant effects of hypomagnesemia are atrial and ventricular ectopy and dysrhythmias. Magnesium affects the release of calcium from the sarcoplasmic reticulum by blocking L-type calcium channels: calcium influx is blocked when magnesium levels are high, and, conversely, additional calcium is released into the myocyte cytoplasm at low magnesium levels. This calcium plays an important role in excitation-contraction coupling of the myocardial cell, and its excess can precipitate all manner of arrhythmias. It is difficult to tell, however, whether this is a specific effect of the hypomagnesemia or of the concurrent hypokalemia.
Whatever the origin, torsades de pointes and polymorphic ventricular tachycardia in the setting of QT interval prolongation respond to magnesium infusion. Magnesium suppresses early afterdepolarizations (EADs) by blocking calcium influx, reducing the amplitude of the EAD to subthreshold levels.
Hypercalcemia: Half of serum calcium is bound to proteins (mostly albumin); the remaining unbound (ionized) calcium produces the physiologic effects and ECG changes. The amount of calcium bound to protein varies with acid-base balance: as the blood becomes more alkalemic, more calcium binds to protein and less remains ionized. Calcium channels act mainly during phase 2 of the myocardial action potential.
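Because roughly half of total calcium is protein-bound, a low albumin lowers the measured total without changing the ionized fraction. One widely used bedside approximation, which adds 0.8 mg/dL for every 1 g/dL of albumin below 4.0, can be sketched as follows (a rough estimate only; a directly measured ionized calcium is more reliable, particularly with acid-base disturbances):

```python
def corrected_calcium(total_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected total calcium (mg/dL), a common bedside approximation:
    add 0.8 mg/dL for each 1 g/dL of albumin below 4.0 g/dL."""
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

# Illustrative values: a total calcium of 8.0 mg/dL with an albumin of 2.0 g/dL
print(corrected_calcium(8.0, 2.0))  # -> 9.6
```

In that illustration, a "low" total calcium of 8.0 mg/dL corrects to a normal value once the hypoalbuminemia is accounted for.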
The QT interval shortens as hypercalcemia worsens. In severe cases, the ST segment may be shortened so much that it appears absent, with the T wave starting almost at the end of the QRS complex. An uncommon finding of hypercalcemia is ST-segment elevation mimicking acute myocardial infarction: because the QT interval is shortened, the initial upslope of the T wave, starting immediately after the QRS complex, mimics the hyperacute phase of acute myocardial infarction.
Severe hypercalcemia can cause the appearance of Osborn waves, which are positive deflections at the junction between the QRS complex and the ST segment. The excess calcium can also cause ventricular ectopy and irritability, which can lead to malignant arrhythmias and cardiac arrest.
ECG of patient with hypercalcemia and hypokalemia.
Hypocalcemia: Hypocalcemia affects mainly the L-type calcium channel, and prolongs phase 2 of the cardiac action potential. This can be seen in the ECG as a prolongation of the ST-segment. Calcium channels close at the end of phase 2. The T wave from phase 3 repolarization is mostly related to potassium channel activity, and it is not significantly affected by calcium levels. ST-elevation mimicking myocardial infarction can be seen in cases of severe hypocalcemia.
ECG of patient with hypocalcemia and hypokalemia. U waves are demonstrated, as are a prolonged ST segment and QT interval.
Hypocalcemia, as with the other electrolyte abnormalities, is often associated with additional derangements that obscure its ECG findings. Renal failure, for example, often produces both hyperkalemia and hypocalcemia. Treating these patients with calcium replacement can be especially beneficial because it not only reduces the myocardial irritability of hyperkalemia but also treats the hypocalcemia.