The Spine Blog
Friday, August 22, 2014

Treatment decision-making can be challenging in some cases of cervical spondylotic myelopathy (CSM). The spine community agrees that surgery is indicated in the case of a healthy, middle-aged patient with clinically significant myelopathy. However, there are scant data to guide treatment in patients with mild signs and symptoms, and the natural history of mild myelopathy is unknown. Choosing treatment for the frail, elderly patient with severe disease is also challenging, as the benefits may not justify the substantial risks of extensive surgery. Given these difficulties in deciding whether to perform surgery for myelopathy, efforts have been made to determine if demographic, clinical, or radiographic characteristics can predict outcomes after surgery. In the August 15 issue, Dr. Fehlings and his colleagues from Toronto analyzed the associations between MRI findings and baseline and post-operative disease severity in 134 CSM patients. The transverse area (TA) of the spinal cord at the level of maximum compression was measured, and the presence of intramedullary T2 hyperintensity and/or T1 hypointensity was determined. These findings were correlated with physical exam findings, mJOA scores, Nurick grade, SF-36 scores, and the timed walking test. The investigators found moderate correlations between TA and physical exam findings, Nurick grade, and mJOA scores, both pre- and post-operatively. Correlations between intramedullary cord signal changes and disease characteristics were much weaker and generally not statistically significant. Interestingly, neither TA nor cord signal changes were associated with change scores, indicating that patients with severe cord compression and signal changes could still have significant improvement with surgery.


This study is helpful in that it indicates that the degree of cord compression is fairly well-correlated with the clinical severity of myelopathy. It should serve as a reminder to the spine physician to be suspicious of the diagnosis of CSM in patients with pronounced findings but only mild cord compression on MRI. It does not shed much light on the prognostic role of cord signal change, as the study found minimal correlation between cord signal change and disease severity or degree of improvement. Such a result could reflect a true lack of correlation in the study population or represent a Type II error due to lack of power. Some prior studies have suggested that cord signal changes were associated with less post-operative improvement, though that finding has not been consistent. While studies such as this one provide some prognostic information to spine surgeons and patients about what to expect post-operatively, the lack of a non-operative cohort limits how much it can assist with surgical decision-making. For patients struggling with the decision about whether or not to undergo surgery, prognostic models need to be able to predict both surgical and non-operative outcomes in order to determine the likely benefit of surgery. Studies that could provide these data are difficult to perform, as most spine surgeons feel patients with a diagnosis of CSM should be treated surgically, so there is a lack of equipoise to perform a randomized clinical trial. The best option going forward is probably an observational study in which patients who are felt to be surgical candidates yet decline surgery are followed to get a better sense of the natural history of CSM using modern outcome measures. Until such a study is performed, patients and spine surgeons are left with predictive data from only one side of the treatment decision-making equation.


Please read Dr. Fehlings’s article on this topic in the August 15 issue. Does this change how you see the prognostic role of MRI findings in CSM? Let us know by leaving a comment on The Spine Blog.

Adam Pearson, MD, MS

Associate Web Editor

Friday, August 15, 2014

As the population ages, the number of dens fractures in the elderly has increased, placing a large burden on the healthcare system. These patients frequently have multiple comorbidities, leading to high rates of complications and mortality whether they are treated surgically or non-operatively. Additionally, the best treatment for the geriatric odontoid fracture remains controversial, with a recent, large observational study suggesting better patient-reported outcome measures and lower mortality with surgery.1 However, this study was not randomized, and there were likely unmeasured differences between the surgical and non-operative groups that could have confounded the results. Given the increasing number of these fractures and the high cost to society associated with their treatment, Dr. Daniels and his colleagues from Providence performed a retrospective administrative database analysis using the National Inpatient Sample from 2000-2010. They found that the overall incidence more than doubled over the decade, with the greatest increase in patients over age 84 (a three-fold increase). The use of a halo-vest decreased from 25% to 10%, while the rate of surgical treatment increased from 13% to 16%, changes consistent with recommendations in the literature. Not surprisingly, the comorbidity burden increased, which was likely responsible for the non-significant increase in inpatient mortality (4.9% to 6.7%). The inpatient mortality rate was lower for patients undergoing surgery compared to those treated non-operatively (3.1% vs. 7.5%), though the surgery patients were healthier. Inpatient hospital charges were approximately twice as high for surgical patients compared to patients not undergoing surgery.


The findings of this paper suggest that we might have an epidemic of C2 fractures on our hands. While the fracture incidence could plausibly be increasing due to a more active elderly population or the prolonged life expectancy of less healthy patients at greater risk for C2 fracture, the increased incidence even in younger age groups suggests that other factors may also be contributing. The authors point out that the rate of advanced imaging has likely increased over the past decade, and the likelihood of diagnosis is much greater with a CT scan than with plain radiographs. Additionally, coders may have become more likely to code the specific fracture level over time, which could also give the appearance of an increased incidence. Even if the incidence were not increasing, the absolute number of fractures would be increasing simply due to the increasing number of elderly patients in the population. The authors indicate that hospital charges for C2 fractures exceeded $1.5 billion in 2010, so this is clearly a problem that needs further study. Given that prevention will likely be difficult, determining the most cost-effective treatment for dens fractures is essential. While surgery may lead to better outcomes and lower mortality in certain subgroups, it may be less beneficial or even harmful in others.2 Future studies should focus on determining the most cost-effective treatment for patients based on their specific characteristics.


Please read Dr. Daniels’s article on this topic in the August 15 issue. Does it change how you look at C2 fractures in the elderly? Let us know by leaving a comment on The Spine Blog.

Adam Pearson, MD, MS

Associate Web Editor




1. Vaccaro AR, Kepler CK, Kopjar B, et al. Functional and quality-of-life outcomes in geriatric patients with type-II dens fracture. J Bone Joint Surg Am 2013;95:729-35.

2. Schoenfeld AJ, Bono CM, Reichmann WM, et al. Type II odontoid fractures of the cervical spine: do treatment type and medical comorbidities affect mortality in elderly patients? Spine (Phila Pa 1976) 2011;36:879-85.



Friday, August 08, 2014

Incidental durotomy is a common complication in lumbar spine surgery and can be seen in cervical spine surgery as well. It has been studied extensively, though questions remain about the best way to manage dural tears. Because durotomy is typically treated with a period of bedrest, it is well known to increase length of stay and costs to the healthcare system. As such, Dr. Singh and his colleagues from Chicago performed a cost analysis using the National Inpatient Sample (NIS) database to compare costs, length of stay, and complications associated with incidental durotomy in cervical and lumbar spine surgery. They analyzed over 275,000 cases captured in the database and reported a durotomy rate of 0.4% for cervical surgery and 2.9% for lumbar surgery. After controlling for demographic characteristics, comorbidity burden, and hospital factors, they determined that incidental durotomy increased hospital costs by $7,638 for cervical surgery and $2,412 for lumbar surgery. Durotomy increased length of stay by 1.8 days after cervical surgery and 1.3 days after lumbar surgery. The authors also reported increased rates of other complications following durotomy, including hematoma, neurological injury, DVT, PE, ileus, and UTI.


The results of this study come as no surprise, but the question that emerges is how much the durotomy actually contributes to the observed differences. In this study and prior studies on the topic, durotomies were more likely to occur in patients undergoing more extensive, more complex surgery, and these patients were older with a greater comorbidity burden. While the authors attempted to control for some of these factors in their analysis, the number of covariates that can be gleaned from billing data is quite limited, and there are clearly many unmeasured confounders that are likely contributing to the reported differences in addition to the incidental durotomy. Another major limitation in studying complications in the NIS database is that they are frequently not recorded in the billing data, and this is reflected in the very low rates of incidental durotomy in this study. The lumbar decompression group had a durotomy rate of 3.5%, markedly lower than the 9% reported in the SPORT spinal stenosis study. It is highly likely that many of the durotomies were not coded, which limits the conclusions that can be drawn from the study. Similarly, while bedrest following durotomy could increase the rates of complications such as DVT, PE, and UTI, it is also possible that hospitals that more frequently code durotomy also code other complications more frequently. This study serves as a reminder that durotomy is not a completely benign complication and increases costs to the healthcare system. Given that durotomies are a part of spine surgery that cannot be eliminated, future efforts should focus on the most cost-effective way to treat them. Studies should determine if the use of patches and sealants is cost-effective and if such technology can decrease the need for bedrest. There is wide variation in how incidental durotomies are treated, and a high-quality multicenter study evaluating different approaches might allow for the development of a widely accepted protocol that is both safe and cost-effective.


Please read Dr. Singh’s article on this topic in the August 1 issue. Does this article change how you view incidental durotomies? Let us know by leaving a comment on The Spine Blog.

Adam Pearson, MD, MS

Associate Web Editor

Friday, August 01, 2014

Spine surgeons have long believed that fusion likely accelerates degeneration at adjacent levels, though the data supporting this belief have been limited. In general, long-term RCTs comparing fusion to non-operative treatment have been underpowered to evaluate adjacent segment disease (ASD) in the long term, so the question has remained unanswered. In order to overcome the reduced power related to attrition, Dr. Mannion and her colleagues combined data from four major RCTs comparing fusion to non-operative treatment for low back pain and obtained radiographs and patient-reported outcome measures at a mean follow-up of over 13 years. While just over 50% of patients were lost to follow-up, over 350 patients were included in the analysis, making this the largest dataset ever collected to look at long-term ASD after lumbar fusion. They found that the fusion group had significantly greater loss of disk height at the adjacent level and two segments cranial to the fused level. However, there was no correlation between disk height loss and clinical outcome measures (ODI and low back pain scale). This led the authors to conclude that while fusion had at least some role in accelerating ASD, this was not symptomatic, at least out to 13 years.


The authors should be congratulated on working together to create such a large dataset with long-term follow-up in an attempt to answer the age-old question about fusion and ASD. The paper does provide rather compelling evidence that fusion accelerates ASD, but that development of ASD has minimal effect on clinical outcomes. The most concerning aspect of this study is the large loss to follow-up, though nearly 50% follow-up over a decade from enrollment is actually quite good. There were also a substantial number of crossovers from non-operative treatment to surgery. While these factors could cause selection and attrition bias, there was no indication that the patients who were lost to follow-up or crossed over were much different from those who did not. One could also suggest that disk height loss may not be the best marker for ASD and that MRI findings and/or dynamic radiographs would be better. Despite these minor shortcomings, this paper represents the best data on this topic in the lumbar spine.


Please read Dr. Mannion’s article in the August 1 issue. Does this paper change how you think about ASD after lumbar fusion? Let us know by leaving a comment on The Spine Blog.

Adam Pearson, MD, MS

Associate Web Editor

Saturday, July 26, 2014

Any reader of the spine literature has recently come across an increasing number of studies based on large administrative databases. Given the growing popularity of this study design, which was much less common in the spine literature in the past, Drs. Yoshihara and Yoneoka published a journal club article discussing its strengths and limitations. Before considering the pros and cons of this research approach, it is worth considering why so many administrative database studies are being performed. For one, computing and statistical methods continue to improve, and “big data” are being used more frequently in many disciplines, medicine included. These large databases capture huge numbers of patients, orders of magnitude more than can be followed in a traditional prospective clinical trial. This allows for the analysis of relatively rare events and the study of subgroups that could never be evaluated in clinical trials, which would be markedly underpowered for such analyses. Additionally, clinical trials are becoming increasingly difficult to perform due to regulatory issues, demands for more rigorous study designs, and a lack of grant funding for such research, especially for spine-related topics. Finally, researchers have discovered that these studies are relatively easy to perform once a database has been obtained and an analyst has learned their way around it. While querying the databases is relatively easy, formulating questions that can actually be answered with administrative databases and using appropriate analytic methodology can be quite difficult.


Administrative databases are very useful for studying treatment trends over time, evaluating regional variation, and calculating costs to payers. They also allow for the study of rare events, such as death following spine surgery, and can include enough patients to evaluate the risk factors for these uncommon complications. They also allow for comparison of the rates of certain well-defined complications, such as readmission, reoperation, and death, among different treatment techniques. While these are all worthy research pursuits, one can argue that the most important outcome following spine surgery is patient-reported quality of life. These outcome measures are not included in administrative databases. This is the major limitation of spine surgery database studies, and it precludes comparing efficacy among different treatments. Another limitation of many commonly used databases in the spine literature is that they include only inpatient data from a single hospital admission (e.g. the National Inpatient Sample) or are limited to a 30-day window following the date of surgery (e.g. the National Surgical Quality Improvement Program). Given that many of the most concerning complications, such as infection, readmission, and reoperation, frequently occur outside these windows, studies analyzing complication rates with these databases likely underestimate the true rate of complications. Researchers need to recognize these limitations while designing their studies, and they should limit their questions to those that can reasonably be answered with the available data. Additionally, they need to take into account that patients selected for different procedures were likely different in ways not captured in billing data (e.g. symptoms and physical exam findings, radiographic characteristics, work status), and these unrecorded differences cannot be controlled for statistically. Journal reviewers and editors also need to be more stringent in evaluating these articles and should not accept an article for publication simply because it included 100,000 patients. Like most powerful tools, large administrative databases can be of great benefit, yet, if misused, can cause more harm than good.

Please read Dr. Yoshihara’s article on this topic in the July 15 issue. Does this article change how you view large administrative database studies? Let us know by leaving a comment on The Spine Blog.

Adam Pearson, MD, MS

Associate Web Editor

About the Blog

Spine Journal
This Blog provides a forum for discussion of high-impact articles published in Spine, including the biannual publication of "Evidence-Based Recommendations for Spine Surgery." Website users can use this forum to discuss how the articles have affected their practice and query the authors about their findings and recommendations.