In his novel Anna Karenina, Leo Tolstoy wrote, “Happy families are all alike; every unhappy family is unhappy in its own way.” This came to be known as the “Anna Karenina principle.”
Aristotle states the same principle in the Nicomachean Ethics: “It is possible to fail in many ways, while to succeed is possible only in one way, for which reason also one is easy and the other difficult; to miss the mark easy, to hit it difficult.”
It occurred to me that the same principle might apply to academic cancer centers, or at least to major endeavors undertaken by those centers. That would be useful information. I have worked in, around, and for academic cancer centers for 50 years, and I find them endlessly fascinating and frustrating. Trying to understand how and why they work, succeed, and fail has been a hobby of mine throughout my senior years (after I had accumulated a body of experience and battle scars to work from).
Are all “successful” cancer centers alike, while those that “fail” do so in their own way?
There is no easy answer, of course, but let's give it a try.
A Little Background
First, a little background from the NCI website, which had some surprises in it. Around 1960 a growing and pro-active NIH established General Clinical Research Center Grants. In 1961 the NCI established several cancer research and research infrastructure grants that proved to be the first steps to cancer centers, as we know them today.
“By 1963, there was a fairly well–defined cancer centers program of approximately $6 million at 12 institutions. The activities of these centers were diverse, including research in [clinical disciplines] as well as basic science. Except as a category within the NCI budget, little effort was made to define or organize the cancer centers until 1968 when the National Cancer Advisory Board provided guidelines….The Cancer Centers Branch was formally conceived and established as a result of the National Cancer Act of 1971; the Act gave a broad mandate to the centers that includes research, excellence in patient care, training and education, demonstration and technologies, and cancer control. The initial model for a cancer center was drawn from several of the older, free-standing institutions: Roswell Park, Memorial Sloan-Kettering, M.D. Anderson, and Fox Chase.”
Two things strike me in this history. First, the National Cancer Act in 1971 established a “broad mandate to the centers that includes research, excellence in patient care, training and education.” As the Cancer Center Support Grant (CCSG) has evolved, however, it has no requirement for excellence in patient care and no mechanism to measure it, and training and education are not included in the evaluation of the cancer center; in fact, training grants are usually discounted, directly or indirectly, when a tally of cancer funding is made.
The second is that the initial model was based on free-standing cancer centers, but most NCI cancer centers today are not free-standing. Both of these issues from the Act have come to haunt today's cancer centers, especially those that are university-based.
The national and NCI reputations of the top university-based cancer centers are based on the quality and quantity of meaningful research and the skills and consistency of the leadership. But with rare exceptions, their clinical reputation is based on the opinions of local and referring physicians and patients.
The CCSG does not address clinical care in any way. This means that a university-based cancer center is, in effect, a chimera: it is expected to meet the stringent requirements for receiving a CCSG from the NCI in order to be in that elite cancer research club, yet it must do so within a system that has its own clinical and educational missions and in which the department chairs hold most of the power.
An outstanding clinical cancer program is the most difficult function to attain and sustain in these centers (perhaps in any environment). Several areas of tension can lead to instability in this model: decisions on recruitment needs, such as balancing physician-scientists with a small clinical role against those who are mainly clinicians; management of an essentially collaborative care model with multiple departments and divisions to deal with; hospital interests that sometimes conflict with academic or clinical interests; and the fact that clinical activities are a dominant source of medical center revenues, which can make the distribution of any surplus contentious. Also, the turnover of vice presidents, deans, and chairs can rapidly reverse the fortunes of these centers.
The financial stability and growth potential of university-based cancer centers is often problematic because these centers typically have only three sources of discretionary revenue (“institutional funds”)—philanthropy, contributions from clinical cancer revenue surpluses, and a share of indirect cost revenues from cancer grants. The amounts from each source vary widely among centers.
One must keep in mind that, with rare exceptions, university-based cancer center directors do not have the power of faculty appointment, so control of space and money is their only source of tangible power for fulfilling the mission of the center. Of course, the current academic and political environment threatens the financial stability of centers as well.
Does the Anna Karenina principle apply to academic cancer centers? Deciding whether an institution is successful depends on the factors used to judge it. One may choose the cliché of the “three-legged stool”—education, research, and clinical care. One could measure research success by the number of grants and grant dollars and by publications in elite journals, but impact on moving a scientific area forward would be better. That, however, cannot be measured precisely, and one would need to rely on the opinion of eminent scientists.
Measuring the quality of education and clinical care is even more difficult, and oftentimes the eminence and stature of an organization is used as a surrogate.
So we are left with numerical measures and “expert opinion” to assess the success of cancer centers: grant dollars (especially from the NCI); papers published in elite journals; U.S. News and World Report ratings, which rely a great deal on the “expert opinion” of deans and others deemed worthy; volume of patients on clinical trials, irrespective of whether the trials are meaningful; the number of fellows who go into academic medicine; or the stature of the institution or the director—e.g., “cancer care at Hopkins must be good because Hopkins is a great institution.”
Useful, But Miss the Mark
These measures are useful but miss the mark because of the wide-ranging sizes and environments of the centers; a center in a state with only two million inhabitants spread over a broad geographic area, as in Utah or New Mexico, has underappreciated difficulties to overcome.
So we are left with the measures used by the NCI in assessing a cancer center. These are decent measures of the organization of cancer research and its productivity, but the quality of cancer care and the community impact of the cancer center are not measured. The widely accepted standard of a cancer center's “success” is the peer-review score and funding received from the review of its CCSG application.
So back to the Anna Karenina principle: What does a happy family mean to Tolstoy? Is it freedom from strife, poverty, or disease? Or is it mainly the presence of a mutually loving environment? Although an oversimplification, I believe it is most likely the latter.
And what is a successful cancer center? I believe it is above all one that gives excellent care to patients and is engaged in competitive research and training, all of which are appropriate for the environment in which it lives.
But these are very hard (or impossible) to measure and compare in a generally acceptable way. So I conclude that in 2011 the Anna Karenina principle does not apply to the evaluation of cancer centers.
Oh well, “expert opinion,” with all its warts and biases, doesn't look like such a bad way to go after all—especially from one who considers himself an expert.