Musings of a Cancer Doctor
Wide-ranging views and perspective from George W. Sledge, Jr., MD
Thursday, July 31, 2014

Recently, in preparation for a lecture I gave on writing scientific papers, I had the opportunity to re-read Watson and Crick’s original 1953 Nature paper, “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid.” It is not just the most important scientific paper in biology written in the last century; it is also one of the most beautifully written: concise, clear, jargon-free, and with the greatest punch line of any paper in the scientific literature: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” Try and top that for a discussion section.

 

I downloaded the paper from the Nature website, which required me to go to the actual April 25, 1953 issue. My eyes wandered for a few seconds over the table of contents for the issue, and suddenly I had a thought: I wonder what it was like to publish a paper in that issue of Nature if your name wasn’t Watson or Crick? To be remembered, forever in the annals of science, as one of the also-rans.

 

There are many equivalents, both inside and outside science: giving a talk at the Linnean Society in 1858 in the same session where Darwin and Wallace’s first papers on evolution were presented. Or, perhaps, being the back-up act on the 1964 Ed Sullivan Show for the Beatles. A wonderful trivia question, by the way: Las Vegas entertainers Allen & Rossi, and the impressionist Frank Gorshin, in case you want to wow someone at a dinner party.

 

6 Articles, 23 Letters

So who were the also-rans? There were six articles and 23 letters in the issue. Of the six articles, three were about the structure of DNA. In addition to the Watson and Crick article, there was Maurice Wilkins’ “Molecular Structure of Nucleic Acids: Molecular Structure of Deoxypentose Nucleic Acids” and Rosalind Franklin’s “Molecular Configuration in Sodium Thymonucleate.”

Franklin’s paper includes the beautiful X-ray diffraction image that has been a commonplace of biology texts ever since, a ghostly X of calf thymus DNA.

 

The Wilkins and Franklin papers provided important support for the Watson-Crick modeling paper, which is why the two Cambridge researchers had encouraged their King’s College colleagues to submit papers at the same time.

 

The interaction of Watson and Crick with Wilkins and Franklin (both of King’s College, London), and for that matter the fraught relationship of Wilkins and Franklin, are part of the lore of molecular biology. Watson and Crick ran no experiments, relying instead on the work of others, most prominently Rosalind Franklin’s X-ray diffraction studies of DNA, which they correctly interpreted as evidence in favor of a double helix. It was a crucial piece of experimental data, and one that both Wilkins and Franklin had misinterpreted. Watson’s classic memoir, The Double Helix, describes the interactions in picturesque detail, and his description of Franklin in particular struck many then and later as both petty and sexist.

 

The Nobel Prize can be given out to a maximum of three investigators, and in 1962 Wilkins was investigator number three. Franklin had, in the intervening years, developed ovarian cancer, and died at the age of 37, some four years before the prize was awarded. She spent some of her final months with Crick and his wife, receiving chemotherapy. The oncologist in me has always wondered whether the Jewish Franklin carried an Ashkenazi BRCA mutation.

 

Would she have been awardee number three had she lived? This is an unanswerable question, but her premature death doomed her to permanent also-ran status, making her a feminist icon in the process. Could she have come up with the double helix structure on her own, given sufficient time? Perhaps. But her Nature article concludes with a sad, might-have-been-but-wasn’t admission: “Thus our general ideas are not inconsistent with the model proposed by Watson and Crick in the preceding communication.” Not inconsistent with: a grudging admission that she had failed to see what her own data supported.

 

Impact Factor

Editors of scientific journals, and contributors to those journals, scrutinize the impact factor. The impact factor is quite simple: count the citations a journal receives in a given year to the items it published in the previous two years, and divide by the number of citable items (usually articles and reviews) published in those two years. The best journals (Science, Nature, Lancet, New England Journal of Medicine) usually have the highest impact factors. The vast remainder are decidedly mediocre: a sea of journals with an impact factor of 2 or thereabouts. Most published science is unmemorable, rapidly forgotten even by the authors. Citations do not have a Gaussian distribution; instead they follow a power law distribution, with a few 900-pound gorillas followed by a long tail. And seldom has the power law been so severe as in that issue of Nature.
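To make the arithmetic concrete, here is a minimal sketch with invented numbers (a hypothetical journal, not any real title's figures):

```python
# Hypothetical 2014 impact factor for an imaginary journal (made-up counts).
# IF(2014) = citations received in 2014 to items published in 2012-2013,
#            divided by the number of citable items published in 2012-2013.

citations_2014_to_recent_items = 1230   # assumed citation count
citable_items_2012_2013 = 410           # assumed article/review count

impact_factor = citations_2014_to_recent_items / citable_items_2012_2013
print(round(impact_factor, 1))          # 3.0
```

The score, in other words, is nothing more than a ratio; everything interesting lies in what gets counted.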

 

Impact factor is a way of keeping score, though not without its own issues. One problem with it is that it cannot measure the long-term impact of a paper. With 60-plus years of follow-up, however, we can look at the long-term impact of the work published in that April 25, 1953 issue of Nature. Watson and Crick are the huge winners, of course: Google Scholar says their paper has been cited 9866 times, and we hardly need the citation numbers to realize the revolutionary importance of that paper. The Wilkins and Franklin papers clock in at 618 and 833 citations, marking them as important contributions. But what of the others?

 

Let’s begin with the other three articles. They are, in order of presentation, “Refractometry of Living Cells”; “Microsomal Particles of Normal Cow's Milk”; and “Measurement of Wind Currents in the Sea by the Method of Towed Electrodes.” None is particularly remembered today, I think it is safe to say. Google Scholar reports the three articles as having had, respectively, 172, 41, and 15 citations. OK citation numbers, even today, but definitely in the also-ran category. Interestingly, the refractometry paper was cited by a 1991 Science paper on optical coherence tomography that itself has been cited 8400 times.

 

Letters

The letters range from 0 to 235 citations, according to Google Scholar. The most recognized is W.R. Davis’s “An Unusual Form of Carbon”, which has been cited 235 times, most recently in 2010. The story here is interesting. Davis and his colleagues worked at the British Ceramic Research Association in Stoke-on-Trent in England, where they studied the carbon deposited on the brickwork of blast furnaces. They identified what they described as tiny “carbon vermicules,” some as small as 100 angstroms. In retrospect (hence the 235 citations) they were early discoverers of carbon nanotubes, members of the fullerene structural family, now actively studied for their fascinating physicochemical properties. Three researchers shared a Nobel Prize in 1996 for fullerene studies, so one can feel for Davis and his fellow Stoke-on-Trent ceramicists. They were dual also-rans, publishing in the same issue as Watson and Crick, and coming along a few decades too early in the as-yet-unnamed fullerene field.

 

Of the other papers published as letters, what is there to say? I love the title of K.A. Alim’s “Longevity and Production in the Buffalo,” though R. Hadek’s “Mucin Secretion in the Ewe's Oviduct” runs a close second for my affections.

 

But it is easy to understand why such articles are poorly cited and long forgotten, given the relative obscurity of the topics. One of the keys to success in science is choosing the right problem to work on, and mucin secretion in the ewe’s oviduct probably cannot compete with decoding the key to life on earth.

 

Least Cited, But...

The least cited paper (I could not find any citations using Google Scholar) is my favorite: Monica Taylor’s “Murex From the Red Sea”. Taylor wrote from Notre Dame College in Glasgow, where she curated a natural history collection. Sir John Graham Kerr had collected some 300 Murex tribulus shells from the Great Bitter Lake in Egypt. Murex was what would now be called an alien invasive species, introduced following the completion of the Suez Canal, and Sir John (would a 2014 letter to Nature ever refer to a Sir John?) and Monica wondered whether the species had altered its presentation, “relieved of the shackles of environment.” The letter was a plea for specimens from the Red Sea so that the comparison might be made. Wikipedia informs me that Murex tribulus is a large predatory sea snail, with a distribution from the Central Indian Ocean to the Western Pacific Ocean.

 

Did Monica Taylor ever get her Red Sea specimens? Life is full of small mysteries. Taylor herself is a fascinating soul. Born in 1877, the daughter of a science teacher and the cousin of Sir Hugh Taylor, one-time Dean of the Princeton Graduate School, she trained as a teacher prior to becoming a nun. So Monica Taylor was actually Sister Monica, and Sister Monica dearly wanted to become a scientist. But the road was not smooth for a woman, let alone a nun, wishing to be a scientist in the early twentieth century. She was refused permission to attend the University of Glasgow, and was unable to complete an external degree from the University of London due to Notre Dame’s inadequate laboratory facilities.

 

After several thwarted attempts, according to the University of Glasgow website, “She was eventually granted permission to do laboratory work in the Zoology Department of the University of Glasgow, provided she did not attend lectures and was chaperoned by another Sister at all times.” There she impressed Professor Graham Kerr, who encouraged her to pursue an advanced degree, and obtained permission for her to attend classes. After receiving a DSc from the University of Glasgow in 1917, she headed the Science Department at Notre Dame College until her retirement in 1946, all the while conducting significant research in amoebic zoology.

 

In 1953, the year of her Murex letter, she was awarded an honorary doctorate from the University of Glasgow for being "a protozoologist of international distinction." She died in 1968, six years after Watson and Crick got their Nobel Prize. No citations, and no Nobel, but perhaps you will remember this also-ran, a woman of courage and fortitude.

 

Most of us are also-rans, if judged against those carved into science’s Mount Rushmore. Glory would not be glorious if it were common. But maybe we have it wrong if we think the also-rans felt demeaned by their also-ranness. Maybe Dr. Alim or Dr. Hadek or Sister Dr. Taylor enjoyed their brush with greatness. And maybe, just maybe, they were satisfied with lives well lived in service to science and mankind.


Monday, June 23, 2014

I recently read Dave Eggers’ brilliant and frightening novel The Circle, his modern take on Orwell. In 1984 the totalitarian state was the enemy of freedom, a hard tyranny motivated by the state’s desire to crush all opposition. Eggers thinks we are headed for a much softer tyranny, one dominated by do-gooder Silicon Valley business types determined to eliminate—indeed, undermine the justification for—any attempt at privacy. 1984 has come and gone; the world of The Circle careens towards us at high speed.

 

The sphere of privacy continues to shrink. If it even is a sphere anymore; sometimes it seems shriveled to a point that is no longer three dimensional, almost a bodiless mathematical concept. Hardly a day goes by without some new assault, or potential assault, finding its way into the news. Some examples are from the scientific literature, and some from current politics, but the vector is always the same: towards an era where privacy ceases to exist.

 

Much of this is technology-driven. That technology affects privacy rights is nothing new. Previous generations had to deal with wiretapping and listening devices, after all. But this seems both quantitatively and qualitatively different.

 

Here are a few examples I’ve collected over the last year. What impresses is the sheer variety of attacks on privacy, coming from every degree of the compass:

 

·    An article in PNAS demonstrates that your Facebook "likes" could do a pretty good job of defining your sexual orientation (85% predictive ability for male homosexuality), race, and political identity.

 

·    The FBI recently announced that it had deployed drones on American soil four times in recent years. Police departments are considering doing the same in American cities, supplementing the now ubiquitous CCTV cameras.

 

·    A group of Harvard engineers recently created a quarter-sized “fly”, a robotic insect capable of controlled flight (for a first look at the future, see http://www.youtube.com/watch?feature=player_embedded&v=cyjKOJhIiuU). Give it some “eyes”, deploy a few hundred, and then imagine some hovering outside your bedroom window. Flutter, flutter, buzz, buzz. Is that a mosquito I just slapped, or some investigator's eyeballs?

 

And while I could not exist anymore without Google, the company has been the great destroyer of privacy. If you want to know someone's life history, it is probably out there somewhere on a server farm. Try a little experiment: Starting with a colleague's name, see how much personal information you can find in an hour. Next, imagine how much of that information you could have found in an hour a decade or two ago. You would be amazed.

 

Maybe that's a good thing: it depends on who's searching, and why. If you don’t want to date a convicted felon, going online is a good way to avoid that particular mishap. But it is, of course, much more than just that. Every little thing about you can end up out there. When I was a teenager we used to laugh at school administrators who claimed that some minor infraction would “become part of your permanent record.” Not anymore: the record is not only permanent, it is global. In some important way there are no second chances anymore.

 

Google (don't be evil) is responsible for other transgressions. We now know that Google StreetView collected more than pictures of your front door: they went behind the door to collect passwords, email, and medical and financial records, as the company admitted in court. The court fined the company a half-day of its profits, which I suppose tells you how much the courts value your privacy. Google also promised to go and sin no more, which I know will reassure us all. And they will now require employees to take the computer industry equivalent of a HIPAA course. All right, I admit, cruel and unusual punishment.

 

These Internet assaults are layered on top of other privacy intrusions: ubiquitous CCTVs following you down city streets, your cell phone movements describing where you have travelled with great precision, your credit card history demonstrating a financial phenotype, your checkout at the grocery store generating targeted advertising on the back of the receipt, and your web history now the technological equivalent of a trash can, to be sorted through by private investigators digging for dirt. Sometime in the last decade privacy became a fossil word. The expectation of privacy is disappearing from public discourse.

 

About the only data that doesn't want to be free is personal health information, and that only because accessing it is a felony. How long before technology effectively eliminates medical privacy? For the moment, at least, medicine remains Privacy Island. Sort of, anyway: all around us the sharks circle, waiting to strike. Don’t step too far off the shore.

·    An article in Science showed that one could, by merging publicly available “de-identified” data from the 1000 Genomes project with popular online genealogy data, re-identify the source individuals.

·    If a patient accesses a free online health website for information on “cancer” or “herpes” or “depression”, according to Marco Huesch of USC, that information will, on average, be shared with six or seven third-party elements.

 

I used to think that conspiracy theorists were crackpots, with their ideas of the government tracking us via RFID-encoded greenbacks through airport security checkpoints. It turns out that the crackpots were right, though wrong about the purveyor (the TSA? Seriously? The dysfunctional guys who still can't stop people from carrying box cutters through security checkpoints?).

 

It’s much closer to home than that, as I’ve recently discovered. Hospitals are taking a deep dive into something called RTLS, or Real Time Location Services. I found out about this technology (as usual, I am the last to know) through service on a committee overseeing the building of a new hospital. RTLS places RFIDs on every large object (in RTLS-speak, “assets”) as well as on patients and hospital employees. It can then localize all of these to within three feet, via carefully placed receivers scattered throughout the building. Several companies (Versus, CenTrak) have sprung up to commercialize this technology.

 

These devices have real uses. They keep large, expensive pieces of equipment from walking out the door, or at least allow you to track which door they walked through. Recently a hospital in Munster, Indiana admitted the first U.S. patient diagnosed with MERS. Because hospital employees were tracked through Versus RTLS technology, the hospital was able to hand CDC investigators a list of everyone who had come in contact with the patient. Sounds great, right? An epidemiologist’s dream.

 

Perhaps. The cynic in me says that sometime in the near future some hospital administrator will use the technology to identify, and chastise, employees who spend too much time in the bathroom. Administrators just can’t help themselves.

 

RTLS is part of the larger “Internet of Things”, or IoT as it has come to be called. A recent article in The Economist suggests that by 2020 some 26 billion devices will be connected to the Cloud. Consider IoT’s potential. Will I end up with a brilliant toaster, one that knows exactly how long to brown my bread? A mattress that diagnoses sleep apnea? A lawn mower whose handle sends my pulse and blood pressure to my internist? A microwave oven that refuses to zap hot dogs when I go over my fat gram budget for the day? Maybe I’ll be healthier. I’ll certainly be crankier: the soft tyranny of the IoT will drive me off the deep end in a hurry.

 

The bottom line, though, is this: we are entering an era of ubiquitous monitoring, and it is here to stay. I wish I was (just) being paranoid. Here are the words of a recently released government report entitled Big Data: Seizing Opportunities, Preserving Values: “Signals from home WiFi networks reveal how many people are in a room and where they are seated. Power consumption data collected from demand-response systems show when you move about your house. Facial recognition technologies can identify you in pictures online and as soon as you step outside. Always-on wearable technologies with voice and video interfaces and the arrival of whole classes of networked devices will only expand information collection still further. This sea of ubiquitous sensors, each of which has legitimate uses, make the notion of limiting information collection challenging, if not impossible.”

 

Discussions on privacy are frequently couched in terms of crime or terrorism: wouldn't you prefer to give up another bitty little piece of your soul, a loss you would hardly notice, to avoid another 9/11 or even an armed robbery? The security state, adept at guarding or criminalizing the release of information harmful to its reputation, cannot imagine its citizens deserving or even wanting the same protections. One is reminded of Ben Franklin’s prescient statement: “They who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”

 

But government intrusions are becoming almost irrelevant. I doubt the TSA or NSA or FBI or CIA or any other of the alphabet soup of government agencies really cares very much about my Internet history. It is, rather, the Pacific ocean of data being collected, and the certainty that it will be shared with, well, everyone. Remember the words of that government report? “This sea of ubiquitous sensors, each of which has legitimate uses, make the notion of limiting information collection challenging, if not impossible.” Challenging, if not impossible: I’ll opt for impossible.

 

And will we care? I doubt it, so long as the data theft remains quietly in the background, being used for purposes no more nefarious than designing advertising strategy or predicting sales.


Monday, May 26, 2014

Over time I have accumulated numerous defects. When I was four years old I hypothesized that I could balance on top of a large beach ball. I was wrong, and in proof of the null hypothesis received several stitches over my left eyebrow. When I was in fifth grade I broke a finger playing football. Six weeks later the splint came off. I went outside that same afternoon, played football, and broke another finger on the same hand. My GP looked at me with mild disgust and said, "You clown," having used up all his compassion with the first fracture. The following year I assaulted a plate glass window with my left knee, leaving another scar.

 

In high school I spent a day helping my dad put shrubs and trees--a veritable forest of shrubs and trees--into our back yard, after which my back went into spasm. I have never had it imaged, but I am pretty sure I herniated a disc. Every two or three years it reminds me of that day in my teens. Joyce Kilmer obviously never spent an afternoon planting trees, or he would have found them less charming. 

 

I was blessedly free of further physical insults for nearly a decade. In my fellowship I had a varicocele ligation: more scars. Early on as an Assistant Professor I found that I had lattice degeneration, a precursor lesion for retinal detachment. I was told not to worry about it unless I found myself going blind suddenly, and not to play football without a helmet. I haven't gone blind yet, but my football days are over. On a recent exam I was told that I had early cataracts. And, by the way, I do not appreciate being labeled a degenerate by an ophthalmologist.

 

And don't even get me started on my teeth. A bunch of my adult teeth never came in, a condition known as hypodontia. I kept several of my baby teeth well into my 40s before they all eventually fell out, to be replaced over time with dental implants. According to Wikipedia, hypodontia is statistically linked to epithelial ovarian cancer, which I guess is interesting. As to etiology, Wikipedia says “the condition is believed to be associated with genetic or environmental factors during dental development.” Duh. Anyways, some defects you accumulate, some you are born with.

 

I've accumulated a number of benign neoplasms and the like: nevi, warts, a few small colon polyps (resected times 2 per colonoscopy). I think I have an early actinic keratosis, some lichen planus, and a few other nonthreatening-looking skin nodules that I am sure are named after someone or have some Latin name known only to dermatologists, their verbal equivalent of a secret handshake.

 

A couple of winters ago, on Christmas Eve, I ran into a pillar in the foyer of my house: more stitches, this time above my right eyebrow, more scar tissue. Shortly after that I caught my heel on a splinter while climbing up the basement stairs. I hobbled around for a week or two, self-medicating, until I had sufficient scar tissue to regain unhampered mobility. 

 

Of course it's the defects you accumulate internally that you worry about. Once you get past the skin or gastrointestinal mucosa, they are hard to detect. My cholesterol is up there without a statin, and my glucose level has suggested from time to time that I may be prediabetic. Coronary arteries, carotid arteries, pancreas? Which one is lying in wait to do me in? The problem with getting older is that you take increasingly frequent journeys to Comorbidity Island, accumulating defects like some crazy recluse collects cats.

 

Below the tissue level, the medical oncologist in me always worries about the accumulation of genetic defects. Our DNA is under constant assault: one paper I read, looking at a mouse model, estimated somewhere in the range of 1500 to 7000 DNA lesions per hour per cell. Most of them don’t take: we’re fitted out by nature with excellent DNA damage repair mechanisms.

 

Still, it gives you pause. Some of those mutations slip through. Over time they add up. Some combination of mutations in some cells eventually results in cancer, as we all know. But there is also a theory of aging (there are lots of theories of aging) implicating the progressive accumulation of genetic defects in the loss of organ function as we grow older. Take your muscles, for instance. The older you get, the more mitochondrial DNA mutations you accumulate. At some point, muscle fibers lose cytochrome C oxidase activity due to these mutational defects. Your muscles rot.

 

This accumulation of mutational defects is quite organ-specific, at least in mice: Dolle and colleagues, writing in the PNAS over a decade ago, showed that the small intestine accumulated only point mutations, whereas the heart appears to specialize in large genome rearrangements. They spoke of “organ-specific genome deterioration,” a term that does not give me warm and fuzzy feelings.

 

And these mutations start accumulating at a very early point in life. Li and colleagues at McGill studied identical twins in a recent Journal of Medical Genetics paper. Identical twins are that great natural experiment that allows us to ask “just how identical is identical?” Looking at genome-wide SNP variants in 33 pairs of identical twins, they estimated a mutation frequency of 1.2 × 10⁻⁷ mutations per nucleotide. This may not seem like much, but given three billion base pairs in the human genome, they concluded “it is likely that each individual carries approximately over 300 postzygotic mutations in the nuclear genome of white blood cells that occurred in the early development.”
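The back-of-the-envelope version of that conclusion is simple multiplication (rounded numbers, not the authors' exact calculation):

```python
# Rough check of the twins estimate: per-nucleotide rate times genome size.
postzygotic_rate = 1.2e-7   # mutations per nucleotide, the Li et al. estimate
genome_size = 3e9           # roughly three billion base pairs

print(postzygotic_rate * genome_size)   # ~360, consistent with "over 300" per person
```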

 

Our cells are all potentially mutinous traitors, held in check only by the determined efforts of the molecular police force. We carry the seeds of our ultimate demise from birth.

 

Well, that’s all very cheery, you say. We all know that we grow old, and then we die, and the growing old part is only if we are lucky. This isn’t exactly late-breaking news. Applying a molecular mechanism to mortality is interesting, but my hair is still getting grayer by the day. Do something!

 

And something cool they are doing. Villeda and colleagues just published a paper in Nature Medicine looking at cognitive function in old mice. Mice, like humans, get duller as they get older. But give the old mice a transfusion of plasma from young mice, and you reverse age-related cognitive decline. This has a somewhat vampirish sound to it: the old guy rejuvenating after sucking the life force out of some sweet young thing. But apparently Hollywood got it right.

 

Before I start lining up medical student “volunteers” for blood transfusions, can we drill down a bit on what is actually happening? Two new papers in Science suggest that the “young blood” factor responsible for the cognitive improvement is GDF11, a circulating member of the TGF-β family. GDF11 increases neurogenesis and vascular remodeling in the brain. It also reverses cardiac hypertrophy in old mice. Unsurprisingly, the lead investigators are already speaking to venture capitalists to commercialize the findings.

 

It’s always a leap from mouse to man, and certainly a danger to assume that a single molecule will be a sovereign cure for all our ills. But I’ll sign up for the Phase II trial. I could use some neurogenesis right now.

 

Nothing in life is free, of course. The dentate gyrus, part of the hippocampus, generates neurons throughout adult life, and these adult neurons are incorporated into pre-existing neural networks. That’s a good thing. But it comes at a price: a paper just published in Science also shows that adult hippocampal neurogenesis promotes forgetting.

 

So, the dilemma we may face a few years from now: you are starting to lose your memory. You take GDF11 or whatever the very expensive synthetic equivalent is called in 2024. It promotes neurogenesis and you get a new lease on life. But you forget your spouse’s birthday in the process, and maybe some of what makes you “you.” Is the new “you” really you any more?

 

One is reminded of the Ise Grand Shrine in Japan. This beautiful Shinto wooden temple is rebuilt to the same design every 20 years, using trees from the same forest. Is the 2014 temple the same temple as the 1014 temple? Last year represented the 62nd time the temple was rebuilt. Ise is forever new, forever ancient. Maybe we will be too.

 

But I doubt it. We’ll still keep accumulating defects. And, being a drug, GDF11 will do something bad and unexpected. But it should be fun to find out.


Monday, May 12, 2014

In my last post (4/10/14 issue) I reviewed recent advances in non-cancer genomics, particularly the fun stuff involving Neanderthals and human migration. Genomic studies are rapidly filling in important blanks in human macrohistory, blanks for which there are no written records and only the scattered bony remains of long-dead ancestors.
 
In this post I'd like to discuss some microhistory, specifically the history of human cancers. We are writing this history as we speak, with almost every issue of Nature and Science and Cell and their  lesser relatives. This re-write of microhistory, like that of macrohistory, is the glorious outcome of a decade of falling prices for genomic analysis. While we don't know everything yet--when do we ever?--we are accumulating so much new data at such a rapid pace that every current cancer textbook is seriously out of date the moment it is published.

 

The last year has given us both 20,000-foot views of the genome and up-close-and-personal disease-specific stuff.

 

Two excellent papers published last year looked at the “20,000-foot view” of cancer. Writing in Nature, Cyriac Kandoth and colleagues in The Cancer Genome Atlas (TCGA) project looked at point mutations and small insertions/deletions across 12 tumor types (2013;502:333-339), overall finding 127 significantly mutated (i.e., driver) genes; most cancers have two to six drivers on average. Some of these (particularly transcription factors/regulators) tend to be tissue-specific, and some (such as histone modifiers) are relatively more ubiquitous.

 

Some mutations (for instance, NRAS and KRAS) seem mutually exclusive: overall there are 14 mutually exclusive pairs. Mutational pairings are common (148 co-occurring pairs), but some pairings are disease-specific (IDH1 and ATRX in glioblastoma, TBX3 and MLL4 in lung adenocarcinoma). There’s probably some interesting biology going on related to these pairings and non-pairings--matches made in hell, perhaps. Some mutations are tumor-specific (NPM1 and FLT3 in AML), but most are not.
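For the curious, here is a minimal sketch of one common way a gene pair gets flagged as mutually exclusive -- a one-sided Fisher's exact test on a two-by-two table of mutation status. The counts are invented, and this is not necessarily the statistical machinery the TCGA analysis used:

```python
# Screen two genes for mutual exclusivity across a tumor cohort (invented counts).
from scipy.stats import fisher_exact

#                gene B mutant   gene B wild-type
table = [[2,             48],    # gene A mutant
         [60,            390]]   # gene A wild-type

# alternative="less" asks whether co-mutation occurs less often than chance predicts.
odds_ratio, p_value = fisher_exact(table, alternative="less")
print(odds_ratio, p_value)
```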

 

Lawrence and colleagues at the Broad Institute performed a similar analysis across 27 tumor types in an attempt to examine genomic heterogeneity. Their analysis, published in Nature last year (Nature 2013;499:214-218), focused not on driver mutations per se, but on overall somatic mutation frequencies. There are huge differences in mutational frequency, both within and across cancer types: more than three orders of magnitude from the least to the most mutated.

 

There are also significant differences in the types of mutations seen, differences that appear to reflect (in part) tumor etiology. Cervical, bladder, and head and neck cancers, for instance, share Tp*C  mutations that may reflect their viral etiology. GI tumors tend to share frequent transitions at CpG dinucleotides. Etiology, in turn, also affects frequency, with highly mutagenized tumors occurring as a consequence of tobacco, hamburgers, and tanning salons.

 

Analyses such as those of Kandoth and Lawrence have focused (quite naturally) on genes coding for proteins (the exome). But most DNA is non-coding, what we used to call “Junk DNA.” Khurana and colleagues at the Sanger Institute identified some 100 potential non-coding cancer driving genetic variants in breast, prostate, and brain tumors.

 

Khurana’s paper suggests we’ve barely scratched the surface of genomic analysis. I am reminded of Isaac Newton’s fetching phrase (I’ve quoted it in a previous blog, but it appears appropriate to the non-coding DNA story): “I was like a boy playing on the sea-shore, and diverting myself now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” But the important thing is that we have, in recent years, dipped our toes into that great ocean of truth.

 

A whole series of individual cancer genomes are now being rolled out for public display, and the results continue to disentangle cancer-specific biology and (perhaps) suggest new ways of attacking tumors. A recent publication by Ojesina and colleagues in Nature (2014;506:371-375) identified (as one example among several) HER2 somatic mutations (not the standard breast cancer-type amplifications) in six percent of cervical cancers.

 

Will these offer new treatment options for these patients? We will see soon, I suspect. We are seeing a profusion of low-frequency mutations in relatively rare diseases. Running the numbers, it’s hard to imagine performing Phase III trials to establish benefit for all of these. What’s a clinical trialist to do? How should the regulators deal with the tiny Phase II trials that will result? How are patients (and payers) to understand cost and benefit equations arising from those trials?

 

Some of the more fascinating studies reported in the past year explored the clonal evolution of human cancers. Sohrab Shah and colleagues in Vancouver analyzed 104 early triple-negative breast cancers. To quote their Nature article (2012;486:395-399):  “TNBCs vary widely and continuously in their clonal frequencies at the time of diagnosis… At the time of diagnosis these cancers already display a widely varying clonal evolution that mirrors the variation in mutational evolution.”

 

There is a point in Jurassic Park where a seasoned big game hunter, busy pursuing a nasty-looking velociraptor, suddenly finds himself outmaneuvered by the predatory dinosaur’s concealed partner. His last words before being chewed up are “Clever girl.” That’s the way I felt after reading Shah’s paper. Clever girl: TNBC will be a long, hard slog of a disease if we remain stuck in a morass of kinases.

 

Similar to the TNBC paper, one by Landau and colleagues investigated the evolution and impact of subclonal mutations in chronic lymphocytic leukemia. This work, published last year in Cell (2013;152:714-726), was both fascinating and deeply disturbing. Measuring mutations at multiple time-points, they showed that chemotherapy-treated tumors (though not untreated CLL) underwent clonal evolution, picking up new driver mutations (and increased measurable genetic diversity) that expanded over time, and that these drivers were powerful predictors of outcome. “Never bet against Charles Darwin” is a good rule of thumb in biology, just like “Never bet against Albert Einstein” is a good physics rule.

 

First Steps Toward Clinical Utility

While most of the excitement in cancer genomics has revolved around large-scale projects such as The Cancer Genome Atlas and its kindred studies, projects that in essence serve to describe the mutational landscape of human cancer(s), we are now taking the first steps toward clinical utility.

 

My institution, like many around the country, has wrestled with how to “bring genomics to the masses.” The actual performance of tumor deep sequencing has gotten relatively inexpensive (relative, that is, to the cost of an hour in an emergency room), and almost technically trivial. But running an assay is only the first step on the journey.

 

How does one analyze the data from three billion base pairs? How does one do it in a timely fashion (patients appropriately want treatment today, not eight weeks from now)? Fortunately these technical challenges are becoming easier to handle: a recent report from the University of Chicago (published in Bioinformatics) demonstrated the use of supercomputers to process 240 genomes in two days.

 

But analytic problems are more than just technical challenges. They involve real judgment calls. How does one decide what is an important driver mutation, as opposed to a multitude of “passenger” mutations? When is a mutation actionable?

 

Indeed, what does “actionable” even mean? In current genomics-speak, “actionable” appears to mean something along the lines of “We found a mutation. That mutation is associated with badness in some disease. There is a drug that does something to interfere with that badness in some way in a preclinical model or some other cancer somewhere in the scientific literature.”

 

“Actionable” is easily misunderstood by patients and referring physicians as “we have a drug that works for your tumor.” What it really means is “maybe using this drug is not a total shot in the dark.”

 

Commercial enterprises are now leaping blindly into this evidence chasm, and asking us to jump in along with them, paying their development costs. Like all lovers’ leaps, I guess you just have to take some things on faith. But some of us would prefer, before ordering a test, to experience the cool soothing balm of carefully curated data demonstrating that obtaining the test actually affects outcome. Old-fashioned, I know.

 

How to obtain that evidence is the subject of a great deal of noodling by clinical/translational researchers. The upcoming NCI-MATCH trial, a multi-thousand-patient attempt to match mutations with drugs in an integrated, cross-disease clinical trials platform, will launch later this year, to much genuine interest and excitement from both patients and physicians.

 

Most cancer genomics to date has been performed on well-curated, carefully prepared, highly cellular tumors. Can we turn genomic analysis into a blood test? We are now beginning to see publications in the “liquid biopsy” field, such as last year’s New England Journal of Medicine publication by Sarah-Jane Dawson and her Cambridge colleagues (2013;368:1199-1209).

 

To cut to the chase, we can now measure specific genomic alterations (Dawson measured PIK3CA and TP53) in human plasma, and those alterations can be used to track the course of metastatic disease with sensitivity higher than that of circulating tumor cells or protein tumor markers such as CA 15-3. This is a good beginning, though only a beginning: measuring badness is necessary but insufficient for ameliorating badness, as the recent S0500 breast cancer circulating tumor cell trial demonstrated. This technology will really take off when we can use it to identify actionable (there I go again, but you know what I mean) mutations.

 

All in all, we’re constantly being reminded that we live in the Golden Age of cancer biology, a breathtaking rollercoaster ride filled with the excitement of discovery.  Have you ever read a book you loved so much that you hated for it to end? One where you wanted to keep on reading after the last page to find out what happened to all those fascinating characters? We’re reading that book, the book of Nature, right now. But don’t worry: we’re still only somewhere in the middle. Maybe even the first few chapters. There’s plenty left to learn.


Monday, March 03, 2014

I thought that there could not have been any better year for genomics than 2012, but the last year or so has matched it. The falling price of genomics, the development of new analytic techniques, and the inventiveness with which genomic technologies are applied have colluded in the creation of wonderful new insights. There is hardly an area of biology or medicine that hasn't benefited in some way.

Let's look at the study of human origins. We learned, just a few years ago, that most non-sub-Saharan moderns carry a touch of Neanderthal in their genome. This discovery is largely the work of a group led by Svante Pääbo at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Pääbo has also found evidence of other admixtures (the mysterious Denisovans who make up part of the modern genomic complement of Pacific islanders). 

 

It would have appalled our Victorian ancestors, as well as the many 20th century racial ideologues wedded to a belief in the purity of their particular ethnic group, to learn of this unwelcome ancestry. But I found it a charming thought.

 

If 1.5% or so of our genome comes from Neanderthal ancestors, the interesting question is “which 1.5%?” Is it a random set of genes, or, as Darwinian theory might predict, a set that increased our biologic fitness? A first-pass look at our Neanderthal inheritance suggests the latter: definitely non-random, definitely selected for (or against). Two recent publications (in Science and Nature) suggest that there was positive selection for genes involved in skin and hair -- particularly the type II cluster of keratin genes on 12q13 -- suggesting that our “Out of Africa” Homo sapiens ancestors benefitted from Neanderthal cold climate genes. It turns out that when we say that someone is “thick-skinned” we are implying Neanderthal ancestry.

 

At the same time, there was significant negative selection for Neanderthal genes on the X chromosome (a five-fold reduction) and for testes-specific genes, as well as a reduction of Neanderthal DNA in a region of the genome containing a gene thought to play an important role in human speech and language. The authors suggest that human-Neanderthal hybrids had reduced fertility, resulting in elimination of the X chromosome genes causing the infertility.

 

Well, OK, but so what? Why should anyone really care, other than in the “gee whiz” way in which scientific curiosities of no particular current importance appeal to intelligent onlookers? Because, once again, William Faulkner was right: the past isn’t dead. It isn’t even past. The researchers identified nine variant genes associated with specific disease-linked traits, including (to quote the Nature paper) “alleles of Neanderthal origin that affect lupus, biliary cirrhosis, Crohn’s disease, optic-disk size, smoking behaviour, IL-18 levels and type 2 Diabetes.”

 

Many of my breast cancer patients dread the idea that they have passed on a mutation to their children that will doom them at some future point. The sense of guilt can be overwhelming, that fear of a dark cloud hanging over those they hold most dear combined with the certainty that they were the cause of their children’s future grief. I tell them that it is certainly not their fault, any more than it was their parents’ or their parents’ parents’. And the good things they have given their children -- life, love, home, and family -- certainly far outweigh the bad. But this view of the things they pass on rarely seems to assuage the guilt, or the sadness that attends it.

 

BRCA mutations may date back a millennium or two. But to extend the disease-causing mutation story back several tens of thousands of years (the Neanderthals became extinct some 30,000-plus years ago, and the interbreeding is thought to have occurred 40-80,000 years ago) is just astonishing. 

 

Try explaining to a lung cancer patient that he couldn’t quit smoking because of a gene he inherited from an extinct species: doomed by your inner cave man, if you will. Or, perhaps, some weird cosmic revenge for wiping out the Neanderthals?

 

Human history is rife with examples of one population violating the space of another, but we rarely think of these interactions in genomic terms. A new analytic technique called Globetrotter changes that. Produced by researchers at Oxford University, and published recently in Science, Globetrotter uses modern human genomes to identify ancient population migration patterns. The authors call these “admixture events,” a polite term for what was often a very messy, very ugly history. You can explore them at the Globetrotter website: http://admixturemap.paintmychromosomes.com

 

The authors reasoned that if someone from population A hooked up with someone from population B, then their offspring would share the genetic history of both parents. And then, over time, genetic recombination would occur, allowing one to clock how long ago the genomic mash-up happened. A fairly simple idea, but an enormous amount of thought and work went into making Globetrotter a reality.
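The clock itself can be caricatured in a few lines. After a single admixture pulse g generations ago, unbroken ancestry blocks have an expected length of roughly 100/g centimorgans, so the mean block length gives a crude date. This is only a toy version of the idea -- Globetrotter's actual model of haplotype sharing is far more sophisticated -- with a made-up example:

```python
# Crude admixture dating from ancestry-block lengths (illustrative only).
def generations_since_admixture(mean_block_cM):
    """Single-pulse approximation: expected block length ~ 100/g centimorgans."""
    return 100.0 / mean_block_cM

# Hypothetical example: blocks averaging ~3 cM point to roughly 33 generations,
# on the order of 800-1,000 years at 25-30 years per generation.
print(round(generations_since_admixture(3.0)))
```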

 

A decade ago studies of the Y chromosome suggested that a relatively large percentage of selected Asian populations descend from Genghis Khan. Globetrotter confirmed this analysis, looking, not at the Y chromosome, but at approximately a half-million SNPs throughout the genome in some 1500 individuals from around the world. The Mongol Horde was not some group of peace-loving picnickers camping out on the Silk Road: Genghis and his kids were very, very bad people, and we have the genetic evidence to prove it.

 

Globetrotter also defined several other admixture events. To quote the paper: “We identified events whose dates and participants suggest they describe genetic impacts of the Mongol empire, Arab slave trade, Bantu expansion, first millennium CE migrations in Eastern Europe, and European colonialism, as well as unrecorded events, revealing admixture to be an almost universal force shaping human populations.” Maybe those unrecorded events were peaceful ones, but the ones we know about were not. Universal force, indeed.

 

These are the first revelations from Globetrotter, and no doubt others will emerge. It seems pretty clear that much of prehistory (and a fair amount of history) may need to be re-written in the next few years. Could anyone have predicted this synchronicity of history and biology a couple of decades ago? Will any legitimate historian, going forward, be able to ignore science? And, as the Neanderthal data suggest, will we MDs be able to ignore the lessons of anthropology?

 

There were a few other recent genome-based history lessons worth mentioning. In 2012 one of the coolest stories involved the discovery of Richard III's bones under a parking lot. I wrote a blog post about it at the time, and the Shakespeare lover in me still finds this ineffably awesome. Now comes the suggestion that the bones of King Alfred the Great, or perhaps his son, have also been found. The evidence here is less compelling (unlike the Richard III story, there are no known modern relatives to compare genomes with) but still intriguing.

 

I note in passing that the Sledge family’s ancestral bones remain to be discovered. I’m not holding my breath. But the news from a 12,600-year-old burial site in Wilsall, Montana gives me some hope. Looking at the genome derived from the only human burial associated with the Clovis culture (makers of the beautiful Clovis projectile points), the investigators (a University of Copenhagen group publishing in a recent issue of Nature) demonstrated that some 80% of all present-day Native Americans are direct descendants of the individual’s closest relatives. Though not, alas, of the boy buried in Wilsall: he died shortly after his first birthday.

 

There’s always something wonderful about these genome-based historical studies. You feel connected, knowing that men and women walking by you on the street today carry genes from the family of a child buried over 12,000 years ago in Montana, or from ancestors who met each other when Homo sapiens met Homo neanderthalensis somewhere in the Levant. And that someday we too will be links in the great chain, passing on (if we are lucky in the genetic game of chance) our own genes to some unimaginable future.

 

In my next post I’ll write about what we’ve learned about cancer genomics in the last year. It may not be quite as much fun as Neanderthals, Globetrotter, and Clovis culture genomics, but it is consequential and interesting. Our own cells pass on things as well: individual microevolution as opposed to global macroevolution.

About the Author

George W. Sledge, Jr., MD
GEORGE W. SLEDGE, JR., MD, is Chief of Oncology at Stanford University. His OT writing was recognized with an APEX Award for Publication Excellence in the category of “Regular Departments & Columns.”