Musings of a Cancer Doctor

Wide-ranging views and perspective from George W. Sledge, Jr., MD

Tuesday, November 21, 2017

Recently Elon Musk, serial entrepreneur and visionary industrialist, sounded the alarm over the perils of artificial intelligence (AI), calling it our "biggest existential threat." Musk is no technophobe, no misguided Luddite, but rather a perceptive observer, and a bold visionary, so many think he is worth taking seriously. His argument, and that of those who agree with him, goes something like this: the ultimate result of modern information technology will be to create an autonomous artificial intelligence, a creature of servers and the web, whose intelligence equals or outpaces our own. That creature, capable of transmitting its base code to any other server, soon has in its grasp all the resources of the internet.

At that point, the autonomous intelligence essentially controls the world: bank accounts, Snapchat, Facebook, nuclear missiles. Borg-like, resistance becomes futile: object and your bank account is drained, weird pictures of that party where you drank too much suddenly show up in your boss's account, fake news suggesting you are under the control of aliens is transmitted to everyone you have ever friended. And if you persist in rejecting the AI's new world order, North Korea launches a tactical nuke at you while you are vacationing in Guam. You might be forgiven for being unable to distinguish between the AI apocalypse and 2017.

I've made fun of what is actually a serious intellectual argument. Anyone interested in that argument should read Nick Bostrom's Superintelligence, an entire tome devoted to the problem. Superintelligence has led to intellectual fist fights in the AI community, with responses ranging from "idiotic" to "frighteningly possible." Regardless, it is a good (though not light) read for those wanting to get some understanding of the issues involved.

One of the issues, if I read this literature correctly, is deciding on what constitutes autonomous, higher level, scary, Terminator-Skynet-echelon Artificial Intelligence, or even defining lower level AI. AI proponents note that whenever some AI project works, and is incorporated into standard processes, we cease to think of it as AI. It's just code that is a little bit smarter than last year's code and a little quicker than this year's humans. And individual toolkits, like the one that Watson used to win at Jeopardy, will never be mistaken for higher level intelligence.

We don't even really understand our own intelligence all that well. If I, Google Translate-like, but using my baccalaureate French ("perfect Midwestern French," per my college professor; not a compliment) translate "peau d'orange" as "skin edema," am I just using an algorithm programmed into my neocortex, a bundle of interconnected neurons firing off a pre-programmed answer? And is all that constitutes my intelligence nothing but a collection of similarly algorithmic toolkits, residing in synaptic connections rather than semiconductors, just programmed wetware?

And if so, how many toolkits, how many mental hacks are required for a computer to equal or beat human intelligence? And if you combine enough of them, would they be distinguishable from human intelligence?

A single human brain is something quite wonderful. The most recent analysis I have seen, from a group working at the Salk Institute, suggests that a single human brain has a memory capacity of a petabyte, roughly equal to that of the current World Wide Web. This both reassures and concerns me. On the one hand, it will be a while before the collective intelligence of some AI creature is greater than that of several billion humans, though even somewhat stupider AIs could still create a lot of mayhem, like some clever 12-year-old. On the other hand, I have been using that old "limited storage capacity" excuse for my memory lapses (missed anniversaries, appointments, book chapters, etc.) for some time now, and this is, unfortunately, no longer a good excuse.

Part of why we might want to take the idea of superintelligence seriously is the rapid recent progression of AI. A good example of this involves Google Translate, a classic Big Data project. For a very long time, Google's ability to translate from one language to another, or anyone else's for that matter, was severely limited. The results mirrored that old Monty Python skit in which a Hungarian/English phrasebook offers hilarious mistranslations that endanger the speaker (YouTube). But Google's translation abilities have improved tremendously, the result of a massive brute force approach that now allows relatively good translation into almost every written language. But impressive as this feat is, it is not higher-level AI: Google Translate still doesn't write my blogs, in English or Hungarian, though it could well translate between the two.

AI remains a minor force, verging on nonexistence, in the medical world. The largest effort, IBM's Watson, has been an expensive dud. Its major effect so far has been to assist in the resignation of the president of MD Anderson Cancer Center. The Anderson Watson project, designed to offer clinical decision support for oncologists, was plagued by mission creep, financial bloat, missed deadlines, physician loathing, and ultimate technical failure. An initial $2.9 million outlay turned into a $65 million boondoggle, along with claims that multiple University of Texas system rules had been broken. Early failures, of course, do not prevent future successes, but as President Trump has noted, no one (at IBM or MD Anderson) apparently realized how complicated health care actually was.

I can't say I'm surprised. Much of what makes health care complicated is, at least currently, not soluble with higher order computational strategies. In an average clinic, moving from one room to the next, I will speak to patients who want too much of modern medicine (diagnostically and therapeutically), followed by patients who reject what little I have to offer, both for reasons that seem unreasonable to me, and I suspect to a computer-generated algorithmic intelligence. I know what the right thing to do is, if by the right thing one means improving disease-free survival or response rate or overall survival. It is the unquantifiable and the personal that torpedo my best efforts, sinking what should be optimal outcomes in a sea of doubt and confusion. An irritating nanny program, manifesting itself in annoying pop-up messages while I am already wasting my time navigating EPIC, is unlikely to improve my mood or my patient's lot.

Other, more modest, efforts continue apace. The IT start-up Flatiron and ASCO's CancerLinQ both plan to liberate the EHR via AI measures. As a conflict of interest statement, I was ASCO's President the year the ASCO Board committed to the creation of CancerLinQ, and I served as Co-Chair of its initial oversight committee. As such, I feel a residual interest in its ultimate success. Whether ASCO or some private market-based approach prevails is not something I will bet on, either for or against. But I do foresee something approaching AI-based clinical decision support in our future. Still, this is not higher level AI as I understand it.

I don't know that what I want, or what most doctors want, is some omniscient superintelligence telling me how to practice medicine. My interests are much more modest: could the AI listen in on my conversation with the patient and create a decent clinic note in EPIC? Could it put in orders for the PET/CT I discussed with my nurse? Could it print out a copy of that JCO or NEJM article I was misremembering, highlighting the crucial piece of data I tried conjuring from the dusty cupboards of my memory? Could it automatically tell me what clinical trials my patient is eligible for without me having to go through the protocol? Could it go through my EHR and certify me for QOPI or recertify me for the American Board of Internal Medicine or give me 2-minute snippets of education between patients that would result in CME credit? Could it automatically pre-cert patients for insurance authorizations? Before a superintelligent AI becomes Skynet-scary and ends human civilization, let's hope it at least passes through a brief useful phase in the clinic.

These tasks all seem reasonable projects for companies capable of translating Shakespeare into Swahili. Some of them represent intellectual aides, making me clinically smarter, but many are just time-savers. And time, of course, is every doctor's most important commodity in the clinic.

Even limited AI could have fairly profound effects on the way medicine is practiced. There are already machine learning programs that outperform radiologists in their ability to read mammograms, are the equivalent of dermatologists (which is to say, superior to me) in recognizing melanomas, and are gaining rapidly on a pathologist's ability to read an H&E slide. Algorithms are great at pattern recognition. Note to radiologists, dermatologists, and pathologists: beware. You are ultimately as disposable as workers in any of the other industries that digitalization has transformed.

But back to superintelligence. Should we panic over the ultimate prospect of our servant becoming our master? I view this as an extension of a very old conflict: the conflict over who controls our lives. Throughout most of recorded human history, humans have been under the control of some tribal chieftain, feudal lord, king, dictator, or petty bureaucrat. It is only in the past 2 centuries or so that a significant portion of the human race has had anything approaching personal or political freedom.

I worry that the period that began with the signing of the Declaration of Independence, a period of immense human creativity, is coming to an end. Human freedom is under assault around the globe, and so far the assaults have been launched not by supercomputers but by the rich and powerful, by ex-Soviet oligarchs, by religious fanatics, and by impartial market forces that care nothing for human liberty. That these forces use computer algorithms to keep the weak and powerless weak and powerless is unsurprising.

If, say, a political party uses computers to design gerrymandered maps to limit the voting strength of those it views as supporting its opponents, then it would be unsurprising if an AI creature learned this lesson in its inexorable march toward dominance. If an insurance company used Big Data/AI approaches to maximize its profit by limiting its liability to the poor and sick, why be surprised if some future AI master has no more care for human suffering? If a foreign power uses computer technology to undermine democracy by flooding the internet with disinformation, why expect tomorrow's superintelligence to do anything other than mimic us? Maybe we need to deserve our future freedom by fighting for it in the present, before it is lost to forces that are all too human.

At some level, I view a superintelligent AI as something almost…human. By this I mean a creature with a degree of insight that goes beyond mere pattern recognition. Something capable of strategizing, or scheming, perhaps even capable of hating, if silicon-based life forms can generate emotions similar to carbon-based ones.

There is a large science fiction literature devoted to AI, long since turned into Hollywood movies. One thinks of HAL in 2001: A Space Odyssey, or of Arnold Schwarzenegger's Terminator. These AIs are either maleficent, like Schwarzenegger's cyborg in Terminator, or controlling, like the AI in the Matrix series, or (rarely, because it is so unfrightening) benign and transcendent, as occurs in Iain Banks' wonderful Culture novels. Ants don't understand human intelligence, and we might be like ants to an AI superintelligence. In all of these possible scenarios, the inflection point could, in theory, occur in a picosecond.

Those who worry about such things (Musk, Bostrom, and others) say that the time to start working on the problem is now, because 10 or 20 years from now may be too late. As I type these words, pictures of post-hurricane south Texas, Florida, and Puerto Rico crowd the airwaves: cities without fresh water that are underwater. We are not good at preparing for natural disasters; I doubt we would be any better at prepping for an AI apocalypse. A kinder, gentler alternative suggests that we might proactively program AIs for beneficence, as with Isaac Asimov's three laws of robotics. Though if AIs are allowed to watch certain networks, they may deem humanity unworthy of saving.


Monday, September 25, 2017

By George W. Sledge, Jr., MD

There's an old joke, a poor but telling one. The late novelist David Foster Wallace used it in a college graduation speech, but it's much older than that. Two young fish are swimming along and run into an old fish. "Good morning," says the old fish, "isn't the water fine today?" And then he swims on. One of the young fish turns to the other and asks "What's water?"

We swim in history yet are so caught up in the day-to-day that we rarely notice its swirls and eddies, unless strong currents buffet us: a war, some momentous election. Looking back, we suddenly recognize how far downstream we've come. I look at my life, a totally ordinary one lacking in historical impact and think, well, what times I have lived through. I was born in the middle of the Korean War, was in middle school and high school during the Civil Rights era, went to college in the midst of the Vietnam War, saw (on TV) the first landing on the moon, the impeachment of Richard Nixon, the fall of the Berlin Wall, the two wars in Iraq, 9/11, and a myriad of lesser events.

The last Civil War veteran died when I was four. When I was a grade-schooler, a World War I veteran with a loud hacking cough did housework for us. I remember someone saying his lungs were a mess ever since he had been caught in a poison gas attack in 1918. When I was in college, I met a woman who as a child had been placed in a lifeboat and rowed away from the sinking Titanic.

As a medical student, I was on the ward with an elderly male patient who had played jazz with Louis Armstrong in the French Quarter. As an intern, I helped take care of Bill Drake, an 82-year-old former pitcher in the Negro Baseball leagues. His baseball moniker was Plunk, for his habit of throwing beanballs at batters who crowded the plate. Another patient that year, a Spanish-American War veteran, died of bradycardia while I stood at his bedside administering an antiarrhythmic. This has always left me feeling guilty given what we subsequently learned about antiarrhythmic usage. A patient of mine had, as a teenager, sung at the White House at Christmas time for President Franklin Delano Roosevelt.

And that's just some politics and culture. But the science, oh the science, how amazing: my life has overlapped with Watson and Crick's description of DNA and the subsequent molecular biology revolution, the deciphering of the immune system, the creation of monoclonal antibodies, the Human Genome Project, The Cancer Genome Atlas, and the discovery of CRISPR/cas9. I have argued previously, and I still believe, that 500 years from now it is the science that will be remembered if there are still humans around to remember such things. And I haven't even touched on physics, chemistry, the computer revolution, and several others whose histories happen to coincide with my own, including most of what makes up modern oncology.

None of these things, at the time, seemed particularly extraordinary, and I certainly cannot claim any credit for even one of them. At most I thought "that's interesting" before moving on to another task. It is the cumulative weight of these things that makes me stop and wonder. We swim through history and rarely think about the fact that we are living it. I have not lived a particularly extraordinary life, but I have lived in extraordinary times.

I write this a few days after the Republican attempt at repealing Obamacare went down in (what appears to be) its final defeat. It has been a dramatic week, and an important one for the health care system and many of my patients. It has been one of those rare times where I actually felt, in the moment, that I was "living in history," if you will.

It got me thinking about the contingent nature of history, and its relation to cancer. The Affordable Care Act (ACA) as a political event is thoroughly entwined in one type of cancer. When the bill was winding its way through Congress in 2009, the Democrats had a large majority in the House and a filibuster-proof 60-member majority in the Senate. Then Senator Ted Kennedy, a passionate supporter of national health insurance, developed a glioma, setting off a complicated sequence of events that contributed to the poorly written piece of legislation that eventually became the ACA, and no doubt to its lasting unpopularity.

Seven years later, and largely as a result of the unpopularity of Obamacare, the Republicans hold a substantial, if gerrymandered, majority in the House of Representatives, a small majority in the Senate, and the Presidency. Having spent 7 years promising the immediate repeal and replacement of the ACA, and having passed a "repeal and replace" bill in the House, they fell one vote short in the Senate.

As in 2009, cancer proved to be an important part of the story. Senator John McCain, 80 years old, former war hero, and recently diagnosed glioblastoma multiforme patient, cast the deciding vote against repeal in a dramatic, made-for-TV late-night vote. Would he have done so without the diagnosis of glioma? Did his cancer diagnosis, fortuitously coming right before the vote, alter history?

Psychologizing from a distance is dangerous even for a psychologist, but even more so for an oncologist, so I'll try. Did the recognition of his own impending mortality allow McCain the freedom to break from party ranks, the freedom that comes when fate renders you no longer answerable to this year's dogma or the next election cycle? Did the diagnosis render him more sympathetic to poor people with cancer? Or, less charitably, was the proud naval veteran offering some payback to the President who had foolishly and inexplicably impugned his heroism and attacked his character? Or perhaps all three, for we need not have a single motive. A fourth possibility, raised by one of McCain's "friends" in the Senate, was that the tumor affected the senator's judgement, perhaps because he was tired when he voted.

We do know that Joe Biden, whose son Beau died of glioma, called McCain on the day of the vote. Biden himself did not compete for the 2016 Democratic nomination because of his son's diagnosis, with unknowable consequences for American history. Why is glioma, of all cancers, such a huge part of this story?

I suspect the cancer had something to do with McCain's vote. Certainly, the other member of the Senate with advanced cancer, Senator Mazie Hirono, brought a special passion to the health care debate derived from her life-threatening disease. McCain has been coy, saying only that he thought his vote was the right thing to do.

We see things through our own special lenses. A Marxist would point to history as a determinist juggernaut, events being largely independent of personalities. I've never believed that for a moment, though some things, in particular those involving technology, seem to have a life of their own, almost independent of any individual's desires or efforts. Such technologic imperialism aside, some historical decisions come down to one vote, and one voter wrestling with the consequences of that action.

Senator McCain's recent diagnosis, leaving aside its political impact, has interesting additional aspects. I for one am always happy that I am not the doctor facing the press after a prominent politico receives a horrible diagnosis. These doctor/spokesmen are in an awful position. Like all doctors, their primary responsibility is to their patient, and their patient's desires regarding transparency must be taken into account. I would add that, while physicians are well aware of the gloomy statistics for a particular population of cancer patients, many of us (myself included) suffer from an optimism bias for individual patients, and that bias is probably amplified by a reporter's microphone. Against these motivations, the public's right to know can take second place.

This can lead to some interesting sins of omission, and occasionally sins of commission. For instance, when Ted Kennedy's glioma was diagnosed, his doctors were very careful, and generally quite honest, about the diagnosis, but utterly quiet regarding prognosis. Their statement included, "He remains in good spirits and full of energy." As if that mattered.

More recently, McCain's doctors mentioned that "Scanning done since the procedure...shows that the tissue of concern was completely resected by imaging criteria." Think about what that sentence says, how it says it, and what it implies. First, that horrible, mealy-mouthed euphemism, "tissue of concern," when what they mean is "cancer" or "glioma." Second, the phrase "completely resected by imaging criteria," a relatively meaningless phrase implying a good outcome in a disease famous for local recurrence.

A physician saying "we got it all" in misleading medicalese changes nothing, and the press was not conned by this minor obfuscation. McCain's prognosis, the prognosis of patients with glioma, was made painstakingly clear in the many reports I read. McCain's political rival in the Arizona Republican primary, a physician, wished McCain well and told him to resign from office so that she could be appointed in his place. Presumably this is because, having been rejected for office by members of her own party, she therefore deserves to be a U.S. senator, and anyway McCain is toast. Classy.

The worst example of this sort of thing occurred with Massachusetts Senator Paul Tsongas, diagnosed with a non-Hodgkin lymphoma. Tsongas, viewed as a potential Democratic presidential candidate, underwent high-dose chemotherapy and bone marrow transplantation for his disease, the marrow being prepared with a selection method designed to eliminate cancer cells. After this procedure, in announcing his 1992 candidacy, Tsongas had his Harvard physician (and active political supporter) offer up the comforting claim that he was likely cured and therefore a viable candidate for completing a term of office. In fact, Tsongas had already suffered an axillary lymph node recurrence post-transplant. Not only was his candidacy nonviable, within 3 years he would die of recurrent disease. Dana-Farber's chief physician, in reviewing the case, would later say that the institution "made a bevy of mistakes."

History provides other examples of the intersection of cancer and politics. Perhaps the weightiest occurred in 1888. Kaiser Frederick III of Germany was, as German autocrats went, kind, intelligent, liberal, and pacific by nature. He also chain-smoked cigarettes and developed cancer of the larynx. Originally misdiagnosed (by Virchow, of all people), and subsequently under-treated—being the German Kaiser doesn't protect you from bad care if the pathology is wrong—he eventually died, miserably, after a reign of only 99 days.

His son, Wilhelm, was everything Frederick was not. He had a withered left arm—Erb's palsy due to a breech birth—that left him shy and insecure. And like some shy and insecure people, he over-compensated, getting into a naval race with Great Britain, antagonizing the French and Russians through his militarization of the German Reich, acting bellicose and paranoid, and eventually approving the attacks that started World War I.

His dislike of the Western democracies (particularly Britain) was partly ideological, and partly personal: in one angry outburst shortly after his father died, he said "an English doctor killed my father, and an English doctor crippled my arm—which is the fault of my [English] mother." Talk about mommy issues. How different the 20th century might have been had his father not been a chain-smoker, or had the physicians been a little bit better at their jobs. Historians cannot imagine the pacific anglophile Frederick getting into the same mess his son Willy created.

World War I was the 20th century's keystone event, leading to (in no particular order) the rise of the U.S. as a world economic and military power and the eventual collapse of the British empire, the collapse of the Romanov Dynasty and its replacement by the Bolsheviks, the kick-starting of multiple military technologies (such as the tank, the submarine, and the airplane), the collapse of the Austro-Hungarian Empire, the embitterment of a generation of Germans (including the young Austrian emigre Corporal Hitler, who drew his own conclusions regarding the war's lessons), and the collapse of the Ottoman Empire and the subsequent creation of the many Middle Eastern nation-states whose fractious histories continue to bedevil the 21st century. Before World War I, there was no country called Iraq, no country called Syria, no Palestine mandate leading to Israel, no Saudi Arabia.

After the Kaiser died, his doctors all stuck their scalpels into each other, and quite publicly. The English surgeon wrote an inflammatory book blaming the Germans, and lost his medical license as a reward. There was, reading the accounts, plenty of blame to go around, with both pathologists and surgeons making a mess of things. This one cancer death may have been the most consequential in world history, and I certainly would not want to have been one of Frederick's physicians, taking credit for the whole bloody, tragic 20th century.

Not all cancer deaths are so consequential, of course. Most just affect the patient, the patient's family and acquaintances, and the patient's health care team. But that is more than enough. Our private histories need not rival Frederick's story to count as tragedies.


Friday, August 25, 2017

Here's an old question: what makes us human? By old, I mean Greek philosopher-old. Aristotle pondered this and came up with a pretty good answer: humans are rational animals, beings capable of carrying out rationally formulated projects. He added that "man is by nature a social animal."

The biologist in me has a quite simple-sounding answer to the question. Homo sapiens, like any species, is defined as (per the great evolutionary scientist Ernst Mayr) "groups of interbreeding natural populations that are reproductively isolated from other such groups." Currently, there are no "other such groups," given our ability to eliminate close relatives, but once there were. Neanderthals and Denisovans were natural populations that interbred with our ancestors.

We, their descendants, have lost most Neanderthal and Denisovan genes, and lost them in a selective fashion. Still, evolutionary biologists now think about questions such as "were the Neanderthals really a separate species?" The consensus seems to be yes, they were separate, though separate along the lines of horses and zebras. Horse mares and zebra stallions, when bred together, make zorses, or zebra mules. They are infertile. Homo sapiens and Neanderthals were right on the edge of infertility. Perhaps as few as 80 pairings were responsible for all the Neanderthal contribution to the modern human genome. By and large, the answer to "What is a human?" is "Humans are that species that breeds with other humans."

But that biological answer is both true and remarkably unsatisfying.

Just having the ability to ask the question may be part of an answer. "Humans are the species that ponders what it is to be human" may be a pretty good partial definition. I suspect, though I do not know this for a fact, that cows do not ponder their cowness, nor ants their antness. There has never been a Cowistotle. Cowness and antness, to the extent they exist, are genetically hardwired mindsets. We are hardwired as well, but we seem to be hardwired for software as much as hardware. We call the software "culture," but what stands out is our essential malleability, our stubborn refusal to be defined by our past.

I often wish that I was an archeologist. Archeology, it seems to me, asks the big question of "how did we get here from there?" In the great sweep of history, the human experiment seems to be defined by that question of malleability, that transition from hardwired behavior in a constrained physical form to an existence defined by our software. Our software is now in the process of altering our hardware, but that is a relatively late development in the human story.

Archeology always gives tentative answers. The average modern is associated with so much sheer stuff that it is hard to imagine a time when material possessions were defined by what one could carry on one's back, or in one's arms, or draped over one's body. The average modern home, I have read, contains around 300,000 "things." How many "things" can you carry with you if you are walking across the Bering Strait? Fifty? A hundred? And how many of those make it to your grave, and how many graves are ultimately discovered by an archeologist? The ancient human thingome (an "omics" word I just now invented) barely existed. We are now buried in our thingome.

But there are some answers, and I find them intriguing.

First, we seem to have been defined, for much of modern existence, by our pets. "Pets" isn't the right word for those animals that shared our space, any more than calling them "domesticated animals" quite fits the bill. If cats could speak, they might well claim to have domesticated us. And dogs are so finely attuned to human behavior that they might be considered relatives rather than pets.

How long have humans and dogs hung out together? Dogs are the oldest domesticated species, so the "man's best friend" trope is probably right. The archeologic record is somewhat confusing on this issue. A human grave site dating from ~14,700 years ago contains a dog mandible whose genomic DNA sorts with modern dogs, the first unequivocal evidence we have of the relationship. Other genomic data suggests that dog and wolf lineages separated somewhere around 36,000 years ago, so "a long time ago" is the current answer. And some quasi-wolves must have taken up with our ancestors even before that genetic divergence.

Where that domestication occurred is also something of a mystery: somewhere on the Eurasian land mass seems to be about as precise as is safe to commit to at this point. Regardless, dogs were snapping at our heels before we started farming. Penn State archaeologist Pat Shipman has theorized that the demise of the Neanderthals might be linked to the partnership of dogs (or wolf/dogs) and modern humans, with the latter two combining to out-compete the former for scarce food resources. Maybe, maybe not, but Neanderthals never seem to have had friends named Fido or Rover.

By the way, why did wolf/dogs decide to hang out with our ancestors? It is an interesting question. Wolves are loners, and wolves in the wild usually avoid humans right up until they decide to eat us. A recent paper comparing dog and wolf genomes suggests that the reason is that most dogs have Williams syndrome. You may never have heard of this syndrome, since only one in 10,000 people suffer from it (if "suffer" can be said to be the right word). Williams syndrome patients are routinely bubbly and extroverted, quite literally the friendliest people on Earth. They have other health and developmental issues, but the extreme sociability stands out.

The genetic event underlying Williams syndrome has been identified (the loss of a 27-gene stretch of DNA), and its canine homolog turns out to be common in dogs and rare in wolves. The friendliest of wolves also have the Williams syndrome genetic kit, and more standoffish dog breeds are less likely to have the Williams genetic defect. So maybe dogs were "socialized" as much by a mutational event as by our tossing chunks of meat to them on the edge of some ancient fire pit.

Cats represent a later stage in human history, and are intimately associated with the advent of agriculture. A just-out paper in Nature Ecology and Evolution dates their association with humans to around 10,000 years ago in the Near East. The story goes something like this: humans raise wheat, store it, and attract mice. The mice, in turn, attract the African wildcat, Felis silvestris lybica, whose virtues as vermin exterminators are appreciated by our ancestors. As farmers fan out from the Near East, the now-domesticated cats travel with them. They eventually travel to the ends of the earth, brought along on ships where they guard the stores against rats.

Whether cats are actually domesticated is an interesting philosophical question, but they tolerate us for the moment because we continue to supply them treats and don't bother them too much. I find it interesting that while we can easily tell wolf and dog skeletons apart, cat skeletons are indistinguishable from those of African wildcats. But the passion felines generate in some humans is undeniable. I had a patient delay potentially life-saving surgery until her cat underwent surgery. My patient could not face the prospect of living without that cat.

So, add "humans are the species that lives with cats and dogs" to Aristotle's "Man is by nature a social animal." They may even be the same answer to the "what makes us human" question. We don't just socialize with each other, we socialize with dogs and cats. Dogs and cats hung around for purely Darwinian reasons: today there are lots more dogs than wolves, and far more cats than African wildcats. But there probably were, as well, more humans because of dogs and cats: hunt wooly mammoths more efficiently, save more wheat, and you will prosper.

But the archeologic record has another interesting answer to the "what makes us human" question: humans are the species that creates art. We've been drawing pictures on cave walls for 35,000 years or more, beautiful work like that found at Lascaux in France. The first cave art we have involves hand stencils on the wall of the cave of Pettakere in Indonesia. The first figurative paintings date to 32,000 years ago, in the Chauvet cave in France and the Coliboaia cave in Romania. These paintings are filled with large mammals: bison, aurochs, horses. So perhaps another answer to the "what makes us human" question is "Humans are the species that creates symbolic art."

And, at roughly the same time, musical instruments. The first musical instruments we have are flutes, made from mammoth and bird bones, found in the Geißenklösterle Cave in Southern Germany, and dating to about 42,000 years ago. Humans are the species that uses tools to make music.

Again, the comparison with our Neanderthal cousins is telling. We lack convincing evidence for Neanderthal cave art or musical instruments. This may only represent a flawed archeologic record, or it may suggest that art and music represent something crucial about the development of the modern human brain, and quite specific to Homo sapiens.

I will often see patients who are artists, or patients who are musicians (and with a fairly wide range of instruments, my favorite being the accordion). I have patients who have dogs and cats, indeed are passionate about them. These things are so common, so normal for us that we fail to recognize how absolutely extraordinary they make us as a species. Medical oncology is the new kid on the block, while art, music, and our pets tap into something deeper, something more ancient in the human psyche. Something we are designed for, if it truly separates us from our closest ancestral cousins.

I once had a patient with small cell lung cancer who presented with brain metastases. The metastases were accompanied by seizures, and the presenting aura for the seizures was Elvis Presley's "Blue Suede Shoes." Every time he would hear the song, he would wake up a few minutes later on the floor. We radiated his brain and Elvis Presley went into hiding. We treated his cancer with systemic chemotherapy, and the small cell responded, brilliantly but briefly, as is its wont. When the cancer recurred, so did "Blue Suede Shoes." I find it amazing that there is, somewhere in the human brain, a clutch of neurons devoted to "Blue Suede Shoes," but that is apparently a design function for modern humans. At least it wasn't "You Ain't Nothing but a Hound Dog."

Sometimes outlandish claims are made for the dogs and cats. Remember the news reports a few years ago suggesting that dogs could sniff out cancers in their owners? Or Oscar the cat, a Rhode Island nursing home denizen who appears to predict impending death, napping next to those about to pass? In both cases, handwaving explanations ("maybe Oscar is good at smelling apoptosing cells" or "maybe that melanoma is releasing aromatic chemicals the dog recognizes as malignant") have been offered. All I know for sure is that if Oscar ever shows up at my door it will be the last prediction he ever makes. Maybe cats and dogs are part of what makes us human, but I am thoroughly unsentimental about feline diviners of death.


Thursday, June 22, 2017

In 1455, Johannes Gutenberg, a German goldsmith, published his version of the Bible. His first print run was not large—only 180 copies—but it changed the world. Prior to Gutenberg, and it is important to recognize this, there were few copies of anything. Great literary works from the ancient world might depend for their survival on some literate monk toiling away in a remote Alpine monastery. If the monk decides not to copy, say, a play by Sophocles (and we have only seven of his 123 plays), and the monastery suffers a fire, that play is gone. Gone forever. We know this happened frequently because we have the names of many of the missing works: Pliny the Elder's History of the German Wars, 107 of the 142 books that make up Livy's History of Rome, Aristarchus of Samos' astronomy book outlining (long before Copernicus) his heliocentric theory. And then there is Suetonius' Lives of Famous Whores; no particular surprise that the monks passed on copying that.

That all changed with Gutenberg. By the end of the 15th century, within the lifetime of someone born the same year as the printing press, an estimated 20 million books were in print, and over 200 European cities had printing presses in operation. A century later the number is 200 million; by the 18th century, a billion books have been printed. More books meant greater literacy. More books meant more controversy. Translate the Bible into German and publish it, as Martin Luther did, and you have the Protestant Reformation; when anyone can read the Bible, anyone can form an independent opinion and priesthoods lose their monopoly on specialized knowledge. Power dynamics change dramatically when the plebes can buy newspapers.

Prior to Gutenberg, there are few scientists and they barely communicate; knowledge is arcane, hidden, and easily lost when the scientist dies. After Gutenberg, scientists start publishing their work, in Latin for easy transmission, and the Scientific Revolution takes off. The world is suddenly a very different place, and all because Gutenberg combined moveable type with a wine press.

And people changed as well. Or, more to the point, their brains changed. Consider this recent experiment, conducted in India and published in Science Advances: take an illiterate 30-year-old rural Indian woman and teach her to read. Perform sophisticated brain imaging pre- and post-literacy. What does one see on the scans?

Something quite interesting. The colliculi superiores, a part of the brainstem, and the pulvinar, located in the thalamus, align the timing of their activity patterns with that of the visual cortex. And the more closely the timing of brainstem and thalamus aligns with that of the visual cortex, the better one reads. Don Quixote and Pride and Prejudice hijack something very old, something reptilian, in our brains. 2D printing made population-wide literacy possible, but it also reprogrammed the brains of millions of people.

Gutenberg lived in the German city of Mainz. If you were living in Mainz during Gutenberg's life, the big news was not the creation of the printing press. Instead, you would have been obsessed with the Mainz Diocesan Feud, a conflict over who would assume the throne of the Electorate of Mainz. Totally obscure today, the Diocesan feud resulted in the sack of Mainz by Adolph of Nassau (one of the contenders) and his troops. Gutenberg, by now an old man and failed businessman, was exiled along with 800 of his fellow citizens. He was one of the lucky ones: sacking a city rarely went well for its inhabitants, and hundreds of his fellow citizens were murdered.

I imagine Gutenberg trudging out of his home town, just one among many refugees caught up in the bloody politics of his time. Was he thinking about the Diocesan feud, or was his mind leaping ahead to the revolution he created, to the billion books, to the free flow of information across a world hungry for knowledge? Adolph of Nassau, or Martin Luther and Isaac Newton?

Today the verdict is easy: no one remembers Adolph of Nassau, and Gutenberg is one of civilization's greats. But it never looks that way when you are living through it. We rarely, in real time, understand what is important. Four centuries from now, I suspect the exploits of current potentates will fade, and what we will remember is the way science has transformed the world.

Gutenberg was performing two-dimensional printing. Now we have 3D printing, and it is on everyone's short list of world-changing technologic advances.

My old colleague Dr. Wikipedia defines 3D printing (AKA "additive manufacturing") as "processes used to create a three-dimensional object in which layers of material are formed under computer control to create an object." Basically, one creates a computer model (in what is called an STL file) of a 3-dimensional artifact. The STL file is then processed by software called a slicer—which does just what it sounds like—and converted into a series of thin layers. These layers are applied by a machine (the printer) repetitively. The layers themselves are now down to 100-micron resolution, a number that continues to drop.
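
For the technically curious, here is a minimal sketch, in Python, of the slicing step just described: cut a triangle mesh with horizontal planes spaced 100 microns apart and collect the outline segments for each layer. The STL parsing and the function names (load_ascii_stl, slice_mesh) are illustrative assumptions on my part; a real slicer also generates infill, supports, and the G-code that actually drives the printer.

```python
# A minimal sketch of slicing, assuming an ASCII STL file; hypothetical helper
# names, no error handling, and none of the path planning a real slicer does.

def load_ascii_stl(path):
    """Parse an ASCII STL file into a list of triangles (three (x, y, z) vertices each)."""
    triangles, vertices = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "vertex":
                vertices.append(tuple(float(p) for p in parts[1:4]))
                if len(vertices) == 3:
                    triangles.append(tuple(vertices))
                    vertices = []
    return triangles

def slice_mesh(triangles, layer_height=0.1):
    """Cut the mesh with horizontal planes every layer_height mm
    (0.1 mm = 100 microns) and return the outline segments found in each layer."""
    z_min = min(v[2] for t in triangles for v in t)
    z_max = max(v[2] for t in triangles for v in t)
    layers, z = [], z_min + layer_height / 2   # slice through the middle of each layer
    while z < z_max:
        segments = []
        for tri in triangles:
            points = []
            for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
                if (a[2] - z) * (b[2] - z) < 0:          # this edge crosses the plane
                    t = (z - a[2]) / (b[2] - a[2])
                    points.append((a[0] + t * (b[0] - a[0]),
                                   a[1] + t * (b[1] - a[1])))
            if len(points) == 2:                         # the triangle contributes one segment
                segments.append(points)
        layers.append((z, segments))
        z += layer_height
    return layers
```

Stitch each layer's segments into closed loops and you have the path the print head traces before stepping up to the next layer; repeat a few thousand times and a jaw, a hip, or a lunar habitat wall emerges.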

3D printers have found their way into medicine, particularly with the development of cheap, individualized prosthetics: do a CT scan or MRI, use it as the template for the 3D printer, and roll out a prosthetic jaw or hip. After the 3D printer is paid for—and the price of these is collapsing—the prosthetic becomes quite inexpensive, developing world-cheap. Break a bone? You can now 3D print a personalized cast. Do you want a urologist to practice on "you" before the actual kidney surgery for a renal cell cancer? Perform a CT scan, 3D print the kidney model, and use it to plan surgery, minimizing normal tissue loss and preserving kidney function.

But creating new knees or prepping for a difficult operation, as important as they may be, is just the beginning. 3D bioprinting is now coming along. And this is genuinely wonderful. Start with a gel or sugar matrix, layer mixtures of cells on the matrix in repetitive, sequential fashion, and before you know it you have a functioning organ. It's already been used to create cartilage for worn-out joints, synthetic bone filler to support bone regeneration, and patches of heart muscle to assist recovery after a heart attack. (For the last see this cool video of 3D-printed muscle: https://www.youtube.com/watch?v=4VqIiqZtkU&feature=youtu.be). While still mostly at the preclinical stage, biotech startups are beginning to get into the business, and clinical trials are not too far off. Regenerative medicine will be quite different for the next generation of doctors and patients. Do you have heart failure? Just order a new one using your own stem cells printed into a cardiac matrix.

I've emphasized the medical aspects of 3D printing because, well, I'm a doctor. But 3D printing has so many uses, potential and real, that its only real limits are those of the human imagination. Suppose, for instance, you want to set up a human colony on the moon or Mars. A major barrier is the cost of moving things from here to there: a gravity well is not the space-traveler's friend. You can't ship replacement parts for everything, and you can't even know what you will need 6 months from now. Things happen, unpredictable things that require some weird widget. But ship up a 3D printer along with the specs for just about any device, toss in some lunar dust, and you are in business. I'm not making this up: see Jakus, AE et al. Robust and Elastic Lunar and Martian Structures from 3D-Printed Regolith Inks. Scientific Reports 2017;7:44931.

And remember how we stopped losing ancient books once Gutenberg came along? We live in a world where art survives at the mercy of religious fanatics armed with AK-47s. 3D printing is still fairly new, and the polymers being used are crude simulacra of the real thing, but we are probably not that far away from a day when every home can have an exact replica of the Mona Lisa on the wall, and where ISIS could no longer destroy Palmyra's Temple of Bel because there are a hundred identical copies scattered around the world. So maybe they aren't the same thing, exactly, but wouldn't it be wonderful to have the Met's Temple of Dendur in your back yard? Sound crazy? Large, industrial-scale 3D printers are already being used to make houses in China.

I've described a fairly rosy picture for 3D printing: cheap knee replacements, smarter surgery, new hearts, better-equipped Moon colonies, my own copy of Monet. But there's another side as well. One of the iron laws of new technology is that it will always be used for purposes of porn and violence. 3D-printed sex toys—I'll leave the products to your imagination—are now available on the Internet. And 3D-printed firearms, essentially invisible to airport security, are available to any zealot with a 3D printer. Illegal in most localities, including the U.S., but when did that ever stop anyone? And would you want your teenage neighbor to have the 3D specs for a nuclear device?

There are other social and economic implications. If I can download a file that allows my home 3D-printer to replicate the most sophisticated of devices at negligible cost, whole industries are at risk. As a teenager, I read a prescient science fiction story (you can still find it on the Web) called "Business as Usual During Alterations." The story's premise was that aliens introduce technology allowing replication of virtually any product, in an attempt to destroy Earth's scarcity-based economy. This assault fails because the new technology, while eliminating the economies of scale underlying the 20th century industrial economy, unleashes human creativity and emphasizes diversity over uniformity. In the 21st century, we are the aliens, and the old industrial economy may well vanish, indeed is vanishing in front of our eyes. Already, for instance, there is a 3D shoe company in San Diego that produces individualized shoes—an exact fit, no more corns—on demand.

It's reasonable to ask whether 2D and 3D printing bear any real relationship. One, after all, is all about words, the other about physical objects. But the written word and hand-made objects are both uniquely human constructs, claims for tool-making, quasi-talking animal relatives notwithstanding. And remember the illiterate Indian peasant woman? If 2D printing reprograms the brain, what will 3D printing do? We may soon find out. Imagine a continent—let's call it North America—where every kindergartner is taught 3D programming along with reading and writing. Will that child's brain function differently than yours or mine? I imagine our view of the spatial environment changing a great deal.

Back to Gutenberg. The story has somewhat of a happy ending. Three years after the sack of Mainz, Adolph of Nassau allowed Gutenberg to return to Mainz. Gutenberg was given the title of Hofmann (gentleman of the court) along with appropriate court dress, a stipend, and 2,000 liters of wine. Perhaps our friend Johannes died a happy man, if not a sober one. Perhaps—and I hope that this is the case for at least some contemporary leaders as well—Adolph of Nassau was not a flaming narcissist and actually understood who was the real big deal in 15th century Mainz. But that may be a very two-dimensional, overly optimistic, way of looking at things.


Thursday, May 25, 2017

Wrangel Island is a small, miserable place in the Arctic Ocean, a land where the temperature stays below freezing for 9 months per year. It has few permanent residents (a Russian weather station and a few park rangers), though the odd scientist drops in every now and then. Some 3,700 years ago, had you visited the place, you would have seen a strange sight. This is where the last woolly mammoths roamed, and died.

Their forebears had long since died out on the mainland, likely the result of an invasive species that had spread from Africa to Northern Eurasia and the Americas. Wherever that species (its Latin name translates as "wise man") went, large animals died with frightening suddenness. But Wrangel Island, separated from the mainland by rising waters some 12,000 years ago, and not exactly what one would consider prime real estate, remained a refuge for those last mammoths.

Elsewhere, human beings had already created the first civilizations and were writing epics in Akkadian. But on Wrangel Island, the woolly mammoths were declining. Species confined to islands frequently undergo reduction in size over time, known as insular dwarfism. Think of Shetland ponies, or the "Hobbits" found on the Indonesian island of Flores, or the dwarf mammoths of the Channel Islands off the coast of California. The Wrangel Island mammoths do not appear to have been insular dwarfs, but they were genetically stressed.

We've learned this as a result of genomic analyses comparing the Wrangel Island mammoths with their mainland ancestors, as recently published in PLoS Genetics. Small in number—perhaps as few as 300 breeding members in that sad remnant—they were severely inbred and had accumulated numerous genetic defects, with significant loss of heterozygosity, increased deletions affecting gene sequences, and increased premature stop codons. The paper's authors call it "genomic meltdown."

Their hair, for instance, was not the coarse dark hair of their woolly mammoth ancestors, but rather a soft cream-colored, satiny coat lacking an inner core and therefore deficient as an insulator. One can imagine them standing there shivering as Arctic blasts bore down on Wrangel Island. Did this lead to their eventual extinction, or did our ancestors, visiting in boats, do them in? We don't know for certain, though harpoons and other human artifacts on the island have been dated to ~1700 BC. For whatever reason, the Wrangel Island woolly mammoths went extinct. We will never see their like again.

Or maybe we will. George Church at Harvard has inserted woolly mammoth genes into Indian elephant cells using CRISPR technology. The list of edits is up to 45 genes, genes that would allow a tropical elephant to survive on the Siberian plains. Similar efforts are underway in Russia and Korea. Perhaps elephant-mammoth hybrids will walk the Earth in a few years. The Russians have already established a Pleistocene Park.

I feel conflicted about this. Part of me, the eternally curious 15-year-old part of me that loves gee-whiz science, finds this attempt at de-extinction (a neologism of the CRISPR age) a thoroughly enthralling prospect. The other part of me, rather like Jeff Goldblum's character in Jurassic Park, finds all this quite frightening.

And frightening, not for the reason stated by Goldblum's character ("they had their time, and their time is over" was the gist of the argument), but because of the technology itself. For with CRISPR technology, we are at an inflection point in human history, that point where humans intentionally, directly alter the genomes of insects, mammals, and fellow humans.

CRISPR technology has advanced rapidly in the past few years. I wrote about this in a previous blog, but let me briefly revisit the technology. CRISPR is a bacterial defense system designed to snip apart the genetic material of viruses attacking the bacterium. The CRISPR/Cas9 system has been adapted to mammalian cells and allows one to subtract or insert highly specific gene sequences. If one does this with stem cells, one can permanently change the genetic makeup of an organism. It works, sort of, in human ova.
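
A toy example of where that specificity comes from: the commonly used S. pyogenes Cas9 cuts only where a 20-base guide sequence sits immediately next to an "NGG" motif, the so-called PAM. The Python sketch below simply scans a DNA string for such sites; the sequence and function name are made up for illustration, and real guide-design tools go much further, scoring every candidate against the whole genome for off-target matches.

```python
# A minimal sketch, not a design tool: find candidate Cas9 target sites
# (20-base guide + NGG PAM) on the top strand of a DNA string.

def find_cas9_targets(dna, guide_length=20):
    """Return (position, guide, PAM) tuples for every NGG PAM on the top strand."""
    dna = dna.upper()
    targets = []
    for i in range(guide_length, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":                        # the 'NGG' protospacer-adjacent motif
            guide = dna[i - guide_length:i]        # 20 bases immediately 5' of the PAM
            targets.append((i - guide_length, guide, pam))
    return targets

if __name__ == "__main__":
    # A made-up stretch of sequence, purely for illustration.
    example = "ATGCGTACGTTAGCCATGCAGGATCCTTAGGCATGCGTACGTTAGCCATGCTGGTTAAC"
    for pos, guide, pam in find_cas9_targets(example):
        print(f"guide at {pos:3d}: {guide}  PAM: {pam}")
```

Pick a guide that matches your target and, ideally, nothing else in the genome, and Cas9 becomes a programmable pair of molecular scissors.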

CRISPR/Cas9 has gone from obscurity to ubiquity in an astonishingly short time. Since the Doudna and Charpentier breakthrough paper in 2012, here are the number of PubMed citations: three in 2012; 78 in 2013; 315 in 2014; 715 in 2015; and 1,465 in 2016. As I write this in mid-April, 689 CRISPR/Cas9 papers have already been published in 2017. The technology is cheap, easy to learn, and powerful to use. It is a true new paradigm, an overused term but appropriate for once.

The big questions right now are:

  • When will CRISPR's discoverers/inventors fly to Stockholm?
  • Which discoverers/inventors will get the Nobel?
  • Who will win the patent wars over the use of CRISPR/Cas9 in mammalian cells?

I don't know the answer to the first question, other than "soon," nor the second question (several folks have a claim), though the answer to the third question, according to a recent U.S. Patent and Trademark Office decision, is "the Broad Institute will get very rich for adapting other people's ideas." The lawsuits will probably end up at the Supreme Court.

Lots has been happening in this space of late that has nothing to do with patents. I've mentioned the plan to resurrect the woolly mammoth, but consider the mosquito. I hate mosquitoes, and I've never met anyone who speaks well of them. They seem to exist only to render us miserable. It is not enough that they ruin warm summer nights with their incessant buzz and their pernicious bites. They transmit malaria, a disease that afflicts hundreds of millions and kills hundreds of thousands every year. Humans evolved, in Africa and the Mediterranean, to resist the ravages of malaria. But in a "the cure is worse than the disease" sense, this evolution left us with sickle cell disease and its hematologic relatives.

So why not get rid of malaria-transmitting mosquitoes? One can use CRISPR/Cas9 technology to create mosquitoes that are resistant to P. falciparum, the malarial parasite. That would be pretty cool. The problem is that if you dump a bunch of them into Sub-Saharan Africa, they will be outcompeted by the huge native mosquito population. In no time at all, the beneficial effects of the engineered population will dissipate and eventually disappear.

The scientific solution is called "gene drive" (a fine TED Talk explanation can be seen at https://www.youtube.com/watch?v=OI_OhvOumT0). Gene drive involves the use of natural "selfish" homing endonuclease genes to promote the inheritance of specific genes. And it turns out you can use CRISPR/Cas9-based gene drive technology to engineer in genes that render mosquitoes resistant to malaria. Or—and this has also been done—you can engineer in a gene that renders female mosquitoes sterile.

Released into the wild, these engineered mosquitoes could spread throughout the world, eradicating the parent mosquito stock. One calculation suggests that were you to dump enough of these engineered mosquitoes into Sub-Saharan Africa, you could wipe out virtually all of the malaria-transmitting mosquitoes in the course of a year. An inflection point in human history: using science to drive an infection-transmitting, if unloved, species to extinction and replacing them with kinder, gentler relatives, albeit ones that still bite you at your neighborhood barbecue.
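
To make that calculation a little more concrete, here is a toy, deterministic sketch of why a drive allele spreads while an ordinary engineered gene does not: in heterozygotes the drive "homes," converting the wild-type copy, so carriers transmit it to nearly all of their offspring rather than the Mendelian half. The release frequency and homing efficiency below are illustrative assumptions of mine, not numbers from the modeling literature, and real models add fitness costs, resistance alleles, and population structure.

```python
# A toy model of gene drive spread under random mating, with no fitness costs.

def next_generation(p, homing_efficiency):
    """One generation of random mating; returns the new drive-allele frequency."""
    q = 1.0 - p
    # Heterozygotes (frequency 2pq) convert their wild-type allele in the germline
    # with probability homing_efficiency, adding e * p * q to the drive frequency.
    return p + homing_efficiency * p * q

def simulate(p0=0.01, homing_efficiency=0.95, generations=25):
    """Track the drive-allele frequency, starting from a small release at frequency p0."""
    history, p = [p0], p0
    for _ in range(generations):
        p = next_generation(p, homing_efficiency)
        history.append(p)
    return history

if __name__ == "__main__":
    drive = simulate(homing_efficiency=0.95)      # CRISPR-based gene drive
    transgene = simulate(homing_efficiency=0.0)   # ordinary engineered allele
    for gen in (0, 5, 10, 15, 20, 25):
        print(f"generation {gen:2d}: drive {drive[gen]:.2f} | plain transgene {transgene[gen]:.2f}")
```

With 95 percent homing, an allele released at 1 percent of the population approaches fixation within roughly a dozen generations, comfortably under a year for a mosquito; set the homing efficiency to zero and the frequency never rises above its starting point (and with any fitness cost it would decline), which is the fate of the non-drive engineered mosquitoes described above.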

The engineered mosquitoes are waiting patiently in their inventor's lab. They've not been released into the wild. Lots of ethics committees lie ahead. Others are studying ways in which we might use gene drive to eliminate invasive species and return entire continents to their pre-invasion ecologic purity.

Meanwhile, those who guard us against things that go bump in the night are cogitating madly away on this. Last year, the latest Worldwide Threat Assessment of the U.S. Intelligence Community listed genome editing as a potential bioterror weapon of mass destruction.

If you can engineer a mosquito tribe out of existence, what mad science CRISPR experiment might be used to, say, drive New England Patriots fans to the brink of extinction? I know, I'm sorry, bad example, but I'm sure you can concoct a more morally dubious use for the technology without great difficulty.

Does this sound too science-fictiony to you? Too much like the movie Gattaca, where the genetically engineered abuse the remnant normal human population? Too improbable? I'd like to think so, but this year's Intel Science Talent Search second-place awardee (with a $75,000 prize and an acceptance to Harvard) is an 18-year-old kid named Michael Zhang whose prize-winning science fair project is called "CRISPR-Cas9-based Viruslike Particles for Orthogonal and Programmable Genetic Engineering in Mammalian Cells." Just wait until next flu season. And I don't even want to know what the Talent Search's First Prize was for; Second Prize sounded scary enough.

The National Academy of Sciences (NAS) and the National Academy of Medicine have just released a thoughtful report on genome editing (you can, and should, see it at https://www.nap.edu/catalog/24623/human-genome-editing-science-ethics-and-governance). Not 2 years ago, a group of scientists that included CRISPR co-inventor Jennifer Doudna strongly discouraged the use of CRISPR technology on humans. The new report now gives a green light to genome editing for serious heritable medical conditions, under properly controlled, rigorous criteria. The same committee argued against genome editing for what they termed "enhancement," which of course is what I really want.

While we are still a bit off from human genome editing, from both a technical and a regulatory standpoint, the NAS report opens the door on a new era in human history. The Israeli author Yuval Noah Harari recently published a fascinating book called Homo Deus: A Brief History of Tomorrow. Homo Deus makes the case that humanism has become the new religion of mankind, and that for the first time in our history as a species we have almost God-like powers to alter our own existence. Homo sapiens, if it does not succeed in cleansing the Earth of itself through stupidity, may transform into something unimaginable. And if that happens it will look back on this decade as the turning point.

I'd still like to meet up with a woolly mammoth.