Personalized Medicine, Big Data, and Computing Fundamentals

Personalized medicine promises to improve outcomes across the field of medicine.  Whether it is vaccines, aspirin for heart attacks, or chemotherapy, many medical interventions help some people while doing little for others…and sometimes doing serious harm.  If we could identify who would be helped and who would be harmed before treating them, many lives could be saved.

The tools of modern molecular biology are providing the potential basis for personalized medicine.  Genomics, proteomics, and other approaches provide a deluge of data for analysis.  If this “big data” can be appropriately analyzed, surely we can figure out what the effects of a potential treatment will be on an individual in advance…right?

I don’t think it will be that simple.

One of the fundamental results of computer science is that there are certain kinds of programs that we simply cannot write, at least if we expect them to perform correctly 100% of the time.  A particularly famous and seemingly simple version of this result is the halting problem.  Put simply, there is no foolproof way to tell whether a program will ever stop running (halt).  You can of course run it and if it stops, you know that it halts.  But if you run the program and it keeps going, you can’t be sure whether it will stop eventually.  Of course, it is possible to solve this “decision problem” in many special cases – we know, however, that a general solution is impossible.
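To see why, here is the classic argument written as a Python sketch.  The code is mine and purely illustrative – the whole point is that the assumed halts() function cannot actually be implemented, so this is an argument, not a working program.

    def halts(program, data):
        """Hypothetical perfect halting decider, assumed (for the sake of
        contradiction) to always return the right answer."""
        raise NotImplementedError

    def paradox(program):
        """Do the opposite of whatever halts() predicts about a program
        run on its own source."""
        if halts(program, program):
            while True:       # Predicted to halt?  Then loop forever.
                pass
        else:
            return            # Predicted to run forever?  Then halt at once.

    # Now consider paradox(paradox):
    #  - if halts(paradox, paradox) returns True, paradox(paradox) loops forever;
    #  - if it returns False, paradox(paradox) halts immediately.
    # Either way halts() gave the wrong answer, so no such function can exist.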

A surprising number of problems can be reduced to the halting problem, meaning that they are just as hard to solve in general.  One such problem is determining essentially any characteristic of a program’s behavior, e.g., whether it will ever print “hello” or the complete works of William Shakespeare.
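To make the reduction concrete, here is a sketch – again with hypothetical functions of my own naming – of why a perfect “will this ever print hello” checker would hand us a halting decider for free:

    def prints_hello(program_source):
        """Hypothetical checker: True if running program_source would ever
        print 'hello'.  Assumed to exist only for the sake of argument."""
        raise NotImplementedError

    def halts(program_source):
        """With prints_hello() in hand, halting becomes decidable: wrap the
        program so that 'hello' is printed exactly when it finishes.  (For a
        clean reduction, assume the wrapped copy's own output is discarded.)"""
        wrapped = program_source + '\nprint("hello")  # reached only if the program halts\n'
        return prints_hello(wrapped)

    # We already know halts() is impossible in general, so prints_hello() must
    # be impossible too, and the same goes for detecting Shakespeare.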

For personalized medicine to work, we have to be able to analyze data about a person and decide what effect a given medical intervention – an operation, a drug, a lifestyle change – will have on that person before actually doing the intervention.  In other words, solving personalized medicine is equivalent to determining an arbitrary property of a person.  Substitute “program” for “person”, and we have something equivalent to the halting problem.  Uh oh.

What are the implications of this insight?  Well first, we should accept that we’ll never be able to perfectly predict what will happen when we give anyone a pill.  We’re all different, and that uniqueness is irreducible.

A perhaps more useful insight, though, is that true personalized medicine will come when we can meaningfully simulate the physiology of an individual and/or when we can monitor how our bodies work in real time.  In computer terms, we have to move beyond static analysis to dynamic analysis.
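For a small taste of the difference in computing terms, here is a toy example of dynamic analysis: rather than trying to predict a program’s behavior from its text, you run it and watch every step.  The traced function below is just a stand-in for some opaque process; the tracer itself uses Python’s standard sys.settrace hook.

    import sys

    def tracer(frame, event, arg):
        """Report each line of Python about to execute: a crude 'debugger'."""
        if event == "line":
            print(f"about to run line {frame.f_lineno} of {frame.f_code.co_name}()")
        return tracer

    def dose_response(dose):
        response = 0
        for _ in range(dose):
            response += 1     # stand-in for some process too messy to predict
        return response

    sys.settrace(tracer)      # turn on dynamic observation
    dose_response(3)          # every step is now visible as it happens
    sys.settrace(None)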

Big data in medicine will give us insights that may allow for a limited form of personalized medicine; however, sample size limits and the massive diversity of our bodies make me suspect that any gains will be incremental and very limited in scope.  But a biological debugger that let us go step by step through a detailed simulation of a biological process?  That would be a game changer.  It is also a long way away.


The Passing of a Pioneer

Today I learned that John Holland passed away.  John was my “grand advisor”, as he was the Ph.D. advisor to Stephanie Forrest, my Ph.D. advisor.  Thus while I had only met John briefly, his work has profoundly influenced my own.

What most impresses me about John’s work is his clear dissatisfaction with his past work.  He developed genetic algorithms and could have spent his entire career on them; yet he went on to develop other major systems such as learning classifier systems and Echo.  John understood that the models of biology that he gave to computer science only captured small fragments of the richness of living systems; thus, while others have spent their careers elaborating on his past work, he kept working to capture more of that richness.  He knew how far we had to go.

The world is poorer for losing his future insights.

Exceptional Intelligence

This morning I was reading an article about Larry Page’s evolution at Google, and it made me reflect on the kinds of smarts that Google and others are embedding in the devices that surround us.

Whether it is Microsoft Clippy’s failed attempts at being helpful or Siri’s inability to handle simple variations of queries that it would otherwise understand, most attempts at computational intelligence tend to work reasonably well within narrow domains and perform very badly outside of them. If the boundaries between expertise and incompetence are clear, the tools can be a joy to use. When the boundary is fuzzy, we become frustrated with the technology’s limits and often stop using it altogether. If you can ask about the weather in ten ways but Siri can only understand three, and you can’t easily remember which is the “right” way to ask about the weather…well, why not just go and tap on the weather app and get the right results 100% of the time?
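To see how little stands behind some of these boundaries, here is a toy sketch of the brittle matching I have in mind. The phrasings and code are mine and bear no relation to how Siri actually works; they just illustrate what a narrow domain with hard edges feels like.

    # Toy illustration of narrow-domain "smarts": the assistant only understands
    # the phrasings its designers happened to anticipate.
    KNOWN_WEATHER_QUERIES = {
        "what's the weather",
        "what is the weather like",
        "do i need an umbrella",
    }

    def handle(query: str) -> str:
        if query.lower().strip(" ?") in KNOWN_WEATHER_QUERIES:
            return "Sunny, 22°C"            # inside the narrow domain: works great
        return "Sorry, I didn't get that."  # one word off and the illusion breaks

    print(handle("What is the weather like?"))   # Sunny, 22°C
    print(handle("Is it going to rain today?"))  # Sorry, I didn't get that.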

This rigidity of expectations – only being able to handle inputs that fit within a narrow range – points to the true limitation of current “intelligent” technology. In everyday life we associate intelligence not with getting common things right but with graceful, creative handling of the exceptional. The handling of the exceptional, however, is where most approaches to artificial intelligence break down. This limitation influences the core design of virtually every computational artifact that we build. Learning how to transcend it is, I think, the core problem of computer science.

Code Zombies

[This post was inspired by a discussion last week in my “Biological Approaches to Computer Security” class.]

Let’s talk about living code and dead code.

Living code is code which can change and evolve in response to new requirements.  Living code is a communications medium between the programmers of the past and those of the present.  Together, they collaborate on specifying solutions to software problems.  The more alive the code, the more active this dialog.

Dead code*, in this context, is code that is not alive.  It does not change in response to new requirements.  Dead code is part of a conversation that ended long ago.  Some dead code is truly dead and buried; I would put the code for Windows 1.0 in that category.  Other dead code, however, still walks the earth.

I call these entities code zombies.  Others call them legacy code.

Code zombies died a long time ago.  The programmer conversations they were part of have long ended, and nobody is left who can continue them from where they left off.  Nobody understands this code, and nobody can really change it without almost rewriting it from scratch.  Yet this code is still run, is still relied upon.

Look around you – you’re surrounded by code zombies.  If you run commercial, proprietary software, you are probably running a lot of zombie code.  If you run open source, there are many fewer zombies around – but they do pop their heads up every so often.

Enterprises devote huge resources to maintaining their zombies.  Zombies aren’t good at taking care of themselves, and the repair jobs are often gruesome.  Sometimes a zombie needs to be brought back to life.  This can be done, at great effort and expense.  The result, however, is Frankenstein code: it may live, but boy is it not pretty, and it may turn around and bite you.

And here’s a funny thing: zombie code is insecure code.  Tamed zombies aren’t fussy about who they take orders from.  Living code, however, is part of a community that works to keep it safe.

I predict that the software industry will transform once enough people realize the costs of keeping zombies around outweigh the benefits.

* I know “dead code” has other meanings in computer science.

The Expert Engineer’s Fallacy

Today’s news was abuzz with yet another Google attempt to engineer a better Internet, this time in the form of an alternative to JavaScript.  There’s been some interesting talk about how Google is doing just what Microsoft used to do, namely proposing its own technologies as better alternatives to current practice.  One argument I saw basically says:

  • Google is attempting to replace all major web technologies with alternatives developed in-house.
  • Microsoft attempted to replace all major web technologies with alternatives developed in-house.
  • Microsoft is (or at least was) evil.
  • Google is now becoming evil.

(Sorry, can’t remember where I saw this.)  I think there is actually some truth to this argument, but it has nothing to do with being “evil.”  Rather, it is simply a symptom of having too many very good engineers all working together in the same environment.

Engineers always want to make better pieces of technology.  The better the engineer, the greater their ambition.  Get a bunch of them together in the same place, and they’ll try to remake the world in their image.

Unfortunately, at a certain scale their efforts are almost guaranteed to fail.  It isn’t because they aren’t good.  It is because they don’t know their limits.  Those limits are simple to explain, but hard for many to believe.

We cannot engineer novel complex systems.  Full stop.

If something is very complicated and you’ve built one before, you can copy the past design and do a reasonable job – as long as you don’t change too many things.  If you are engineering something on a smaller scale, it is possible to design it from scratch and have it be quite good.  But once a system gets complicated enough, nobody understands the whole problem.  And if you don’t understand the whole problem, you can’t engineer a solution.

But we build big systems all the time, right?  Yes, we do, but what we do is engineer parts and then put them together in a process that is, more or less, trial and error.  We try things, we see what happens, and inevitably tweak/hack things to get them to work.

Google’s effort with Dash may succeed on its own in some niche.  But JavaScript as a platform is simply too big to just be replaced.  It is big and complex and solves more problems than any human – or even moderately-sized group of humans – understands.  We can compete with it, but we won’t be able to just engineer it away.   The rules of evolution now apply, not those of design.

Google is doing what Microsoft tried before simply because they are both places with engineers who don’t know their limits.  Well, Microsoft now knows its limits – it hit a wall, and is now a much more humble company.  Eventually this will happen to Google as well.

Such is the way of evolution.

Statistical Mirages

I have to admit that I’ve always been suspicious of statistics as they are used in computer security.  Oddly enough, I should have been suspicious of statistics in other sciences as well.  It turns out that when you examine large datasets with lots of tests, you are likely to find something simply by random chance, i.e., at least one of the tests is likely to come up with something “significant.”  This is the multiple comparisons problem, and it is how you get results suggesting that breathing in bus exhaust is good for you.  In other words, you get false positives.
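If you doubt this, it is easy to watch it happen with nothing but noise.  Here is a quick simulation – a crude permutation test on made-up data, meant only to illustrate the effect, not to endorse any particular statistical procedure:

    import random

    random.seed(1)

    def fake_study(n=50, trials=200):
        """Compare two groups of pure noise; return a rough p-value for the
        difference in their means via a permutation test."""
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        observed = abs(sum(a) / n - sum(b) / n)
        pooled = a + b
        more_extreme = 0
        for _ in range(trials):
            random.shuffle(pooled)
            diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / n)
            if diff >= observed:
                more_extreme += 1
        return more_extreme / trials

    # Run 100 "studies" in which nothing is going on and count the "discoveries".
    p_values = [fake_study() for _ in range(100)]
    hits = sum(p < 0.05 for p in p_values)
    print(f"{hits} of 100 tests on pure noise came out 'significant'")  # expect ~5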

If anomaly detection is ever to become a fundamental defense technology, it will have to move beyond statistics to being grounded in the mechanisms of computers and the real behaviors of users.  This is going to take a while, because this is a lot harder than just running a bunch of tests on datasets.  Of course, given the current disrepute of anomaly detection in security circles, perhaps the door is wide open for better approaches.
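As one sketch of what “grounded in the mechanisms of computers” can look like, consider sequence-based anomaly detection: learn the short sequences of system calls a program normally makes, then flag sequences never seen during training.  The window size and traces below are made up purely for illustration.

    def sliding_windows(trace, k=3):
        """All length-k windows of a call trace."""
        return {tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)}

    # "Normal" behavior observed while training the profile (made-up trace).
    normal_trace = ["open", "read", "read", "write", "close",
                    "open", "read", "write", "close"]
    profile = sliding_windows(normal_trace)

    def anomalous_windows(trace, profile, k=3):
        """Windows in a new trace that never occurred during training."""
        return [w for w in sliding_windows(trace, k) if w not in profile]

    # A new trace containing behavior the program never exhibited before.
    suspect_trace = ["open", "read", "exec", "socket", "write", "close"]
    print(anomalous_windows(suspect_trace, profile))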

RIGorous Cell-level Intrusion Detection

When looking to biology for inspiration on computer security, it is natural to first look at mammalian immune systems.  While they are amazingly effective, they are also mind-numbingly complex.  As a result, many researchers get seduced by the immune system’s architecture when there is so much to learn from its lower-level workings.

Case in point: every cell in the human body can detect many kinds of viral infection on its own, i.e., with no assistance from the cells of the immune system.  As this recent article from Science shows, we are still far from understanding how such mechanisms actually work.  My high-level take on this article, as a computer security researcher, is as follows:

  • Basically all cells in mammals (and, I think, most animals in general) can generate immune signals that trigger responses from both internal and external defense mechanisms.  A key source of such signals is foreign RNA (code) inside the cytoplasm of a cell.  Of course, there is a lot of other, “self” RNA in that cytoplasm as well – so how does the cell tell the difference?
  • A key heuristic is that native RNA is only copied in the nucleus of a cell; RNA-based viruses, however, need to make RNA copies in the cytoplasm (that’s where they end up after getting injected, and it isn’t easy to get into the nucleus – code basically only goes out; it hardly ever goes in).  RNA polymerases (RNA copiers) all use the same basic patterns to mark where copying should start.  Receptors such as RIG-I detect RNA with “copy me” signals (5′-PPP) in places where no copying should occur (the cytoplasm).
  • Of course, this is biology, so the picture isn’t so clear-cut.  A simple “copy me” signal won’t trigger a response; there must also be some base pairing – the RNA molecule must fold back on itself or be bound to another (partially complementary) RNA molecule.  I’d guess this additional constraint is there because normal messenger RNA is strictly single-stranded.   (Indeed, kinks or pairing in messenger RNA are bad in general because they’ll interfere with the creation of proteins.)

Of course, all of this is partial information – there’s evidence that these foreign-RNA-detecting molecules (the RLR family) trigger under additional constraints.  This doesn’t surprise me either, as this mechanism must operate with extremely low false positives; one or two matching rules aren’t up to the task given the complexity of cellular machinery and (more importantly) given the evolution of viruses to circumvent these protections.  Viruses have evolved ways to shut down or suppress RLR-related receptors.  Although cells will be pushed to evolve anti-circumvention mechanisms, in practice this is limited in the cellular environment – make the detectors too sensitive and a cell will kill itself spontaneously!  The solution has been to keep a circumventable but highly accurate detector in place; the arms race has instead moved to optimizing the larger immune system.
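Translated into computing terms, the cell’s detection logic looks something like the sketch below: a detector that fires only when several independent indicators agree, trading some coverage (it can be circumvented) for an extremely low false-positive rate.  The names and fields are mine and are only meant to capture the shape of the heuristic, not the biology’s details.

    from dataclasses import dataclass

    @dataclass
    class RNAObservation:
        """Toy model of what a RIG-I-like receptor 'sees'."""
        location: str          # "cytoplasm" or "nucleus"
        has_5ppp_end: bool     # carries an uncapped "copy me" (5'-PPP) mark
        base_paired: bool      # folds back on itself or binds a complement

    def rig_i_like_alarm(obs: RNAObservation) -> bool:
        """Fire only when all indicators agree: a 'copy me' mark, in a place
        where no copying should happen, on RNA with double-stranded character.
        The conjunction keeps false positives rare, at the price of being
        circumventable by a virus that can strip away any one signal."""
        return (
            obs.location == "cytoplasm"
            and obs.has_5ppp_end
            and obs.base_paired
        )

    # Ordinary messenger RNA: capped, single-stranded, so no alarm.
    print(rig_i_like_alarm(RNAObservation("cytoplasm", False, False)))   # False
    # A replicating RNA virus: uncapped and base-paired in the cytoplasm.
    print(rig_i_like_alarm(RNAObservation("cytoplasm", True, True)))     # True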

I leave any conclusions regarding the design of computer defenses as an exercise for the reader. 🙂