Buzzword propagation

I just wanted to briefly comment on today’s article in Ars Technica entitled “Tech firms want to save the auto industry—and the connected car—from itself.”  Specifically, there is this quote near the end:

Symantec’s Anomaly Detection starts off learning what “normal” is for a particular model of car during the development process, building up a picture of automotive information homeostasis by observing CANbus traffic during production testing. Out in the wild, it uses this profile of activity to compare that to the car it’s running on, alerting the Symantec and the OEM in the event of something untoward happening.

Yes, they are doing anomaly detection, i.e. “automotive information homeostasis.”  How about that.

Two things I should point out, though.  First, their solution adds 6% overhead.  That seems awfully high, especially since program behavior on a car should be relatively easy to model.  I wonder what learning algorithms they are using?

Second, they are using static profiles in production.  Static profiles can certainly be tuned to reduce false positives; however, such a choice also guarantees no profile diversity and thus removes one of the key advantages of anomaly detection: when profiles vary from system to system, a zero-day attack that slips past one profile may still be caught by another.
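For the curious, here is what the general idea might look like in code.  This is a minimal sketch in Python, not Symantec’s actual algorithm (which, as far as I know, they have not published): it learns the relative frequency of CAN arbitration IDs during testing and then compares live traffic against that frozen profile.  The function names and the alert threshold are invented for illustration.

    from collections import Counter

    def learn_profile(training_frames):
        """Learn the relative frequency of each CAN arbitration ID seen
        during production testing (frames are (id, payload) pairs)."""
        counts = Counter(frame_id for frame_id, _payload in training_frames)
        total = sum(counts.values())
        return {frame_id: n / total for frame_id, n in counts.items()}

    def anomaly_score(profile, observed_frames):
        """L1 distance between the learned ID distribution and the
        distribution observed in a window of live traffic."""
        counts = Counter(frame_id for frame_id, _payload in observed_frames)
        total = sum(counts.values()) or 1
        ids = set(profile) | set(counts)
        return sum(abs(profile.get(i, 0.0) - counts.get(i, 0) / total)
                   for i in ids)

    ALERT_THRESHOLD = 0.2  # hypothetical tuning value

    def window_is_anomalous(profile, window_frames):
        # The profile is static: every car of this model ships with the
        # same one, which is precisely what eliminates profile diversity.
        return anomaly_score(profile, window_frames) > ALERT_THRESHOLD

Even something this crude makes the 6% figure surprising, since counting message IDs is cheap; presumably the real system models payloads and timing as well.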

But hey, it is certainly a step forward.

Finally.

Automating defense versus offense

I just took a look at the Cyber Grand Challenge, a DARPA-sponsored event that will showcase systems that can play CTF (capture the flag) autonomously.

This event scares me.

Developing automated attackers and automated defenders might appear to be a way to develop techniques to automatically harden software.  Let the bots slug it out in the simplified, safe environment of the challenge and then, once they’ve proven themselves, turn them loose in the real world to defend real systems (or, at least, adapt their techniques to build practical defenses).

I am certain it won’t work out this way.

The attack techniques developed will generalize and will be very good at finding flaws in real systems.  The defensive techniques, however, will not generalize to the real world.  Thus the outcome of this challenge will be even better ways to attack systems and little improvement in protecting systems.

This difference will occur because in the real world defenses have to work while preserving normal system functioning.  The hard part about defense is not stopping the attacks; it is stopping the attacks while keeping your systems up and your users happy.  (After all, you can always stop the attack by just turning your system off.)  Sure, CTF competitions feature services that have to be kept running; these services are nothing like real-world services, though, even when they are running the same binaries, simply because they aren’t being required to provide real services to real users.

Simulating real-world behavior accurately is equivalent to building a detector for anomalous behavior.  If you know what makes behavior “real”, you know what doesn’t belong.  It is thus not easy to do.  Past efforts in computer security to simulate realistic computer behavior for testing purposes have failed miserably (e.g., see John McHugh’s critique of the late-1990s DARPA intrusion detection evaluations).

The Cyber Grand Challenge makes no effort to simulate a realistic environment; in fact, it was designed to emphasize reproducibility and determinism, two qualities production systems almost never have.  In this sort of environment it is easy to detect attacks, and it is easy to verify that a response strategy does not harm the defended services.
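To make this concrete, here is a toy sketch in Python of the kind of defense a deterministic environment permits.  The request strings are invented; the point is that when “normal” is a finite, replayable set of inputs, exact-match whitelisting yields perfect detection with zero false positives, something no real service with real users could ever rely on.

    from typing import Optional

    # Legitimate traffic in a deterministic challenge environment can be
    # enumerated ahead of time.  (These request strings are made up.)
    LEGITIMATE_REQUESTS = {
        b"GET /status\n",
        b"PUT /flag deadbeef\n",
    }

    def filter_request(request: bytes) -> Optional[bytes]:
        """Pass known-good requests and drop everything else.  This is only
        a viable defense because "normal" is a small, fixed set."""
        return request if request in LEGITIMATE_REQUESTS else None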

The attackers are playing a game that is very close to what real-world attackers face.  The defenders, however, are facing a much simplified challenge that leaves out all of the really hard aspects of the problem.  Note this even goes for software patching, as the hard part of patching is making sure you didn’t miss any corner cases.  When legitimate traffic has no corner cases, you can get away with being a lot sloppier.

On the attack side, things are clearly working when you have systems that can find vulnerabilities that weren’t inserted intentionally (slide 37).  On the defense side, I didn’t see any novel defenses, and I don’t expect to see any, at least none that would ever work in practice.

Attacking is easy, defending is hard.  Automating defense is fundamentally different from automating attacks.  Only when we accept the true nature of this difference will we be able to change the balance of power in computer security.

The Passing of a Pioneer

Today I learned that John Holland passed away.  John was my “grand advisor”, as he was the Ph.D. advisor to Stephanie Forrest, my Ph.D. advisor.  Thus while I had only met John briefly, his work has profoundly influenced my own.

What most impresses me about John’s work is his clear dissatisfaction with his past work.  He developed genetic algorithms and could have spent his entire career on them; yet he went on to develop other major systems such as learning classifier systems and Echo.  John understood that the models of biology that he gave to computer science only captured small fragments of the richness of living systems; thus, while others have spent their careers elaborating on his past work, he kept working to capture more of that richness.  He knew how far we had to go.

The world is poorer for losing his future insights.

Exceptional Intelligence

This morning I was reading an article about Larry Page’s evolution at Google and it made me reflect on the kinds of smarts that Google and others are embedding in the devices that surround us.

Whether it is Microsoft Clippy’s failed attempts at being helpful or Siri’s inability to handle simple variations of queries that it otherwise would understand, most attempts at computational intelligence tend to work reasonably well within narrow domains and perform very badly outside of them. If the boundaries between expertise and incompetence are clear, the tools can be a joy to use. When the boundary is fuzzy, we become frustrated with the technology’s limits and often stop using it altogether. If you can ask about the weather in ten ways but Siri can only understand three, and you can’t easily remember which is the “right” way to ask about the weather…well, why not just go and tap on the weather app and get the right results 100% of the time?
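As a toy illustration of this brittleness, consider an assistant that recognizes the weather intent only through a few hand-written patterns (the patterns below are invented; this is not how Siri actually works):

    import re

    # Three blessed phrasings of the weather question; everything else,
    # however reasonable, is not understood.
    WEATHER_PATTERNS = [
        re.compile(r"what('s| is) the weather( like)? today", re.I),
        re.compile(r"will it rain today", re.I),
        re.compile(r"do i need an umbrella", re.I),
    ]

    def understands(utterance):
        return any(p.search(utterance) for p in WEATHER_PATTERNS)

    print(understands("What is the weather like today?"))    # True
    print(understands("Is it going to be nice out later?"))  # False: same
                                                             # intent, different
                                                             # phrasing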

This rigidity of expectations – only being able to handle inputs that fit within a narrow range – points to the true limitation of current “intelligent” technology. In everyday life we associate intelligence not with getting common things right but with graceful, creative handling of the exceptional. The handling of the exceptional, however, is where most approaches to artificial intelligence break down. This limitation influences the core design of virtually every computational artifact that we build. Learning how to transcend it is, I think, the core problem of computer science.

On Narrative Authentication

Since January 3rd (Friday) I’ve been getting notices about our NSPW 2013 paper on narrative authentication. This is a bit unusual since this paper was a position paper, not a full research paper. In other words, it is highly speculative. There was some real work behind it, specifically Carson Brown’s Master’s thesis (this is the source of the example in the paper). The paper, however, speculates about how systems that could truly understand and generate narratives could be used for authentication purposes.

If you are interested in these ideas please go and read the paper and, if inspired, go and build something! Neither I nor my co-authors, David Mould and Carson Brown, are currently working on this line of research, so there is no chance we’ll step on your toes. Having said that, if anyone is really interested in developing narrative authentication systems, please let me know; I’d love to collaborate!

Code Zombies

[This post was inspired by a discussion last week in my “Biological Approaches to Computer Security” class.]

Let’s talk about living code and dead code.

Living code is code which can change and evolve in response to new requirements.  Living code is a communications medium between the programmers of the past and those of the present.  Together, they collaborate on specifying solutions to software problems.  The more alive the code, the more active this dialog.

Dead code*, in this context, is code that is not alive.  It does not change in response to new requirements.  Dead code is part of a conversation that ended long ago.  Some dead code is truly dead and buried.  The code for Windows 1.0 I would characterize as being dead in this way.  Other dead code, however, still walks the earth.

I call these entities code zombies.  Others call them legacy code.

Code zombies died a long time ago.  The programmer conversations they were part of have long ended, and nobody is left who can continue them from where they left off.  Nobody understands this code, and nobody can really change it without almost rewriting it from scratch.  Yet this code is still run, is still relied upon.

Look around you – you’re surrounded by code zombies.  If you run commercial, proprietary software, you are probably running a lot of zombie code.  If you run open source, there are many fewer zombies around – but they do pop their heads up every so often.

Enterprises devote huge resources to maintaining their zombies.  Zombies aren’t good at taking care of themselves, and the repair jobs are often gruesome.  Sometimes a zombie needs to be brought back to life.  This can be done, at great effort and expense.  The result, however, is Frankenstein code: it may live, but boy is it not pretty, and it may turn around and bite you.

And here’s a funny thing: zombie code is insecure code.  Tamed zombies aren’t fussy about who they take orders from.  Living code, however, is part of a community that works to keep it safe.

I predict that the software industry will transform once enough people realize the costs of keeping zombies around outweigh the benefits.

* I know “dead code” has other meanings in computer science.

The Expert Engineer’s Fallacy

Today’s news was abuzz with yet another Google attempt to engineer a better Internet, this time in the form of an alternative to JavaScript. There’s been some interesting talk about how Google is doing just what Microsoft used to do, namely propose their own technologies as better alternatives to current practice.  One argument I saw basically says:

  • Google is attempting to replace all major web technologies with alternatives developed in-house.
  • Microsoft attempted to replace all major web technologies with alternatives developed in-house.
  • Microsoft is (or at least was) evil.
  • Google is now becoming evil.

(Sorry, can’t remember where I saw this.)  I think there is actually some truth to this argument, but it has nothing to do with being “evil.”  Rather, it is simply a symptom of having too many very good engineers all working together in the same environment.

Engineers always want to make better pieces of technology.  The better the engineer, the greater their ambition.  Get a bunch of them together in the same place, and they’ll try to remake the world in their image.

Unfortunately, at a certain scale their efforts are almost guaranteed to fail.  It isn’t because they aren’t good.  It is because they don’t know their limits.  And their limits are also simple to explain, but hard for many to believe.

We cannot engineer novel complex systems.  Full stop.

If something is very complicated and you’ve built one before, you can copy the past design and do a reasonable job – as long as you don’t change too many things.  If you are engineering something on a smaller scale, it is possible to design it from scratch and have it be quite good.  But once a system gets complicated enough, nobody understands the whole problem.  And if you don’t understand the whole problem, you can’t engineer a solution.

But we build big systems all the time, right?  Yes, we do, but what we do is engineer parts and then put them together in a process that is, more or less, trial and error.  We try things, we see what happens, and inevitably tweak/hack things to get them to work.

Google’s effort with Dash may succeed on its own in some niche.  But JavaScript as a platform is simply too big to just be replaced.  It is big and complex and solves more problems than any human – or even moderately-sized group of humans – understands.  We can compete with it, but we won’t be able to just engineer it away.  The rules of evolution now apply, not those of design.

Google is doing what Microsoft tried before simply because they are both places with engineers who don’t know their limits.  Well, Microsoft now knows its limits – it hit a wall, and is now a much more humble company.  Eventually this will happen to Google as well.

Such is the way of evolution.

War is the exception

The imagery and terminology of war pervade computer security.  Intrusions, vulnerabilities, attackers, defenders – they are all militaristic.  While such terms may be useful, that does not mean we should think we are at war on the Internet.  I say this for a very simple reason: war is always the exception.

Life as we know it is on pause when we are at war.  The rules that govern our productive lives – those that allow us to create, trade, and raise families – are all suspended when we are fighting for those lives.  The only good thing about war is its ending.  To be at war is to be fighting so we can be at peace.

This is what the computer security community must come to grips with: we are not at war today on the Internet.  If we were, then people would not be conducting business, socializing, learning, and falling in love online; instead, we would all be fighting for our (virtual) lives.

Now, it is true that in real war life does go on in some fashion; nevertheless, war is defined by fear, and this fear infuses even the most mundane aspects of life.  While people are wary, they are not living in fear on the Internet.  We are at peace online.  And this is a good thing!

Why this matters in a computer security context is that what is appropriate for war is not appropriate for peacetime.  Arming the populace for cyber warfare can help prepare us for war; preparation for war, however, is often the surest way to destroy the peace.

Sacrifices that we willingly make in war are loathsome otherwise.  To forget this is to forget the greatest benefit of peace: the relative absence of fear.  Our job as computer security researchers and professionals should not be to spread fear, but rather to protect people from fear.

Our job is to keep the peace.

Going Beyond Social

Today Google announced their Google+ initiative, their latest attempt to address the “social” phenomenon that is currently dominated by Facebook.  To the various observations being made across the Internet, I just wanted to add a small thought.

Humans evolved to socialize in relatively small groups – 200-300 is what I’ve heard.  Social media makes it oh so easy to interact with many, many more than this.  While connecting online with every person you’ve ever met might sound like a good idea, most of us quickly realize that we really cannot handle this level of social interaction.

I am currently connected on Facebook with many people I’d love to talk with for hours.  I have been blessed by having known many very interesting, good people in my life.  But the fact that I can interact with so many of them now just makes me shudder.  I simply don’t have the time!  And even just trying to keep up with their status updates is too much.  As our online connectivity increases, I think we all are going to experience this social information overload.

Google is giving us a better Facebook.  What we need, though, is a cure for social information overload.