Humanizing Automation

After two crashes of the Boeing 737 Max 8, the planes have been grounded due to concerns about their automated systems.  Whether or not MCAS turns out to be the root cause, this is but the latest example of a major technological trend: computers making mistakes when controlling systems that humans used to control.

When we automate control, we trade human mistakes for system errors.  When an automated plane crashes into the ground or an autonomous car runs over a human, the natural answer is to put the human back in charge.  After all, wouldn’t a person have avoided tragedy?

Such an answer, though, misses how things have changed.  People generally cannot take over at a moment’s notice – they don’t have situational awareness when they aren’t doing the task full time.  Further, people may not even be capable of making timely decisions because the system was designed for automated control.

Automated systems are going to take erroneous actions and people will not be able to take over and fix things.  However, that does not mean that we have to blindly accept those errors.

It is easier when a person can notice the problem in a timely fashion.  Imagine if you were a passenger in a plane that was flying towards the ground and you happened to be in the cockpit.  Even if you had no knowledge of how a plane works, you’d be able to see that something was wrong and to tell the pilots (forcefully) that they were doing something very bad.  We can imagine controls that would convey the same information to an automated system.

However, what if there is nobody to notice the problem?

We see this problem frequently in computer security.  Whether you are trying to protect one or a thousand computers, nobody wants to constantly monitor defense systems.  We tune systems so that they produce few false positives, which largely prevents them from shutting down legitimate activities.  The reduced sensitivity of such settings, however, also means that they miss many forms of attacks.
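
As a toy illustration of that trade-off (the events, scores, and thresholds below are invented for the example, not drawn from any real product): a detector that alerts when an anomaly score crosses a threshold can only be made quieter by letting more attacks through.

    # Toy sketch of the false-positive trade-off (hypothetical scores and
    # thresholds; real detectors are far more complicated).

    def alerts(events, threshold):
        """Flag any event whose anomaly score exceeds the threshold."""
        return [e["name"] for e in events if e["score"] > threshold]

    events = [
        {"name": "nightly backup",    "score": 0.7, "malicious": False},
        {"name": "bulk data exfil",   "score": 0.8, "malicious": True},
        {"name": "normal login",      "score": 0.2, "malicious": False},
        {"name": "low-and-slow scan", "score": 0.4, "malicious": True},
    ]

    # A sensitive threshold catches both attacks but also flags the backup;
    # a quiet threshold spares the backup but misses the slow scan.
    for threshold in (0.3, 0.75):
        print(threshold, alerts(events, threshold))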

If automated systems are to serve human interests, they must always attempt to achieve human-specified goals, even when there is no human in the loop.  Automation often fails, though, when it tries too hard to achieve goals that have been set for it.  Runaway loops of goal seeking are often the root cause of disaster.  Consider a stock trading program that keeps trying to make money by trading even when every trade loses money – it never gives up, and so it keeps making the problem worse until outside forces intervene.  Similarly, with the 737 Max 8 accidents, we seem to have pilots who were directly countering the actions of automated systems, and the automated systems kept fighting back until the plane hit the ground.
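
To make that failure mode concrete, here is a deliberately naive sketch (the trading function, the balance, and the loss per trade are all made up for illustration).  Nothing in the loop asks whether the trades are helping, so only an outside force stops it.

    # A naive goal-seeking loop (hypothetical and deliberately simplified).
    # The goal is "make money by trading," but nothing here checks whether
    # the trades are actually working.

    def trade(balance):
        """Stand-in for a strategy that happens to lose money on every trade."""
        return balance - 10

    balance = 1000
    while balance > 0:     # only an outside force (running out of money) stops it
        balance = trade(balance)

    print("Stopped only when the money ran out:", balance)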

People can observe a situation and realize their actions aren’t achieving their goals.  People may get frustrated, bored, angry, or scared; very rarely will they be oblivious as to whether or not their choices are succeeding.  Automated systems, however, are generally built to “do what they are told” no matter the consequences.

We will never make automated systems perfectly accurate.  But, perhaps we can build systems that know when they are failing and can seek outside guidance.  Sometimes, admitting a problem is the most intelligent thing any system can do.
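
As a minimal sketch of that idea (again hypothetical, reusing the toy trading loop from above): track whether recent actions are moving the system toward its goal, and stop to ask for guidance when they consistently are not.

    # The same toy loop, with a crude self-assessment: if the last few
    # actions have all made things worse, stop acting and ask for outside
    # guidance instead of pressing on.

    def trade(balance):
        return balance - 10    # still a losing strategy

    balance, recent_changes = 1000, []
    while balance > 0:
        new_balance = trade(balance)
        recent_changes.append(new_balance - balance)   # did that action help?
        balance = new_balance
        if len(recent_changes) >= 3 and all(d < 0 for d in recent_changes[-3:]):
            print("Every recent trade lost money; requesting human guidance.")
            break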

 

Automating defense versus offense

I just took a look at the Cyber Grand Challenge, a DARPA-sponsored event that will showcase systems that can play CTF (capture the flag) autonomously.

This event scares me.

Developing automated attackers and automated defenders might appear to be a way to develop techniques to automatically harden software.  Let the bots slug it out in the simplified, safe environment of the challenge and then, once they’ve proven themselves, turn them loose in the real world to defend real systems (or, at least, adapt their techniques to build practical defenses).

I am certain it won’t work out this way.

The attack techniques developed will generalize and will be very good at finding flaws in real systems.  The defensive techniques, however, will not generalize to the real world.  Thus the outcome of this challenge will be even better ways to attack systems and little improvement in protecting systems.

This difference will occur because in the real world defenses have to work while protecting normal system functioning.  The hard part about defense is not stopping the attacks, it is stopping the attacks while keeping your systems up and your users happy.  (After all, you can always stop the attack by just turning your system off.)  Sure, CTF competitions feature services that have to be kept running; these services are nothing like real-world services though, even when they are running the same binaries, simply because they aren’t being required to provide real services to real users.

Simulating real-world behavior accurately is equivalent to building a detector for anomalous behavior.  If you know what makes it “real”, you know what doesn’t belong.  It is thus not easy to do.  Past efforts in computer security to simulate realistic computer behavior for testing purposes have failed miserably (e.g., see John McHugh’s critique of the late 1990s DARPA intrusion detection evaluations).
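
A small sketch of why the two problems are the same (the traffic model here is invented and grossly simplified): any model good enough to generate “realistic” behavior can also score how realistic an observation is, and thresholding that score is an anomaly detector.

    # Hypothetical sketch: a model that can generate realistic request sizes
    # is, used in reverse, a detector for request sizes that don't belong.

    import random
    import statistics

    observed = [random.gauss(500, 50) for _ in range(1000)]  # stand-in for "real" traffic
    mu = statistics.mean(observed)
    sigma = statistics.stdev(observed)

    def simulate_request_size():
        """Generate a 'realistic' request size from the learned model."""
        return random.gauss(mu, sigma)

    def is_anomalous(size, k=4):
        """The same model, used to judge whether a size looks real."""
        return abs(size - mu) > k * sigma

    print(round(simulate_request_size()), is_anomalous(520), is_anomalous(5000))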

The Cyber Grand Challenge makes no effort to simulate a realistic environment; in fact, it was designed to emphasize reproducibility and determinism, two qualities production systems almost never have.  In this sort of environment it is easy to detect attacks and it is easy to verify that a response strategy does not harm the service being defended.

The attackers are playing a game that is very close to what real-world attackers face.  The defenders, however, are facing a much simplified challenge that leaves out all of the really hard aspects of the problem.  Note that this even applies to software patching, as the hard part of patching is making sure you haven’t broken any corner cases of legitimate behavior.  When legitimate traffic has no corner cases, you can get away with being a lot sloppier.

On the attack side, things are clearly working when you have systems that can find vulnerabilities that weren’t inserted intentionally (slide 37).  On the defense side, I didn’t see any novel defenses, and I don’t expect to see any, at least none that would ever work in practice.

Attacking is easy, defending is hard.  Automating defense is fundamentally different from automating attacks.  Only when we accept the true nature of this difference will we be able to change the balance of power in computer security.

 

The Passing of a Pioneer

Today I learned that John Holland passed away.  John was my “grand advisor”, as he was the Ph.D. advisor to Stephanie Forrest, my Ph.D. advisor.  Thus while I had only met John briefly, his work has profoundly influenced my own.

What most impresses me about John’s work is his clear dissatisfaction with his past work.  He developed genetic algorithms and could have spent his entire career on them; yet he went on to develop other major systems such as learning classifier systems and Echo.  John understood that the models of biology that he gave to computer science only captured small fragments of the richness of living systems; thus, while others have spent their careers elaborating on his past work, he kept working to capture more of that richness.  He knew how far we had to go.

The world is poorer for losing his future insights.

Exceptional Intelligence

This morning I was reading an article about Larry Page’s evolution at Google and it made me reflect on the kinds of smarts that Google and others are embedding in the devices that surround us.

Whether it is Microsoft’s Clippy’s failed attempts at being helpful or Siri’s inability to handle simple variations of queries that it otherwise would understand, most attempts at computational intelligence tend to work reasonably well within narrow domains and perform very badly outside of them. If the boundaries between expertise and incompetence are clear, the tools can be a joy to use. When the boundary is fuzzy, we become frustrated with the technology’s limits and often stop using it altogether. If you can ask about the weather in ten ways but Siri can only understand three, and you can’t easily remember which is the “right” way to ask about the weather…well, why not just go and tap on the weather app and get the right results 100% of the time?

This rigidity of expectations – only being able to handle inputs that fit within a narrow range – points to the true limitation of current “intelligent” technology. In everyday life we associate intelligence not with getting common things right but with graceful, creative handling of the exceptional. The handling of the exceptional, however, is where most approaches to artificial intelligence break down. This limitation influences the core design of virtually every computational artifact that we build. Learning how to transcend it is, I think, the core problem of computer science.