Humanizing Automation

After two crashes of the Boeing 737 Max 8, the planes have been grounded due to concerns about their automated systems.  Whether or not MCAS turns out to be the root cause, this is but the latest example of a major technological trend: computers making mistakes when controlling systems that humans used to control.

When we automate control, we trade human mistakes for system errors.  When an automated plane crashes into the ground or an autonomous car runs over a pedestrian, the natural answer is to put the human back in charge.  After all, wouldn’t a person have avoided the tragedy?

Such an answer, though, misses how things have changed.  People generally cannot take over at a moment’s notice – they don’t have situational awareness when they aren’t doing the task full time.  Further, people may not even be capable of making timely decisions because the system was designed for automated control.

Automated systems are going to take erroneous actions, and people will not be able to take over and fix things.  However, that does not mean that we have to blindly accept those errors.

It is easier when a person can notice the problem in a timely fashion.  Imagine if you were a passenger in a plane that was flying towards the ground and you happened to be in the cockpit.  Even if you had no knowledge of how a plane works, you’d be able to see that something was wrong and could tell the pilots (forcefully) that they were doing something very bad.  We can imagine controls that would convey the same information to an automated system.

However, what if there is nobody to notice the problem?

We see this problem frequently in computer security.  Whether you are trying to protect one computer or a thousand, nobody wants to constantly monitor defense systems.  We tune those systems to produce few false positives, which largely keeps them from shutting down legitimate activity.  The reduced sensitivity of such settings, however, also means that they miss many real attacks.
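To make that trade-off concrete, here is a minimal sketch – the events, scores, and thresholds are invented for illustration, not drawn from any real security product – of a detector that only raises an alert when an anomaly score crosses a threshold.  The same knob that suppresses false alarms also suppresses real ones.

```python
# A toy detector (not any particular security product): flag events whose
# anomaly score exceeds a threshold.  Events and scores are made up.

def alerts(events, threshold):
    """Return the events whose anomaly score exceeds the threshold."""
    return [e for e in events if e["score"] > threshold]

events = [
    {"name": "nightly backup",      "score": 0.35, "malicious": False},
    {"name": "bulk file download",  "score": 0.55, "malicious": True},
    {"name": "new admin login",     "score": 0.60, "malicious": False},
    {"name": "credential stuffing", "score": 0.90, "malicious": True},
]

# A sensitive setting catches every attack but cries wolf; a quiet setting
# stays out of the way but misses a real attack.
for threshold in (0.3, 0.7):
    flagged = alerts(events, threshold)
    false_positives = [e for e in flagged if not e["malicious"]]
    missed_attacks = [e for e in events if e["malicious"] and e not in flagged]
    print(f"threshold={threshold}: "
          f"{len(false_positives)} false positive(s), "
          f"{len(missed_attacks)} missed attack(s)")

# threshold=0.3: 2 false positive(s), 0 missed attack(s)
# threshold=0.7: 0 false positive(s), 1 missed attack(s)
```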

If automated systems are to serve human interests, they must always attempt to achieve human-specified goals, even when there is no human in the loop.  Automation often fails, though, when it tries too hard to achieve the goals that have been set for it.  Runaway loops of goal seeking are often the root cause of disaster.  Consider a stock trading program that keeps trying to make money by trading even when every trade loses money – it never gives up, so it keeps making the problem worse until outside forces intervene.  Similarly, in the 737 Max 8 accidents, we seem to have had pilots directly countering the actions of the automated systems, and the automated systems fighting back until the plane hit the ground.
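The gap between a runaway loop and one that knows when to quit can be surprisingly small.  Here is a minimal sketch using a hypothetical trading bot (the trade function and numbers are invented for illustration): the naive version pursues its goal no matter how badly it is doing, while the bounded version notices a losing streak and stops.

```python
# A hypothetical trading bot, invented for illustration (no real trading API).

def trade():
    """A trade that, in this scenario, always loses money."""
    return -100  # profit in dollars

def naive_bot(balance, goal=10_000):
    # Runaway goal seeking: "keep trading until the goal is met", with no
    # notion of whether trading is actually working.
    while balance < goal:
        balance += trade()
        if balance <= 0:  # only outside reality stops it
            raise RuntimeError("account wiped out")
    return balance

def bounded_bot(balance, goal=10_000, max_losing_streak=5):
    losing_streak = 0
    while balance < goal:
        result = trade()
        balance += result
        losing_streak = losing_streak + 1 if result < 0 else 0
        if losing_streak >= max_losing_streak:
            # Admit the strategy is failing and hand control back.
            print("strategy is losing; stopping and asking for guidance")
            break
    return balance

print(bounded_bot(1_000))  # stops after five losing trades, $500 still left
```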

People can observe a situation and realize their actions aren’t achieving their goals.  People may get frustrated, bored, angry, or scared; very rarely will they be oblivious to whether their choices are succeeding.  Automated systems, however, are generally built to “do what they are told” no matter the consequences.

We will never make automated systems perfectly accurate.  But perhaps we can build systems that know when they are failing and can seek outside guidance.  Sometimes, admitting a problem is the most intelligent thing any system can do.
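One way to picture that: a control loop, sketched below with a hypothetical controller and made-up thresholds, that tracks whether its own corrections are actually reducing the error and, when they are not, stops and asks a person for guidance rather than pressing on.

```python
# A hypothetical control loop: apply corrections, track whether they are
# actually reducing the error, and escalate to a person when they are not.

def control_loop(read_error, apply_correction, ask_human,
                 max_steps=100, patience=5):
    """Run corrections until the error shrinks, or admit failure."""
    best_error = abs(read_error())
    steps_without_progress = 0
    for _ in range(max_steps):
        apply_correction()
        error = abs(read_error())
        if error < best_error:
            best_error = error
            steps_without_progress = 0
        else:
            steps_without_progress += 1
        if steps_without_progress >= patience:
            # The system knows it is failing: stop and seek outside guidance.
            ask_human(f"corrections are not reducing the error (still {error}); "
                      "please take over")
            return
    print(f"done; final error {best_error}")

# Toy usage: a "plant" whose error does not respond to corrections at all.
state = {"error": 10.0}
control_loop(
    read_error=lambda: state["error"],
    apply_correction=lambda: None,   # corrections have no effect
    ask_human=lambda msg: print("ESCALATE:", msg),
)
```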