I just wanted to briefly comment on today’s article in Ars Technica by Jonathan Gitlin entitled “Tech firms want to save the auto industry—and the connected car—from itself.” Specifically, there is this quote near the end:
Symantec’s Anomaly Detection starts off learning what “normal” is for a particular model of car during the development process, building up a picture of automotive information homeostasis by observing CANbus traffic during production testing. Out in the wild, it uses this profile of activity to compare that to the car it’s running on, alerting Symantec and the OEM in the event of something untoward happening.
Two things I should point out, though. First, their solution adds 6% overhead. That seems awfully high, especially since program behavior on a car should be relatively easy to model. I wonder what learning algorithms they are using.
Second, they are using static profiles in production. Static profiles can certainly be tuned to reduce false positives; however, a single profile shared across every car of a given model guarantees no profile diversity, and profile diversity is one of the key advantages of anomaly detection: when each deployed system has a slightly different model of “normal,” a zero-day attack that evades one system’s profile may still trip another’s. With a static shared profile, an attack that evades one car evades them all.
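To make the learn-then-match idea in the quote concrete, here is a minimal sketch of profile-based anomaly detection over CAN arbitration IDs. Everything here is my own invented illustration, not Symantec’s actual algorithm: the per-ID message-frequency profile, the `tolerance` threshold, and the toy traffic windows are all assumptions for the sake of the example.

```python
from collections import Counter

def learn_profile(windows):
    """Average per-window message count for each CAN arbitration ID,
    learned from traffic assumed to be 'normal' (e.g. production testing)."""
    totals = Counter()
    for window in windows:          # each window: list of CAN IDs observed
        totals.update(window)
    n = len(windows)
    return {can_id: count / n for can_id, count in totals.items()}

def is_anomalous(window, profile, tolerance=3.0):
    """Flag a window if an ID never seen in training appears, or if an
    ID's count rises well above its learned baseline rate."""
    counts = Counter(window)
    for can_id, count in counts.items():
        expected = profile.get(can_id)
        if expected is None:                 # unknown ID: suspicious
            return True
        if count > expected * tolerance:     # burst far above baseline
            return True
    return False

# Training data: windows of CAN IDs captured during "development".
normal = [[0x100, 0x100, 0x200]] * 3
profile = learn_profile(normal)

print(is_anomalous([0x100, 0x100, 0x200], profile))  # False: normal traffic
print(is_anomalous([0x100] * 20, profile))           # True: flooding burst
print(is_anomalous([0x100, 0x666], profile))         # True: unseen ID
```

Note that if the `profile` dictionary is identical on every car (the static-profile choice), an attacker who crafts traffic that passes these checks once passes them everywhere; per-vehicle variation in the learned profile is what would make that harder.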
But hey, it is certainly a step forward.