I’ve just enabled a mobile theme for this site. Hopefully this will lead to more posts in the near future…
I have to admit that I’ve always been suspicious of statistics as they are used in computer security. Oddly enough, I also should have been suspicious of statistics in other sciences as well. Turns out that when you examine large datasets with lots of tests, simply by random chance you are likely to find something, i.e., one of the tests is likely to come up with something “significant.” Hence you get results that suggest breathing in bus exhaust is good for you. In other words, you get false positives.
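The multiple-testing trap is easy to demonstrate. Here's a quick sketch (my own toy example, not from any real study): run 100 significance tests on pure noise, and some of them will look "significant" even though there is nothing to find.

```python
import random
import statistics

random.seed(1)

def null_test(n=30):
    """Compare two samples drawn from the SAME distribution.
    Any 'effect' we find is pure chance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(diff / se)  # roughly a z-statistic

# 100 independent tests on noise, flagging |z| > 1.96 (about p < 0.05):
false_positives = sum(1 for _ in range(100) if null_test() > 1.96)
print(f"{false_positives} of 100 tests on pure noise came up 'significant'")
```

At the 5% significance level you should expect about 5 spurious "discoveries" per 100 tests, which is exactly the bus-exhaust problem: search a big enough dataset with enough tests and something will always turn up.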
If anomaly detection is ever to become a fundamental defense technology, it will have to move beyond statistics to being grounded in the mechanisms of computers and the real behaviors of users. This is going to take a while, because this is a lot harder than just running a bunch of tests on datasets. Of course, given the current disrepute of anomaly detection in security circles, perhaps the door is wide open for better approaches.
When looking to biology for inspiration on computer security, it is natural to first look at mammalian immune systems. While they are amazingly effective, they are also mind-numbingly complex. As a result, many researchers get seduced by the immune system’s architecture when there is so much to learn from its lower-level workings.
Case in point: every cell in the human body can detect many kinds of viral infection on its own, i.e., with no assistance from the cells of the immune system. As this recent article from Science shows, we are still far from understanding how such mechanisms actually work. My high-level take on this article, as a computer security researcher, is that:
- Basically all cells in mammals (and, I think, most animals in general) can generate immune system signals that trigger responses from internal and external mechanisms. A key source for such signals is foreign RNA (code) inside the cytoplasm of a cell. Of course, there is a lot of other, “self”-RNA in that cytoplasm as well – so how does the cell tell the difference between them?
- A key heuristic is that native RNA is only copied in the nucleus of a cell; RNA-based viruses, however, need to make RNA copies in the cytoplasm (that’s where they end up after getting injected and it isn’t easy to get into the nucleus – code basically only goes out, it hardly ever goes in). RNA polymerases (RNA copiers) all use the same basic patterns to mark where copying should start. Receptors such as RIG-I detect RNA with “copy me” signals (5′-PPP) in places where no copying should occur (the cytoplasm).
- Of course, this is biology, so the picture isn’t so clear-cut. A simple “copy me” signal won’t trigger a response; there must also be some base pairing – the RNA molecule must fold back on itself or be bound to another (partially complementary) RNA molecule. I’d guess this additional constraint is there because normal messenger RNA is strictly single-stranded. (Indeed, kinks or pairing in messenger RNA are bad in general because they’ll interfere with the creation of proteins.)
Of course, all of this is partial information – there’s evidence that these foreign RNA-detecting molecules (the RLR family) trigger under other additional constraints. This doesn’t surprise me either, as this mechanism must operate with extremely low false positives; one or two matching rules aren’t up to the task given the complexity of cellular machinery and (more importantly) given the evolution of viruses to circumvent these protections. Viruses have evolved ways to shut down or suppress RLR-related receptors. Although cells will be pushed to evolve anti-circumvention mechanisms, in practice this is limited in the cellular environment – make the detectors too sensitive and a cell will kill itself spontaneously! The solution has been to keep a circumventable but highly accurate detector in place; the arms race instead has moved to optimizing the larger immune system.
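To put the computer-security analogy in concrete terms – and this is entirely my own sketch, not a model of the actual biochemistry – a RIG-I-style detector behaves like a conjunction of independent conditions, each common on its own but rare in combination. That conjunction is what keeps the false-positive rate low enough for a "kill the cell" response to be survivable:

```python
from dataclasses import dataclass

@dataclass
class RNA:
    location: str          # "nucleus" or "cytoplasm"
    has_copy_me_tag: bool  # the 5'-PPP "copy me" marker
    base_paired: bool      # folds back on itself or binds a complementary strand

def rig_i_alarm(rna: RNA) -> bool:
    """Toy detector: fire only when ALL suspicious conditions hold at once.
    Each condition alone occurs in normal cells; the conjunction should not."""
    return (rna.location == "cytoplasm"  # copying shouldn't happen out here...
            and rna.has_copy_me_tag      # ...yet this molecule is marked for copying
            and rna.base_paired)         # ...and it isn't ordinary single-stranded mRNA

# Normal messenger RNA in the cytoplasm: no alarm.
assert not rig_i_alarm(RNA("cytoplasm", has_copy_me_tag=False, base_paired=False))
# A viral replication intermediate: alarm.
assert rig_i_alarm(RNA("cytoplasm", has_copy_me_tag=True, base_paired=True))
```

The design choice worth noticing: each extra ANDed condition cuts false positives multiplicatively, at the price of giving viruses one more predicate to evade – which is exactly the trade-off described above.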
I leave any conclusions regarding the design of computer defenses as an exercise for the reader. 🙂
Over the past year I find myself reading more and more economics – probably because I know so little about the field. Anyway, I have to say I found this paper by Dani Rodrik (which I found via Brad DeLong) to be fascinating. I knew that currency devaluation was a big factor in China’s huge currency reserves and their trade surplus; I never thought to link that to them joining the WTO and abandoning targeted industrial policy. This makes me think that something major has to change with respect to China and the rest of the world. They are following the rules as best they can given their domestic constraints, but those choices are having major external repercussions. So then the question is, how is anyone going to change the rules, and do so in a way that everyone can agree to?
I suspect climate change will figure into this as well. Wow, what a mess. Makes me glad I’m not a politician…
This week in our group meeting Alex gave a presentation about the status of DNSSEC. DNSSEC is supposed to improve the security of the Domain Name System (DNS) by cryptographically signing DNS responses. Thus, with DNSSEC, you can be sure that when you visit www.google.com, you are visiting a machine (IP address) that is actually associated with Google, rather than visiting some random attacker’s website. Recently a number of DNS vulnerabilities have been found that make it very easy (under some circumstances) to forge DNS responses, so the security case for DNSSEC would appear to be very strong. By the end of our discussion, however, we had reached a very different conclusion. Let me explain.
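For readers unfamiliar with the mechanics: a validating resolver checks a signature over the DNS record before trusting it. The sketch below is a deliberately simplified illustration – real DNSSEC uses public-key signatures carried in RRSIG/DNSKEY records, not a shared secret, and the key names here are hypothetical – but the accept/reject flow is the same from a distance:

```python
import hashlib
import hmac

# Hypothetical key material; real zones hold a private signing key and
# publish the public half in a DNSKEY record.
ZONE_KEY = b"example-zone-signing-key"

def sign_record(name: str, ip: str) -> bytes:
    """The zone operator signs each (name, address) record it serves."""
    return hmac.new(ZONE_KEY, f"{name} A {ip}".encode(), hashlib.sha256).digest()

def verify_record(name: str, ip: str, sig: bytes) -> bool:
    """A validating resolver recomputes and compares before trusting an answer."""
    return hmac.compare_digest(sign_record(name, ip), sig)

# The zone signs its answer once...
sig = sign_record("www.google.com", "192.0.2.1")
# ...and a validating resolver accepts the genuine response:
assert verify_record("www.google.com", "192.0.2.1", sig)
# A forged response pointing at an attacker's address fails validation:
assert not verify_record("www.google.com", "203.0.113.66", sig)
```

The point of the toy is only this: once validation is in place, whoever holds the signing key decides which answers the whole Internet believes – which is where the rest of this post picks up.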
First, let’s assume that DNSSEC is adopted in record time, say within the next year – 95%+ deployment across servers and clients. Next, let’s assume that all the major security problems with DNSSEC – such as the lack of key revocation – have been resolved. In this hypothetical world, we would now have a DNS infrastructure hardened against forgery attacks. Mission accomplished, right? Maybe not. In fact, I think there’s a good chance that we would actually be in worse shape than we are now. Things would be worse because the Internet would become both less reliable and less secure. These problems would arise precisely because of the success of DNSSEC.
The key insight is that a successful DNSSEC would inevitably kill SSL certificates; instead, SSL would just use the keys conveyed by DNSSEC. Why bother maintaining two sets of cryptographic credentials when you can get away with one? Once this happens, the incentives for breaking DNSSEC become enormous.
And break it they will, because DNS admins at all levels have minimal experience safeguarding cryptographic credentials – they know how to keep servers running, not how to keep secrets. The first priority with DNS will always be availability, and availability means that entries must be changeable on short notice. Therefore, many more people will have access to domain signing keys than security would dictate. Thus, attackers will get the keys. And those keys will be trusted even more than SSL certificates, because they will be used to block network connections.
So, in an effort to secure DNS, we will make DNS less reliable (because it will be harder to make timely updates) and we’ll make the Internet less secure (because connections to secure websites will be authenticated using much less reliable signatures).
We currently have some faith in cryptographic credentials because they are issued by parties that value their reputation for security (because their business depends upon it). Instead, we’re going to take organizations who have a reputation for reliability – DNS registrars – and we’re going to give them a fundamental security responsibility that detracts from their core mission of reliability. The parties implementing DNSSEC actually have significant incentives to trade off security for reliability, even though everybody else on the Internet will have an increasing requirement that DNS be secure (because it will replace SSL certificates).
So, what’s the connection to the financial meltdown? Well, that meltdown can be, in part, attributed to mismatches between incentives and expectations of trustworthiness. Credit rating agencies were expected to look out for buyers of securities but were paid by the sellers of securities. Developers of mortgage-backed securities expected banks to continue to make loans as they had in the past, even though mortgage-backed securities gave banks every incentive to be careless when giving out loans – if the loans went bad, they wouldn’t lose any money because they’d sold the loan to somebody else!
Another part of the story was everyone in the financial world having too much faith in the models underlying structured financial instruments (such as mortgage-backed securities). Those models became untrustworthy the moment they were used to create structured financial instruments, because such instruments removed the incentives for banks to be careful when giving out loans (they no longer faced the risk of mortgage defaults).
New technologies, whether DNSSEC or financial securitization, inevitably have secondary effects on human decision making. Technologists must realize that tools designed to increase security or manage risk can, in practice, lead to reduced security or disastrous levels of risk.
In our discussion, Luc made an interesting point: reliability and security are generally in conflict in practice, yet system designers keep wanting to achieve both at the same time. I think there’s a deep insight here, but I’m not quite sure what it is. All I do know is that if we’re going to get reliability and security, we need more flexible ways to manage the trade-offs between them.