Monday, April 6, 2009

CyLab Seminar Series Notes: Why do people and corporations not invest more in security? Nicolas Christin on "Understanding User Investments & Response to Security Threats"

[NOTE: CyLab's weekly seminar series provides a powerful platform for highlighting vital research. The physical audience in the auditorium is composed of Carnegie Mellon University faculty and graduate students, but CyLab's corporate partners also have access via the World Wide Web. Frequently, CyBlog will whet your appetite by offering brief glimpses into these talks. Here are my notes from a talk on Understanding User Investments and Response to Security Threats delivered by Nicolas Christin on 1-19-09. -- Richard Power]

Why do people and corporations not invest more in security?

After all, CyLab researcher Nicolas Christin notes, there are compelling reasons for spending more on security: users claim they have an interest in secure practices; security technology is, by and large, inexpensively available (e.g., PGP, SSL, AES); and security failures can be very costly.

The thesis of this research is that economics can help understand and change user behavior.

There are several reasons why Christin and his fellow researchers believe in their thesis:

"First of all, this is the 21st Century, everybody is on the network," Christin explains, "all computers are connected one way or another.

"It is also a competitive environment, e.g., competing Internet service providers (ISPs), competing content providers, even within a single organization you may have different divisions competing for funds.

"In addition, we have strong externalities, i.e., the security of one person affects the whole network, or at least a significant number of other users; for example, when the Code Red Worm or the I Love You virus started to propagate on the Internet, they were passed along by end users who failed to properly secure their systems ... so who should pay for security? The people who financially suffer the most from security problems, or the people who are causing these security problems to flourish in the first place? That is an interesting question...

"Another reason we think economics is a very good complement to technology is that [today] criminals themselves are, by and large, very rational, they are in it for the money."

However, unlike most cyber criminals, end users, whether corporations or individuals, are not, in general, "perfectly rational," nor are they random. "We need to find a way of modeling their behavior," Christin remarks, "so that we can then impact it."

Using a modeling methodology that includes formal analysis, experimental research and field data measurement, Christin and his fellow researchers are in pursuit of "an abstraction that captures as much as possible the salient features of a host of different security situations."

"When you have a fairly reasonable model," according to Christin, "you can use it to test intervention mechanisms before deploying them in practice." For example, you might ask, what would be the impact of passing some particular law? "You test it on your model, and you can make recommendations to a policymaker, and you have something to substantiate your argument, which hopefully makes it more compelling."

"We have tried to look at simple security games, where people are playing against each other, with an exogenous attacker, so all the people we are looking at are basically defending against a common set of threats. We have separated them into a finite number of canonical security games ... They cover a reasonable range of security situations."

"We decouple security strategies into investments to protect yourself (e.g., setting up a firewall) and self-insurance coverage (e.g., archiving data for back up)."

("Most of the research in the economics of security assumes that you have a single security variable," Christin explains, "but we think that this is a little too rough, and that at least we should consider the two different things that people can do.)

"We also consider those network externalities, i.e., a choice by one person on the network affects other participants on the network."



In the course of his seminar, Christin broke down the elements of the general model as depicted in this post:

"You have your 'Expected utility, which is essentially the amount of money that you can expect to have after the attack has or has not taken place. It is simply the 'Initial endowment' (that is, the money you start out with) minus the potential losses you are going to face, and minus the 'Protection investment.' Say our player invests in security, maybe he buys an anti-virus program: that is going to be captured by this protection cost si(normalized to (0-1) hence the scaling variable bi).

"Then there is 'Insurance purchased,' the amount of insurance that you have purchased, whether it is literally insurance with a provider, or simply back ups, i.e., anything that allows you to recover after a security attack has been successful. So the security expenses are broken down in protection and insurance expenses.'Expected loss' is that lose you would expect to face if you didn't institute any security at all. ...

"The expected loss is mitigated by your security expenses, as you can see from the formula. The insurance expenses only depend on that individual player, which means that if your return on investment doesn't depend on what other people are doing. That is good news for most people.

"But the thing that is going to throw us off and that makes this research interesting is that when it comes to protection, we have these externalities here, 'Network protection level (public good)' ... somehow hidden in that function ... where the level of protection that the individual player picks is part of the level of protection it is going to receive, but the level of protection that the other people pick is also going to impact the bottom line, the 'Expected utility.'

"So, the overall utility H depends on all the players in the network. This is where the actions of others impacts my welfare, and this is the critical point we have to model."

To read a relevant paper, which includes a complete analysis, click on J. Grossklags, N. Christin, and J. Chuang. Secure or Insure? A Game-Theoretic Analysis of Information Security Games. WWW'08.

To read a CyLab Chronicles Q&A with Nicolas Christin, click here.

For information on the benefits of partnering with CyLab, contact Gene Hambrick, CyLab Director of Corporate Relations: hambrick at andrew.cmu.edu