Tuesday, February 23, 2010

Cyber Shock, Virtualization, Cloud Computing & the State of the Web -- Not for the Faint of Heart ...


Partial map of the Internet based on the January 15, 2005 data found on opte.org. Each line is drawn between two nodes, representing two IP addresses. The length of each line is indicative of the delay between those two nodes. This graph represents less than 30% of the Class C networks reachable by the data collection program in early 2005. Lines are color-coded according to the top-level domain of the corresponding addresses, as follows:
* Dark blue: net, ca, us
* Green: com, org
* Red: mil, gov, edu
* Yellow: jp, cn, tw, au, de
* Magenta: uk, it, pl, fr
* Gold: br, kr, nl
* White: unknown

Matt Britt, Internet Map, 12-1-2005


Richard Power


Three stories have intrigued me over the past few days, and I will share them with you here. Woven together, they deliver a message that needs to be heard (over and over again, apparently). And that message? From the corridors of the highest power, through IT networks across the planet, down to each and every end user, there is a tremendous amount of work as yet undone, and much of it is attitudinal and cultural.

Cyber Shock

First, there is the Cyber Shock story.

Event: A massive cyber attack has turned the cellphones and computers of tens of millions of Americans into weapons to shut down the Internet. A cascading series of events then knocks out power for most of the East Coast amid hurricanes and a heat wave.
Is the assault on cellphones an armed attack? In a crisis, what power does the government have to order phone and Internet carriers to allow monitoring of their networks? What level of privacy can Americans expect?
A war game, sponsored by a nonprofit group and attended by former top-ranking national security officials, laid bare Tuesday that the U.S. government lacks answers to such key questions.
Half an hour into an emergency meeting of a mock National Security Council, the attorney general declared: "We don't have the authority in this nation as a government to quarantine people's cellphones."
The White House cyber coordinator was "shocked" and asserted: "If we don't have the authority, the attorney general ought to find it."
Ellen Nakashima, War game reveals U.S. lacks cyber-crisis skills, Washington Post, 2-17-10

If you google Cyber Shock, you will find hundreds of news items and blog posts about it.

When I heard about it, I started hearing, once again, the theme music from the hilarious film Groundhog Day, starring Bill Murray and co-written and directed by Harold Ramis.

Do you remember The Day After (1996)? Do you remember Eligible Receiver (1997)? There have been many other such cyber war games over the past 15 years. I have been involved in a few, and studied the results of many.

So when I read the WashPo headline, "War game reveals U.S. lacks cyber-crisis skills," I tweeted, "Shouldn't this read STILL lacks?"

As a reality check, I asked a few friends and colleagues who have been at and beyond the front line for all the years of this long struggle. I received three responses. None of them thought it was anything new, of course. One dismissed it as a "bad joke." One said, hopefully, that it would "embarrass" some officials on high into "addressing the issue" at long last. One said that it never hurts to try to get the attention of people who are just tuning in.

From my perspective, I encourage and commend such exercises, but I think everyone would learn more if we framed them in their proper historical context. And I certainly would have hoped that, here at the start of the second decade of the 21st century, we would be much farther along than discovering that we STILL LACK cyber crisis skills.

As quite often happens, the press (both mainstream and IT) missed the real story.

Head in the Clouds

Meanwhile, two new studies piqued my interest, and not surprisingly, they also reinforce the sense that we are spinning our wheels in the digital Thunderdome.

The first of them was conducted for Symantec by Applied Research. It queried "2,100 top IT and security managers in 27 countries."

Is moving to virtualization and cloud computing making network security easier or harder? The "2010 State of Enterprise Security Survey - Global Data" report shows that about one-third believe virtualization and cloud computing make security "harder," while one-third said it was "more or less the same," and the remainder said it was "easier." The telephone survey was done by Applied Research last month on behalf of Symantec, and it covered 120 questions about technology use -- organizations remain overwhelmingly Microsoft Windows-based -- and cyberattacks on organizations.
Ellen Messmer, Security of Virtualization, Cloud Computing Divides IT and Security, Network World, 2-22-10

Harder? About the same? Easier? Well, of course, the most insightful answer would be "All of the Above." (There are numerous security concerns, not the least of which is that we still do not know most of them.) But I doubt "All of the Above" was an available option for the respondents.

Certainly, it should be of concern to us all that most of these "top IT and security managers" are under the impression that virtualization and cloud computing have either made the challenge of cyber security easier to deal with or had no impact on it at all. This data point suggests that a majority of "top IT and security managers" don't really understand security in any depth.

Leading SaaS Vendor Offers Evidence of Enterprise Exposure

The second study is hot off the digital press.

It is Zscaler Labs' "State of the Web - Q4 2009: A View of the Web from an End User's Perspective." Zscaler is a Security-as-a-Service (SaaS) vendor, so its "network of web gateways continually inspects traffic for millions of end users around the globe."

"With the emergence of Advanced Persistent Threats (APT) as evidenced with the Operation Aurora attacks, it is obvious that large enterprise are vulnerable to such targeted attacks that exploit employee behavior," the Zscaler study concludes; and furthermore, it also offers corroboration that "vulnerable desktops and browsers are extremely common in corporate environment making them easier targets than previously believed."

It is an interesting report, with lots of data points to ponder, but two in particular jumped out at me.

First, Internet Explorer continues to dominate among browsers, with over 70% of market share, while Mozilla Firefox, its nearest competitor, held only 15% of market share in December 2009. I was not surprised that IE still held sway, but I was surprised that the margin was so significant. What was even more striking to me was that 48% of IE users were still on 6.0, while only 46% had moved on to 7.0, and only 5% had moved on to 8.0. Yes, although 6.0 is still being updated by Microsoft, it lacks several important security features available in later releases. We are talking about IE 6 here. How can this be so?
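
(As an aside on where such numbers come from: a web gateway sees the browser's User-Agent header on every request it inspects, and IE announces its version there. The following minimal Python sketch of tallying IE major versions from User-Agent strings is an editorial illustration, not Zscaler's actual methodology.)

    import re
    from collections import Counter

    # IE identifies itself with a token like "MSIE 6.0" inside the UA string,
    # e.g. "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)".
    MSIE_RE = re.compile(r"MSIE (\d+)\.\d+")

    def tally_ie_versions(user_agents):
        """Count requests per IE major version from an iterable of UA strings."""
        counts = Counter()
        for ua in user_agents:
            match = MSIE_RE.search(ua)
            if match:
                counts["IE " + match.group(1)] += 1
        return counts

    sample = [
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20091221 Firefox/3.5.7",
    ]
    print(tally_ie_versions(sample))  # Counter({'IE 6': 1, 'IE 7': 1})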

Another intriguing tidbit, concerning "Top Ten Malware IP Addresses," was that "more and more," Zscaler reports seeing "otherwise legitimate sites hosting malware without being aware of it."

I asked Mike Geide, a senior security researcher for Zscaler, for some insight into the study's findings.

CyBlog: On pg. 17 and pg. 18, you share some numbers on browsers that were eye-opening to me, specifically breakdowns of market share and IE versions. I expected to see IE still holding the lion's share, but I confess I was surprised to see it dominating by such a wide margin. Why? Is this a failure of corporate network managers to push upgrades out, or to insist on them, or are the 6.x numbers inflated by home use (of course, either way it represents an unnecessary risk)?

Mike Geide: In short, yes, this is a failure of corporate network managers. This could be because corporate managers are unfamiliar with the security risks of IE 6 / the benefits of IE 8. Because MS is still supporting / releasing patches for IE 6, managers may feel that they are sufficiently secure because their IE 6 is patched. Additionally, while IE 8 is pushed as a High-Priority update from Automatic Updates, it will not automatically install on machines. Users must opt in to install IE 8, or organizations may deploy it via SMS / WSUS. (Ref: http://blogs.msdn.com/ie/archive/2009/04/10/prepare-for-automatic-update-distribution-of-ie8.aspx)
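
(To make the audit side of Geide's answer concrete: here is a minimal Python sketch, assuming the standard Windows registry key under which IE records its version string, that flags a machine still running a pre-8 IE. It is an editorial illustration, not a supported deployment tool; an organization would normally do this through SMS / WSUS inventory.)

    import winreg  # Windows-only standard library module

    def installed_ie_version():
        """Read the version string IE stores in the registry."""
        key_path = r"SOFTWARE\Microsoft\Internet Explorer"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            version, _type = winreg.QueryValueEx(key, "Version")
        return version  # e.g. "6.0.2900.5512"

    version = installed_ie_version()
    if int(version.split(".")[0]) < 8:
        print("Outdated IE %s found; schedule an upgrade." % version)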

CyBlog: What could corporate IT security people be doing to improve on these numbers?

Mike Geide: Corporate IT should have a centralized method for managing software and patch installation. It is not enough to just patch the underlying OS; web browsers and other client software (e.g., Adobe Acrobat, MS Office) need to be included in this management.

CyBlog: Is there anything that Microsoft could be doing (or not doing) that could help these numbers?

Mike Geide: If Microsoft announced an end-of-life for IE 6, this might force organizations to conduct an audit of their systems and upgrade to IE 8. However, from MS's point of view, they may feel that there is still too much of a user base to end-of-life the product, and that they should continue to maintain and provide patches for any newly discovered vulnerabilities that would impact this user base. The responsibility is really on the corporations and end users to sufficiently patch and upgrade.

CyBlog: On pg. 22, you talk about the Top 10 Malware IP Addresses, and note that "more and more legitimate sites are hosting malware without being aware of it." I see you have listed the IP addresses; could you share some more information here? Of the Top 10, how many are "legitimate sites," and could you characterize them in any way? Types of businesses, types of applications, types of traffic?

Mike Geide: For this report we did not specifically break out stats for "legitimate sites" versus "illegitimate sites" for malware; perhaps we will do this for the next report. I would need to pull the logs to properly answer this question; however, based on Google results, the majority of the top 10 appeared to be illegitimate sites identified in other resources (e.g., ThreatExpert).

CyBlog: And how do you define an "illegitimate site"?

Mike Geide: I would define an illegitimate site as a site that is set up by the attacker explicitly for malicious purposes. A malicious legitimate site, by contrast, provides or had provided benign content, but has been compromised and is serving some malicious content (e.g., an embedded malicious iframe or a JS drive-by-download).
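
(To illustrate the kind of injected content Geide describes: a compromised page often carries an iframe sized to zero pixels or hidden with CSS, so visitors never see the malicious content it loads. The crude, regex-based Python check below is an editorial sketch of that one signature; it is not a Zscaler detection rule, and real scanners do far more.)

    import re

    # Flag an iframe that is zero-sized or styled invisible -- a common
    # marker of injection on an otherwise legitimate, compromised page.
    HIDDEN_IFRAME = re.compile(
        r'<iframe[^>]+(?:width="?0|height="?0|visibility:\s*hidden)[^>]*>',
        re.IGNORECASE)

    def looks_injected(html):
        """Return True if the page contains a hidden or zero-size iframe."""
        return bool(HIDDEN_IFRAME.search(html))

    page = ('<html><body>Welcome!'
            '<iframe src="http://malware.example/x" width="0" height="0">'
            '</iframe></body></html>')
    print(looks_injected(page))  # True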

CyBlog: Is there any accountability for either "legitimate sites" or "illegitimate sites" that show up in such a ranking?

Mike Geide: Depends. Some hosting providers will terminate legitimate site accounts if they are repeatedly compromised, while other hosting companies care very little about malicious content being hosted. Depending on the severity of the incident and the jurisdiction, law enforcement (LE) could enforce accountability, but LE resources do not scale to the Internet. In short, the Internet doesn't have an overall policing body to ensure accountability. If a site continues to be an offending site, block it from being visited. If the site provides a legitimate service, it will either fix its security problem or go out of business.

CyBlog: What are key measures that "legitimate sites" should be taking to avoid ending up as a part of the problem instead of being part of the solution?

Mike Geide: Legitimate sites are frequently leveraged to host malicious content through vulnerabilities in 3rd-party web applications (e.g., Wordpress, Joomla). The most common vulnerabilities in these and other web applications are SQL Injection and Cross-Site Scripting (XSS). Legitimate sites should only install the 3rd-party web applications that they absolutely need, and then follow the patch cycle of each application to ensure that known vulnerabilities are fixed.
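
(For readers unfamiliar with the SQL Injection flaw Geide names, the minimal, self-contained Python illustration below uses the built-in sqlite3 module with an invented table to show why splicing user input into a query is dangerous, and how a parameterized query closes the hole.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE posts (id INTEGER, title TEXT)")
    conn.execute("INSERT INTO posts VALUES (1, 'Hello')")

    def find_post_unsafe(conn, title):
        # VULNERABLE: attacker-controlled input is spliced into the SQL,
        # so a value like "x' OR '1'='1" rewrites the query's meaning.
        query = "SELECT id, title FROM posts WHERE title = '%s'" % title
        return conn.execute(query).fetchall()

    def find_post_safe(conn, title):
        # SAFE: the ? placeholder passes the value as data, never as SQL.
        query = "SELECT id, title FROM posts WHERE title = ?"
        return conn.execute(query, (title,)).fetchall()

    print(find_post_unsafe(conn, "x' OR '1'='1"))  # returns every row
    print(find_post_safe(conn, "x' OR '1'='1"))    # returns no rows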

Saturday, February 6, 2010

CyLab Seminar Series Notes: Lujo Bauer Shares Glimpse into CyLab Research on Usable Privacy & Security in the Digital Home


[NOTE: CyLab's weekly seminar series provides a powerful platform for highlighting vital research. The physical audience in the auditorium is composed of Carnegie Mellon University faculty and graduate students, but CyLab's corporate partners also have access to both the live stream and the archived content via the World Wide Web. From time to time, CyBlog whets your appetite by offering brief glimpses into these talks. Here are some of my notes on a talk delivered by CyLab researcher Lujo Bauer on 2-1-10. At the end of this post you will find links to other issues of CyLab Seminar Series Notes, as well as two editions of CyLab Chronicles that highlight Bauer's work. -- Richard Power]


According to Lujo Bauer, “Usability” is often seen as the last phase in system design.
“One of the problems in the way we build systems is that first we build the system, and then perhaps we start thinking about how to best design the interfaces that we dress the system up in ...”
The thesis for Bauer’s seminar: “Creating usable systems often requires not just the help of usability experts, but that the system architects are usability experts.”
“Usability is something that we should pay attention to, and start building into our systems from the design phase onward; and not something that can just be always tacked on at the end.”
Bauer supported his thesis with three examples from his personal experience in research. Two of the examples were based on user studies, from which something important was learned about the very initial phases of system design. The third example came from an instance where the research team tried to make a system more usable after it was deployed, and learned something about the features needed.
One of these examples involved the Expandable Grid, a robust interface that shows effective policy instead of policy rules, displays both user and file hierarchies (groups), and presents the entire policy on the screen at once. Another involved Grey, a smartphone-based, end-user-driven access control system for physical and virtual resources deployed in Carnegie Mellon’s Collaborative Innovation Center (CIC). The third example, the one we will focus on here, involved the “Future Digital Home,” and highlights not only the CyLab research thrust into “Usable Privacy and Security” but also the CyLab research thrust into “Securing the Digital Home.”
“Most of us already have a bunch of gadgets at home: digital cameras, maybe a network drive, a TV that can stream Netflix, things like that. In the near future, this will become much more extreme. We will have dozens of devices in our home, which will either gather information, or store information that we put on them, or will be used for viewing information. Think of this information as being media, whether it’s music, or video, or home surveillance; you can also think of it as being files, e.g., tax records, or homework, or papers; you can think of it as your current shopping list, or the contents of your refrigerator. Your refrigerator is going to have a little computer built into it, and it is going to keep track of how much milk is left, and you are going to want to use your phone every once in a while to ask your refrigerator how much milk there is, because you are going to be walking by a grocery store, and wondering if you should pick up milk.”

“So there are exciting new capabilities from the user perspective, but on the other hand, there are also big questions, and one of the big questions is: who handles security and reliability? In this environment, with many devices in my home that all somehow talk to each other and share data, I want to make it the case that I can always access all the information, confidential or otherwise, and I can also let any of my friends, or specific friends, gain access to some of this information; but at the same time, I might have really confidential data in the system, and it could be terrible if the wrong person got access.”
“We’re also dealing with people who are not professional system administrators. The only people in the home are the people that live there. They don’t take classes in system administration; so the interfaces that they use to configure the system correctly, or to tell the system what they want it to do, have to be somehow specifically tailored to them. These interfaces can’t require much expertise.”
The goal of the research that the CyLab team is pursuing in this area is to provide usable security for digital home storage, e.g., to enable users to effectively specify and understand policies, and to use and trust the mechanisms.
“Having learned something from previous projects that we had done, we decided to start out with some user studies. Technical researchers are notoriously bad judges of what end users do ...”
The first study was based on in-situ, semi-structured interviews of subjects recruited via Craigslist and the distribution of fliers. The study subjects were limited to non-programmer households. There were thirty-three users (from eight to fifty-nine years of age) in fifteen households; these households ranged from families to couples to roommates.
“We also covered a wide range of expertise: even though there were no programmer households, we had people whose households had as many as twenty-something digital devices for two people, or as few as four or five digital devices for a family of three or four.”
House maps were used as reference points in the interviews.
“We had the participants draw maps of their households, and on these maps indicate where various digital devices might live. And we used these maps later to make sure that when we talked about the various digital devices and types of data, we could actually cover all the devices that they had."

The study yielded some insightful findings:
Current methods are not working: Although almost all of the participants worry about sensitive data, access control mechanisms varied and were often ad hoc.
Policy needs are complex: Fine-grained divisions of people and files are needed (e.g., distinguishing between “public” and “private” isn’t enough), dimensions beyond “person” are needed (e.g., “presence” proved important to most and “location” proved important to many), and of course, there was wide variation across participants (e.g., in definitions of what is most private and who is most trusted).
A-priori policy isn’t enough: People want to be asked permission (even if they have assigned it), they want to know not only who is accessing files but why, and they want the capability to review access and revise policy.
Mental models do not equal system realities: Mismatches between current systems and users’ mental models may lead those users astray.
From these findings, Bauer and his fellow researchers distilled a set of useful guidelines for anybody building such a system (a brief sketch of how a few of them might combine follows the list):
Allow fine-grained control
Plan for lending devices
Include reactive policy creation and usable logs
Reduce or eliminate up-front complexity
Acknowledge social conventions
Support iterative policy specification
Account for users’ mental models
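
To make a few of these guidelines concrete, here is a minimal, hypothetical Python sketch (the policy entries, tags, and function names are invented for illustration; this is not the CyLab team's system) combining fine-grained control, reactive policy creation, a "presence" dimension, and a usable log:

    from datetime import datetime

    # (person, file tag) -> "allow", "deny", or "ask" (reactive policy)
    policy = {
        ("grandma", "photos"): "allow",
        ("roommate", "photos"): "ask",
        ("roommate", "tax_records"): "deny",
    }

    access_log = []

    def request_access(person, tag, owner_present, ask_owner):
        """Decide an access request; unresolved cases fall back to asking
        the owner in real time, but only when the owner is present."""
        decision = policy.get((person, tag), "ask")
        if decision == "ask":
            granted = owner_present and ask_owner(person, tag)
            decision = "allow" if granted else "deny"
        # Usable log: one human-readable line per decision.
        access_log.append("%s  %s -> %s: %s" % (
            datetime.now().isoformat(), person, tag, decision))
        return decision == "allow"

    # Example: the owner is home and approves the roommate's request.
    print(request_access("roommate", "photos", owner_present=True,
                         ask_owner=lambda person, tag: True))  # True
    print(access_log[-1])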

Related CyLab Chronicles

CyLab Chronicles: Q&A with Lujo Bauer (2009)

CyLab Chronicles: Q&A with Lujo Bauer (2008)

Other CyLab Seminar Notes

CyLab Seminar Series Notes: The Evolution of A Hacking Tool, Moxie Marlinspike on SSLstrip

CyLab Seminar Series Notes: User-Controllable Security and Privacy -- Norman Sadeh asks, "Are Expectations Realistic?"

CyLab Seminar Series: Of Frogs, Herds, Behavioral Economics, Malleable Privacy Valuations, and Context-Dependent Willingness to Divulge Personal Info

CyLab Seminar Series Notes: Why do people and corporations not invest more in security?

CyLab Research Update: Basic Instincts in the Virtual World?

For information on the benefits of partnering with CyLab, contact Gene Hambrick, CyLab Director of Corporate Relations: hambrick at andrew.cmu.edu