Friday, July 31, 2009

From Parking Meters to the Cloud, from SMS to Smart Grids ... Is Everything Broken? -- Report from Black Hat Briefings (Las Vegas 2009)

Las Vegas Strip at Night from the International Space Station (NASA)

Seem like every time you stop and turn around
Something else just hit the ground
Broken cutters, broken saws,
Broken buckles, broken laws,
Broken bodies, broken bones,
Broken voices on broken phones.
Take a deep breath, feel like you're chokin',
Everything is broken
– Bob Dylan


From Parking Meters to the Cloud, from SMS to Smart Grids ... Is Everything Broken? -- Report from Black Hat Briefings (Las Vegas 2009)

-- Richard Power


From legendary billionaire Howard Hughes to legendary Gonzo journalist Hunter Thompson, Las Vegas has worked its strange magic on many talented people, including cyber security conference entrepreneur Jeff Moss. Back in 1993, Moss held the first DEFCON hackers convention in Las Vegas, and a few years later, in 1997, spun off Black Hat Briefings, which has arguably become what its hype trumpets: the "world's premier technical security conference." And although Black Hat now goes on tour to Tokyo, Amsterdam and Washington, D.C., Las Vegas is still home to the main event, for both DEFCON and Black Hat.

Robert Lentz, Chief Security Officer for the US Department of Defense, was one of Black Hat's 2009 keynote speakers. Lentz articulated the goal of a "resilient cyber eco-system" and cited the need for "culture changing in cyberspace." He stressed the role of education in achieving this goal, pointing not only to the Centers of Academic Excellence (CAE) program but also to the U.S. Cyber Challenge and the DC3 Digital Forensics Challenge. He even alluded to the green movement, and the passion it has evoked, and called for "a cyber-green movement." "That is something we could all rally around," he added. It was encouraging to hear the bold vision and the high values, but of course when working within an entity as huge, complex and long-established as the US federal government, delivering is the challenge.

The sessions in the body of this year's Briefings covered attacks on everything from SMS to "the Smart Grid," and from parking meters to "the Cloud."

Dan Kaminsky, a cyber security researcher who typically breaks news at Black Hat, not only broke news again this year (see Kaminsky reveals key flaws in X.509 SSL certificates at Black Hat), but was himself breaking news:

Two noted security professionals were targeted this week by hackers who broke into their web pages, stole personal data and posted it online on the eve of the Black Hat security conference.
Security researcher Dan Kaminsky and former hacker Kevin Mitnick were targeted because of their high profiles, and because the intruders consider the two notables to be posers who hype themselves and do little to increase security, according to a note the hackers posted in a file left on Kaminsky’s site.
The files taken from Kaminsky’s server included private e-mails between Kaminsky and other security researchers, highly personal chat logs, and a list of files he has purportedly downloaded that pertain to dating and other topics.
Wired, 7-29-09

Carnegie Mellon CyLab's Alessandro Acquisti spoke on his team's recent headline-grabbing revelation about the accuracy with which Social Security numbers can be predicted using publicly available information.

Acquisti's message --

The vulnerability presented here is not based on some secret bug hidden inside some software. It is based purely on publicly available data. This reflects the unexpected/unintended consequences of the interaction of complex information systems (i.e., combination of SSN issuance patterns, SSDI, EAB, SSNVS, and availability of personal information) and highlights the need to think past SSNs as authenticators.

(For more on this research, see There is an Elephant in the Room; & Everyone’s Social Security Numbers are Written on Its Hide.)

Moxie Marlinspike offered "More Tricks for Defeating SSL."

His conclusions --

We have a MITM attack that will intercept communication for almost all SSL/TLS implementations.
In the case of NSS (Firefox, Thunderbird, Evolution, AIM, Pidgin) we only need a single certificate.
We've defeated the OCSP protocol as implemented.
We've hijacked the Mozilla auto-updates for both applications and extensions.
We've got an exploitable overflow.
In short, we've got your passwords, your communication, and control over your computer.


(Marlinspike recently spoke at CyLab; see CyLab Seminar Series Notes: The Evolution of A Hacking Tool, Moxie Marlinspike on SSLstrip.)
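One widely reported technique behind these results was the null-prefix certificate trick: a certificate whose common name embeds a NUL byte, so that the issuing CA (which validates only the suffix domain) and a NUL-terminating string comparison in the client see two different names. The snippet below is only an illustrative sketch of that parsing mismatch, not code from the talk; the host names are hypothetical.

```python
# Illustration only: why a NUL byte embedded in a certificate's Common Name (CN)
# can fool validators that treat the CN as a C-style, NUL-terminated string.

def c_string_view(name: bytes) -> str:
    """What a strcmp-style validator effectively sees (it stops at the NUL)."""
    return name.split(b"\x00", 1)[0].decode("ascii")

def length_aware_view(name: bytes) -> str:
    """What a length-aware parser (and the issuing CA) sees."""
    return name.replace(b"\x00", b"\\x00").decode("ascii")

# Hypothetical CN an attacker might request; the CA checks ownership of
# the suffix domain (attacker.example) only.
cn = b"www.bank.example\x00.attacker.example"

print("CA / length-aware view:", length_aware_view(cn))
print("NUL-truncating view   :", c_string_view(cn))
print("Matches www.bank.example?", c_string_view(cn) == "www.bank.example")
```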

There were two other sessions that I found particularly interesting: Nathan Hamiel and Shawn Moyer on “Weaponizing the Web -- More Attacks on User-Generated Content," and Cormac Herley and Dinei Florencio on “Economics and the Underground Economy.” More on them can be found in my full report on Black Hat, available only to CyLab partners in the Intelligence Briefing section of the partners-only portal.

I will also have more to say about Black Hat 2009 as well as the upcoming USENIX Security Symposium in my article for CSO Magazine (August 2009).

Wednesday, July 22, 2009

CyLab News: CyLab & INI Host Information Assurance Capacity Building Program to Boost Nation’s Cyber Security


"As one of the nation's largest cybersecurity research and education centers, Carnegie Mellon CyLab can offer a wealth of highly relevant topics and research findings to the faculty who engage in the IACBP," said Virgil Gligor, co-director of Carnegie Mellon CyLab, a multidisciplinary research center pioneering development of leading-edge cybersecurity tools.

CyLab News: CyLab & INI Host Information Assurance Capacity Building Program to Boost Nation’s Cyber Security

Carnegie Mellon University's CyLab and its Information Networking Institute (INI) are hosting six faculty members for the seventh annual federally funded Information Assurance Capacity Building Program (IACBP) through July 24.

"This comprehensive program is designed to foster outstanding programs that support the nation's cybersecurity needs and educate future information security leaders and faculty," said Dena Haritos Tsamitis, INI director and director of education, training and outreach for Carnegie Mellon CyLab.

This year, select faculty will spend two weeks participating in a combination of lectures and lab exercises designed to help them develop cutting-edge curricula to educate tomorrow's information security leaders.

"It's been so helpful because we are learning how to simulate situations on the Internet, which helps us convey complex information to our classes," said Gail Finley, an associate professor in the Computer Science Department at the University of the District of Columbia in Washington, D.C.

Thorna Humphries, associate professor in the Department of Computer Science at Norfolk State University in Norfolk, Va., said the program is outstanding because it gives participants insight into the future. "We've been exposed to everything from cryptography to secure software," she said. Other 2009 participants come from Hampton University and Bowie State University.

Humphries and Finley join 36 other faculty members from 11 academic institutions that have participated in the IACBP. Since 2002, more than $1.1 million has gone toward the IACBP, which is designed to guide faculty from minority-serving institutions, including Historically Black Colleges and Hispanic-Serving Institutions, to develop curricula with academic enrichment from Carnegie Mellon CyLab and the INI.

"As one of the nation's largest cybersecurity research and education centers, Carnegie Mellon CyLab can offer a wealth of highly relevant topics and research findings to the faculty who engage in the IACBP," said Virgil Gligor, co-director of Carnegie Mellon CyLab, a multidisciplinary research center pioneering development of leading-edge cybersecurity tools.

Tsamitis said the combined efforts of Carnegie Mellon and the program participants will ultimately translate into new courses and educational initiatives at the participating institutions. In the past seven years, program participants have created 11 new courses, seven new degree options and 14 certificate programs, workshops and symposia.

"Programs such as the IACBP are designed to strengthen information assurance education at campuses nationwide," said Tsamitis, who was instrumental in gaining recognition for the university at an awards ceremony in Seattle, Wash., where Carnegie Mellon was re-designated as a National Center of Academic Excellence in Information Assurance Education and designated for the first time as a Center for Academic Excellence in Research.

The National Security Agency and the Department of Homeland Security jointly sponsor the National Centers of Academic Excellence programs. This partnership was formed in 2004 to protect the nation's critical infrastructures, which are essential to maintaining a strong economy and our national security.

Friday, July 17, 2009

Reflections on SOUPS 2009: Between Worlds, Cultivating Superior Cleverness, Awaiting a Shift in Consciousness



"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction." -- Albert Einstein

Reflections on SOUPS 2009: Between Worlds, Cultivating Superior Cleverness, Awaiting a Shift in Consciousness

-- Richard Power


The success of the fifth annual Symposium on Usable Privacy and Security (SOUPS) -- more papers submitted than ever before, more papers accepted than ever before, more attendees registered than ever before -- is an affirmation of the usability concept and its vital role in the development of security and privacy strategies.

On the third and final day of SOUPS 2009, Lorrie Cranor, the driving force behind both SOUPS and the CUPS lab from whence it poured, was unable to attend the morning session; she was across town, delivering the keynote on "Teaching Johnny Not to Fall for Phish" at the Sixth Conference on Email and Anti-Spam.

The research of Cranor and her CUPS colleagues demonstrates that user education can indeed play a critical role in the fight against phishing and related attacks, IF the tools utilized are engaging, enlightening and designed to exploit the "teachable moment." That research has also led to the formation of Wombat Security Technologies.

In the technical paper session on Passwords and Authentication, Alexander De Luca of the Media Informatics Group at University of Munich presented Look into my Eyes! Can you guess my Password?, co-authored with his University of Munich colleagues Martin Denzel and Heinrich Hussmann.

In the same session, Stuart Schechter of Microsoft presented 1 + 1 = You: Measuring the comprehensibility of metaphors for configuring backup authentication, co-authored with his Microsoft colleague, Robert Reeder.

The work of De Luca, Denzel and Hussmann explored the potential of having users authenticate themselves, particularly at terminals in public places, by drawing shapes with their eyes.

The work of Schechter and Reeder explored the issues involved in user-chosen challenge questions (e.g., the kind you answer when you've lost your password or user ID in Hotmail or Gmail), and showed that somewhat better results were achieved if the user had to take an exam and get a passing grade.

These and other presentations I attended were fascinating.

Our problem is, however, that the challenge in cyber security and privacy is not one of cleverness, but one of consciousness.

We are still between worlds, really.

The Information Age that Alvin Toffler heralded as the "Third Wave" has already broken over our heads; it has already swept us away; but, in many ways, our minds are still on the shore, or reaching back toward the shore, wanting somehow, impossibly, to take it with us.

In the 1990s, the news was that the perimeter between the network and the Internet no longer existed. Here and now, at the end of the first decade of the 21st Century, the news is that the perimeter between the mind and the World Wide Web is gone.

The implications are profound.

Some months ago, at dinner with a colleague from inside the US intelligence community's own attempt to comprehend this Brave New World, we discussed these issues in great depth, and we both came to the same conclusion: most of the human race will not recognize the world in which they live and work even as soon as ten years from now.

Most of what we are trying to accomplish in cyber security and privacy is based on a paradigm that has been eclipsed; no, not an IT-related paradigm, an old paradigm of the human psyche and its relationships to both the natural world and the digital world, and the interpenetration of all three.

There is something profoundly new coming in the realm of cyber security and privacy.

You and I will recognize it when we see it, because not only will we not have seen it before, but it will change the way we perceive problems and approach solutions.

It may well come from such academic research. That's why participating in conferences such as SOUPS is of great importance.

But it will not reflect superior cleverness, it will signal a shift in consciousness.

Of course, meanwhile, we must rely on superior cleverness, and that too is a reason to participate in SOUPS, etc.

For more commentary on SOUPS 2009, go to the CUPS Blog.

Speaking of which, I will be blogging from Black Hat later this month and from the USENIX Security Symposium in August. Stay tuned.

Summary of SOUPS 2009 Posts:

Reflections on SOUPS 2009: Between Worlds, Cultivating Superior Cleverness, Awaiting a Shift in Consciousness

SOUPS 2009 Mental Models Session: Study Demonstrates that Pursuit of Seamless Security can Lead to New Dangers, Particularly for Mobile Users

SOUPS 2009 Best Paper Award Goes to "Ubiquitous Systems and the Family: Thoughts about the Networked Home"

SOUPS 2009 Tutorial Explores Challenges of Evaluating Usable Security and Privacy Technology

CUPS Related Posts:

CyLab Seminar Series Notes: User-Controllable Security and Privacy -- Norman Sadeh asks, "Are Expectations Realistic?"

CyLab Research Update: Locaccino Enables the Watched to Watch the Watchers

CyLab Chronicles: Wombat, the Latest CyLab Success Story

CyLab Chronicles: Q&A w/ Norman Sadeh

CyLab Chronicles: Q&A w/ Lorrie Cranor

Culture of Security: CUPS Research Takes on Both Widespread Attack Method & Dangerous Meme (Available to Cylab Partners Only)

Thursday, July 16, 2009

SOUPS 2009 Mental Models Session: Study Demonstrates that Pursuit of Seamless Security can Lead to New Dangers, Particularly for Mobile Users



"The Windows Vista personal firewall provides its diverse users with a basic interface that hides many operational details. However, concealing the impact of network context on the security state of the firewall may result in users developing an incorrect mental model of the protection provided by the firewall." Fahimeh Raja, University of British Columbia

SOUPS 2009 Mental Models Session: Study Demonstrates that Pursuit of Seamless Security can Lead to New Dangers, Particularly for Mobile Users

Paul Van Oorschot of Carleton University in Ottawa chaired the Mental Models session.

Fahimeh Raja of University of British Columbia (Vancouver) presented Revealing Hidden Context: Improving Mental Models of Personal Firewall Users, co-authored with her colleagues, Kirstie Hawkey and Konstantin Beznosov.

The goal of the study was to investigate the impact of adding contextual information to the Vista Firewall Basic Interface. The researchers looked at the impact of Vista Firewall functionality on users' mental models, as well as the impact of Vista Firewall configuration on users' understanding.

"The Windows Vista personal firewall provides its diverse users with a basic interface that hides many operational details. However, concealing the impact of network context on the security state of the firewall may result in users developing an incorrect mental model of the protection provided by the firewall."

Raja and her colleagues determined that because the security technology makes changes in the users' security state, it is important to somehow communicate these changes to users; "otherwise, these users can be left in dangerous situations; for example, only protected in the current network context but believing themselves to be protected for future network contexts."

Users could think that their firewall was turned on when it was turned off, or conversely, that their firewall was turned off when it was turned on.

"Users need to understand the effect of the configuration on the system's security state. We argue as users become more mobile, it is increasingly important to understand the security state for both current and future contexts of use."

They concluded that the design of the Vista Firewall Basic Interface does not provide enough context for mobile users. If unaware that configuration changes only apply to current network location, users may be left with dangerous misconceptions. The researchers also concluded that users' mental models can be supported by revealing context.

The implications of this study are important: they suggest it may be possible to balance complexity and security.

Two other papers were presented in this session:

Andrew Besmer of University of North Carolina (Charlotte) presented Social Applications: Exploring A More Secure Framework, a paper co-authored with colleagues Heather Richter Lipford, Mohamed Shehab and Gorrell Cheek, also from the Department of Software and Information Systems.

Ponnurangam Kumaraguru of Carnegie Mellon University CyLab presented School of Phish: A Real-World Evaluation of Anti-Phishing Training, a paper co-authored with fellow Carnegie Mellon researchers Justin Cranshaw, Alessandro Acquisti, Lorrie Cranor, Jason Hong, Mary Ann Blair and Theodore Pham.

Some Related Posts:

SOUPS 2009 Best Paper Award Goes to "Ubiquitous Systems and the Family: Thoughts about the Networked Home"

SOUPS 2009 Tutorial Explores Challenges of Evaluating Usable Security and Privacy Technology

CyLab Seminar Series Notes: User-Controllable Security and Privacy -- Norman Sadeh asks, "Are Expectations Realistic?"

CyLab Research Update: Locaccino Enables the Watched to Watch the Watchers

CyLab Chronicles: Wombat, the Latest CyLab Success Story

CyLab Chronicles: Q&A w/ Norman Sadeh

CyLab Chronicles: Q&A w/ Lorrie Cranor

Culture of Security: CUPS Research Takes on Both Widespread Attack Method & Dangerous Meme (Available to Cylab Partners Only)

For further commentary on SOUPS 2009, go to the CUPS Blog.

-- Richard Power

SOUPS 2009 Best Paper Award Goes to "Ubiquitous Systems and the Family: Thoughts about the Networked Home"



Often, futuristic shopping scenarios highlight ways in which a network of computers are able to determine the items a consumer needs by intelligently surveying food stocks and other goods in the individual's home. However, as Friedewald and colleagues note, such scenarios tend to take an individualistic approach, ignoring the ways in which the various interests within a family may converge or conflict within a shopping expedition. In many families, shopping is considered a social activity where all family members might take part in the process. Younger members of a family (seldom seen in the ubicomp world) are typically active participants in the weekly shopping task, and are given their own responsibilities or activities. Linda Little, Elizabeth Sillence and Pam Briggs, Ubiquitous Systems and the Family: Thoughts about the Networked Home

SOUPS 2009 Best Paper Award Goes to "Ubiquitous Systems and the Family: Thoughts about the Networked Home"

Andrew Patrick and Simson Garfinkel, SOUPS Technical Papers Co-Chairs, announced that the SOUPS 2009 Best Paper Award has been bestowed on Ubiquitous Systems and the Family: Thoughts about the Networked Home by Linda Little, Elizabeth Sillence and Pam Briggs of the PaCT Lab, Northumbria University (U.K.).

Here are brief excerpts from the award-winning paper followed by a link to full text:

Developments in ubiquitous and pervasive computing herald a future in which computation is embedded into our daily lives. Such a vision raises important questions about how people, especially families, will be able to engage with and trust such systems whilst maintaining privacy and individual boundaries. To begin to address such issues, we have recently conducted a wide reaching study eliciting trust, privacy and identity concerns about pervasive computing. Over three hundred UK citizens participated in 38 focus groups. The groups were shown Videotaped Activity Scenarios [11] depicting pervasive or ubiquitous computing applications in a number of contexts including shopping. The data raises a number of important issues from a family perspective in terms of access, control, responsibility, benefit and complexity. Also findings highlight the conflict between increased functionality and the subtle social interactions that sustain family bonds. We present a Pre-Concept Evaluation Tool (PRECET) for use in design and implementation of ubicomp systems.

The design and implementation of ubiquitous systems cannot be solely based on traditional HCI issues of functionality, usability and accessibility. In a shopping context at least ubicomp systems need to incorporate a better understanding of family interactions and need to show some sensitivities to the natural information sharing boundaries that occur within the family. Such an approach will resonate with developments in other technologies, where the focus on ‘user-experience’ as opposed to ‘usability’ has seen a shift towards an understanding of the wider social impacts of HCI.


Ubiquitous Systems and the Family: Thoughts about the Networked Home, Linda Little, Elizabeth Sillence and Pam Briggs, PaCT Lab, Northumbria University, U.K.

Some Related Posts:

SOUPS 2009 Tutorial Explores Challenges of Evaluating Usable Security and Privacy Technology

CyLab Seminar Series Notes: User-Controllable Security and Privacy -- Norman Sadeh asks, "Are Expectations Realistic?"

CyLab Research Update: Locaccino Enables the Watched to Watch the Watchers

CyLab Chronicles: Wombat, the Latest CyLab Success Story

CyLab Chronicles: Q&A w/ Norman Sadeh

CyLab Chronicles: Q&A w/ Lorrie Cranor

Culture of Security: CUPS Research Takes on Both Widespread Attack Method & Dangerous Meme (Available to Cylab Partners Only)

For further commentary on SOUPS 2009, go to the CUPS Blog.

-- Richard Power

Wednesday, July 15, 2009

SOUPS 2009 Tutorial Explores Challenges of Evaluating Usable Security and Privacy Technology


"Once you have real people using the security in this design, what is the performance that you can expect? What is the performance you can expect at the security level in terms of the choices that the users will make? This is where we have a problem ... We don't have a clear set of criteria to assess a particular performance against ... If you don't have a criteria for what is actually an acceptable level of performance, then you just don't know if it is good enough or not." Angela Sasse, University College London

SOUPS 2009 Tutorial Explores Challenges of Evaluating Usable Security and Privacy Technology

By Richard Power


As I drove across Google's sprawling Mountain View campus, the memory of a 2005 visit to Microsoft rose up in my mind. I had traveled there to participate in a CSO Council meeting. The Redmond campus is a city-state, of course, with its own police force and its own streets. In 2005, Google was only seven years old. Fast forward another four years. Just this month, Google announced that it is going to challenge Microsoft on the OS front (see Now Google parks its tanks right outside Microsoft's gates, Guardian, 7-12-09).

This afternoon, sitting in a sun-drenched pavilion on Google's grounds during a lunch break, I looked up from my grilled salmon to notice an employee walking by with her dog on a leash; then I saw another, and then another. There were dogs everywhere. I asked if this happened to be a special "Bring Your Dog to Work Day," but was told, "No, we are allowed to bring our dogs to work every day." Hmmm. Could this remarkable corporate culture innovation give Google an edge in the struggle ahead?

But, of course, I did not come here to handicap the coming clash of the titans; I came here to report to you on the fifth Symposium on Usable Privacy and Security (SOUPS), which Google is hosting and co-sponsoring along with CyLab. (Next year, SOUPS will be held in Redmond.)

SOUPS is an annual event organized by Carnegie Mellon CyLab's Usable Privacy & Security Lab (CUPS).

Several significant evolutionary trends have emerged in cyber security and privacy over the last decade, ranging from the somewhat ill-conceived search for Return on Investment (ROI) in cyber security deployments to the much more promising inquiry into the ways in which the sciences of economics and psychology might better inform cyber security development. The quest for "Usable Security and Privacy" is one of the most intriguing of these trends; and SOUPS provides an invaluable forum for the exploration of themes in this vital area of research.

The first day of SOUPS 2009 was built around an all-day tutorial on "Designing and Evaluating Usable Security and Privacy Technology" led by M. Angela Sasse, Professor of Human-Centred Technology in the Department of Computer Science at UCL, Clare-Marie Karat, Research Staff Member in the Policy Lifecycle Technologies department at the IBM TJ Watson Research Center, and CyLab researcher Roy Maxion, a faculty member in the Computer Science and Machine Learning Departments at Carnegie Mellon University.

Sasse spoke on "Evaluating for Usability & Security."

Karat delivered a "Case Study of Usable Privacy and Security Policy Research."

Maxion shared "Mechanics of Experiments, Forensics, and Security."

Here are some excerpts from my notes and transcription of Sasse's compelling talk:

Starting off by citing a "cumbersome" definition of "evaluation" as “an assessment of the conformity between a work system's performance and its desired performance” (Whitefield et al., 1991), Sasse then explained, "What they really mean by 'work system' is if a user works together with a system, what is the performance that you are going to get out of the combination? That is what you are actually interested in. Once you have real people using the security in this design, what is the performance that you can expect? What is the performance you can expect at the security level in terms of the choices that the users will make? This is where we have a problem ... We don't have a clear set of criteria to assess a particular performance against ... If you don't have a criteria for what is actually an acceptable level of performance, then you just don't know if it is good enough or not."

In the course of outlining the essentials of a proper usability evaluation plan, Sasse went into some depth concerning evaluation goals, first emphasizing the difference between summative goals (e.g., "Mech 1 performs better than Mech 2" or "Mech 1 meets performance criteria X, Y, Z") and formative goals (e.g., "exploratory evaluation of feasibility and indicative performance, user feedback, pointers for improvement"), and then exploring the breakdown of an evaluation goal.

"When it comes to usability and security, we need to look at two things: not only how well does the user perform with the security mechanism (e.g., how long does it take the user to remember, read off and enter the one-time pin, how long does it take to figure out which finger to use, where to put, etc.) but also what is the primary task, or production task, within which this security task is performed. The kind of experiments we have seen so far is basically, 'Thank you for coming, try this security mechanism, and I will measure how well you do.' But if it is envisioned, for example, that they do this as part of their on-line banking session, or for governments purposes, to fill their taxes on-line, then we need to create a realistic approximation of what that whole procedure looks like. At what point, would a user normally in the real world approach the system with the goal of 'I'm filing my taxes, and goddamn I'm late, I've got about twenty hours or so to do it."

"There are some things you can do in the lab, but there are some things that you can never really reproduce in a lab. If people anticipate that they are going to have problems with a security mechanism, they are going to change altogether how they behave and how they do their work. You find that because people fear that they might fail to authenticate themselves to a service that they either completely re-organize how they do their work, which has an impact on their productivity, and an impact on the productivity of the organization overall, or they might find workarounds, for instance, they just find ways of leaving the system open in order to avoid entering their credentials time and time again. You are never going to see people do that in the lab experiment."

Along with evaluating "performance achieved on security tasks" as well as the "actual level of security achieved given user choices and behaviour," Sasse also stressed evaluating "at what cost" these were achieved -- "to the individual user, to the system owner and to society as a whole."

"I recognize some of the issues, and they are pretty obvious," Roy Maxton asked Sasse, "so my question is what do you think makes it not obvious to so many people?"

"The answer is that so far security has basically been treated as special," she responded. "A lot of organizations out there are not very good at assessing the ongoing cost of ownership of certain types of technology. And when it comes to security that problem is magnified, because the argument always made is 'Security is important, just think of what could happen if we didn't have it.' They tend to only look at the cost of purchasing it and putting it in place. How much time or productivity is it going to take out of a company is a question I have never seen anybody ask up front, until very recently. Or 'how much of our system administrator's time is it going to take ... it is generally not looked at and factored into the cost of operating. But I am sure this will change. These kinds of ideas have now gotten out there. Traditionally, the only argument for security was risk mitigation, it was very often not off-set by the cost of ownership and operation ..."

Sasse went on to articulate other key elements of the evaluation process:

Scope, both summative (e.g., "sample large enough for adequate statistical power; generally larger samples than for formative evaluations") and formative (e.g., "explanatory results" providing "reasons for user choices" and "reasons for failure");

Participants (e.g., "need to control for practice and interference effects")

Context of use (e.g., need to replicate demands of production task, equipment used, physical context and situational context)

Criteria, including user cost (e.g., physical and mental workload), owner cost (e.g., "needs to be proportionate to degree of security achieved"), user satisfaction (e.g., "user confidence in mechanism itself" and "their own ability to operate it correctly") and, of course, security.

Aye, but there's the rub. The internationally recognized framework, the Common Criteria for Information Technology Security Evaluation (ISO/IEC 15408), does not include usability.

In her conclusion, Sasse emphasized the need to develop a framework for evaluating usability and security to ensure comparable and generalisable results, suggesting that it should be "linkable to assessment via Common Criteria" and might incorporate the NIST taxonomy as a template for procedure.

These notes reflect the richness of the discussion in this day-long tutorial led by Karat, Maxion and Sasse.

Stay tuned for more from SOUPS 2009 over the next two days.

Some Related Posts:

SOUPS 2009 Best Paper Award Goes to "Ubiquitous Systems and the Family: Thoughts about the Networked Home"

CyLab Seminar Series Notes: User-Controllable Security and Privacy -- Norman Sadeh asks, "Are Expectations Realistic?"

CyLab Research Update: Locaccino Enables the Watched to Watch the Watchers

CyLab Chronicles: Wombat, the Latest CyLab Success Story

CyLab Chronicles: Q&A w/ Norman Sadeh

CyLab Chronicles: Q&A w/ Lorrie Cranor

Culture of Security: CUPS Research Takes on Both Widespread Attack Method & Dangerous Meme (Available to Cylab Partners Only)

For further commentary on SOUPS 2009, go to the CUPS Blog.

Thursday, July 9, 2009

Not Just Yesterday's Headlines, But the Day After Tomorrow's As Well


“We live in a precarious time, where knowledge of a Social Security number, along with other information about one’s name and date of birth, is sometimes sufficient to impersonate another individual,” said Alessandro Acquisti, the study’s lead author, in a telephone interview.
Acquisti, an economist at Carnegie Mellon’s Heinz School of Public Policy and Management in Pittsburgh, and computer scientist Ralph Gross used records from the Social Security Administration’s Death Master File to search for statistical patterns in the Social Security numbers of people. They obtained birth data from voter registration lists, online white pages, social networking sites and other sources, he said.
Bloomberg, 7-6-09

Not Just Yesterday's Headlines, But the Day After Tomorrow's As Well

The findings of a study released earlier this week by CyLab researcher Alessandro Acquisti and his team revealed that a person's nine-digit Social Security number (SSN) can be predicted with "great accuracy." Of course, it was even easier to predict that the study's release would become a blockbuster story, and indeed it did.

Here are excerpts from a broad cross-section of the stories (including mainstream, business, IT, science and alternative media sources) that arose from the release of "Predicting Social Security numbers from public data," co-authored by Acquisti and Heinz College post-doctoral researcher Ralph Gross, with links to the full texts. -- Richard Power

Scientists guess nearly one in ten Social Security numbers USA Today, 7-6-09

How to figure out someone’s social security number Christian Science Monitor, 7-6-09

Weakness in Social Security Numbers Is Found New York Times, 7-6-09

Social Security # Analysis Emphasizes Need for Reform Daily Kos, 7-6-09

New algorithm guesses SSNs using date and place of birth Ars Technica, 7-6-09

Predicting Social Security Numbers Washington Post, 7-6-09

Social Security Number Prediction Makes Identity Theft Easy Information Week, 7-7-09

Social security flaw leaves way open for cyber-theft New Scientist, 7-7-09

Study: Social Security numbers are predictable Computerworld, 7-7-09

Social Security Numbers ID'd Using Public Data PC Magazine, 7-7-09

Social Security Numbers Can Be Predicted With Public Information Science Daily, 7-7-09

Is Your Facebook Account a Gold Mine for Identity Thieves? TIME, 7-8-09

How Social-Networking Sites Can Reveal Your Social Security Number Wall Street Journal Digits, 7-9-09

For more on the story, read There is an Elephant in the Room; & Everyone’s Social Security Numbers are Written on Its Hide

Monday, July 6, 2009

There is an Elephant in the Room; & Everyone’s Social Security Numbers are Written on Its Hide



Acquisti and Gross tested their prediction method using names in the Death Master File of people who died between 1973 and 2003. They could identify in a single attempt the first five digits for 44 percent of deceased individuals who were born after 1988 and for 7 percent of those born between 1973 and 1988. They were able to identify all nine digits for 8.5 percent of those individuals born after 1988 in less than 1,000 attempts. Their accuracy was considerably higher for smaller states and recent years of birth: for instance, they needed 10 or fewer attempts to predict all nine digits for one out of 20 Social Security numbers issued in Delaware in 1996. Carnegie Mellon Researchers Find Social Security Numbers can be Predicted from Publicly Available Information, Press Release, 7-6-09

There is an Elephant in the Room; & Everyone’s Social Security Numbers are Written on Its Hide

By Richard Power


At a conference last year, toward the end of the day, an attendee asked me what I thought were the top five privacy stories of the year; as I said, it was the end of the day, and perhaps I should have been more positive in my response, but instead, I just blurted out, “there is no privacy.” I wasn’t just being gruff or cynical. In a very real way, “privacy” as it was once espoused is gone.

That doesn’t mean that there are no privacy regulations to uphold; nor does it mean that all those Chief Privacy Officers are simply wasting their time. But there are junctures in history when little or no meaningful progress is going to be made unless we engage in merciless honesty, and we are at just such a juncture in regard to the issue of privacy.

The push for merciless honesty just got a big boost from CyLab researcher Alessandro Acquisti and Heinz College post-doctoral researcher Ralph Gross. They have rocked the world with the revelation that there is an elephant in the living room, and everyone's social security numbers (SSNs) are legible on the grey surface of its massive body.

In a paper that will appear later this week in the online Early Edition of the Proceedings of the National Academy of Sciences, and will be presented later this month at Black Hat 2009 in Las Vegas, Acquisti and Gross have changed the game forever and for good, by demonstrating “that it is possible to predict, entirely from public data, narrow ranges of values wherein individual SSNs are likely to fall.”

“Unless mitigating strategies are implemented,” Acquisti and Gross warn, “the predictability of SSNs exposes them to risks of identity theft on mass scales.”

“Any third party with Internet access and some statistical knowledge can exploit such predictability in two steps: First, by analyzing publicly available records in the SSA Death Master File (DMF) to detect statistical patterns in the SSN assignment for individuals whose deaths have been reported to the SSA; thereafter, by interpolating an alive person's state and date of birth with the patterns detected across deceased individuals' SSNs, to predict a range of values likely to include his SSN. Birth data, in turn, can be inferred from several offline and online sources, including data brokers, voter registration lists, online white pages, or the profiles that millions of individuals publish on social networking sites.”
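To make the two-step approach concrete, here is a minimal, purely illustrative sketch in Python (not the researchers' code or data): it assumes a toy list of Death Master File records, each carrying a state, date of birth and SSN, and interpolates the likely first five digits (area plus group) for a living target born in the same state around the same time. The records, names and simple linear interpolation are hypothetical stand-ins for the statistical methods described in the paper.

```python
from bisect import bisect_left
from datetime import date

# Toy stand-ins for public Death Master File (DMF) records: (state, date_of_birth, SSN).
# The real study uses millions of records; these values are invented for illustration.
DMF = [
    ("DE", date(1996, 3, 1),  "221-48-1013"),
    ("DE", date(1996, 3, 20), "221-48-1790"),
    ("DE", date(1996, 4, 5),  "221-50-0221"),
    ("DE", date(1996, 4, 30), "221-51-3377"),
]

def first_five(ssn: str) -> int:
    """Area + group digits, the portion that tracks state and time of assignment."""
    area, group, _serial = ssn.split("-")
    return int(area + group)

def predict_first_five(state: str, dob: date) -> int:
    """Interpolate the likely area+group for someone born in `state` on `dob`,
    using deceased individuals with nearby birth dates as anchors."""
    anchors = sorted((r_dob, first_five(ssn)) for r_state, r_dob, ssn in DMF if r_state == state)
    if not anchors:
        raise ValueError(f"no DMF anchors for state {state}")
    dobs = [d for d, _ in anchors]
    i = bisect_left(dobs, dob)
    if i == 0:
        return anchors[0][1]
    if i == len(anchors):
        return anchors[-1][1]
    (d0, v0), (d1, v1) = anchors[i - 1], anchors[i]
    frac = (dob - d0).days / max((d1 - d0).days, 1)  # linear interpolation by birth date
    return round(v0 + frac * (v1 - v0))

if __name__ == "__main__":
    guess = predict_first_five("DE", date(1996, 4, 12))
    # A correct area+group guess narrows the search to the 10,000 possible serial numbers.
    print(f"predicted first five digits: {guess:05d}")
```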


Over the weekend, as Acquisti prepared himself for the firestorm of news media attention, I asked him three questions that put this blockbuster story in context:

What should the man or woman in the street take away from the results of your research?

"Our results show that it is possible to combine pieces of personal information from different sources, each of them not particularly sensitive, and end up inferring something that is more sensitive than any original piece of information alone. This calls for heightened attention by each of us towards what we reveal about ourselves online. However, our results also indicate that the problem of SSNs security goes much beyond consumers' responsibility and control: it has to do with the use (and abuse) of SSNs in the private sector for purposes (such as authentication) they were never designed to fulfill. As consumers, we have very little control on that. At the end of the day, this is a systematic problem that industry, policy-makers, and of course researchers must resolve."

What should privacy professionals in government and business take away from the results of your research?

“The broader message is that how 'sensitive' certain pieces of information are depends on what other data those pieces of information can be combined with. The more specific message, as it relates to SSNs, is that - in their current form - SSNs are very insecure passwords, and should not be used for authentication in any service.”

Are there any solutions that present themselves?

“Randomizing the assignment of future SSNs can buy us some time. But any system that still relies on using the same number (your SSN) as both an identifier and for authentication is a vulnerable system - because it is incongruously predicated on the sensitivity of a number that is shared with too many parties. For long run solutions, the attention should move towards the research on secure, efficient, and privacy-preserving means of authenticating identities.”

SSA Responds

In a statement issued in response to the research of Acquisti and Gross, the Social Security Administration sought to provide some context of its own:

“The public should not be alarmed by this report because there is no foolproof method for predicting a person's Social Security Number. The method by which Social Security assigns numbers has been a matter of public record for years. The suggestion that Mr. Acquisti has cracked a code for predicting an SSN is a dramatic exaggeration.
For decades, we have cautioned the private sector, including educational, financial and health care institutions, against using the SSN as a personal identifier.
For reasons unrelated to this report, the agency has been developing a system to randomly assign SSNs. This system will be in place next year.
Stealing someone's SSN is a crime.”


Asked to comment on the statement, Acquisti stressed that his study “made no claim about breaking secret code,” but rather “demonstrates that publicly available information about SSNs and their assignment scheme is sufficient to infer regular patterns linked to demographics data, and predict very narrow ranges of values wherein individual SSNs are likely to fall.”

He commended the Social Security Administration for its effort, but urged realism in regard to the facts on the ground: “We want to praise the SSA for cautioning the private sector against using the SSN as a personal identifier. Unfortunately, such cautions have not been sufficient to dissuade many third parties: SSNs are still used (and abused) everywhere to authenticate identities, leading to widespread crimes of identity theft.”

Acquisti also agreed with the SSA’s short-term strategy, while again urging realism about the issues we are all confronting: “We also agree with the SSA about the need, in the short term, to randomize the assignment scheme (a proposal we advance in the manuscript). However, changing the assignment scheme may protect newly assigned SSNs, but not the hundreds of millions of already assigned SSNs. It may also make us complacent to preserve the current -- and insecure -- system where SSNs are incongruously used by private sector entities both as public identifiers and private passwords -- a role that SSNs were never meant to fulfill when they were designed in the 1930s.”

Emphasizing the vital need for all of us to finally deal with the pervasive issue of weak authentication, Acquisti remarked: “By showing that SSNs are predictable from public data and therefore are vulnerable passwords, we hope to focus the debate toward more secure, efficient, and privacy-preserving means of authenticating identities in our society.”

CyLab corporate partners can access the full text of this post in the Culture of Security section of the partners-only portal. The full text includes commentary from privacy expert, Rebecca Herold, a participant in the CyLab Business Risks Forum.

Additional information about the study and some of the issues it raises is available at http://www.ssnstudy.org, as well as http://blogs.heinz.cmu.edu/ssnstudy/ (blog) and http://www.heinz.cmu.edu/~acquisti/ssnstudy/ (FAQ).

The full press release is available here.