
Tuesday, November 27, 2012

CyLab Researchers Make Major Advances In Audit Technology For Privacy Protection



A team of researchers at Carnegie Mellon University led by Dr. Anupam Datta, Assistant Research Professor in CyLab and the Department of Electrical & Computer Engineering, has developed algorithms that can help protect individual privacy by checking that organizations such as hospitals and banks are disclosing personal information about their customers to third parties in compliance with privacy regulations. They have produced the first complete formal specification of disclosure clauses in two important US privacy laws -- the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule and the Gramm-Leach-Bliley Act (GLBA).

They also built an algorithm that can help investigators detect violations of these laws and similar privacy policies. The research team included Henry DeYoung (a graduate student in the Computer Science Department) and three postdoctoral researchers in Dr. Datta's research group: Dr. Deepak Garg (now faculty at MPI-SWS), Dr. Limin Jia (now faculty at CMU CyLab), and Dr. Dilsun Kaynar (now faculty at CMU Computer Science Department).

Privacy has become a significant concern in modern society as personal information about individuals is increasingly collected, used, and shared, often using digital technologies, by a wide range of organizations. To mitigate privacy concerns, organizations are required to respect privacy laws in regulated sectors (e.g., HIPAA in healthcare, GLBA in the financial sector) and to adhere to self-declared privacy policies in self-regulated sectors (e.g., privacy policies of companies such as Google and Facebook in Web services). Enforcing these kinds of privacy policies in organizations is difficult because privacy laws and enterprise policies typically identify a complex set of conditions governing the disclosure of personal information. For example, the HIPAA Privacy Rule includes over 80 clauses that permit, deny, and even require the disclosure of personal health information, making it difficult to manually ensure that all disclosures are compliant with the law.

The research team at Carnegie Mellon University created a formal language for specifying a rich class of privacy policies. They then used this language to produce the first complete formal specification of disclosure clauses in two important US privacy laws -- the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule and the Gramm-Leach-Bliley Act (GLBA). Recognizing that certain portions of complex privacy policies such as HIPAA are subjective and might require input from human auditors for compliance determination, the specification clearly separates out the subjective and the objective portions of a given policy.

The team then developed an algorithm that checks audit logs for compliance with privacy policies expressed in their language. The algorithm has two distinct characteristics. First, it automatically checks the objective portion of the privacy policy for compliance and outputs the subjective portion for inspection by human auditors. Second, recognizing that audit logs are often incomplete in practice (i.e., they may not contain sufficient information to determine whether a policy is violated or not), the algorithm proceeds iteratively: in each iteration it checks as much of the policy as it possibly can over the current log and outputs a residual policy that can only be checked when the log is extended with additional information. Initial experiments with a prototype implementation checking compliance of simulated audit logs with the HIPAA Privacy Rule indicate that the algorithm is fast enough to be used in practice.
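To make the iterative audit idea concrete, here is a minimal, hypothetical Python sketch (not the team's actual implementation or policy language): clauses are checked against a partial log, subjective clauses are routed to a human auditor, and clauses that cannot yet be decided are returned as the residual policy.

```python
# Hypothetical sketch of iterative policy auditing over an incomplete log.
# Clause names, fields, and the toy log format are illustrative only.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Clause:
    name: str
    subjective: bool                          # needs a human auditor if True
    check: Callable[[dict], Optional[bool]]   # True/False, or None if the log lacks the facts

def audit(policy, log):
    violations, residual, needs_human = [], [], []
    for clause in policy:
        if clause.subjective:
            needs_human.append(clause)        # output for human inspection
            continue
        result = clause.check(log)
        if result is None:
            residual.append(clause)           # re-check when the log is extended
        elif result is False:
            violations.append(clause)
    return violations, residual, needs_human

# Toy HIPAA-style clause: disclosures to third parties require patient authorization.
def authorized_disclosures(log):
    if "disclosures" not in log:
        return None                           # log incomplete: cannot decide yet
    return all(d.get("authorized") for d in log["disclosures"])

policy = [Clause("authorized-disclosure", False, authorized_disclosures),
          Clause("minimum-necessary", True, lambda log: None)]

# First pass over a log that lacks disclosure records, then over an extended log.
print(audit(policy, {}))                                       # clause lands in the residual
print(audit(policy, {"disclosures": [{"authorized": True}]}))
```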

Additional information about this work can be found on the project web page: http://www.andrew.cmu.edu/user/danupam/privacy.html

Carnegie Mellon CyLab Awarded DHS Contract For Research Into Understanding And Disrupting The Economics Of Cybercrime



Carnegie Mellon University CyLab has been awarded a multi-million dollar contract for research into Understanding and Disrupting the Economics of Cybercrime. Nicolas Christin, CyLab Senior Systems Scientist and Associate Director of the Information Networking Institute (INI), is Principal Investigator (PI). His co-PIs are fellow CyLab researcher Alessandro Acquisti, along with Tyler Moore of Southern Methodist University, Ross Anderson of Cambridge University, and Ryan Williams of the NCFTA. Richard Clayton of Cambridge University will also participate as key senior personnel.

Based on the realization that focusing on a particular attack, or a specific set of attacks, is unlikely to provide the detailed level of understanding necessary to design meaningful intervention policies against cybercrime, the methodology developed by Christin and his colleagues holistically combines network measurements with behavioral and economic analysis. The project will consist of four research tasks: designing cybercrime indicators, designing data interchange formats and standards, modeling online-crime supply chains, and modeling attackers' behavioral psychology. The contract is one of thirty-four, totaling $40 million, that the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (DHS S&T) has awarded to twenty-nine academic and research organizations. This funding is for research and development of cyber security solutions.

In January 2011, the DHS S&T Cyber Security Division (CSD) issued a Cyber Security R&D Broad Agency Announcement (BAA 11-02) that solicited proposals for 14 Technical Topic Areas (TTAs) aimed at improving security in federal networks and across the Internet while developing new and enhanced technologies for detecting, preventing and responding to cyber attacks on the nation's critical information infrastructure. BAA 11-02 elicited white paper responses from more than 1,000 offerors.

Following an extensive review and down-select process, more than 200 offerors were invited to submit full proposals for final review. Of those, awards were made to the twenty-nine organizations announced on October 26, 2012.

"The work to be accomplished through these contracts will significantly advance cyber security and support the mission of the DHS Science and Technology Directorate's Cyber Security Division to create a safe, secure and resilient cyber environment," Dr. Douglas Maughan, director of DHS' S&T Cyber Security Division told Homeland Security Today. "Our goal," said Maughan, "is to transform the cyber-infrastructure to be resistant to attack so that critical national interests are protected from catastrophic damage and our society can confidently adopt new technological advances." (See Homeland Security Today, 10-26-12)


Monday, November 5, 2012

Glimpses into the 9th Annual CyLab Partners Conference

CyLab Researchers Nicolas Christin, Rahul Telang, Alessandro Acquisti
9th Annual Cylab Partners Conference (October 2012)

[NOTE: This CyBlog post is also cross-posted as a CyLab Chronicles on the main CyLab web site.]

The 9th Annual CyLab Partners Conference was held at the main campus of Carnegie Mellon University (Pittsburgh, Pa.), on October 2nd and 3rd, 2012.

The Partners Conference is an exclusive benefit of membership in the CyLab Partners program, and like the recruitment opportunities, reputational boost and Seminar webcasts, it is one of several benefits available to all Partners, whether at the $25,000, $100,000 or $350,000 level.

For two days, representatives from CyLab's corporate Partners receive research updates from our work across a broad range of areas, e.g., Next Generation Internet, Trustworthy Computing, Mobility, Software Security, Usable Privacy and Security, Business Risks and Economic Implications, and more. Perhaps even more important is the time to interact one-on-one with faculty researchers during breaks and meals, and to interact with CyLab's graduate students at the poster session.

Annual Partners Conference content is archived on the CyLab Partners Portal (another exclusive benefit of membership), including videos of the research presentations, along with .pdfs of the slides for each presentation, as well as electronic files of the student posters, documenting current projects.

To entice you to consider taking advantage of the benefits of CyLab partnership, and to contribute to the general dialogue on the vital issues of cyber security and privacy, we have posted a CyLab Partners Conference video sampler and some other content to both the CyLab YouTube Channel and the CyLab iTunesU Store.

The sampler, 9th Annual Partners Conference Excerpts, includes two-to-three-minute snippets from each of the following seven presentations:
  • Virgil Gligor - "On Foundations of Trust in Networks of Humans and Computers"
  • David Brumley - "Automatically Finding Exploitable Bugs in Off-The-Shelf Executables"
  • Mike Farb - "SafeSlinger: Applied Ad-Hoc Smartphone Trust Establishment"
  • Lorrie Cranor - "Measuring the Success of Web-based Spoofing Attacks on OS Password-Entry Dialogs" 
  • Collin Jackson - "Web Security" 
  • Rahul Telang - "Competition and Data Breaches"
  • Norman Sadeh - "Mobile Privacy"


Four full faculty researcher presentations have also been made available publicly:

Tuesday, October 30, 2012

An Update on My "Secrets Stolen/Fortunes Lost" Co-Author Christopher Burgess



In case you missed it, my Secrets Stolen, Fortunes Lost co-author, Christopher Burgess, was featured in a recent Forbes Magazine article on What Do Former CIA Spies Do When They Quit the Spy Game?

Upon retirement after thirty years with the Central Intelligence Agency, in various positions including Station Chief, Burgess was awarded the Career Distinguished Intelligence Medal, the highest level of career recognition. After retirement, he took on important roles in the private sector, first as Senior Advisor to Cisco Chief Security Officer (CSO) John Stewart, and then as CSO himself at Atigeo. In the Forbes piece, Christopher shares some insights on his transition:

One [skill] that served me well was my ability to collaborate. That’s a huge skill for a field officer. Everybody on a team has something to contribute and you have to truly recognize and believe that. Another skill is a technique common to planning intelligence operations: building in ‘fall back positions’ and alternate routes while mapping out how to attain a goal. In Agency operations, things go wrong and you have to have backup plans. Also in the corporate world, whether you are selling a widget or consulting, competitors will surprise you. Dealing with that surprise, keeping your cool when all about you are losing theirs, definitely came from Agency training. Another key skill I developed in the Agency was creating loyal workforces, which yield outstanding results. A big part of that is knowing exactly what you are asking someone to do. If you don’t know from personal experience, you cannot be shy about asking them to give you feedback on their probability of success in a risky operation.
(Art Keller, What Do Former CIA Spies Do When They Quit the Spy Game?, Forbes, 10-12-12)

As I have mentioned in previous posts, this year, in CSO Magazine, I have been focusing on interviews with C-level executives who also happen to be thought leaders. (Surely, you have noticed that "C-level executive" and "thought leader" are not straightforward synonyms?)

In the first of these interviews (fourth one coming soon), Christopher and I discussed a range of vital issues, but of course we started with a look back at our collaboration on Secrets Stolen/Fortunes Lost:

My 30,000-foot perspective has not changed since we co-authored Secrets Stolen, Fortunes Lost — every company (emphasis intended) regardless of locale has the potential to fall into the sights of an entity or individual who has designs on their assets. The company can choose to educate or not educate their workforce to this reality. Sadly, I continue to see far too many companies operating as if they are immune from falling into the cross-hairs of someone's targeting scheme because they aren't engaged in national security work — they equate economic espionage and IP theft to only those in the national security vertical. While I don't disagree the nation state vector is one about which we, collectively, must pay attention; the individual, the competitor and the criminal vectors also warrant every company's attention.
(How to meet the challenges of 21st century security and privacy, CSO Magazine, 4-18-12)

NOTE: You can find links to all my CSO Magazine articles in the CyBlog sidebar.

Christopher Burgess is also one of those experts from business and government (in this instance, it's a twofer!) who have delivered CyLab Seminars in the context of my Business Risks Forum. He has given two Seminars, one in 2010 and one this year.

Access to the webcast and online archive of the CyLab Seminar series is an exclusive benefit available only to CyLab Partners. But from time to time, we release select seminars, and excerpts from seminars, via YouTube and iTunes to both promote our program and contribute to the public dialogue on the vital issues of cyber security and privacy.

Here are embedded videos of both of Burgess' CyLab Seminars. Enjoy.

CyLab Business Risks Forum: Christopher Burgess - Collaborative Distributed Inferencing (2012)



CyLab Business Risks Forum: Christopher Burgess - Common Sense Approach to Social Media (2010)

Monday, October 29, 2012

CyLab Researchers Discuss Code 2600, Award-Winning Cyber Crime Documentary with Filmmaker Jeremy Zerechak


Lorrie Cranor, Jeremy Zerechak, Nicolas Christin, Norman Sadeh, CyLab, October 2012

CyLab Researchers Discuss Code 2600, Award-Winning Cyber Crime Documentary with Filmmaker Jeremy Zerechak

Carnegie Mellon University CyLab recently hosted two screenings of CODE 2600, an award-winning full-length documentary on the societal implications of cyber security and cyber risk.

These evening screenings were preceded by a special CyLab Seminar Series event: a panel discussion in which three CyLab researchers, Lorrie Cranor, Nicolas Christin and Norman Sadeh, joined filmmaker Jeremy Zerechak for a discussion of the film and the important issues it highlights.

Dr. Cranor, who moderated the panel, was among numerous cyber security and privacy experts interviewed in the documentary; others included world-class cryptographer and security commentator Bruce Schneier, Black Hat and DEFCON founder Jeff Moss, leading security iconoclast Marcus Ranum, and Jennifer Granick, Director of Civil Liberties at Stanford University's Center for Internet and Society.

Here is the full video of the panel discussion, beginning with a clip from the film:


For more compelling videos on cybersecurity and privacy, visit the CyLab YouTube Channel and the CyLab iTunes Store. The content is free!

Thursday, October 18, 2012

Sample Some Fruits of CyLab Mobility Research: SafeSlinger for Secure Smartphone Communications. It's FREE!



Sample Some Fruits of CyLab Mobility Research, e.g., SafeSlinger, a Mobile App for Secure Smartphone Communications. It's FREE!

By Richard Power


CyLab has seven major, cross-cutting research thrusts; Mobility is one of them. And CyLab research isn't locked away in some ivory tower of abstraction; no, it is impacting security in the here and now.

SafeSlinger, developed by Mike Farb, Adrian Perrig, Jonathan McCune and other CyLab team members, is an excellent example.

This video, available via the CyLab YouTube Channel, illustrates the how and why.



More on SafeSlinger from CyLab Online

CyLab Chronicles: Mike Farb Offers Insights Into SafeSlinger, CyLab's Powerful New Smartphone App

CyLab's New Smartphone App, SafeSlinger, Empowers Users to Strengthen Their Own Security and Privacy

SafeSlinger App for Mobile Devices

SafeSlinger: An Easy-to-use and Secure Approach for Human Trust Establishment

CyLab Chronicles: Q&A with Mike Farb (2011)

CyLab Researchers Release KeySlinger, Security App for iPhone and Android

Tuesday, September 25, 2012

BSIMM4 Released; If You Are Not Part of the Solution, Well Then ...




By Richard Power


My perspective on cyber security goes back to the mid-1990s, and well, yes, my view on its current state (and its likely future) is rather cynical. Why? I was among those who spent the 1990s warning of what was to come, and having those warnings discounted by those entranced by that mass of false memes known as "the conventional wisdom." For the next ten years, I watched the nascent trends I had detected become dominant themes in the field. And in recent years, since the retrospective I offered in 2006, it has become chillingly clear to me that neither sufficient political will nor sufficient corporate accountability exist to address these problems in any meaningful way.

What I do have sustained confidence in, of course, is academic research, particularly that done here at CyLab; such work is one of our greatest hopes, and that is why I am so happy to be a part of such a program.

The only other element of contemporary cyber security that I have sustained confidence in is the work of those few in business and government who have made the existential choice to see and respond to what actually is, and do so in some way that can make a real difference in and of itself.

That's why my CSO articles this year (see them listed on the sidebar) are all interviews with C-level security and privacy executives who are also thought leaders (surely, you have noticed that these two descriptors are not synonyms). It is also why I take the time, annually, to report to you on the release of the latest BSIMM.

Am I implying that BSIMM is THE solution? Of course not. There is no ONE solution. But it is an exemplary effort to mitigate, and to collectively coalesce around mitigation efforts, and as such it is worthy of both your attention and possibly your involvement.

BSIMM4 encompasses ten times the measurement data of the original 2009 study (95 distinct measurements); it includes updated activity descriptions and reports on two new activities (bringing the activity count going forward to 111); and, like BSIMM3, it also includes a longitudinal study.

The project continues to grow in a steady and meaningful way.

The first release of BSIMM, in 2009, included data from nine organizations. By the next release, BSIMM2, in 2010, participation had tripled to thirty organizations.

In 2011, the number of organizations contributing data continued to grow: forty-five organizations were involved in BSIMM3.

This year's iteration, BSIMM4, is built on data from fifty-one firms; these participants represent twelve overlapping verticals, including financial services (19), independent software vendors (19), technology firms (13), cloud (13), media (4), security (3), telecommunications (3), insurance (2), energy (2), retail (2) and healthcare (1).

The list of organizations contributing data is impressive, e.g., Adobe, Aon, Bank of America, Box, Capital One, The Depository Trust & Clearing Corporation (DTCC), EMC, F‐Secure, Fannie Mae,  Fidelity, Google, Intel, Intuit, JPMorgan Chase & Co., Mashery, McKesson, Microsoft, Nokia, Nokia Siemens Networks, QUALCOMM, Rackspace, Salesforce, Sallie Mae, SAP, Scripps Network, Sony Mobile, Standard Life, SWIFT, Symantec, Telecom Italia, Thomson Reuters, Vanguard, Visa, VMware, Wells Fargo and Zynga.

"A Huge Difference"

To provide some insight on this year's BSIMM release, I caught up with its architect, Cigital CTO Gary McGraw, and asked him some questions.

What strikes you in this year's data? Or in the cumulative data so far? What stands out as surprising or deserving of added emphasis?

"The BSIMM continues to grow and evolve as we gather more data. We now have 10 times as many measurements as we started with in 2009. Basically, the data show that if you are not doing software security today you are rapidly falling behind. As an example of what this means, consider that two brand new activities were identified in the BSIMM4 model. The field is growing and progressing."

How would you characterize the impact of BSIMM so far? How would you gauge it? What difference is it making? What difference could it potentially make?

"The BSIMM is making a huge difference in software security as practiced in the commercial marketplace. With 51 firms actively participating, the BSIMM has become a large community of like minded professionals. The power of the community is evident during the (private) conferences that we hold once a year. The professionals who run software security initiatives are eager to share what they know and learn from each other."

Download BSIMM4. Review it with your team, and bring it to your Board of Directors. Participate in the next iteration. Become part of the solution, or at least an example of what one dimension of the solution would look like.

For more information and to access the BSIMM4 study, which is distributed free of charge under a Creative Commons license, please visit: http://bsimm.com/

Related Posts

BSIMM3 Released: "An Excellent Tool for Devising a Software Security Strategy"

Evolving Rapidly, BSIMM2 Offers Key Elements of Successful Software Security Initiatives Shared by 30 Major Corporations

From Biometrics to BSIMM, & "50 Hurricanes Hitting At Once!" -- A Report on the Sixth Annual Partners Conference

CyLab Business Risks Forum: Gary McGraw on Online Games, Electronic Voting and Software Security

Fortify & Cigital Release BSIMM -- Integrating Best Practices from Nine Software Security Initiatives

Monday, July 16, 2012

CyLab's SOUPS Continues Its Ongoing, Deepening Dialogue on What Works and What Doesn't




Last week in Washington, D.C., the important work of the Symposium on Usable Privacy and Security (SOUPS), now in its eighth year, continued to deepen and expand. The annual event, shaped and led by Dr. Lorrie Cranor, Director of CyLab Usable Privacy and Security (CUPS) Lab, generates rich content with which to better inform the development of programs, policies and applications.

SOUPS' technical paper sessions were organized into five categories:

Mobile Privacy and Security

User Perceptions

Authentication

Online Social Networks

Access Control

Here are excerpts from select papers in each of these categories, along with links to the full texts:

Mobile Privacy and Security

Adrienne Porter Felt, Elizabeth Ha, Serge Egelman, Ariel Haney, Erika Chin, David Wagner, Android Permissions: User Attention, Comprehension, and Behavior, UC Berkeley:

We performed two usability studies to address the attention, comprehension, and behavior questions ... Our primary findings are: Attention. In both the Internet survey and laboratory study, 17% of participants paid attention to permissions during a given installation. At the same time, 42% of laboratory participants were unaware of the existence of permissions. Comprehension. Overall, participants demonstrated very low rates of comprehension. Only 3% of Internet survey respondents could correctly answer three comprehension questions. However, 24% of laboratory study participants demonstrated competent—albeit imperfect—comprehension. Behavior. A majority of Internet survey respondents claimed to have decided not to install an application because of its permissions at least once. Twenty percent of our laboratory study participants were able to provide concrete details about times that permissions caused them to cancel installation. Our findings indicate that the Android permission system is neither a total success nor a complete failure.

User Perceptions

Blase Ur, Pedro Giovanni Leon, Lorrie Faith Cranor, Richard Shay, Yang Wang, Smart, Useful, Scary, Creepy: Perceptions of Online Behavioral Advertising, Carnegie Mellon University:

Participants found behavioral advertising both useful and privacy-invasive. The majority of participants were either fully or partially opposed to OBA, finding the idea smart but creepy. However, this attitude seemed to be influenced in part by beliefs that more data is collected than actually is. Participants understood neither the roles of different companies involved in OBA, nor the technologies used to profile users, contributing to their misunderstandings. Given effective notice about the practice of tailoring ads based on users’ browsing activities, participants wouldn’t need to understand the underlying technologies and business models. However, current notice and choice mechanisms are ineffective. Furthermore, current mechanisms focus on opting out of targeting by particular companies, yet participants displayed faulty reasoning in evaluating companies. In contrast, participants displayed complex preferences about the situations in which their browsing data could be collected, yet they currently cannot exercise these preferences.

Authentication

Richard Shay, Patrick Gage Kelley, Saranga Komanduri, Michelle L. Mazurek, Blase Ur, Timothy Vidas, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor, Correct horse battery staple: Exploring the usability of system-assigned passphrases, Carnegie Mellon University:

Our findings suggest that system-assigned passphrases are far from a panacea for user authentication. Rather than committing them to memory, users tend to write down or otherwise store both passwords and passphrases when they are system assigned. When compared to our password conditions, no passphrase condition significantly outperformed passwords in any of our usability metrics, indicating that the system assigned passphrase types we tested fail to offer substantial usability benefits over system-assigned passwords of equivalent strength. We even find that system-assigned passphrases might actually be less usable than system-assigned passwords. For instance, users were able to enter their passwords more quickly and with fewer errors than passphrases of similar strength. While our results in general do not strongly favor system-assigned passwords over system-assigned passphrases or vice versa, we identify several areas for further investigation. For example, larger dictionary sizes do not appear to have a substantial impact on usability for passphrases. This could be leveraged to make stronger passphrases without much usability cost. We also find that lowercase, pronounceable passwords are an unexpectedly promising strategy for generating system-assigned passwords.
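For readers who want a feel for what "equivalent strength" means here, entropy for uniformly chosen secrets is a simple formula: k words drawn from a dictionary of size D give about k*log2(D) bits, just as L characters from an alphabet of size A give L*log2(A) bits. The sketch below is a hypothetical illustration with made-up parameters, not the study's actual conditions or passphrase generator.

```python
# Hypothetical sketch: comparing the entropy of system-assigned secrets.
# The dictionary sizes, lengths, and word list below are illustrative only.
import math
import secrets

def passphrase_bits(dict_size: int, words: int) -> float:
    return words * math.log2(dict_size)      # uniform, independent word choices

def password_bits(alphabet: int, length: int) -> float:
    return length * math.log2(alphabet)      # uniform, independent character choices

# A 3-word passphrase from a 1,024-word dictionary carries ~30 bits,
# roughly comparable to a 5-character random lowercase+digit password (~25.9 bits).
print(passphrase_bits(1024, 3), password_bits(36, 5))

toy_dictionary = ["correct", "horse", "battery", "staple",
                  "orange", "river", "candle", "mountain"]
print("-".join(secrets.choice(toy_dictionary) for _ in range(4)))
```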

Online Social Networks

Thomas Muders, Matthew Smith and Uwe Sander, Helping Johnny 2.0 to Encrypt His Facebook Conversations, Leibniz Universitaet and University of Applied Sciences and Arts, Hannover, Germany:

While there are some solutions available to cryptographically protect Facebook conversations, to the best of the authors' knowledge, there is no widespread use of them. Thus, the aim of our work was to find out why this might be the case and what could be done to help OSN users to encrypt their Facebook conversations. While mechanisms to protect email messaging could in principle be adapted to Facebook conversations in a straightforward manner, previous usability studies show significant problems with the existing email encryption mechanisms. One of our goals was therefore to see if the changes brought about by the OSN paradigm might open up new possibilities for a usable security mechanism protecting private OSN messages. To answer these questions, we conducted multiple studies to evaluate needs surrounding the protection of users' conversations on Facebook and then compared different existing solutions for conversation encryption. Based on these intermediate results, we developed an approach to encrypt Facebook conversations which we tested in two user studies to ascertain whether the solution provided good usability characteristics while at the same time protecting user privacy. The results of the final study show that the OSN paradigm does indeed offer new ways of simplifying security and finding security/usability trade-offs which are acceptable to users.

Access Control

Jason Watson, Andrew Besmer, Heather Richter Lipford, +Your Circles - Sharing Behavior on Google+, University of North Carolina (Charlotte):

This study offers insight into the behavior of Google+ users and how they use group-based sharing. We found participants had strong positive attitudes towards using circles and generally understood the intended purpose of them. However, much of the use of circles was not to protect disclosures from certain people, but to increase the relevance of posting to people. Thus, users are still treating information they post as relatively public. While this may decrease the likelihood of accidentally oversharing, this also means that users will continue to experience the issues from self-censoring, such as the inability to more deeply connect to close friends. Also, despite user understanding, we still saw a disconnect in users' stated desires and behavior. While Google+ lowered the level of effort required to interact in contextually appropriate ways, many continued using strategies for privacy management they had formed by using Facebook and simply posted to all circles. In addition, some participants found that circle use increased the mental demand required for social network interaction. Similar to previous studies, the increased effort led some of our participants to bypass the privacy mechanisms. In the case of this study, this meant collapsing friends into a single circle. Thus, Google+ users are not yet taking full advantage of the capabilities provided by circles for greater control over information flow. However, these results are also heavily influenced by the overall lack of people and activity on the site, which may have reduced the need for the use of circles. Yet, if site usage grows and users add more connections, the burden of managing circles is also likely to grow.

For the full agenda and links to all the papers presented, visit SOUPS 2012.

For more information on CyLab's ongoing research into Usable Privacy and Security, visit CUPS.

See Also

SOUPS 2011 Advances Vital Exploration of Usability and Its Role in Strengthening Privacy and Security

SOUPS 2010: Insight into Usable Privacy & Security Deepens at 6th Annual Symposium

Reflections on SOUPS 2009: Between Worlds, Cultivating Superior Cleverness, Awaiting a Shift in Consciousness

Glimpses into the Fourth Annual Symposium on Usable Security and Privacy (SOUPS 2008)

For information on other aspects of CyLab's vital work, visit
http://www.cylab.cmu.edu/

Saturday, May 26, 2012

CyLab Chronicles: CyLab's Strong Presence at IEEE Security and Privacy 2012 Packs A Wallop

CyLab's Zongwei Zhou talks on Building Verifiable Trusted Path on Commodity X86 Computers





The 33rd annual IEEE Symposium on Security and Privacy, held at the St. Francis hotel in downtown San Francisco (May 20-May 23, 2012), is one of the most respected venues in the field, and once again, numerous papers presented by Carnegie Mellon University CyLab researchers and several sessions chaired by CyLab faculty made for a powerful presence.

Seven papers authored or co-authored by CyLab researchers were presented in the course of the three-day program. In addition to the papers presented, CyLab faculty also chaired three sessions.

Here is the CyLab 2012 IEEE Security and Privacy roster of papers and presenters, with brief excerpts from each paper:

Jiyong Jang talked on ReDeBug: Finding Unpatched Code Clones in Entire OS Distributions, a paper co-authored with Abeer Agrawal and CyLab faculty member David Brumley.

"ReDeBug was designed for scalability to entire OS distributions, the ability to handle real code, and minimizing false detection. ReDeBug found 15,546 unpatched code clones, which likely represent real vulnerabilities, by analyzing 2.1 billion lines of code on a commodity desktop. We demonstrate the practical impact of ReDeBug by confirming 145 real bugs in the latest version of Debian Squeeze packages. We believe ReDeBug can be a realistic solution for regular developers to enhance the security of their code in day-to-day development."

Michael Carl Tschantz presented Formalizing and Enforcing Purpose Restrictions of Privacy Policies, a paper co-authored with Anupam Datta and Jeannette M. Wing.

"Our work makes the following contributions: 1) The first semantic formalism of when a sequence of actions is for a purpose; 2) Empirical validation that our formalism closely corresponds to how people understand the word “purpose”; 3) An algorithm employing our formalism and its implementation for auditing; and 4) The characterization of previous policy enforcement methods in our formalism and a comparative study of their expressiveness. The first two contributions illustrate that planning can formalize purpose restrictions. The next two illustrate that our formalism may aid automated auditing and analysis."

Xin Zhang, who graduated from Carnegie Mellon University and now works for Google, delivered Secure and Scalable Fault Localization under Dynamic Traffic Patterns, co-authored with CyLab Technical Director Adrian Perrig and Chang Lan of Tsinghua University.

"While existing path-based FL protocols aim to identify a specific faulty link (if any), DynaFL localizes data-plane faults to a coarser-grained 1-hop neighborhood, to achieve four distinct advantages. First, DynaFL does not require any minimum duration time of paths or flows in order to detect data-plane faults as path-based FL protocols do. Thus, DynaFL can fully cope with short-lived flows which are popularly seen in modern networks. Second, in DynaFL, a source node does not need to know the exact outgoing path, unlike path-based FL protocols. Hence, DynaFL can support agile (e.g., packet-level) load balancing such as VL2 routing [20] for datacenter networks. Third, a DynaFL router only needs around 4MB per-neighbor state based on our classic Sketch implementation, while a router in a path-based FL protocol requires per-path state. Finally, a DynaFL router only maintains a single secret key shared with the AC, while a router in a path-based FL protocol needs to manage 100 to 10000 secret keys in measured ISP topologies."

Sang Kil Cha spoke on Unleashing Mayhem on Binary Code, co-authored with Thanassis Avgerinos, Alexandre Rebert and David Brumley.

"We presented MAYHEM, a tool for automatically finding exploitable bugs in binary (i.e., executable) programs in an efficient and scalable way. To this end, MAYHEM introduces a novel hybrid symbolic execution scheme that combines the benefits of existing symbolic execution techniques (both online and offline) into a single system. We also present index-based memory modeling, a technique that allows MAYHEM to discover more exploitable bugs at the binary-level. We used MAYHEM to analyze 29 applications and automatically identified and demonstrated 29 exploitable vulnerabilities."

Saranga Komanduri talked on Guess again (and again and again): Measuring password strength by simulating password-cracking algorithms, co-authored with Patrick Gage Kelley, Michelle L. Mazurek, Richard Shay, Tim Vidas, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor, and Julio Lopez.

"We introduced a new, efficient technique for evaluating password strength, which can be implemented for a variety of password-guessing algorithms and tuned using a variety of training sets to gain insight into the comparative guess resistance of different sets of passwords. Using this technique, we performed a more comprehensive password analysis than had previously been possible. We found several notable results about the comparative strength of different composition policies. Although NIST considers basic16 and comprehensive8 equivalent, we found that basic16 is superior against large numbers of guesses. Combined with a prior result that basic16 is also easier for users [46], this suggests basic16 is the better policy choice. We also found that the effectiveness of a dictionary check depends heavily on the choice of dictionary; in particular, a large blacklist created using state-of-the-art password-guessing techniques is much more effective than a standard dictionary at preventing users from choosing easily guessed passwords. Our results also reveal important information about conducting guess-resistance analysis ..."

Hsu-Chun Hsiao presented LAP: Lightweight Anonymity and Privacy, co-authored with Tiffany Hyun-Jin Kim and Adrian Perrig, along with Akira Yamada (KDDI R&D), Sam Nelson and Marco Gruteser (Rutgers University), and Wei Ming (Tsinghua University).

"In this framework, our approach is simple yet effective: by leveraging encrypted packet-carried forwarding state, ISPs that support our protocol can efficiently forward packets towards the destination, where each encrypted ISP-hop further camouflages the source or destination address or its location. Although encrypted packet-carried forwarding state is currently not supported in IP, we design simple extensions to IP that could enable this technology. In particular, our approach is even more relevant in future network architectures, where the design can be readily incorporated. This new point in the design space of anonymity protocols could also be used in concert with other techniques, for example in conjunction with Tor to prevent one Tor node from learning its successor. Despite weaker security proper- ties than Tor, we suspect that LAP contributes a significant benefit towards providing topological anonymity, as LAP is practical to use for all communication.

Zongwei Zhou delivered Building Verifiable Trusted Path on Commodity X86 Computers, co-authored with CyLab Director Virgil Gligor, as well as James Newsome and Jonathan M. McCune.

"Building a general-purpose trusted path mechanism for commodity computers with a significant level of assurance requires substantial systems engineering, which has not been completely achieved by prior work. Specifically, it requires (1) effective countermeasures against I/O attacks enabled by inadequate I/O architectures and potentially compromised operating systems; and (2) small trusted codebases that can be integrated with commodity operating systems. The design presented in this paper shows that, in principle, trusted path can be achieved on commodity computers, and suggests that simple I/O architecture changes would simplify trusted-path design considerably."

-- Richard Power

See Also:

CyLab Research has Powerful Impact on 2010 IEEE Security and Privacy Symposium

Microcosm & Macrocosm: Reflections on 2010 IEEE Symposium on Security and Privacy; Q and A on Cloud, Cyberwar and Internet Freedom with Dr. Peter Neumann

Five Papers Add to Impressive CyLab Presence at ACM CCS 2011

CyLab Research Presentations Impact CHI 2011

USENIX Security 2011: Another Ring on the Tree Trunk for One of Cyber Security's Worthiest Gatherings, and a Strong CyLab Presence

USENIX Security 2011: CyLab Researchers Release Study on Illicit Online Drug Trade and Attacks on Pharma Industry

A Report on 2012 IEEE Symposium on Privacy and Security

Hsu-Chun Hsiao delivers a paper on LAP: Lightweight Anonymity and Privacy


A Report on 2012 IEEE Symposium on Privacy and Security

As noted in previous CyBlog posts, IEEE's annual Symposium on Security and Privacy (a.k.a. "Oakland") is an important event in the realm of academic research on how best to strengthen cyber security and privacy. This year's Symposium lived up to expectations. (And I am not just saying that because Carnegie Mellon University CyLab's imprint was on eight different sessions. See CyLab Chronicles: CyLab's Strong Presence at IEEE Security and Privacy 2012 Packs A Wallop.)

Here are a few glimpses into some sessions that interested me.

Prudent Practices for Designing Malware Experiments

Christian Rossow of the Institute for Internet Security delivered a talk on "Prudent Practices for Designing Malware Experiments," a paper co-authored with Christian J. Dietrich and Norbert Pohlmann, also of the Institute for Internet Security, along with Chris Grier, Christian Kreibich and Vern Paxson of the University of California, Berkeley and the International Computer Science Institute, Berkeley, as well as Herbert Bos and Maarten van Steen of VU University Amsterdam, The Network Institute.

Rossow articulated numerous guidelines on safety, transparency, realism and correct data sets.

I have pulled out an example of one of the guidelines from each category:

Safety: "1) Deploy and describe containment policies. Well-designed containment policies facilitate realistic experiments while mitigating the potential harm malware causes to others over time. Experiments should at a minimum employ basic containment policies such as redirecting spam and infection attempts, and identifying and suppressing DoS attacks. Authors should discuss the containment policies and their implications on the fidelity of the experiments. Ideally, authors also monitor and discuss security breaches in their containment."

Transparency: "4) Mention the system used during execution. Malware may execute differently (if at all) across various systems, software configurations and versions. Explicit description of the particular system(s) used (e.g., 'Windows XP SP3 32bit without additional software installations') renders experiments more transparent, especially as presumptions about the 'standard' OS change with time. When relevant, authors should also include version information of installed software.

Realism: "5) Consider allowing Internet access to malware. Deferring legal and ethical considerations for a moment, we argue that experiments become significantly more realistic if the malware has Internet access. Malware often requires connectivity to communicate with command-and-control (C&C) servers and thus to expose its malicious behavior. In exceptional cases where experiments in simulated Internet environments are appropriate, authors need to describe the resulting limitations.

Correct data sets: "2) Balance datasets over malware families. In unbalanced datasets, aggressively polymorphic malware families will often unduly dominate datasets filtered by sample-uniqueness (e.g., MD5 hashes). Authors should discuss if such imbalances biased their experiments, and, if so, balance the datasets to the degree possible. explicitly if they decide to blend malicious traces with benign background activity."

Detecting Hoaxes, Frauds, and Deception in Writing Style Online

Sadia Afroz of Drexel University delivered a talk on "Detecting Hoaxes, Frauds, and Deception in Writing Style Online," a paper co-authored with colleagues Michael Brennan and Rachel Greenstadt.

This fascinating paper used a compelling story from recent headlines, i.e., the strange tale of Amina, the "Gay Girl in Damascus," whose blog captured the attention of the world during the early days of the Arab Spring, only to be later revealed as the work of Thomas MacMaster, a 40-year-old American male.

In reporting on the research, Afroz and her colleagues concluded:

"Stylometry is necessary to determine authenticity of a document to prevent deception, hoaxes and frauds. In this work, we show that manual counter-measures against stylometry can be detected using second-order effects. That is, while it may be impossible to detect the author of a document whose authorship has been obfuscated, the obfuscation itself is detectable using a large feature set that is content-independent. Using Information Gain Ratio, we show that the most effective features for detecting deceptive writing are function words. We analyze a long-term deception and show that regular authorship recognition is more effective than deception detection to find indication of stylistic deception in this case."

As Afroz and her colleagues also point out, such research has implications for adversarial learning in general:

"Machine learning is often used in security problems from spam detection, to intrusion detection, to malware analysis. In these situations, the adversarial nature of the problem means that the adversary can often manipulate the classifier to produce lower quality or sometimes entirely ineffective results. In the case of adversarial writing, we show that using a broader feature set causes the manipulation itself to be detectable. This approach may be useful in other areas of adversarial learning to increase accuracy by screening out adversarial inputs."

Oakrams: Searching Through Strands of Oakland's DNA

The three-day event culminated in an all-star panel on "How can a Focus on 'Science' Advance Research in Cyber Security?" Moderated by Carl Landwehr, the panel, including Alessandro Acquisti (Carnegie Mellon), Dan Boneh (Stanford), Joshua Guttman (Worcester Polytechnic Institute), Wenke Lee (Georgia Tech) and Cormac Herley (Microsoft), debated whether or not the realm of cyber security as currently constituted should be, or already is, "science." But honestly, in spite of some sparkling insights, particularly from Acquisti and Herley, this debate has a certain dog-chasing-its-tail futility to it. It is the kind of debate that becomes central after it is already too late to grasp the reality of a situation. It reminded me of a sage perspective delivered back in the 1990s by the legendary Donn B. Parker: "Information Security, A Folk Art in Need of An Upgrade." Parker was spot-on on that, as well as on other issues.

So before the theme music to the Bill Murray film Groundhog Day once again starts to rise up in my psyche, let me turn away from the august panel and its erudite dialogue, and end this report from Oakland with a "short talk" in which Hilarie Orman (Purple Streak, Inc.) shared her "Oakrams."

I suggest there is at least as much import in them as in the debate over "cyber security" as "science."

Orman was kind enough to explain her exercise to me.

"I call them 'Oakrams' (the conference used to be called "Oakland" informally, and the software is based on an open source system call 'WordCram.' I modified WordCram so that I could control the coloring based on the word position, and so that I could reuse a word placement while changing size and color. This resulted in two sequences of images. I preprocessed the text of the papers so that for each year I had an ordered list of all non-trivial words that occurred 20 times or more. In the first sequence, for each year of the conference, I arranged the words so that the size and color intensity was proportional to word's frequency for that year. I modified WordCram to get word arrangements that were both denser and more uniform that its usual algorithms could produce. The word coloring varied uniformly over a small color range from top to bottom and left to right. Each year had slightly different range, overlapping with the previous year, and drifting from yellow through green in 1980 to the final blue through reddish yellow in 2012. The word arrays seemed endlessly interesting to me. Some words are loaded with context in the security world, and their presence or absence in an array was a source for reflection. As a small example, the word 'alice' appeared briefly in one or two years, but never rose to prominence. These arrays showed that 'system,' 'information,' and 'security' were usually the most frequent words in each year. This wasn't surprising, but I wanted to get more information about the words that had varying popularity, and I wondered if the words could point out trends in topics. That led to the next phase. The second sequence of images used only 50 words. These were the words that were the 'most popular' over the 33 years. For each year, each word had the same placement in the visual array, but the size and color varied. The size of a word was proportional to its frequency for that year. The color hue varied from red to blueish-purple, where red meant the word had not occurred in the previous 5 years, and the amount of blue represented its average frequency during the previous 5 years. As words moved in and out of popularity their size and color and opacity varied to reflect their usage. It was interesting to see how long it took for networking terms like 'message,' 'packet,' and 'node' took to get traction. I was amazed that "privacy" has rarely been a major term, despite it being part of the 'Security and Privacy' symposium's name! And, to me, it was quite significant that 'application' and "attack" have become major terms --- we used to focus on provably secure operating systems, now we try to protect individual applications against specific attacks. I'm a calligrapher and student of typography; the wordcrams are artistic objects that I enjoy, but they carry some fragments of meaning, like pieces of DNA."

-- Richard Power

See Also:

CyLab Research has Powerful Impact on 2010 IEEE Security and Privacy Symposium

Microcosm & Macrocosm: Reflections on 2010 IEEE Symposium on Security and Privacy; Q and A on Cloud, Cyberwar and Internet Freedom with Dr. Peter Neumann

Five Papers Add to Impressive CyLab Presence at ACM CCS 2011

CyLab Research Presentations Impact CHI 2011

USENIX Security 2011: Another Ring on the Tree Trunk for One of Cyber Security's Worthiest Gatherings, and a Strong CyLab Presence

USENIX Security 2011: CyLab Researchers Release Study on Illicit Online Drug Trade and Attacks on Pharma Industry

Monday, May 14, 2012

New YouTube & iTunes Video - CyLab Business Risks Forum: Michelle Dennedy on Privacy by Design for our Technology and Our Future - Why the Future Still Needs Us


Every week during the school year, the CyLab Seminar Series provides updates on the latest research by our faculty, as well as by visiting scholars from other prestigious institutions. In addition to these academic research presentations, occasional Business Risks Forum events feature security experts from business and government, who deliver invaluable insights on the facts on the ground in the operational environment.

Access to our weekly webcasts and the online archive of the CyLab Seminar Series is one of the exclusive benefits of membership in CyLab's private sector consortium. From time to time, CyLab offers rare glimpses into its Seminar Series with the release of select videos via both the CyLab YouTube Channel and CyLab at iTunesU.

On April 16th, 2012, CyLab presented Michelle Dennedy, VP and Chief Privacy Officer for McAfee, in a CyLab Seminar Series Business Risks Forum event. Dennedy spoke on Privacy by Design for our Technology and Our Future - Why the Future Still Needs Us.

CyLab Business Risks Forum: Privacy by Design for our Technology and Our Future

Thursday, April 26, 2012

Mike Farb Offers Insights Into SafeSlinger, CyLab's Powerful New Smartphone App

NOTE: This CyBlog story is cross-posted as both a CyLab Chronicle on our public site and as an issue of CyLab Seminar Notes on our partners-only portal. Access to CyLab Seminar Series webcasts, and to the full archive of Seminar videos, is an exclusive benefit of membership in the CyLab Partners program. But from time to time, we release individual videos both to highlight the vital nature of CyLab research and to promote the great value of partnering with us.

As part of the CyLab Seminar Series for 2011-2012, CyLab Research Programmer Mike Farb spoke on SafeSlinger: Applied Ad-hoc Smartphone Trust Establishment 

In these four brief transcribed excerpts from Farb’s talk, he articulates the need SafeSlinger was developed to address, takes us on a quick step-by-step tour of how it works, outlines ongoing and future research, and summarizes what SafeSlinger is and what it delivers.

These excerpts are meant merely to whet your appetite and encourage you to view the full seminar, which you will find embedded below.

In the course of the full talk, Farb also discusses SPATE, an earlier CyLab research project that SafeSlinger developed out of; he also touches on how Group Diffie-Hellman works, contrasts SafeSlinger with BUMP, and explores the challenges of verification for large groups, as well as delving into other aspects of the project.

SafeSlinger Answers A Need
People want to meet, and then securely communicate later. It could be researchers at a conference, or business people having lunch, or students at a party. But we don’t necessarily have a commonly trusted infrastructure. We may not all belong to a large-scale corporate or institution-wide key infrastructure or certificate authority. So we want to be able to create a cryptographic key and exchange it in a secure fashion … Prior solutions include PGP key signing parties and PKI. The PGP key signing party is one way for people to meet digitally, and ensure each other of their actual presence, because we are all in the room, we can run math on the keys that we will be sharing together to make sure they are the ones that we are going to eventually share digitally, but it requires some sophisticated knowledge to do this. And with PKI, we might publish our keys on the key server, but then we are trying to validate that people are who they say they are digitally, so we don’t have the combination of digital and physical … We want to provide secure operations even with careless users and powerful local adversaries who can monitor our messages and potentially alter our messages … We want to be able to detect group members attempting to impersonate other group members. We want to eliminate the need to count in large groups. … We want to enable remote operation, so that we can also do this over the phone. (We can assure each other of our presences, because we can recognize our voices in real time.) We want no information leaked to outsiders, even if the protocol fails. …

A Brief Tour Of How The User Interacts With SafeSlinger
When you start out the application, it’s going to ask you to select your contact data from your address book. … We generate a long-term private key used in the application, and we ask you to choose a pass phrase. … On the first screen of the exchange, I can select the information I want to share with everyone in the group, e.g., phone number, e-mail address. There are a couple of items, denoted by little Lock icons, and these two values, SafeSlinger Push and SafeSlinger PubKey, are values from the messaging side of the application to the exchange API; we want to make sure that this information goes across, we don’t want to let the users have the opportunity to de-select them. They can de-select their photo, or not send their phone number, and that’s fine, but the key is critical. Then you click on "Begin Exchange." We ask to confirm the number of users … The server sends us back a group ID, we are asked to find out whatever the lowest number is between everyone in our group, enter it and then continue … So now that we have grouped ourselves, we know which of the various people hitting the server are actually the people in real-time on the phone with us, or in the room with us. … We construct our visual hash, and it is represented on the verification screen as twenty-four bits of the PGP word list, which is this list of five hundred and twelve words put into two columns, one column of two syllable words and one column of three syllable words, and then we represent text data, or binary data … each eight bits of that data get a word, and we alternate the even and odd lists, between the two syllable and three syllable words. Instead of just having one hash, we want to prevent people from just clicking "OK," we want to make sure they compare their list with the lists on the other phones. [All phones must match one of the three-word phrases. Compare, then pick the matching phrase.] … It is distributed randomly on everyone’s phones, so it is not always option one. People are forced to make a choice, forced to compare. At the end of the protocol, you get a list of whom you have just exchanged information with, and you are just told continue on, and import it into your address book. So now that we have exchanged keys, you have these keys in the list of people I can send messages to, and it has that Push token, we use it as a mechanism to deliver messages on the other person’s phone. You can select someone, type some secret message, and send it. You can send attachments too. We integrate with the Android sharing system. ….
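To make the word-list encoding concrete: three bytes of hash map to three words, alternating between the two-syllable list (even byte positions) and the three-syllable list (odd positions). The sketch below uses made-up 256-entry stand-in lists rather than the real PGP word lists, purely to illustrate the mapping.

```python
# Minimal sketch of rendering a 24-bit hash as three PGP-style words.
# The two 256-entry word lists here are stand-ins; SafeSlinger uses the real PGP word lists.
EVEN_WORDS = [f"even{i:03d}" for i in range(256)]   # two-syllable list (even byte positions)
ODD_WORDS = [f"odd{i:03d}" for i in range(256)]     # three-syllable list (odd byte positions)

def words_from_hash(hash24: bytes) -> list[str]:
    assert len(hash24) == 3                          # 24 bits = 3 bytes = 3 words
    return [(EVEN_WORDS if pos % 2 == 0 else ODD_WORDS)[byte]
            for pos, byte in enumerate(hash24)]

# Each user's phone shows the same three words if (and only if) the hashes agree.
print(words_from_hash(bytes([0x3A, 0xC1, 0x07])))
```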


A Glimpse Into Ongoing SafeSlinger Research
Some work that we are in the middle of, and are excited about: The iPhone version of the messaging portion, we already have the exchange portion available for people. But we want to get to the point where we are doing cross-platform messaging between Android and iPhone. We want to introduce a feature called, “Secure Introductions.” So I have this system of exchanging cryptographic keys securely between groups of people, and I have a method of sending messages using those long-term keys. If A and B create an exchange, and B and C create a separate exchange, I should be able to forward the public key exchange, from B’s perspective, to A and C, and sort of extend the web of trust. So that’s one of the things we are going to try to implement. In terms of advanced features, some users have asked to be able to import and export their existing public keys. At the moment, the messaging application creates its own private key, just for ease of use since most users don’t use public key infrastructure. We really want to get to the point where we can do some open source collaboration, and really work with some of these systems, e.g., Android Privacy Guard … And of course, we would like to implement more platforms, as they get more popular …

Summary
What we have created is this Internet-based communication, which is fast and reliable. We have been able to maintain user privacy. Only other group members learn the exchange information. The server doesn’t learn information or location. We have created user features to make SafeSlinger resistant to user error. It is a simple protocol, with minimal user actions to perform.

 Visit the instructions page for step-by-step directions on how to use SafeSlinger.


 

 See Also:

 CyLab's New Smartphone App, SafeSlinger, Empowers Users to Strengthen Their Own Security and Privacy

 SafeSlinger App for Mobile Devices

 SafeSlinger: An Easy-to-use and Secure Approach for Human Trust Establishment

 CyLab Chronicles: Q & A with Mike Farb (2011)

 CyLab Researchers Release KeySlinger, Security App for iPhone and Android