Wednesday, August 27, 2014

CMU CyLab Researcher Wins USENIX Security 2014 Best Student Paper Award; Seven Other CMU Papers Delivered



As at other leading conferences in the vital fields of cyber security and privacy, Carnegie Mellon University (CMU) CyLab researchers distinguished themselves at USENIX Security 2014, the 23rd USENIX Security Symposium, held in San Diego, California, August 20-22, 2014.

Three hundred fifty papers were submitted to the USENIX program committee, and the ensuing process, which involved 1,340 reviews and 1,627 follow-up comments, resulted in sixty-seven papers being accepted for publication, including several from CMU CyLab researchers.

Most notably, CMU's Kyle Soska won one of two Best Student Paper Awards for Automatically Detecting Vulnerable Websites Before They Turn Malicious, co-authored with CyLab faculty member Nicolas Christin.

Additionally, CyLab faculty member David Brumley co-authored three of the published papers:

BYTEWEIGHT: Learning to Recognize Functions in Binary Code, with Tiffany Bao, Jonathan Burket, and Maverick Woo of Carnegie Mellon University and Rafael Turner, University of Chicago.

Blanket Execution: Dynamic Similarity Testing for Program Binaries and Components, with Manuel Egele, Maverick Woo and Peter Chapman.

Optimizing Seed Selection for Fuzzing, with Alexandre Rebert, Carnegie Mellon University and ForAllSecure; Sang Kil Cha and Thanassis Avgerinos of Carnegie Mellon University; Jonathan Foote and David Warren of Software Engineering Institute CERT; Gustavo Grieco of Centro Internacional Franco Argentino de Ciencias de la Información y de Sistemas (CIFASIS) and Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET).

Brumley also delivered a paper at one of the workshops that preceded the main body of the Symposium itself, PicoCTF: A Game-Based Computer Security Competition for High School Students, co-authored with Peter Chapman and Jonathan Burket, also from CMU.

CyLab Usable Security and Privacy (CUPS) Lab director Lorrie Cranor teamed up with Cormac Herley, Principal Researcher in the Machine Learning Department at Microsoft Research, and several colleagues, Saranga Komanduri and Richard Shay of CMU and Stuart Schechter of Microsoft Research, to co-author Telepathwords: Preventing Weak Passwords by Reading Users' Minds.

Two other CMU-authored papers were presented at USENIX Security 2014:

The Long "Taile" of Typosquatting Domain Names co-authored by Janos Szurdi, Carnegie Mellon University; Balazs Kocso and Gabor Cseh, Budapest University of Technology and Economics; Jonathan Spring, Carnegie Mellon University; Mark Felegyhazi, Budapest University of Technology and Economics; and Chris Kanich, University of Illinois at Chicago. 

Password Managers: Attacks and Defenses, co-authored by David Silver, Suman Jana, and Dan Boneh, Stanford University; and Eric Chen and Collin Jackson, Carnegie Mellon University.

Related Posts

Thursday, August 21, 2014

CMU CyLab PPP and CUPS teams win “Capture the Flag” and “Crack Me If You Can" contests at DEFCON 22

Members of CMU CyLab's Plaid Parliament of Pwning (PPP)
Carnegie Mellon University demonstrated its cyber prowess at DEFCON 22 by winning the “Capture the Flag” and “Crack Me If You Can” contests ...

Carnegie Mellon’s computer hacking team, the Plaid Parliament of Pwning (PPP), took first place for the second consecutive year in the Capture the Flag (CTF) contest. Globally, hundreds of teams battle throughout the year for one of 20 slots at DEFCON’s CTF competition, which has been called the “World Series of hacking.”

“Our team competed against universities and also against large defense contractors. This win is a huge accomplishment for our team,” says team advisor David Brumley, an associate professor of Electrical and Computer Engineering and the technical director of Carnegie Mellon CyLab.

The PPP team has qualified for DEFCON in each of the last three years, winning first place in 2013 and again in 2014. The PPP team is part of CyLab's Undergraduate Computer Security Research group, and it consists of 35 members from the College of Engineering and the School of Computer Science.

At DEFCON 22, the team was limited to eight members: George Hotz, Ryan Goulden, Tyler Nighswander, Brian Pak, Alex Reece, Max Serrano, Andrew Wesie, Ricky Zhou ...

A second team, this one from CyLab Usable Privacy and Security (CUPS) Lab, and simply named “cmu,” won the Street Division category in the “Crack Me If You Can” contest. In this two-day event sponsored by KoreLogic Security, teams exposed or “cracked” encrypted passwords.

"The students leveraged what they had learned from our research studies to develop their winning strategy," CUPS Director Lorrie Cranor says. "It is remarkable for a first-time team to win this competition." Cranor and fellow CyLab faculty members Lujo Bauer and Nicolas Christin, along with their students, are responsible for a growing body of work on passwords.


"Black Badge" bestowed upon CTF winners guarantees lifetime free entry to DEFCON
See Also


CyLab's David Brumley and His Student Hacker Team Featured on PBS NEWSHOUR and CNBC

Carnegie Mellon's Capture the Flag Team Excels in Hackjam Competition

CUPS Password Research Studies



Sunday, July 13, 2014

A Decade Into Its Vital Work, Another Savory SOUPS, A Report from the 10th Annual Symposium On Usable Privacy and Security



CMU CyLab's Dr. Lorrie Cranor, Founder of CUPS and SOUPS preps
for welcoming remarks at SOUPS 2014


The CyLab Usable Privacy and Security Laboratory (CUPS) 10th Annual Symposium on Usable Privacy and Security (SOUPS) was hosted by Facebook at its headquarters in Menlo Park, California (7/9/14 - 7/11/14). CUPS Director Lorrie Cranor welcomed the attendees, noting record-breaking numbers in both attendance and papers submitted. For three full days of proceedings, hundreds of researchers from business, academia and government communed amid the proliferation of signage that has come to characterize the social media giant's corporate culture: e.g., "Ship Love," "Ruthless Prioritization," "Demand Success," and Nelson Mandela, arms outstretched, with the caption "Open the Doors." (Not so subliminal messaging.)
 
Perhaps more poignantly than any previous SOUPS keynote, Christopher Soghoian of American Civil Liberties Union (ACLU) articulated the vital nature of research into usable privacy and security. Putting flesh and blood on these issues, Soghoian used examples from the shadow world of investigative reporters and whistle-blowers to highlight the need for privacy and security software that is not only robust but eminently usable. One great benefit of the revelations brought forth by Glenn Greenwald in the Edward Snowden affair, Soghoian opined, is that there has been increased crypto adoption by journalists.
But the heightened engagement has also brought long-standing problems into a harsh new light. For example, Soghoian told SOUPS attendees, many investigative journalists using PGP still do not realize subject lines are not encrypted. "The best our community has to offer sucks, the usability and the default values suck," Soghoian declared. "The software is not protecting journalists and human rights activists, and that's our fault as researchers."

As contributing market factors for why we still don't have usable encryption, Soghoian cited potential data loss ("telling your customer that they've just lost every photo of their children is a non-starter"), current business models, and of course, government pressure.

Facebook HQ Signage, 1 Hacker Way, Menlo Park
In other parts of his very substantive keynote, Soghoian touched on consumer issues related to the efficacy of privacy and security. He elucidated the privacy and security differences between the iPhone and Android: "The privacy and security differences ... are not advertised." He also shed light on a new aspect of the growing gap between rich and poor: "security by default for the rich," and "insecurity by default for the poor." "Those who are more affluent get the privacy benefits without shopping around," he explained, because the discounted, mass-marketed versions of software often do not have the same full-featured privacy and security as the more expensive business or professional versions.

[NOTE: Full-length video of Soghoian's keynote is available via the CyLab YouTube Channel.]

Several awards were also announced during the opening sessions, including:

The 2014 IAPP SOUPS Privacy Award for the paper with the most practical application in the field of privacy went to Would a Privacy Fundamentalist Sell Their DNA for $1000...If Nothing Bad Happened as a Result? The Westin Categories, Behavioral Intentions, and Consequences authored by Allison Woodruff, Vasyl Pihur, Sunny Consolvo, and Lauren Schmidt of Google; and Laura Brandimarte and Alessandro Acquisti of Carnegie Mellon University.

The 2014 SOUPS Impact Award for a SOUPS paper "published between 2005 and 2009 that has had a significant impact on usable privacy and security research and practice" went to Usability of CAPTCHAs or Usability Issues in CAPTCHA Design authored in 2008 by Jeff Yan and Ahmad Salah El Ahmad of Newcastle University (UK).

Two Distinguished Papers awards were presented:

Understanding and Specifying Social Access Control Lists, authored by Mainack Mondal, Bimal Viswanath, and Krishna P. Gummadi of the Max Planck Institute for Software Systems (MPI-SWS), and Yabing Liu and Alan Mislove of Northeastern University.

Crowdsourcing Attacks on Biometric Systems, authored by Saurabh Panjwani, an independent consultant, and Achintya Prakash of the University of Michigan.

Carnegie Mellon University (CMU), home to the CyLab Usable Privacy and Security (CUPS) Lab and the MSIT-Privacy Engineering Master's Program, was well represented in the proceedings.

In addition to the IAPP SOUPS Privacy Award winning "Would a Privacy Fundamentalist Sell Their DNA for $1000...If Nothing Bad Happened as a Result? The Westin Categories, Behavioral Intentions, and Consequences," co-authored with Google researchers, several other CMU papers were presented:

Parents’ and Teens’ Perspectives on Privacy In a Technology-Filled World, authored by Lorrie Faith Cranor, Adam L. Durity, Abigail Marsh, and Blase Ur, Carnegie Mellon University

Privacy Attitudes of Mechanical Turk Workers and the U.S. Public, authored by Ruogu Kang, Carnegie Mellon University, Stephanie Brown, Carnegie Mellon University and American University, Laura Dabbish and Sara Kiesler, Carnegie Mellon University

CMU researcher Ruogu Kang presenting
Privacy Attitudes of Mechanical Turk Workers and the U.S. Public
Harder to Ignore?, authored by Cristian Bravo-Lillo, Lorrie Cranor, and Saranga Komanduri, Carnegie Mellon University; Stuart Schechter, Microsoft Research; and Manya Sleeper, Carnegie Mellon University

The Effect of Social Influence on Security Sensitivity, authored by Sauvik Das, Tiffany Hyun-Jin Kim, Laura A. Dabbish, and Jason I. Hong, Carnegie Mellon University

Modeling Users’ Mobile App Privacy Preferences: Restoring Usability in a Sea of Permission Settings, authored by Jialiu Lin, Bin Liu, Norman Sadeh, and Jason I. Hong, Carnegie Mellon University

The full proceedings of SOUPS 2014 are available via USENIX.

-- Richard Power

Check out CyLab CyBlog's Archive of SOUPS Coverage

A Distinguished Paper Award for CUPS, and Other News from the Ninth Annual SOUPS 2013

CyLab's SOUPS 2012 Continues Its Ongoing, Deepening Dialogue on What Works and What Doesn't

SOUPS 2011 Advances Vital Exploration of Usability and Its Role in Strengthening Privacy and Security  

 SOUPS 2010: Insight into Usable Privacy & Security Deepens at 6th Annual Symposium

Reflections on SOUPS 2009: Between Worlds, Cultivating Superior Cleverness, Awaiting a Shift in Consciousness

Glimpses into the Fourth Annual Symposium on Usable Security and Privacy (SOUPS 2008)

Mike Farb of CyLab's SafeSlinger project presents during the 2014 EFF Crypto Usability Prize (EFF CUP)
Workshop on Day One of SOUPS 2014


Wednesday, June 25, 2014

CyLab Researchers In 5-Year NSF-Funded Project to Overcome Challenges in Synchronizing Time for Cyber-Physical Systems

A Sundial in Saint-Rémy-de-Provence, France (Source: Wikimedia)

NOTE: This CyLab Chronicles is cross-posted on both the Carnegie Mellon CyLab public site and the CyLab partners-only portal.

The National Science Foundation (NSF) recently announced "a five-year, $4 million award to tackle the challenge of synchronizing time in cyber-physical systems (CPS)--systems that integrate sensing, computation, control and networking into physical objects and infrastructure." According to the NSF, the award "will support a project called Roseline, which seeks to develop new clocking technologies, synchronization protocols, operating system methods, as well as control and sensing algorithms ... Examples of cyber-physical systems include autonomous cars, aircraft autopilot systems, tele-robotics devices and energy-efficient buildings, among many others." (NSF, 6-13-14)

Two CMU CyLab researchers, Raj Rajkumar and Anthony Rowe, are members of the Roseline project team. Mani Srivastava of UCLA and Rajesh Gupta of UC San Diego will serve as Principal Investigators. The team also includes Sudhakar Pamarti of UCLA, João Hespanha of UC Santa Barbara and Thomas Schmid of the University of Utah.

CyLab News caught up with both CyLab researchers to get their perspectives on the scope and significance of Project Roseline.

"We all know that time is a fundamental attribute," Rajkumar said. "Computers maintain time by using local clocks, and synchronize these values among themselves or with reliable clock sources on the internet. However, the accuracies of these synchronized clocks are very highly system- and network-dependent. This in turn causes a wide range of applications, from smart grid systems to robotic systems like autonomous vehicles, to be customized and tweaked. In other words, there do not yet exist core foundations to specify, implement and utilize a notion of time whose accuracy can be specified, controlled and achieved - we refer to this as the Quality of Time. This project will develop the foundations for managing the Quality of Time in computer systems."

"There is a notion of time that transcends all layers of modern computer systems," Rowe adds. "At the lowest-level you have clocks driving hardware. Above that you have operating systems and networking that use time to manage how and more importantly when resources should be consumed. At the very top you have applications ranging from GPS to banking transactions that rely on timing. Our goal is to develop mechanisms and interfaces for improving what we call Quality of Time (QoT) aware applications. Specifically at CyLab we will be working on operating system abstractions and approaches to network coordination that improve energy-efficiency and reliability of networked embedded systems. Our target applications range from secure smart-grid monitoring to robotic systems like autonomous vehicles."
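Rajkumar's point about clock synchronization being network-dependent can be made concrete with the classic four-timestamp offset estimate used in NTP-style synchronization. This is a generic textbook illustration, not Project Roseline code; the function name and example values are invented:

```python
# Classic NTP-style clock offset/delay estimation from four timestamps.
# (Generic illustration of the synchronization idea, not Roseline code.)

def ntp_offset_delay(t0, t1, t2, t3):
    """
    t0: client send time, t1: server receive time,
    t2: server send time, t3: client receive time (seconds, each on its own clock).
    Returns (estimated server-vs-client clock offset, round-trip delay).
    The offset estimate assumes symmetric one-way network delay, which is
    exactly the assumption that breaks down on real, variable networks.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: server clock 5 s ahead, 0.2 s one-way delay each direction,
# 0.1 s server processing time.
offset, delay = ntp_offset_delay(10.0, 15.2, 15.3, 10.5)  # offset ~5.0 s, delay ~0.4 s
```

Asymmetric delays bias the offset estimate, which is one reason accuracy ends up "very highly system- and network-dependent," as Rajkumar notes.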

Full Text of NSF Press Release

Monday, June 9, 2014

CyLab Chronicles: Anupam Datta on Impact and Implications of IEEE Award Winning Paper, "Bootstrapping Privacy Compliance in Big Data Systems"



NOTE: This CyLab Chronicles is cross-posted on both the Carnegie Mellon CyLab public site and the CyLab partners-only portal.

Anupam Datta is an Associate Professor at Carnegie Mellon University (CMU) and a leading researcher at CMU CyLab. CyLab Chronicles recently sat down with him to discuss his team's latest successes and the future direction of their vital research. Here is our conversation.

In the 21st Century, the privacy space has become a very challenging one - for government, for business and for each of us as individuals. This very challenging space is fraught with complex issues. Which particular issue does the work in "Bootstrapping Privacy Compliance in Big Data Systems" seek to address?

Anupam Datta: To allay privacy concerns, Web services companies, such as Facebook, Google and Microsoft, all make promises about how they will *use* personal information they gather. But ensuring that *millions of lines of code* in their systems *respect* these *privacy promises* is a challenging problem. Recent work with my PhD student in collaboration with Microsoft Research addresses this problem.  We present a workflow and tool chain to automate privacy policy compliance checking in big data systems. The tool chain has been applied to check compliance of over a million lines of code in the data analytics pipeline for Bing, Microsoft’s Web search engine. This is the first time automated privacy compliance analysis has been applied to the production code of an Internet-scale system. The paper, written jointly with my PhD student Shayak Sen and a team from Microsoft Research, was presented at the 2014 IEEE Symposium on Security and Privacy and recognized with a Best Student Paper Award.

Tell us something about the system you and your team designed, and briefly describe its components, Legalease and Grok.

Datta: Central to the design of the workflow and tool chain are (a) *Legalease* — a language that allows specification of privacy policies that impose restrictions on how user data flows through software systems; and (b) *Grok* — a data inventory that annotates big data software systems (written in the Map-Reduce programming model) with Legalease’s policy datatypes, thus enabling automated compliance checking. Let me elaborate.
Privacy policies are often crafted by legal teams, while the software that has to respect these policies is written by developers. An important challenge is thus to design privacy policy languages that are *usable* by legal privacy teams, yet have precise *operational meaning* (semantics) that software developers can use to restrict how their code operates on personal information of users. The design of Legalease was guided by these dual considerations. Legalease builds on prior work from my research group that shows that privacy policies often involve *nested allow-deny information flow rules with exceptions* (see DeYoung et al. 2010, Garg et al. 2011 for the first complete logical specification and audit of the HIPAA Privacy Rule for healthcare privacy in the US). For example, a rule might say: "IP address will not be used for advertising, except it may be used for detecting abuse. In such cases it will not be combined with account information." Our hypothesis was that such rules match the *mental model* of legal privacy policy authors. The results of our *user study*, involving participants from the legal privacy team and privacy champions at Microsoft (who sit between the legal privacy team and software developers), provide evidence in support of our hypothesis: after a short tutorial, the participants were able to encode the entire Bing policy pertaining to how users' personal information will be used on the Bing servers (9 policy clauses) in about 15 minutes with high accuracy.
Software systems that perform data analytics over personal information of users are often written without a technical connection to the privacy policies that they are meant to respect. Tens of millions of lines of such code are already in place in companies like Facebook, Google, and Microsoft. An important challenge is thus to *bootstrap* existing software for privacy compliance. Grok addresses this challenge. It annotates software written in programming languages that support the Map-Reduce programming model (e.g., Dremel, Hive, Scope) with Legalease’s policy datatypes. We focus on this class of languages because they are the languages of choice in industry for writing data analytics code. A simple way to conduct the bootstrapping process would be to ask developers to manually annotate all code with policy datatypes (e.g., labelling variables as IPAddress, programs as being for the purpose of Advertising etc.). However, this process is too labor-intensive to scale. Instead, we develop a set of techniques to automate the bootstrapping process. I should add that the development of Grok was led by Microsoft Research and was underway before our collaboration with them began.
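The nested allow-deny-with-exceptions structure Datta describes can be sketched in a few lines of Python. This is a hypothetical illustration of the rule shape only, not the actual Legalease language or its semantics; all class and field names are invented:

```python
# Illustrative sketch of nested allow-deny information-flow rules with
# exceptions, in the spirit of Legalease. Not the real Legalease language.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Rule:
    effect: str                                    # "ALLOW" or "DENY"
    datatypes: set = field(default_factory=set)    # empty = any datatype
    purposes: set = field(default_factory=set)     # empty = any purpose
    exceptions: List["Rule"] = field(default_factory=list)

    def matches(self, datatypes, purpose):
        return (not self.datatypes or bool(self.datatypes & datatypes)) and \
               (not self.purposes or purpose in self.purposes)

def evaluate(rule, datatypes, purpose):
    """Innermost matching exception wins; otherwise the rule's own effect.
    Returns None if the rule does not cover this (datatypes, purpose) pair."""
    if not rule.matches(datatypes, purpose):
        return None
    for exc in rule.exceptions:
        verdict = evaluate(exc, datatypes, purpose)
        if verdict is not None:
            return verdict
    return rule.effect

# "IP address will not be used for advertising, except it may be used for
#  detecting abuse; in such cases it will not be combined with account info."
policy = Rule("DENY", {"IPAddress"}, {"Advertising", "AbuseDetection"}, exceptions=[
    Rule("ALLOW", {"IPAddress"}, {"AbuseDetection"}, exceptions=[
        Rule("DENY", {"AccountInfo"}, {"AbuseDetection"}),
    ]),
])

evaluate(policy, {"IPAddress"}, "Advertising")                  # "DENY"
evaluate(policy, {"IPAddress"}, "AbuseDetection")               # "ALLOW"
evaluate(policy, {"IPAddress", "AccountInfo"}, "AbuseDetection")  # "DENY"
```

The nesting mirrors how the quoted English rule reads: a blanket denial, a carve-out, and a restriction on the carve-out, which is the "mental model" match Datta's user study tested.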

What do you see as the practical outcome of this research? Is it going to be applicable in the operational environment? What is it leading toward?

Datta: One practical outcome of this research is that it enables companies to check that their big data software systems are compliant with their privacy promises. Indeed the prototype system is already running on the production system for Bing, Microsoft's search engine. So, yes, it is applicable to current operational environments.
I view this result as a significant step forward in ensuring privacy compliance at scale. With the advent of technology that can help companies keep their privacy promises, users can reasonably expect stronger privacy promises from companies that operate over their personal information. Of course, much work remains to be done in expanding this kind of compliance technology to cover a broader set of privacy policies and in its adoption in a wide range of organizations.

What other privacy challenges are you looking to address in your ongoing research? What's next for your team?

Datta: We are deeply examining the Web advertising ecosystem, in particular, its *transparency* and its implications for *privacy* and *digital discrimination*. There are important computational questions lurking behind this ecosystem: Are web-advertising systems transparently explaining what information they collect and how they use that information to serve targeted ads? Are these systems compliant with promises they make that they won't use certain types of information (e.g., race, health information, sexual orientation) for advertising? Can we answer these questions from the "outside", i.e., without gaining access to the source code and data of web advertising systems (in contrast with our privacy compliance work with Microsoft Research)? How can we enhance transparency of the current Web advertising ecosystem? What does digital discrimination mean in computational terms and how can we design systems that avoid it?

As a researcher who has looked long and deeply into the privacy space, what do you think is most lacking in the approaches of government and business? What is most needed? What are people missing?

Datta: First, I believe that there is a pressing need for better *computational tools* that organizations can (and should) use to ensure that their software systems and people are compliant with their privacy promises. These tools have to be based on well-founded computing principles and at the same time have to be usable by the target audience, which may include lawyers and other groups who are not computer scientists.
Second, current policies about collection, use, and disclosure of personal information often make rather *weak promises*. This is, in part, a corollary of my first point: organizations do not want to make promises that they don't have the tools to help them be compliant with. But it is also driven by economic considerations, e.g., extensive use of users' personal information for advertising can result in higher click-through rates. While many companies make promises that they will not use certain types of information (e.g., race, health information, sexual orientation) for advertising, it is far from clear what these promises even mean and how a company can demonstrate they are compliant with these promises.
Third, we need better *transparency mechanisms* from organizations that explain what information they collect and how they use that information. Self-declared privacy policies and mechanisms like ad settings managers are steps in the right direction. However, the transparency mechanisms should themselves be auditable to ensure that they indeed transparently reflect the information handling practices of the organization.
These are all fascinating questions -- intellectually deep from a computer science standpoint and hugely relevant to the lives of hundreds of millions of people around the world. They keep me and my research group up at night!

Related Posts

IEEE Security and Privacy Symposium 2014: Another Challenging Year, Another Compelling IEEE SPS, and Another Significant Contribution from CMU CyLab 

(CMU-CyLab-14-005) Temporal Mode-Checking for Runtime Monitoring of Privacy Policies

(CMU-CyLab-13-005) Purpose Restrictions on Information Use

Anupam Datta - Privacy through Accountability (2014 CyLab Seminar Series, YouTube video)

Anupam Datta - Privacy through Accountability (10th Annual CyLab Partners Conference, YouTube video)

CyLab Chronicles: Q&A with Anupam Datta (2009)

CyLab Chronicles: Q&A with Anupam Datta (2008)



Thursday, May 22, 2014

IEEE Security and Privacy Symposium 2014: Another Challenging Year, Another Compelling IEEE SSP, and Another Significant Contribution from CMU CyLab


Giovanni Domenico Tiepolo - Procession of the Trojan Horse in Troy (1773)
Another challenging year in cyber security and privacy means another compelling IEEE Security and Privacy Symposium, and another compelling IEEE Security and Privacy Symposium means another significant contribution from Carnegie Mellon University CyLab.

This year, three hundred thirty-three papers were submitted. After a rigorous review process (which included ninety-nine "intensive discussions," one thousand two hundred eighteen reviews, and a rebuttal phase), forty-four papers were selected for publication as part of the Symposium.

Of these forty-four worthy contributions, four were singled out for IEEE Security and Privacy Symposium 2014 Best Paper Awards:

Best Paper
 
Secure Multiparty Computations on BitCoin by Marcin Andrychowicz, Stefan Dziembowski, Daniel Malinowski, and Łukasz Mazurek (University of Warsaw)

Best Practical Paper
 
Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations by Chad Brubaker and Suman Jana (University of Texas at Austin), Baishakhi Ray (University Of California Davis), and Sarfraz Khurshid and Vitaly Shmatikov (University of Texas at Austin)

Best Student Papers

Framing Signals — A Return to Portable Shellcode by Erik Bosman and Herbert Bos (Vrije Universiteit Amsterdam)

Bootstrapping Privacy Compliance in Big Data Systems by Shayak Sen (Carnegie Mellon University), Saikat Guha (Microsoft Research, India), Anupam Datta (Carnegie Mellon University), Sriram Rajamani (Microsoft Research, India), Janice Tsai (Microsoft Research, Redmond), and Jeannette Wing (Microsoft Research)

CMU CyLab researcher Shayak Sen presented the award-winning paper, co-authored by members of the CyLab and Microsoft Research teams:

In this paper, we demonstrate a collection of techniques to transition to automated privacy compliance checking in big data systems. To this end we designed the LEGALEASE language, instantiated for stating privacy policies as a form of restrictions on information flows, and the GROK data inventory that maps low-level data types in code to high-level policy concepts. We show that LEGALEASE is usable by non-technical privacy champions through a user study. We show that LEGALEASE is expressive enough to capture real-world privacy policies with purpose, role, and storage restrictions with some limited temporal properties, in particular that of Bing and Google. To build the GROK data flow graph we leveraged past work in program analysis and data flow analysis. We demonstrate how to bootstrap labeling the graph with LEGALEASE policy datatypes at massive scale. We note that the structure of the graph allows a small number of annotations to cover a large fraction of the graph. We report on our experiences and learnings from operating the system for over a year in Bing. -- Shayak Sen (Carnegie Mellon University), Saikat Guha (Microsoft Research, India), Anupam Datta (Carnegie Mellon University), Sriram Rajamani (Microsoft Research, India), Janice Tsai (Microsoft Research, Redmond), and Jeannette Wing (Microsoft Research), Bootstrapping Privacy Compliance in Big Data Systems, IEEE Security and Privacy Symposium 2014, Best Student Paper (1 of 2)

But, of course, the Bootstrapping Privacy Compliance paper was not the only CyLab contribution to the Symposium program. For example, CMU CyLab researcher Zongwei Zhou spoke on Dancing with Giants: Wimpy Kernels for On-demand Isolated I/O, a paper co-authored with Miao Yu and Virgil Gligor:

Trustworthy applications are unlikely to survive in the marketplace without the ability to use a variety of basic services securely, such as on-demand isolated I/O channels to peripheral devices. This paper presents a security architecture based on a wimpy kernel that provides these services without bloating the underlying trusted computing base. It also presents a concrete implementation of the wimpy kernel for a major I/O subsystem, namely the USB subsystem, and a variety of device drivers. Experimental measurements show that the desired minimality and efficiency goals for the trusted base are achieved. -- Zongwei Zhou, Miao Yu, Virgil Gligor, Dancing with Giants: Wimpy Kernels for On-demand Isolated I/O, IEEE Security and Privacy Symposium 2014

Other CMU papers selected and presented at IEEE SSP 2014 included:

All Your Screens Are Belong to Us: Attacks Exploiting the HTML5 Screen Sharing API, by Lin-Shung Huang, Yuan Tian, Patrick Tague and others, CMU-SV and Facebook

Analyzing Forged SSL Certificates in the Wild, by Lin-Shung Huang, Alex Rice, Erling Ellingsen, and Collin Jackson

Stopping a Rapid Tornado with a Puff, by Jose Lopes and Nuno Neves of CMU Portugal

CyLab's contribution to IEEE SSP 2014 also included several papers from CMU CyLab alumni.

There were three papers co-authored by CMU CyLab alumnus XiaoFeng Wang of Indiana University (Bloomington): Hunting the Red Fox Online: Understanding and Detection of Mass Redirect-Script Injections; Upgrading Your Android, Elevating My Malware: Privilege Escalation Through Mobile OS Updating; and Perils of Fragmentation: Security Hazards in Android Device Driven Customizations.

Also, CMU CyLab alumnus Bryan Parno of Microsoft Research and CMU CyLab alumna Elaine Shi of the University of Maryland (College Park) were among the co-authors of PermaCoin: Repurposing Bitcoin Work for Data Preservation, and Shi co-authored a second paper, Automating Efficient RAM-Model Secure Computation.

CyLab's efforts were also apparent on the organizational level at IEEE SSP 2014:

Adrian Perrig of ETH Zürich, formerly CyLab's Research Director, now a CyLab Distinguished Fellow, served as one of the Symposium's three program chairs.

Three CyLab researchers served as Session Chairs, Lujo Bauer for Systems Security, Virgil Gligor (CyLab Director) for Attacks 3 and Anupam Datta for Secure Computation and Storage.

Also, CMU CyLab alum Bryan Parno served as a Session Chair for Privacy and Anonymity.

And, looking ahead to next year, Lujo Bauer will be one of the Symposium program chairs. 2015 will likely be another challenging year in cyber security and privacy, which will mean another compelling IEEE Security and Privacy Symposium, with another significant contribution from Carnegie Mellon University CyLab.

Related Posts

CyLab's Strong Presence Continues at Annual IEEE Symposium on Security and Privacy (2013)

CyLab Chronicles: CyLab's Strong Presence at IEEE Security and Privacy 2012 Packs A Wallop

A Report on 2012 IEEE Symposium on Privacy and Security

Microcosm & Macrocosm: Reflections on 2010 IEEE Symposium on Security & Privacy; Q & A on Cloud, Cyberwar & Internet Freedom w/ Dr. Peter Neumann

CyLab Research has Powerful Impact on 2010 IEEE Security & Privacy Symposium



Tuesday, April 15, 2014

CyLab Research Sheds Light on Heartbleed and Its Implications


Heartbleed is a significant event along the cyber security timeline. Its consequences will be with us all for quite a while. If you haven't already come to grips with this issue, you should do so urgently.

For some guidance go to http://heartbleed.com/

To verify if a particular server is vulnerable, go to: http://filippo.io/Heartbleed/

For a command-line tool, go to: https://github.com/FiloSottile/Heartbleed

Here at CyLab, the story has provided us with an opportunity to reflect on some of our recent research and its relevance to the problem at hand. For example:

Perspectives: If you scan a server and find that it isn't vulnerable, you would still need to know whether it had been vulnerable in the past and whether it has since been updated. One way to answer that question is to determine whether the private key has been changed. Connect to the server with a Firefox browser that has the Perspectives extension installed, then inspect the key history: click the Perspectives icon on the left-hand side of the URL bar and select "View Notary Results." (Of course, if the key has not been changed, you're still none the wiser.) For more on Perspectives, visit the Perspectives Project page.

TrustVisor: In TrustVisor, we proposed keeping the OpenSSL private key inside a PAL (piece of application logic), which would have defended against this vulnerability. See our paper, TrustVisor: Efficient TCB Reduction and Attestation, authored by Jonathan M. McCune (now with Google), Yanlin Li, Ning Qu, Zongwei Zhou, Anupam Datta, Virgil Gligor, and Adrian Perrig.

Flicker: In Flicker, we proposed storing the SSH password database inside a PAL, which would also prevent password theft. See Flicker: An Execution Infrastructure for TCB Minimization, authored by CyLab researchers Jonathan M. McCune, Bryan Parno (now with Microsoft), and Adrian Perrig (now with ETH Zürich), along with Michael K. Reiter of the University of North Carolina (Chapel Hill) and Hiroshi Isozaki of Toshiba Corporation.

Perspectives, TrustVisor and Flicker all evolved out of CyLab's work on Trustworthy Computing Platforms and Devices. And this continues to be one of CyLab's major research thrusts.

Amit Vasudevan, a CyLab Research Systems Scientist, and Miao Yu, a CyLab graduate student, took a few moments to sit down with CyBlog and share some insights on where we are and what's next.

According to Vasudevan, "the IEE [isolated execution environment] technologies and prototypes we have been developing (XMHF - TrustVisor, KISS, Minibox, etc.) lay a solid foundation to protect against Heartbleed-like attacks."

"But going from our prototypes to the real world is a different kind of challenge. The software ecosystem out there today does not really consider security as a first-class citizen. Consequently, tweaking these components to adapt to our IEE design is non-trivial ... In the long term, developers of security-oriented/sensitive software would benefit from a simple and solid security framework that would allow them to leverage strong security properties, while letting them also implement the desired functionality. And our work with XMHF plus TrustVisor plus other hypapps (http://xmhf.org) is the right step in this direction."

"This bug is still underestimated," warns Yu.

He cites three reasons for his concern:

"Currently, we are putting a lot of care into HTTPS websites. But other protocols, e.g., FTPS (used for file transfer), can also be impacted by this bug.

"Not only servers, but also clients, e.g., smart phones and other devices, may suffer from this bug. And for certain devices, the problem can be even worse. For example, mobile phones have long patch cycles. For the Heartbleed bug, the first patch came out in 20 minutes and web servers began repairs on the first day. But Android phones only get scanners, e.g., Bluebox Heartbleed Scanner or Heartbleed Detector, to help users find out if their phone is vulnerable ... From our experience with past vulnerabilities, it would take tens of weeks until half of the mobile devices get patched. During this period, the devices are at risk. Other devices, which may use OpenSSL for establishing administration channels, also may suffer from long patch cycles. At CyLab, Zongwei Zhou, Miao Yu, Yoshiharu Imamoto, Amit Vasudevan, Virgil Gligor and I have developed an isolated execution environment for the ARM mobile platform. It is quite similar to TrustVisor, but focuses on mobile system security, so that, e.g., you could run a banking client (or some other sensitive application) in an isolated execution environment, so that your code and data would still be secure in spite of this or other vulnerabilities present in Android.

"All three recent SSL bugs, i.e., iOS's 'goto fail' bug, the GnuTLS bug, and the Heartbleed bug, are implementation-related rather than design-related. The lesson is that design security doesn't mean implementation security. We do need runtime protection as a last line of defense."

-- Richard Power