Tuesday, May 25, 2010

CyLab's Lorrie Cranor Participates in Capitol Hill Briefing on Nuts & Bolts of Online Privacy, Advertising, Notice & Choice



On 5/24/10, Dr. Lorrie Cranor, Director of the CyLab Usable Privacy and Security (CUPS) Laboratory, participated in a Capitol Hill briefing on "Nuts & Bolts of Online Privacy, Advertising, Notice & Choice."

Organized by the Progress and Freedom Foundation, the event featured a panel of experts: Ari Schwartz, Vice President & Chief Operating Officer at the Center for Democracy & Technology, and Shane Wiley, Senior Director of Privacy & Data Governance for Yahoo!, participated alongside Dr. Cranor. Berin Szoka, a Senior Fellow at the Progress & Freedom Foundation, moderated.

Here is a brief excerpt from Szoka's post to the Progress and Freedom Foundation blog:

Ari got us started with an intro to the Boucher bill and Shane offered an overview of the technical mechanics of online advertising and why it requires data about what users do online. Lorrie & Ari then talked about concerns about data collection, leading into a discussion of the challenges and opportunities for empowering privacy-sensitive consumers to manage their online privacy without breaking the advertising business model that sustains most Internet content and services. In particular, we had a lengthy discussion of the need for computer-readable privacy disclosures like P3P (pioneered by Lorrie & Ari) and the CLEAR standard developed by Yahoo! and others as a vital vehicle for self-regulation, but also an essential ingredient in any regulatory system that requires that notice be provided of the data collection practices of all tracking elements on the page.

For more on the event, including audio and video, see PFF Event Recap: Nuts & Bolts of Online Privacy, Advertising, Notice & Choice, 5-25-10

Some Related Posts:

CyLab Chronicles Q&A with Lorrie Cranor (2010)

CUPS wins Google Focused Research Award

SOUPS 2010: Sixth Annual Symposium On Usable Privacy and Security

CUPS Director Lorrie Cranor Receives NSF Funding For Interdisciplinary Doctoral Program in Privacy and Security

CyLab Usable Privacy and Security Researchers Release Study on SSL Warning Effectiveness

Reflections on SOUPS 2009: Between Worlds, Cultivating Superior Cleverness, Awaiting a Shift in Consciousness

CUPS Research Takes on Both Widespread Attack Method & Dangerous Meme (requires Partners Portal access)

CyLab Chronicles: Wombat - The Latest CyLab Success Story

CyLab Chronicles: Q&A with Lorrie Cranor (2008)

Saturday, May 22, 2010

Microcosm & Macrocosm: Reflections on 2010 IEEE Symposium on Security & Privacy; Q & A on Cloud, Cyberwar & Internet Freedom w/ Dr. Peter Neumann

Ross Anderson and Steven Murdoch of University of Cambridge accept 2010 IEEE Symposium of Security & Privacy Best Practical Paper Award

Microcosm & Macrocosm: Reflections on 2010 IEEE Symposium on Security & Privacy; Q & A on Cloud, Cyberwar & Internet Freedom w/ Dr. Peter Neumann

By Richard Power


The 2010 IEEE Symposium on Security and Privacy, held in Oakland, California, marked the 30th anniversary of this prestigious event.

Carl Landwehr, Program Director at the National Science Foundation and Senior Research Scientist at the University of Maryland Institute for Systems Research (and Editor in Chief of IEEE Security & Privacy Magazine), received two awards: the IEEE Computer Society Distinguished Service Award and the Computer Society Technical Committee on Security and Privacy Outstanding Community Service Award.

Jerry Saltzer, Professor Emeritus of the M.I.T. Computer Science and Artificial Intelligence Lab (CSAIL), received the National Computer Security Award. Previous recipients include Jim Anderson, Dennis Branstad, Steven Bellovin, David Clark, Robert Courtney, Dorothy Denning, Whit Diffie, Virgil Gligor, Martin Hellman, Butler Lampson, Peter Neumann, Donn Parker, Ron Rivest, Roger Schell, Mike Schroeder, Eugene Spafford, Walter Tuchman, Steve Walker, and Willis Ware.

The "Best Paper" award went to Margarita Osadchy, Benny Pinkas, Ayman Jarrous, and Boaz Moskovich of the University of Haifa for "SCiFI - A System for Secure Face Identification."

The "Best Student Paper" award went to "TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection" by Tielei Wang, Tao Wei, and Wei Zou of Peking University, and Guofei Gu of Texas A&M University.

The award for "Best Practical Paper," sponsored by IEEE Security & Privacy Magazine, went to Ross Anderson, Steven Murdoch, Saar Drimer and Mike Bond of the University of Cambridge for "Chip and PIN is Broken." This work describes and demonstrates "a protocol flaw which allows criminals to use a genuine card to make a payment without knowing the card’s PIN, and to remain undetected even when the merchant has an online connection to the banking network. The fraudster performs a man-in-the-middle attack to trick the terminal into believing the PIN verified correctly, while telling the card that no PIN was entered at all." (For more information and the full paper, go to Ross Anderson's blog, Light Blue Touchpaper, 2-11-10.)
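
The mechanics of the flaw are easiest to see as a message-flow problem: a "wedge" device sits between card and terminal and gives each side the answer it expects. The toy Python sketch below is only my minimal model of the reported trick, not the EMV protocol itself; all class names and messages are illustrative.

```python
# Toy model of the "no-PIN" man-in-the-middle described above.
# This is NOT the EMV protocol; names and messages are illustrative only.

class Card:
    def verify_pin(self, pin):
        # A genuine card would check the PIN here -- but in the attack
        # it is never asked to, so this method is never invoked.
        raise AssertionError("card never receives a PIN verify command")

    def transaction_record(self):
        # The card records that cardholder verification was NOT performed.
        return {"cardholder_verified": False}

class Terminal:
    def __init__(self):
        self.pin_ok = False

    def receive_pin_response(self, response):
        # The terminal trusts whatever response comes back on the wire.
        self.pin_ok = (response == "PIN_OK")

class Wedge:
    """Man-in-the-middle: answers the terminal itself, never forwards the PIN."""
    def __init__(self, card, terminal):
        self.card, self.terminal = card, terminal

    def relay_pin(self, pin):
        # Tell the terminal the PIN verified correctly...
        self.terminal.receive_pin_response("PIN_OK")
        # ...while the card is left believing no PIN was entered at all.
        return self.card.transaction_record()

card, terminal = Card(), Terminal()
wedge = Wedge(card, terminal)
card_view = wedge.relay_pin("0000")     # any PIN "works"
print(terminal.pin_ok, card_view)       # True {'cardholder_verified': False}
```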

As mentioned in my previous post on this year's Symposium, there were 31 papers presented in the course of the three-day event. These papers explored ten research areas, from Malware Analysis (e.g., Automated Extraction of Proprietary Gadgets from Malware Binaries) and Network Security (e.g., Round-Efficient Broadcast Authentication Protocols for Fixed Topology Classes) to Systemization II (e.g., Bootstrapping Trust in Commodity Computers) and Analyzing Deployed Systems (e.g., Experimental Security Analysis of a Modern Automobile).

Such research is vital, and it is always inspiring to listen to the fruits of this worthy labor. But I confess that as I sat through session after session, I found myself drawing back to contemplate the big picture, as I have been doing in my writings and talks over the last few years. (See, for example, Starting Over After A Lost Decade, In Search of a Bold New Vision for Cyber Security (CERIAS Security Seminar, 9-30-09) and Red Pill? Blue Pill? Ruminations on the Intersection of Inner Space and Cyber Space (CSO Magazine, 10-23-09).)

As I scanned the overflowing audience, I saw Peter Neumann hunched over his laptop, sitting in the last row. So I asked him to give me his insights on the three big-picture questions I have been mulling over.

Q: At this year's RSA conference, I was struck by one keynote speaker after another declaring the "cloud" as the future and exhorting everyone to hurry into the "cloud," where we will find security much easier to attain and everything will be better. But who "we" is becomes hard to answer in the cloud. Who is securing whom, and what else are they doing? These are real concerns. Could you talk about the cyber security and privacy implications of "cloud computing"?

Peter Neumann: Yes, I noticed Scott Charney, Howard Schmidt, and Janet Napolitano extolling the wonders of cloud computing, and so many vendors saying they had it all under control. This is sheer and utter nonsense. Having to trust untrustworthy third- and fourth-party vendors, some of whom you do not even know exist (cf. Leslie Lamport's definition of a distributed system), is ridiculous, given that the infrastructures, the computer systems, and the authentication processes are not trustworthy. Confidentiality and privacy may be the least of our concerns, compared with system integrity, denial-of-service attacks, the lack of traceability and attribution, the lack of meaningful audit trails, and so on.

Q: The term "Cyberwar" is taking on a life of its own. You and I discussed "information warfare" well over a decade ago. What would you like to say about this term "cyberwar" and what it purports to describe? Overly hyped? Something different than the issues we have been dealing with all along in the struggle to secure cyberspace? Both?

Neumann: Cyberwar is indeed an overly hyped concept. The "war" on terrorism is a bad enough metaphor, but "cyberwar" is even worse. Who is the enemy? As Pogo once said, "We have met the enemy, and he is us." We will never completely secure "cyberspace", and the "enemy" will always have many advantages. However, we could do much better than we do at present. Also, take a look at my paragraph on the misuse of "cyber" (which is a combining form, not a noun or an adjective) on my website: http://www.csl.sri.com/neumann. (I just put up a new limerick on "metrics" also.)

Q: Spaf said something to me a while ago, that perhaps there will be no Internet readily and freely accessible ten or twenty years into the future. I used to jokingly tell people that although the proverbial "they" succeeded in burning down the Library of Alexandria, the proverbial "they" won't be able to do the equivalent to the Internet ... Ha ha ... Now I wonder. For example, the battle over net neutrality could be won in the legislatures but lost in the cloud, couldn't it? Net neutrality, government censorship, the mysterious hidden workings of the cloud -- do these threaten the future freedom and evolution of the Internet as a global commons? And is there any hope?

Neumann: On "network neutrality," unless the lobbyists' control over Congress ceases, legislative solutions will continue to be largely misguided in this area. But even if legislation were to become sensible here, you are correct -- it could still be lost in the clouds. Is there any hope? Yes, of course, we have to retain some modicum of optimism, but it must be accompanied by a radical shift in the entire culture by which mediocre systems with short-sighted requirements and short-sighted development practices abound. Think about BP's practices in the Gulf, and the financial industry, and you have an approximation for the computer industry and practice.

In conclusion, Neumann added, "This is off the top of the head, and reflects just a few of my holistic concerns. The picture requires much greater total-system long-term thinking than is used today."

As always, Neumann's thinking is both provocative and profoundly insightful.

And as always, if industry and government choose to ignore him, they do so not only at their own peril, but at the peril of all.

Of course, the Cloud is now inevitable; after all, it has been decreed by the captains of industry. Yes, it will offer both challenges and opportunities. But we should not sell it to ourselves as a security strategy; we should not fool ourselves. It is simply another dimension of risk added to the many dimensions of risk we are already operating within.

In regard to the term "Cyberwar," my views are somewhat complex, even contradictory: I find myself promoting it in some contexts and debunking it in others, depending upon the misconceptions that dominate the space of the discussion.

And the future of the Internet? Well, the future of the Internet as a free and open cyberspace is nothing less than the future of human civilization; not necessarily the future of the human race, but of human civilization, or, perhaps more precisely, of anything worthy of being called a "human civilization." Therefore, it is too important to be left to industry or government, or both, at least as long as there is a revolving door between the two, and especially while all other voices are without counter-balancing influence.

Preserving a free and open Internet, and making it accessible for the private use of all humans, is both a security issue and a human rights issue; and increasingly, in the 21st Century, security issues and human rights issues are becoming interdependent. As in other arenas of human endeavor, we can no longer allow commercial interests to trump security and human rights concerns.

See Also

CyLab Research has Powerful Impact on 2010 IEEE Security & Privacy Symposium

RSA 2010: Lost in the Cloud, & Shrouded in the Fog of War, How Far Into the Cyber Future Can You Peer? Can You See Even Beyond Your Next Step?

Tuesday, May 18, 2010

CyLab Research has Powerful Impact on 2010 IEEE Security & Privacy Symposium

Samuel Langhorne Clemens (a.k.a. Mark Twain) in the lab of Nikola Tesla, spring of 1894.

CyLab Research has Powerful Impact on 2010 IEEE Symposium on Security & Privacy

By Richard Power


Thirty-one papers were presented at the thirty-first annual IEEE Symposium on Security and Privacy, held in Oakland, California (5/16/10-5/19/10); of those thirty-one papers, six were authored by CyLab researchers, offering powerful testimony to the relevancy of CyLab's research and its impact on current and future trends in security and privacy in cyberspace.

TrustVisor: Efficient TCB Reduction and Attestation: Jonathan McCune (Carnegie Mellon University), Yanlin Li (Carnegie Mellon University), Ning Qu (Nvidia), Zongwei Zhou (Carnegie Mellon University), Anupam Datta (Carnegie Mellon University), Virgil Gligor (Carnegie Mellon University), Adrian Perrig (Carnegie Mellon University)

"We present TrustVisor, a special-purpose hypervisor that provides code integrity as well as data integrity and secrecy for selected portions of an application."

Round-Efficient Broadcast Authentication Protocols for Fixed Topology Classes: Haowen Chan, Adrian Perrig (Carnegie Mellon University)

"The new protocols avoid the high computation overhead of one-time signatures and multi-receiver MACs, as well as the time synchronization needed by TESLA. In terms of rounds of complexity and communication congestion, our protocols provide points in the design space that are not achievable by previously published protocols."

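For readers unfamiliar with TESLA, which the authors cite as their point of comparison: TESLA-style schemes authenticate broadcasts with a one-way hash chain of keys that are disclosed only after a delay, which is why they require loose time synchronization. Below is a minimal sketch of just that hash-chain commitment idea; it is not the protocol proposed in the paper.

```python
# Minimal sketch of a TESLA-style one-way key chain (illustrative only;
# not the protocols proposed in the Chan/Perrig paper).
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# The sender builds the chain backwards: K[n] is secret, K[i] = H(K[i+1]).
n = 5
chain = [b""] * (n + 1)
chain[n] = b"seed-known-only-to-the-sender"
for i in range(n - 1, -1, -1):
    chain[i] = H(chain[i + 1])

commitment = chain[0]  # given to receivers in advance, authenticated once

# Later, after the interval in which K[3] was used to MAC packets has passed,
# the sender discloses it. Any receiver verifies it by hashing forward to the
# commitment -- no per-packet signature is needed.
disclosed, index = chain[3], 3
check = disclosed
for _ in range(index):
    check = H(check)
assert check == commitment
print("disclosed key verified against the initial commitment")
```
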
All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but might have been afraid to ask): Thanassis Avgerinos, Edward Schwartz, David Brumley (Carnegie Mellon University)

"The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context."

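As a rough illustration of the first technique the quote names (my toy example, not the paper's formalization): dynamic taint analysis propagates a "tainted" bit alongside ordinary execution, so anything computed from attacker-controlled input is flagged before it reaches a sensitive sink such as a jump target.

```python
# Toy dynamic taint analysis for a tiny straight-line language (illustrative only).

def run(program, inputs):
    values, taint = {}, {}
    for op, dst, *args in program:
        if op == "input":            # dst := attacker-controlled input (tainted)
            values[dst] = inputs[args[0]]
            taint[dst] = True
        elif op == "const":          # dst := program constant (untainted)
            values[dst] = args[0]
            taint[dst] = False
        elif op == "add":            # dst := a + b; taint propagates through the op
            a, b = args
            values[dst] = values[a] + values[b]
            taint[dst] = taint[a] or taint[b]
        elif op == "jmp_target":     # policy: jump targets must be untainted
            if taint[args[0]]:
                raise RuntimeError(f"tainted value flows into jump target: {args[0]}")
    return values, taint

program = [
    ("input", "x", "user"),
    ("const", "y", 0x1000),
    ("add",   "z", "x", "y"),
    ("jmp_target", None, "z"),
]

try:
    run(program, {"user": 0x41414141})
except RuntimeError as e:
    print(e)   # tainted value flows into jump target: z
```
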
A Proof-Carrying File System: Deepak Garg, Frank Pfenning (Carnegie Mellon University)

"We present the design and implementation of PCFS, a file system that adapts proof-carrying authorization to provide direct, rigorous and efficient enforcement of dynamics access policies."

Scalable Parametric Verification of Secure Systems: How to Verify Reference Monitors without Worrying about Data Structure Size: Jason Franklin (Carnegie Mellon University), Sagar Chaki (Carnegie Mellon University), Anupam Datta (Carnegie Mellon University), Arvind Seshadri (IBM Research)

"This paper develops a parametric verification technique that scales even when reference monitors and adversaries operate over unbounded, but finite data structures. Specifically, we develop a parametric guarded command language for modeling reference monitors and adversaries."

Bootstrapping Trust in Commodity Computers: Bryan Parno, Jonathan M. McCune, Adrian Perrig (Carnegie Mellon University)

"In this survey, we organize and clarify extensive research on bootstrapping trust in commodity systems. We identify inconsistencies (e.g., attacks prevented by various forms of secure and trusted boot) and commonalities (e.g., all existing attempts to capture dynamic system properties still reply in some sense on static, load-time guarantees) in previous work."

In addition to the six papers presented, two CyLab researchers (Jonathan McCune and David Brumley) chaired sessions, and a third, Collin Jackson, led a workshop.

Wednesday, May 12, 2010

Evolving Rapidly, BSIMM2 Offers Key Elements of Successful Software Security Initiatives Shared by 30 Major Corporations



"We have tripled the size of the data set to thirty firms, including Adobe, Aon, Bank of America, Capital One, The Depository Trust & Clearing Corporation (DTCC), EMC, Google, Intel, Intuit, Microsoft, Nokia, QUALCOMM, Sallie Mae, Standard Life, SWIFT, Symantec, Telecom Italia, Thomson Reuters, VMware, and Wells Fargo." -- Gary McGraw, co-author of BSIMM2

Evolving Rapidly, BSIMM2 Offers Key Elements of Successful Software Security Initiatives Shared by 30 Major Corporations

By Richard Power


The Building Security In Maturity Model (BSIMM) project is a worthy resource for those serious about software security in their organizations.

In March 2009, I covered the release of the initial BSIMM study here on CyBlog. Now, I am happy to report that BSIMM2 has been released.

As friend and colleague Gary McGraw, CTO of Cigital and one of the project's creators, explains, it is "a measuring stick for software security," and with its number of participants tripled since last year, it is evolving fast.

What is the BSIMM?

BSIMM2 is the end product of the collective work of 635 people working in the software security groups of those thirty organizations.

It is calibrated for use by "anyone charged with creating and executing a software security initiative" (typically a senior executive) who leads an internal Software Security Group (SSG) "charged with directly executing or facilitating the activities described in the BSIMM."

The BSIMM itself is organized around a Software Security Framework (SSF) consisting of four domains, each with its own goals and practices. Within this overarching framework, the BSIMM documents 109 activities. (See the table above.)

Using the SSF, McGraw and his co-authors, Brian Chess, Chief Scientist and co-founder of Fortify Software, and Sammy Migues, Director of Knowledge Management and Principal at Cigital, conducted a series of in-depth fact-finding interviews with executives in charge of the thirty software security initiatives.

The BSIMM approach, "based entirely on the data gathered and processed from the interviews," is a practical one, and it reflects a sophisticated understanding of the real-world challenges faced by those tasked with software security in operational environments.

"We think the descriptive nature of the BSIMM study is an important characteristic of
the work," the authors remark. "We describe not what you should do for software security, but what successful software security initiatives are actually doing. BSIMM2 can be used to measure your own software security initiative and compare it to others ... One could build a maturity model for software security theoretically (by pondering what organizations should do) or one could build a maturity model by understanding what a set of distinct organizations have already done successfully. The latter approach is both scientific and grounded in the real world, and it is the one we followed."

How to Use the BSIMM

"The best way to use the BSIMM is to identify goals and objectives of your own and look to the BSIMM to determine which activities make sense. Choosing among and between practices is not recommended. In particular, we don’t believe a successful initiative will be able to omit all activity for any of the twelve practices. Put another way, the
data show that high maturity initiatives are well rounded and carry out activities in all twelve practices. In any case, browsing through objectives and noting which appeal to your culture is a much cleaner way to proceed. Once you know what your objectives are, you can determine which activities to adopt. Please note that in our view it is better to put some of the level one activities in each practice into place than to accelerate to level three in any one practice while ignoring other practices. This view is informed by the data we have gathered."

A spider chart from the new study, included in this post, should give you a sense of how to use the BSIMM. This particular spider chart shows data from the twelve financial services firms and the seven independent software vendors that participated in BSIMM2. The chart has twelve spokes, one for each of the twelve practices.

"These charts are created by noting the highest level of activity in a practice for a given firm, and then averaging those scores for a group of firms, resulting in twelve numbers (one for each practice)."

"One obvious comparison would be to chart your own maturity high water mark against these averages to see how your program stacks up."

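The computation behind those charts is straightforward to reproduce: take each firm's highest observed activity level in each of the twelve practices, then average those high water marks across the group. Here is a hypothetical Python sketch; the practice abbreviations follow the study, but the firm data is invented for illustration.

```python
# Hypothetical sketch of the BSIMM spider-chart computation described above.
# Practice abbreviations follow the study; the firm data below is invented.

PRACTICES = ["SM", "CP", "T", "AM", "SFD", "SR",
             "AA", "CR", "ST", "PT", "SE", "CMVM"]

# Observed activity levels per firm: practice -> levels (1-3) seen in interviews.
firms = {
    "firm_a": {"SM": [1, 2], "CR": [1, 2, 3], "CMVM": [1]},
    "firm_b": {"SM": [1],    "CR": [1],       "CMVM": [1, 2]},
}

def high_water_marks(firm):
    """Highest level of activity observed in each practice (0 if none)."""
    return {p: max(firm.get(p, [0])) for p in PRACTICES}

def group_average(group):
    """Average the per-practice high water marks across a group of firms."""
    marks = [high_water_marks(f) for f in group.values()]
    return {p: sum(m[p] for m in marks) / len(marks) for p in PRACTICES}

print(group_average(firms))   # twelve numbers, one per spoke of the spider chart
```
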
The Devil is in the Details

To give you a better idea of the granularity of the one hundred and nine activities within the twelve practices and the four domains, I have randomly selected one activity from one practice within each of the domains.

Governance: Strategy and Metrics (SM)
SM1.4 Identify gate locations, gather necessary artifacts. The software security process will eventually involve release gates at one or more points in the software development lifecycle (SDLC) or SDLCs. The first two steps toward establishing these release gates are to identify gate locations that are compatible with existing development practices and to begin gathering the input necessary for making a go/no-go decision. Importantly, at this stage the gates are not enforced. For example, the SSG can collect security testing results for each project prior to release, but stop short of passing judgment on what constitutes sufficient testing or acceptable test results.
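
As a hypothetical illustration of such an unenforced gate, a release step might gather security-testing artifacts and record what the go/no-go verdict would be, without ever blocking the release. The names below are invented, not taken from BSIMM.

```python
# Hypothetical sketch of an unenforced release gate (names invented for illustration).

def gate_check(project, artifacts, required=("static-analysis", "security-tests")):
    """Gather artifacts and record what the go/no-go decision WOULD be,
    without enforcing it: the release proceeds either way."""
    missing = [r for r in required if r not in artifacts]
    verdict = "would pass" if not missing else f"would fail (missing: {missing})"
    print(f"[gate:pre-release] {project}: {verdict}")
    return True  # at this maturity level the gate never stops the release train

# Example: the SSG has collected static-analysis output but no test results yet.
gate_check("example-app", artifacts={"static-analysis": "report-2010-05.xml"})
```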

Intelligence: Security Features and Design (SFD)
SFD2.2 Create SSG capability to solve difficult design problems. When the SSG is involved early in the new product process, it contributes to new architecture and solves difficult design problems. The negative impact security has on other constraints (time to market, price, etc.) is minimized. If an architect from the SSG is involved in the design of a new protocol, he or she could analyze the security implications of existing protocols and identify elements that should be duplicated or avoided.

SSDL Touchpoints: Code Review (CR)
CR2.3 Make code review mandatory for all projects. Code review is a mandatory release gate for all projects. Lack of code review or unacceptable results will stop the release train. While all projects must undergo code review, the review process might be different for different kinds of projects. The review for low-risk projects might rely more heavily on automation, and the review for high-risk projects might have no upper bound on the amount of time spent by reviewers.

Deployment: Configuration Management and Vulnerability Management (CMVM)
CMVM2.1 Have emergency codebase response. The organization can make quick code changes when an application is under attack. A rapid-response team works in conjunction with the application owners and the SSG to study the code and the attack, find a resolution, and push a patch into production.


You can download the study, FOR FREE, from the BSIMM2 site.

Related Posts

From Biometrics to BSIMM, & "50 Hurricanes Hitting At Once!" -- A Report on the Sixth Annual Partners Conference

CyLab Business Risks Forum: Gary McGraw on Online Games, Electronic Voting and Software Security

Fortify & Cigital Release BSIMM -- Integrating Best Practices from Nine Software Security Initiatives

Tuesday, May 11, 2010

Truth AND Consequences for Critical Infrastructure: What if the Smart Grid has Stupid Security & Other Real World Answers on Energy Security

U.S. Power Grid (FEMA, 2008)


"A cyber attack could disable trains all over the country … It could blow up pipelines. It could cause blackouts and damage electrical power grids so that the blackouts would go on for a long time. It could wipe out and confuse financial records, so that we would not know who owned what, and the financial system would be badly damaged. It could do things like disrupt traffic in urban areas by knocking out control computers. It could, in nefarious ways, do things like wipe out medical records." Richard A. Clarke to Terry Gross on National Public Radio (4-19-10)

Truth AND Consequences for Critical Infrastructure: What if the Smart Grid has Stupid Security & Other Real World Answers on Energy Security

By Richard Power


You are no doubt familiar with the CBS Sixty Minutes story (11-8-09) on Cyber War that highlighted the attacks on the Brazilian power grid; and you no doubt recall when CIA analyst Tom Donohue referenced declassified information on successful cyber attacks on several non-US cities via the Internet (PC World, 1-19-08).

Let’s not wait for the big power grid security story of 2010. The time for truth and consequences for critical infrastructure is already here.

To get some real world answers to real world questions on power grid security, I turned to a friend and colleague, Seth Bromberger. He has been involved in security for more than sixteen years, and in cyber security for a major utility for five years. He is also on the Board of Directors of EnergySec, “a private forum of information security, physical security, audit, disaster recovery and business continuity professionals from energy industry asset owners.”

In response to some probing questions, Bromberger and EnergySec director Steve Parker shared their vital perspectives with me.

You can find the full text of the interview in my latest CSO Magazine piece, What if the smart grid has stupid security?

But for CyBlog readers, here is a bonus question and answer that doesn't appear in the CSO piece:

Q: A lot of consultants, technology vendors, officials and researchers are scrambling to address issues in the energy sector cybersecurity -- what do you think is the biggest misconception?

Seth Bromberger: The biggest misconception results from some of these groups you mention coming in with the mistaken assumption that the folks who are already here don't understand security. They therefore tend to underestimate both the scope of the challenge and the real talent working on solving real-world problems. Nobody's going to be a white knight coming in to save the day - if it were that easy, we'd have solved all the security problems by now.
In my experience, the successful players are the ones who come in with specific expertise designed to solve a particular problem that the utilities are facing. They are working not to get rich quickly or make a name for themselves, but to improve the security posture of the owners and operators of our critical infrastructure by sharing their knowledge in partnership with those responsible for securing that infrastructure.

Again, for the full text of the interview, see my latest piece in CSO Magazine.