Sunday, April 26, 2009

Silver Bullet: Gary McGraw Interviews Virgil Gligor on Software Security and Other Vital Issues


Software security will be with us forever, as far as I am concerned; and I will tell you why: software is by and large a creative process. Don’t let anybody tell you that formal methods will account for any more than 10-15% of software development. It hasn’t happened in the last thirty-plus years, and it probably will not happen in the future either. Virgil Gligor, Silver Bullet Security Interview with Gary McGraw, 4-21-09


Maybe you are a veteran of many digital wars. Maybe you are a fresh recruit to the virtual corps of info warriors. Maybe you slog on at the frontlines of corporate culture. Maybe you press the cutting edge in a lab at the other side of tomorrow. Maybe you operate out beyond enemy lines in the shadow world of the electronic underground. Whatever your level or circumstance, if you find yourself passionately engaged in the continuum that is cyber security, it is vital that you glean what you can from the experience and insights of others, whether they are alongside you or ahead of you on this treacherous trail into the future.

So I encourage you to spend twenty-seven minutes listening to this podcast of Gary McGraw’s interview with Virgil Gligor. You will benefit from this rich dialogue between thought leaders from two generations, who are both still very much on point, and you will gain invaluable perspective on some of the critical issues that confront us.

Gary McGraw, Chief Technology Officer (CTO) of Cigital, is a globally recognized authority on software security and the author of six bestselling books, including Exploiting Online Games (2007). McGraw is also editor of the Addison-Wesley Software Security series and produces the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine.

For decades, Virgil D. Gligor, Professor of Electrical and Computer Engineering at Carnegie Mellon University and co-director of Carnegie Mellon CyLab, has been a leader in the field of cyber security research, with interests ranging from access control mechanisms, penetration analysis, and denial-of-service protection to cryptographic protocols and applied cryptography. He has also served in senior advisory roles for technology leaders, the US government, and the global cyber security community.

In this brief but far-ranging interview, McGraw and Gligor touch on a broad range of issues, including the need for what Gligor and CyLab term “usable security,” the fate of virus scanners, the state of electronic voting, the future of the Foreign Intelligence Surveillance Act (FISA), and even how Gligor’s experience of growing up in the Romania of the communist dictator Nicolae Ceauşescu helped inspire his contribution to the field of cyber security.

-- Richard Power

Here is a transcribed excerpt followed by a link to the full podcast:

McGraw: My own work focuses on Software Security, where I think we have made some tangible progress.

Gligor: Oh, absolutely.

McGraw: I have been tracking the software security space for several years running, and in 2008, revenues from tools and services companies passed $450 million, so that’s exciting, that’s when the middle market begins to emerge, and analysts come in and start covering the space. Do you think that Software Security is here to stay?

Gligor: Software security will be with us forever, as far as I am concerned; and I will tell you why: software is by and large a creative process. Don’t let anybody tell you that formal methods will account for any more than 10-15% of software development. It hasn’t happened in the last thirty-plus years, and it probably will not happen in the future either.

McGraw: Especially in the US.

Gligor: Well, anywhere. So let’s admit to that fact. No matter what we academics would like the world to be, the world is not that way, in the sense that formal methods have had minuscule penetration so far. But things are improving. Starting from that premise, there will always be bugs; consequently there is always going to be room for improvement in the software development process, because it is a creative endeavor. So what engineering does is actually help mitigate the consequences of the errors introduced, of course, because our creativity sometimes exceeds our ability to do things correctly. But I believe from that perspective that software security, as a discipline, is extremely important. By the way, at Carnegie Mellon, you don’t see a single course called Software Security, but we teach it in all of our courses. It starts from Introduction to Security, and goes to Network Security, and then it goes to Systems Software Security, and on and on. As a matter of fact, even in Applied Cryptography, we talk about software security. ... Also, we have the benefit of having the Software Engineering Institute (SEI) and the Institute for Software Research (ISR), and of course the Computer Science Department teaches a number of courses in the security aspects of software. Model checking, which is the only formal or semi-formal method applied on a larger scale, is being taught [here at Carnegie Mellon University] by one of the most recent Turing Award winners, Edmund Clarke.

McGraw: So when I was starting in the computer field, some twenty-three years after you were starting in the computer security field, I was at a meeting at NIST where we were bemoaning sharing information about actual exploits, when some guy stood up, [saying] “Oh, software security is like a fad, it’s kind of like a sine wave, it comes and it goes, and it’s just come again, but it is going to go again.” You don’t believe that?

Gligor: I don’t believe any of that, but I can explain where it came from …

McGraw: Please do. I’ve always wondered.

Gligor: In the early days the idea was that we could build security kernels, which were formally verified, in other words, the software security of this kernel was assured formally. But these kernels were very small. The thought was that if we had these small kernels that were formally verified, then the applications and the rest of the operating system software could have as many bugs as possible and it would do no damage to the system. Of course, I am exaggerating this view to make a point, but essentially that was the view; and that view, unfortunately, never materialized.

McGraw: It seems like there is a ripple of that view in this notion of security co-processors in laptops, etc.

Gligor: Well, security co-processors help to actually check things faster. They have helped to a significant extent to make security usable from a performance point of view. But, basically, doing all sorts of traces, and checking traces, and backing up computations essentially turns out to be a good band-aid.


The Silver Bullet Security Podcast can be played online or downloaded from Cigital.