Monthly Archives: March 2010

WS-I Publishes Basic Security Profile (BSP) 1.1

This morning the Web Services Interoperability Organization (WS-I) published the Basic Security Profile (BSP), version 1.1. This is a very significant milestone, because BSP is the definitive reference that we will all use to create secure and interoperable Web services for the foreseeable future.

I have a close personal connection to Basic Security Profile. I was one of the editors of both this specification and its predecessor, BSP 1.0. It took a lot of hard work to get this far, but the results of our working group’s labours are important for the community. We all use Web services because of their potential for interoperability, but interoperability is only practical with the formalization that WS-I offers.

People have questioned the need for BSP in a world that already has OASIS WS-Security and the handful of WS-* standards that relate to it. The truth is, formal standards like WS-Security are so broad, complex, and new that it just isn’t realistic to expect vendor implementations to integrate perfectly. The WS-I approach differs from conventional standards efforts because the focus is strictly on promoting much-needed interoperability. WS-I does not define new functionality, nor does it attempt to show people the “correct” way to use existing standards; it exists to profile existing standards by refining messages, amplifying important statements, and clarifying ambiguities so that systems from different vendors can communicate securely.

WS-I promotes interoperability by providing three important components: a profile of an existing standard (or set of standards), a suite of test tools to validate conformance, and finally sample applications. Microsoft’s Paul Cotton, the chair for the BSP working group, likens this to a three-legged stool—the effort can only stand when all three legs are present. This holistic approach distinguishes WS-I from most standards efforts by delivering not just a specification, but also the reference material needed to validate conformance to it.

In the case of BSP 1.1, a very important component of the effort was the vendor interop. Six companies participated in this:

This is your short list of the vendors who are serious about Web services security. Obviously, we are the odd man out in terms of size and market reach, but we believe it’s important to drive the standards, not just implement them. And this is something we will continue to do. The days of big WS-* standards efforts are over. With the release of BSP, we have what we need to secure SOA. The next big challenge will be standardization of the cloud, and Layer 7 will be there.


Upcoming Webinar: Security in the Cloud vs Security for the Cloud

I was speaking recently to Steve Coplan, Senior Analyst, Enterprise Security Practice at the 451 Group. I always enjoy talking to Steve. He has a deep understanding of technology and our business, but it’s his training as a journalist that I think sets him apart from the other analysts. His work comes through as erudite but accessible, and it is always very well written.

In our discussion, Steve was careful to make a clear distinction between security in the cloud and security for the cloud. This intrigued me, because I think the differences are too often lost when people talk about cloud in the abstract. Steve’s point became the topic of a webinar that he and I will deliver together this Thursday, March 25, 2010 at 12:00pm EDT/9:00am PDT/4:00pm GMT.

I hope you can join us to learn why this distinction is so important. You can sign up for this webinar at the Layer 7 Technologies web site.

Why Intermediaries Matter in SOA

Last week Joe McKendrick from ZDNet asked the question: are SOA anti-principles more important than success principles? The idea of anti-principles came from Steve Jones, who a few years back did some nice work documenting SOA anti-patterns. In a post published last fall, Steve builds on his ideas, observing:

The problem is that there is another concept that is rarely listed, what are your anti-principles?

which is one of those good questions that should give you pause.

Steve continues:

In the same way as Anti-Patterns give you pointers when its all gone wrong then Anti-Principles are the things that you will actively aim to avoid during the programme.

I found this interesting because one of the anti-principles the post lists is direct calling. Steve describes this bad practice as follows:

This anti-principle is all about where people just get a WSDL and consume it directly without any proxy or intermediary. It’s programme suicide and it shouldn’t be done.

Naturally, because I’m in the business of building intermediaries, this seemed perfectly reasonable to me. But on reflection, I think that the argument as to why direct calling is an anti-principle needs further explanation.

Indirection is one of the great principles of computer science. Indirection lets us decouple layers, allowing them to change independently as long as they honour the interface contract. Intermediary layers in SOA, a good example being a proxy like Layer 7’s SecureSpan Gateway, build on this concept, allowing architects to decouple service providers from consumers—much as Steve advocates in his post. This layer of indirection means that we can tease out certain highly parameterizable aspects of communication—aspects such as security, monitoring, protocol adaptation, routing, etc.—into a separate policy layer that promotes consistency, affords the opportunity for reuse, and insulates clients (and servers) from change.

This is best illustrated by example. Suppose I have two services: foo and bar. Both services have a number of clients that access them. To explore the issues with direct connection, let’s consider the scenario where all of these clients establish direct connections with my services:

The first thing you should notice is that the firewall is open to allow the external clients to make their direct connections with the service hosts. In other words, these hosts are, for all intents and purposes, deployed in the DMZ and must be hardened under that assumption. For many applications, this is a non-trivial exercise. Hopefully, your internal alarm bells are going off already.

Few applications remain completely static over their lifetime. Patches become necessary; hardware fails and must be replaced—all of this is part of the natural life cycle of software. My services foo and bar are no exception. One day, the server hosting foo starts to fail, and I find myself in the position that I need to quickly move the foo service onto a new host. Suddenly, all of my clients are broken:

Now I have a very significant problem. I need to update the URLs on every client I serve, and to do it quickly. Every minute I’m down, I’m losing business. Welcome to the pressure cooker.

This potential problem would be easy to manage if I had an intermediary, operating as a policy-driven proxy that is placed in the DMZ between my clients and my services:

This proxy now handles URL-based routing on the fly. If foo moves, it’s a simple matter of modifying the internal routing on the intermediary and voila: no client ever has a problem. My client base is completely insulated from a major structural change to my back end service hosts.
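
To make this concrete, here is a minimal sketch of a policy-driven routing table, written in Python rather than in any vendor’s policy language; the paths and internal host addresses are hypothetical. Clients only ever address the intermediary, so relocating foo amounts to changing a single entry:

    # Minimal sketch of intermediary routing (illustrative only).
    # Clients always call the proxy; back-end services can move freely.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Routing table owned by the intermediary. When the host for foo fails,
    # only this one entry changes; no client configuration is touched.
    ROUTES = {
        "/foo": "http://10.0.1.21:8080/foo",   # hypothetical replacement host
        "/bar": "http://10.0.1.12:8080/bar",
    }

    class RoutingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = ROUTES.get(self.path)
            if backend is None:
                self.send_error(404, "No route configured for " + self.path)
                return
            with urlopen(backend) as resp:       # forward to the real service
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), RoutingProxy).serve_forever()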

Of course there are tricks we could use employing HTTP redirects, common NAT, or more dynamic bindings to URLs to avoid such a contrived problem in the first place. But what if the change was to something less configurable using conventional means, such as the basic security model for communication? Suppose that, as a corporation, we decide to mandate that all clients must now authenticate using client-side certificates under SSL. foo runs on a Java application server and bar on .NET; both are capable of accommodating this new model, but their administration is radically different. And to make matters worse, I have a dozen or so additional apps implemented in everything from Ruby on Rails to PHP that I also need to change. That’s a lot of work.

An intermediary would make this task trivial by insulating services from this change to policy. The strategy here is to terminate the SSL connection and authenticate the client on the intermediary instead of on the service hosts. A few clicks of a mouse later, and my job is complete for every service.
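
As a rough sketch of what terminating SSL and authenticating clients at the intermediary can look like, here is the same idea in plain Python rather than a gateway policy; the certificate file names are hypothetical. Note that foo and bar never see the new requirement:

    # Sketch: the intermediary terminates SSL/TLS and requires a client
    # certificate; the back-end services are untouched by the new mandate.
    import ssl
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FrontDoor(BaseHTTPRequestHandler):
        def do_GET(self):
            # Reaching this handler means the TLS handshake has already
            # verified the client's certificate against our trusted CA.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"authenticated; request would now be routed internally\n")

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("proxy-cert.pem", "proxy-key.pem")   # hypothetical files
    context.load_verify_locations("trusted-client-ca.pem")       # hypothetical CA bundle
    context.verify_mode = ssl.CERT_REQUIRED                      # mandate client certificates

    server = HTTPServer(("0.0.0.0", 443), FrontDoor)
    server.socket = context.wrap_socket(server.socket, server_side=True)
    server.serve_forever()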

This certainly saves time and adds consistency, but the real value is in the change of responsibility. The task of implementing this security model now falls under the jurisdiction of a professional security administrator, not the developers of each separate application. In fact, no code or configuration needs to change on foo, bar, or any of my services. The security model is decoupled from the application, taken out of the hands of each developer and centralized. This is the basic value proposition of intermediaries in SOA, and this value is never realized effectively if you allow direct connections between clients and servers. This is why architectural patterns are sometimes necessary to allow us to be consistent with our principles—or our anti-principles, as the case may be.

Interested in trying an intermediary? You can get a Layer 7 SecureSpan virtual appliance to try out at http://www.layer7tech.com. Alternatively, do your evaluation completely in the cloud: check out the SecureSpan virtual appliance gateway on the Amazon marketplace. This virtual appliance AMI runs in the EC2 cloud on Amazon Web Services. It is the first and only SOA gateway to run in the cloud.

REST Security Does Exist—You Just Need To Apply It

On the eve of the RSA conference this year, Chris Comerford and Pete Soderling published a provocative article in Computerworld titled Why REST security doesn’t exist. It’s a prelude to a talk the authors are delivering at the conference. Their premise is that while good REST security best practices do indeed exist, developers just don’t seem to follow them.

Comerford and Soderling attribute this state of affairs to a combination of two things. First, REST lacks a well-articulated security model. Few would argue with this—REST, by virtue of its grassroots origins, suffers from a just-do-it-like-the-web nonchalance about security that has certainly done it no favors.

The second issue concerns developers who tend to rush implementation without giving due consideration to security. Truthfully, this is the story of security across all of IT, but I might suggest that with REST, the problem is especially acute. The REST style owes much of its popularity to being simple and fast to implement, particularly when faced with the interest-crushing complexity and tooling demands of the WS-* stack. It’s reasonable to think that in the enthusiastic dash to cross the working-application finish line, security is conveniently de-emphasized or forgotten altogether.

REST, of course, can be secured, and the authors offer sound advice to accomplish this deceptively simple task. They recommend that API developers:

  • “Do employ the same security mechanisms for your APIs as any web application your organization deploys. For example, if you are filtering for XSS on the web front-end, you must do it for your APIs, preferably with the same tools.
  • Don’t roll your own security. Use a framework or existing library that has been peer-reviewed and tested. Developers not familiar with designing secure systems often produce flawed security implementations when they try to do it themselves, and they leave their APIs vulnerable to attack.
  • Unless your API is a free, read-only public API, don’t use single key-based authentication. It’s not enough. Add a password requirement.
  • Don’t pass unencrypted static keys. If you’re using HTTP Basic and sending it across the wire, encrypt it.
  • Ideally, use hash-based message authentication code (HMAC) because it’s the most secure. (Use SHA-2 and up, avoid SHA & MD5 because of vulnerabilities.)”
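
The HMAC recommendation in particular is easy to show in code. The sketch below signs a request with HMAC-SHA256 and verifies it on the receiving side; the header names, message layout, and shared-secret handling are illustrative assumptions rather than any standard scheme:

    # Sketch of HMAC request signing with SHA-256: the client signs the request,
    # and the server recomputes the signature and compares it in constant time.
    import hashlib, hmac, time

    SHARED_SECRET = b"per-client secret provisioned out of band"   # illustrative only

    def sign(method, path, body, timestamp):
        message = ("%s\n%s\n%s\n" % (method, path, timestamp)).encode() + body
        return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

    # Client side: attach a timestamp and signature to the outgoing request.
    ts = str(int(time.time()))
    body = b'{"order": 42}'
    headers = {"X-Timestamp": ts, "X-Signature": sign("POST", "/orders", body, ts)}

    # Server side: reject stale timestamps (limits replay), then verify the signature.
    def verify(method, path, body, headers, max_skew=300):
        ts = headers.get("X-Timestamp", "0")
        if abs(time.time() - int(ts)) > max_skew:
            return False
        expected = sign(method, path, body, ts)
        return hmac.compare_digest(expected, headers.get("X-Signature", ""))

    print(verify("POST", "/orders", body, headers))   # True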

I agree with this advice. And just to demonstrate how easy it is to implement, I’ve constructed a simple policy for the Layer 7 Technologies SecureSpan Gateway demonstrating their directives:

In this policy, I’m ensuring that the REST client uses SSL for three things: confidentiality, integrity, and server authentication. I could require client-side certificate authentication here, but instead I’m using HTTP digest to underscore the requirement to avoid plaintext HTTP basic or simple user keys. I’m also authorizing based on group membership, restricting access to members of the sales group.

Finally, I’ve added a scan for cross site scripting attacks.

In the interest of deeper vigilance, I’m also searching for PHP and shell injection signatures. This is admittedly broad, but it covers me in case the developer of the service changes implementation without warning.
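
As a rough illustration of what signature-based scanning involves, here is a greatly simplified sketch in Python. The patterns are placeholders for demonstration only; a production gateway applies far more extensive and carefully curated signature sets:

    # Greatly simplified signature scan of an incoming request body.
    import re

    SIGNATURES = {
        "xss":             re.compile(r"<\s*script\b|javascript:", re.IGNORECASE),
        "php_injection":   re.compile(r"<\?php|\bphp://", re.IGNORECASE),
        "shell_injection": re.compile(r";\s*(rm|cat|wget|curl)\b|`[^`]*`"),
    }

    def scan(body):
        """Return the names of any threat signatures that match the body."""
        return [name for name, pattern in SIGNATURES.items() if pattern.search(body)]

    print(scan("<script>document.cookie</script>"))     # ['xss']
    print(scan('{"status": "shipped", "order": 42}'))   # []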

This last point—that there is an explicit separation made between developers and the security administrators writing and enforcing policy—is an important one. Developers will be developers: some will be rigorous about implementing security best practices; others won’t be. The only way to manage this is to assume a defensive posture in service policy, both from the perspective of incoming transactions and around the services themselves. The best practice here is to externalize policy enforcement and assign dedicated security professionals to administer policy.

This defensive approach to securing REST services fits well with the spirit of Comerford and Soderling’s directives. It addresses, in particular, their point about leveraging peer-reviewed frameworks. This is precisely what Layer 7’s SecureSpan Gateway is—a peer-reviewed security framework offering great depth of functionality. SecureSpan is undergoing Common Criteria review of its implementation, as well as of the entire development process for the product. We’re certifying to EAL4+, which is particularly rigorous. This provides assurance that the technology is sufficiently robust for deployment at the highest levels of the military and government. Common Criteria is an arduous process, and going through it demonstrates Layer 7’s deep commitment to security. You should never consider a security gateway—for REST or for XML messaging—that isn’t undergoing Common Criteria evaluation. Remember, Common Criteria is a necessary stamp of approval for governments around the world; it should also be a basic requirement for you.

Try SecureSpan yourself, and see how you can implement robust application security and monitoring without changing code. Download an evaluation of the SecureSpan virtual appliance right here.

The Seven Deadly Sins: The Cloud Security Alliance Identifies Top Cloud Security Threats

Today marks the beginning of the RSA conference in San Francisco, and the Cloud Security Alliance (CSA) has been quick out of the gate with the release of its Top Threats to Cloud Computing Report. This peer-reviewed paper characterizes the top seven threats to cloud computing, offering examples and remediation steps.

The seven threats identified by the CSA are:

  1. Abuse and Nefarious Use of Cloud Computing
  2. Insecure Application Programming Interfaces
  3. Malicious Insiders
  4. Shared Technology Vulnerabilities
  5. Data Loss/Leakage
  6. Account, Service, and Traffic Hijacking
  7. Unknown Risk Profile

Some of these will certainly sound familiar, but the point is to highlight threats that may be amplified in the cloud, as well as those that are unique to the cloud environment.

This CSA threats report is a true community effort. The working group had representatives from a broad range of cloud providers, infrastructure vendors, and cloud customers, including:

  • HP
  • Oracle
  • Bank of America
  • Microsoft
  • Rackspace
  • Verizon
  • Cigital
  • Qualys
  • Trend Micro
  • Websense
  • Zscaler
  • CloudSecurity.org
  • Cloud Security Alliance
  • Layer 7 Technologies

I represented Layer 7. I tackled Data Loss/Leakage, and did some editorial work on the paper as a whole. As working groups go, I can tell you that this one simply worked well. I’ve been involved with a number of standards groups in the past, but this time we seemed to have all of the right people involved. The group converged on the key issues quickly and decisively. It was a good process, and I’m happy with the results.

One thing we did debate was how best to rate each threat. We finally agreed that the best approach was to let the community decide. You may recall that last week I wrote a blog entry soliciting your input to help classify threat severity. Well, the results are in, and they are certainly interesting. Perhaps not surprisingly, the threat of Data Loss/Leakage leads the community’s list of concerns, at around 28%. What is more intriguing is that there isn’t much difference between the perceived impact of any of the threats on the list (all fall between roughly 8% and 28%). This is encouraging, as it suggests that we nailed the current zeitgeist in our list. It is just a little disconcerting that there remain seven significant threats to consider.

The latest survey results, and the threats paper itself, are available from the CSA web site. Bear in mind that this is evolving work. The working group intends to update the list regularly, so if you would like to make a contribution to the cloud community, please do get involved. And remember: CSA membership is free to individuals; all you need to give us is your time and expertise.