Category Archives: Layer 7 Technologies

The Swimming Pool Model of Public and Private Clouds

This morning, I recorded a podcast with Keith Shaw from Network World. Our discussion was about the 5 mistakes people make when moving out into the cloud. The podcast should be available next week, but in the meantime, I thought I would share a nice analogy that Keith came up with illustrating the difference between public and private clouds.

Clouds are like swimming pools. Private clouds are like a pool in your backyard. Every pool has a fence for reasons of practicality and liability. Since this is your pool, you get to decide who is allowed to go for a dip. Sometimes there is only one person in the pool; sometimes there are ten—but anybody going for a swim is your responsibility. Each day you add chlorine and keep up with the cleaning. But more likely, you hire someone to do this for you.

Public clouds are like public pools. Someone else—probably the city—builds the pool and maintains it. Anyone who can pay the admission is welcome, as long as they agree to follow a few simple rules. There are lifeguards to watch over you and your kids, and you trust that the pool management has checked them out to make sure they are trustworthy and possess the proper credentials. Often the public pool is crowded, and there is this annoying fat kid who keeps doing cannonballs close to where you are swimming, but overall it provides good value. True, once you came home with a strange itch, but the local public pool is certainly cheaper and a lot less work than maintaining your own.

It’s just too bad they don’t serve daiquiris.

XML Acceleration Using Virtual Appliances

I have never met Gordon Moore. But his law? Well, Moore’s Law is my best friend. Throughout my career, it has helped make my work run faster. On more than one occasion, I think it may have saved my job. I like Moore’s Law a lot.

Moore’s Law has had a big impact on XML acceleration. XML processing—specifically schema validation, XSLT transformation, and XPath query—is one of those problems that lends itself well to acceleration using specialized silicon. Tarari (now a part of LSI) is the leader in developing specialized chip sets for accelerating basic XML functions. We’ve leveraged their technology for years here at Layer 7 for our hardware appliance line, and we will continue to do so in the future. But like all specialized chip designers, Tarari’s engineers are engaged in a protracted battle with the ever-increasing capacity of general-purpose processors. Moore’s Law pursues them at a relentless pace, pushing Tarari toward the breakthroughs that leave general-purpose CPUs far behind each time those CPUs begin to nip at Tarari’s heels.

Silicon, however, is not the only approach to accelerating XML. Tremendous gains in XML processing can also be realized using highly optimized, pure-software algorithms that run on generic CPUs. These, of course, effortlessly ride the wave of Moore’s Law. We use such algorithms at Layer 7 to provide very real XML acceleration in our virtual appliances, which obviously have no access to dedicated acceleration boards.
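To give a feel for what software-side optimization looks like (purely illustrative, and not Layer 7’s proprietary algorithms), one of the simplest wins is to compile schemas and stylesheets once and reuse the thread-safe compiled forms for every message, rather than re-parsing them on each request. A minimal Java sketch, with placeholder file names:

    import javax.xml.XMLConstants;
    import javax.xml.transform.Templates;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import java.io.File;
    import java.io.StringWriter;

    // Illustrative sketch: compile the schema and stylesheet once, then reuse the
    // thread-safe compiled forms (Schema, Templates) for every message processed.
    public class CompiledXmlPipeline {
        private final Schema schema;       // compiled once, reused for every validation
        private final Templates templates; // compiled once, reused for every transform

        public CompiledXmlPipeline(File xsd, File xslt) throws Exception {
            schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI).newSchema(xsd);
            templates = TransformerFactory.newInstance().newTemplates(new StreamSource(xslt));
        }

        public String process(File message) throws Exception {
            schema.newValidator().validate(new StreamSource(message));   // schema validation
            StringWriter out = new StringWriter();
            templates.newTransformer().transform(new StreamSource(message), new StreamResult(out)); // XSLT
            return out.toString();
        }
    }

Reusing compiled artifacts, favouring streaming input, and avoiding unnecessary intermediate document trees are the kinds of gains that ride Moore’s Law for free.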

The fact that virtual appliances can accelerate XML processing is often missed. I was reminded of this when reading Joe McKendrick’s latest blog entry, The Case for Considering XML Appliances. Joe’s entry builds off a recent piece published by Thomas Rischbeck from IPT concerning SOA intermediaries (I’m particularly fond of the last diagram in Thomas’ article). As with all of Joe’s work, his article today is very perceptive, but I disagree with one statement he makes:

And, as is the case with appliances these days, they also come in virtual form as well. The only catch is that virtual XML appliances cannot provide XML acceleration.

I see XML acceleration as a continuum like this:

Virtual appliances may not be as fast as hardware appliances for XML acceleration, but they do accelerate processing over conventional approaches. And one of the value propositions of virtual appliances is that they provide a simple means to scale horizontally (in clouds, or in conventional virtualization farms) instead of vertically. Moore’s Law is their friend too.

Layer 7 Technologies is the only SOA Gateway vendor that offers a product line that features both hardware and virtual appliances. You can buy the hardware SecureSpan Gateway appliance that includes silicon for XML acceleration. Or you can buy the virtualized SecureSpan Gateway appliance that includes our highly tuned algorithms for XML acceleration. These products offer identical functionality—so choose the one that makes the most sense in your architecture.

And don’t ever let a form factor dictate your architecture.

All Things Considered About Cloud Computing Risks and Challenges

Last month during the RSA show, I met with Rob Westervelt from ITKnowledgeExchange in the Starbucks across from Moscone Center. Rob recorded our discussion about the challenges of security in the cloud and turned this into a podcast. I’m quite pleased with the results. You can pick up a little Miles Davis in the background, the odd note of an espresso being drawn. Alison thinks that I sound very NPR. Having been raised on CBC Radio, I take this as a great compliment.

Pour yourself a coffee and have a listen.

WS-I Publishes Basic Security Profile (BSP) 1.1

This morning the Web Services Interoperability Organization (WS-I) published the Basic Security Profile (BSP), version 1.1. This is a very significant milestone, because BSP is the definitive reference that we will all use to create secure and interoperable Web services for the foreseeable future.

I have a close personal connection to Basic Security Profile. I was one of the editors of both this specification and its predecessor, BSP 1.0. It took a lot of hard work to get this far, but the results of our working group’s labours are important for the community. We all use Web services because of their potential for interoperability, but interoperability is only practical with the formalization that WS-I offers.

People have questioned the need for BSP in a world that already has OASIS WS-Security and the handful of WS-* standards that relate to it. The truth is, formal standards like WS-Security are so broad, complex, and new that it just isn’t realistic to expect vendor implementations to integrate perfectly. The WS-I approach differs from conventional standards efforts because the focus is strictly on promoting much needed interoperability. WS-I does not define new functionality, nor does it attempt to show people the “correct” way to use existing standards; it exists to profile existing standards by refining messages, amplifying important statements, and clarifying ambiguities so that systems from different vendors can communicate securely.

WS-I promotes interoperability by providing three important components: a profile of an existing standard (or set of standards), a suite of test tools to validate conformance, and finally sample applications. Microsoft’s Paul Cotton, the chair for the BSP working group, likens this to a three-legged stool—the effort can only stand when all three legs are present. This holistic approach taken by the WS-I distinguishes it from most standards efforts by including both specification and reference.

In the case of BSP 1.1, a very important component of the effort was the vendor interop. Six companies participated in this:

This is your short list of the vendors who are serious about Web services security. Obviously, we are the odd man out in terms of size and market reach, but we believe it’s important to drive the standards, not just implement them. And this is something we will continue to do. The days of big WS-* standards efforts are over. With the release of BSP, we have what we need to secure SOA. The next big challenge will be standardization of the cloud, and Layer 7 will be there.

Upcoming Webinar: Security in the Cloud vs Security for the Cloud

I was speaking recently to Steve Coplan, Senior Analyst, Enterprise Security Practice at the 451 Group. I always enjoy talking to Steve. He has a deep understanding of technology and our business, but it’s his training as a journalist that I think sets him apart from the other analysts. His work comes through as erudite but accessible, and it is always very well written.

In our discussion, Steve was careful to make a clear distinction between security in the cloud and security for the cloud. This intrigued me, because I think the differences are too often lost when people talk about cloud in the abstract. Steve’s point became the topic of a webinar that he and I will deliver together this Thursday, March 25, 2010 at 12:00pm EDT/9:00am PDT/4:00pm GMT.

I hope you can join us to learn why this distinction is so important. You can sign up for this webinar at the Layer 7 Technologies web site.

Why Intermediaries Matter in SOA

Last week Joe McKendrick from ZDNet asked the question: are SOA anti-principles more important than success principles? The idea of anti-principles came from Steve Jones, who a few years back did some nice work documenting SOA anti-patterns. In a post published last fall, Steve builds on his ideas, observing:

The problem is that there is another concept that is rarely listed, what are your anti-principles?

which is one of those good questions that should give you pause.

Steve continues:

In the same way as Anti-Patterns give you pointers when its all gone wrong then Anti-Principles are the things that you will actively aim to avoid during the programme.

I found this interesting because one of the anti-principles the post lists is direct calling. Steve describes this bad practice as follows:

This anti-principle is all about where people just get a WSDL and consume it directly without any proxy or intermediary. It’s programme suicide and it shouldn’t be done.

Naturally, because I’m in the business of building intermediaries, this seemed perfectly reasonable to me. But on reflection, I think that the argument as to why direct calling is an anti-principle needs further explanation.

Indirection is one of the great principles of computer science. Indirection lets us decouple layers, allowing these to change independently as long as they honour the interface contract. Intermediary layers in SOA, a good example being a proxy like Layer 7’s SecureSpan Gateway, build on this concept, allowing architects to decouple service providers from consumers—much as Steve advocates in his post. This layer of indirection means that we can tease out certain highly parameterizable aspects of communication—aspects such as security, monitoring, protocol adaptation, routing, etc.—into a separate policy layer that promotes consistency, affords the opportunity for reuse, and insulates clients (and servers) from change.

This is best illustrated by example. Suppose I have two services: foo and bar. Both services have a number of clients that access them. To explore the issues with direct connection, let’s consider the scenario where all of these clients establish direct connections with my services:

The first thing you should notice is that the firewall is open to allow the external clients to make their direct connections with the service hosts. In other words, these hosts are, for all intents and purposes, deployed in the DMZ and must be hardened under that assumption. For many applications, this is a non-trivial exercise. Hopefully, your internal alarm bells are going off already.

Few applications remain completely static over their lifetime. Patches become necessary; hardware fails and must be replaced—all of this is part of the natural life cycle of software. My services foo and bar are no exception. One day, the server hosting foo starts to fail, and I find myself in the position that I need to quickly move the foo service onto a new host. Suddenly, all of my clients are broken:

Now I have a very significant problem. I need to update the URLs on every client I serve, and to do it quickly. Every minute I’m down, I’m losing business. Welcome to the pressure cooker.

This potential problem would be easy to manage if I had an intermediary, operating as a policy-driven proxy that is placed in the DMZ between my clients and my services:

This proxy now handles URL-based routing on the fly. If foo moves, it’s a simple matter of modifying the internal routing on the intermediary and voilà: no client ever has a problem. My client base is completely insulated from a major structural change to my back-end service hosts.
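As a rough sketch of the idea (the idea only; this is not SecureSpan’s configuration model), the heart of such an intermediary is a routing table that maps stable, client-facing paths to whatever back-end location currently hosts each service:

    import java.net.URI;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of the routing idea only: clients always call the gateway, and the
    // gateway resolves the real service location from a table it controls.
    public class RoutingTable {
        private final Map<String, URI> routes = new ConcurrentHashMap<>();

        public void register(String inboundPath, URI backend) {
            routes.put(inboundPath, backend);
        }

        public URI resolve(String inboundPath) {
            URI backend = routes.get(inboundPath);
            if (backend == null) {
                throw new IllegalArgumentException("No route for " + inboundPath);
            }
            return backend;
        }

        public static void main(String[] args) {
            RoutingTable table = new RoutingTable();
            table.register("/foo", URI.create("http://host-a.internal:8080/foo"));
            table.register("/bar", URI.create("http://host-b.internal:8080/bar"));

            // host-a fails: only the gateway's route changes; client URLs stay the same
            table.register("/foo", URI.create("http://host-c.internal:8080/foo"));
            System.out.println("/foo now routes to " + table.resolve("/foo"));
        }
    }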

Of course there are tricks we could use employing HTTP redirects, common NAT, or more dynamic bindings to URLs to avoid such a contrived problem in the first place. But what if the change was to something less configurable using conventional means, such as the basic security model for communication? Suppose that as a corporation, we decide to mandate that all clients must now authenticate using client-side certificates under SSL? Foo is running on a Java application server, bar is on .NET; both are capable of accommodating this new model, but their administration is radically different. And to make matters worse, I have a dozen or so additional apps implemented in everything from Ruby on Rails to PHP that I also need to change. That’s a lot of work.

An intermediary would make this task trivial by insulating services from this change to policy. The strategy here is to terminate the SSL connection and authenticate the client on the intermediary instead of on the service hosts. A few clicks of a mouse later, and my job is complete for every service.
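To appreciate what is being centralized, consider roughly what mandatory client-certificate authentication looks like when a single endpoint has to implement it itself at the socket level. The Java sketch below uses placeholder keystore names and passwords, and it stands in for the general mechanism rather than for how SecureSpan is configured:

    import javax.net.ssl.*;
    import java.io.FileInputStream;
    import java.security.KeyStore;

    // Hypothetical endpoint that terminates TLS and demands client certificates.
    // Keystore file names and the password are placeholders.
    public class MutualTlsEndpoint {
        public static void main(String[] args) throws Exception {
            char[] password = "changeit".toCharArray();

            // The server's own identity (certificate presented to clients)
            KeyStore identity = KeyStore.getInstance("PKCS12");
            identity.load(new FileInputStream("service-identity.p12"), password);
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(identity, password);

            // The CAs trusted to issue client certificates
            KeyStore trusted = KeyStore.getInstance("JKS");
            trusted.load(new FileInputStream("trusted-clients.jks"), password);
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trusted);

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

            SSLServerSocket listener = (SSLServerSocket) ctx.getServerSocketFactory().createServerSocket(8443);
            listener.setNeedClientAuth(true); // mutual TLS is enforced here

            try (SSLSocket client = (SSLSocket) listener.accept()) {
                client.startHandshake();
                System.out.println("Authenticated client: " + client.getSession().getPeerPrincipal().getName());
                // ...hand the request off to the application over an internal connection
            }
        }
    }

Multiply that by every service, framework, and language in the portfolio, and the appeal of doing it once at the intermediary becomes obvious.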

This certainly saves time and adds consistency, but the real value is in the change of responsibility. The task of implementing this security model now falls under the jurisdiction of a professional security administrator, not the developers of each separate application. In fact, no code or configuration needs to change on foo, bar, or any of my services. The security model is decoupled from the application, taken out of the hands of each developer and centralized. This is the basic value proposition of intermediaries in SOA, and this value is never realized effectively if you allow direct connections between clients and servers. This is why architectural patterns are sometimes necessary to allow us to be consistent with our principles—or our anti-principles, as the case may be.

Interested in trying an intermediary? You can get a Layer 7 SecureSpan virtual appliance to try out at http://www.layer7tech.com. Alternatively, do your evaluation completely in the cloud. Check out the SecureSpan virtual appliance gateway on the Amazon marketplace. This virtual appliance AMI runs in the EC2 cloud on Amazon Web Services. It is the first and only SOA gateway to run in the cloud.

REST Security Does Exist—You Just Need To Apply It

On the eve of the RSA conference this year, Chris Comerford and Pete Soderling published a provocative article in Computerworld titled Why REST security doesn’t exist. It’s a prelude to a talk the authors are delivering at the conference. Their premise is that while good REST security best practices do indeed exist, developers just don’t seem to follow them.

Comerford and Soderling attribute this state of affairs to a combination of two things. First, REST lacks a well-articulated security model. Few would argue with this—REST, by virtue of its grassroots origins, suffers from a just-do-it-like-the-web nonchalance about security that has certainly done it no favors.

The second issue concerns developers who tend to rush implementation without giving due consideration to security. Truthfully, this is the story of security across all of IT, but I might suggest that with REST the problem is especially acute. The REST style owes much of its popularity to being simple and fast to implement, particularly when compared with the interest-crushing complexity and tooling demands of the WS-* stack. It’s reasonable to think that in the enthusiastic dash to cross the working-application finish line, security is conveniently de-emphasized or forgotten altogether.

REST, of course, can be secured, and the authors offer sound advice for accomplishing this deceptively simple task. They recommend that API developers:

  • “Do employ the same security mechanisms for your APIs as any web application your organization deploys. For example, if you are filtering for XSS on the web front-end, you must do it for your APIs, preferably with the same tools.
  • Don’t roll your own security. Use a framework or existing library that has been peer-reviewed and tested. Developers not familiar with designing secure systems often produce flawed security implementations when they try to do it themselves, and they leave their APIs vulnerable to attack.
  • Unless your API is a free, read-only public API, don’t use single key-based authentication. It’s not enough. Add a password requirement.
  • Don’t pass unencrypted static keys. If you’re using HTTP Basic and sending it across the wire, encrypt it.
  • Ideally, use hash-based message authentication code (HMAC) because it’s the most secure. (Use SHA-2 and up, avoid SHA & MD5 because of vulnerabilities.)”
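The HMAC recommendation in particular costs very little to follow, because the primitive is built into standard libraries. Below is a minimal Java sketch of HMAC-SHA256 request signing; the string-to-sign, the header name, and the secret handling are illustrative assumptions rather than any particular API’s scheme:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Minimal HMAC-SHA256 request signing. The fields in the string-to-sign and
    // the X-Signature header name are illustrative assumptions.
    public class RequestSigner {
        public static String sign(String secret, String method, String path, String timestamp) throws Exception {
            String stringToSign = method + "\n" + path + "\n" + timestamp;
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] digest = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        }

        public static void main(String[] args) throws Exception {
            String signature = sign("shared-secret", "GET", "/sales/orders/42", "2010-03-01T12:00:00Z");
            // The client sends the signature and timestamp with the request; the server
            // recomputes the HMAC over the same fields and compares the two values.
            System.out.println("X-Signature: " + signature);
        }
    }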

I agree with this advice. And just to demonstrate how easy it is to implement, I’ve constructed a simple policy for the Layer 7 Technologies SecureSpan Gateway demonstrating their directives:

In this policy, I’m ensuring that the REST client is using SSL for three things: confidentiality, integrity, and server authentication. I could require client-side certificate authentication, but instead I’m using HTTP digest to emphasize the requirement to avoid plain-text HTTP basic or simple user keys. I’m also authorizing access based on group membership, restricting it to members of the sales group.

Finally, I’ve added a scan for cross-site scripting attacks.

In the interest of deeper vigilance, I’m also searching for PHP and shell injection signatures. This is admittedly broad, but it covers me in case the developer of the service changes implementation without warning.
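For a sense of what a signature scan amounts to, here is a small illustrative sketch. These few patterns are examples only; real rule sets are far larger, and they are maintained as policy by security staff rather than hard-coded by the developers of each service:

    import java.util.List;
    import java.util.regex.Pattern;

    // Illustrative injection signatures only; real threat-protection rule sets are
    // far more extensive and evolve over time.
    public class InjectionScanner {
        private static final List<Pattern> SIGNATURES = List.of(
            Pattern.compile("(?i)<\\s*script\\b"),                    // basic cross-site scripting probe
            Pattern.compile("(?i)<\\?php"),                           // PHP code injection
            Pattern.compile("(?i)[;&|`]\\s*(cat|rm|wget|curl)\\b")    // shell command injection
        );

        public static boolean looksMalicious(String payload) {
            return SIGNATURES.stream().anyMatch(p -> p.matcher(payload).find());
        }

        public static void main(String[] args) {
            System.out.println(looksMalicious("{\"name\":\"<script>alert(1)</script>\"}")); // true
            System.out.println(looksMalicious("{\"name\":\"Alice\"}"));                     // false
        }
    }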

This last point—that there is an explicit separation made between developers and the security administrators writing and enforcing policy—is an important one. Developers will be developers: some will be rigorous about implementing security best practices; others won’t be. The only way to manage this is to assume a defensive posture in service policy, both from the perspective of incoming transactions and around the services themselves. The best practice here is to externalize policy enforcement and assign dedicated security professionals to administer policy.

This defensive approach to securing REST services fits well with the spirit of Comerford and Soderling’s directives. It addresses, in particular, their point about leveraging peer-reviewed frameworks. This is precisely what Layer 7’s SecureSpan Gateway is—a peer-reviewed security framework offering great depth of functionality. SecureSpan is undergoing Common Criteria review of both its implementation and the entire development process behind the product. We’re certifying to EAL4+, which is particularly rigorous. This provides assurance that the technology is sufficiently robust for deployment at the highest levels of the military and government. Common Criteria is an arduous process, and going through it demonstrates Layer 7’s deep commitment to security. You should never consider a security gateway—for REST or for XML messaging—that isn’t undergoing Common Criteria evaluation. Remember, Common Criteria is a necessary stamp of approval for governments around the world; it should be a basic requirement for you as well.

Try SecureSpan yourself, and see how you can implement robust application security and monitoring without changing code. Download an evaluation of the SecureSpan virtual appliance right here.

The Seven Deadly Sins: The Cloud Security Alliance Identifies Top Cloud Security Threats

Today marks the beginning of the RSA conference in San Francisco, and the Cloud Security Alliance (CSA) has been quick out of the gate with the release of its Top Threats to Cloud Computing Report. This peer-reviewed paper characterizes the top seven threats to cloud computing, offering examples and remediation steps.

The seven threats identified by the CSA are:

  1. Abuse and Nefarious Use of Cloud Computing
  2. Insecure Application Programming Interfaces
  3. Malicious Insiders
  4. Shared Technology Vulnerabilities
  5. Data Loss/Leakage
  6. Account, Service, and Traffic Hijacking
  7. Unknown Risk Profile

Some of these will certainly sound familiar, but the point is to highlight threats that may be amplified in the cloud, as well as those that are unique to the cloud environment.

This CSA threats report is a true community effort. The working group had representatives from a broad range of cloud providers, infrastructure vendors, and cloud customers, including:

  • HP
  • Oracle
  • Bank of America
  • Microsoft
  • Rackspace
  • Verizon
  • Cigital
  • Qualys
  • Trend Micro
  • Websense
  • Zscaler
  • CloudSecurity.org
  • Cloud Security Alliance
  • Layer 7 Technologies

I represented Layer 7. I tackled Data Loss/Leakage, and did some editorial work on the paper as a whole. As working groups go, I can tell you that this one simply worked well. I’ve been involved with a number of standards groups in the past, but this time we seemed to have all of the right people involved. The group converged on the key issues quickly and decisively. It was a good process, and I’m happy with the results.

One thing we did debate was how best to rate each threat. We finally agreed that the best approach was to let the community decide. You may recall that last week I wrote a blog entry soliciting your input to help classify threat severity. Well, the results are in, and they are certainly interesting. Perhaps not surprisingly, the threat of Data Loss/Leakage leads the community’s list of concerns, at around 28%. But what is more intriguing is that there isn’t much difference between the perceived impact of any of the threats on the list (all fall roughly between 8% and 28%). This is encouraging, as it suggests that our list captured the current zeitgeist. It is just a little disconcerting that there remain seven significant threats to consider.

The latest survey results, and the threats paper itself, are available from the CSA web site. Bear in mind that this is an evolving work. The working group intends to update the list regularly, so if you would like to make a contribution to the cloud community, please do get involved. And remember: CSA membership is free to individuals; all you need to give us is your time and expertise.

You Can Help the Cloud Security Alliance Classify the Top Threats in the Cloud

The Cloud Security Alliance (CSA) needs your help to better understand the risk associated with cloud threats. Earlier this year, the CSA convened a working group with the mandate to identify the top threats in the cloud. This group brought together a diverse set of security and cloud experts, including myself representing Layer 7. Our group identified seven major threats that exist in the cloud, but now we would like to gauge how the community as a whole perceives the risk these threats pose.

I would like to invite you to participate in a short survey so we can get your input. This should only take you about 5 minutes to complete. We intend to work the results of this survey into the CSA Top Threats to Cloud Computing document. This will be formally unveiled at the Cloud Security Alliance Summit, which is part of next week’s RSA conference in San Francisco.

Help us to make the cloud a safer place by identifying and characterizing its greatest threats. Share this survey link with your colleagues. The more participation we can get, the better our results will be, and the stronger the work will become.

You will find our survey here.

The Revolution Will Not Be Televised

Technology loves a good fad. Agile development, Web 2.0, patterns, Web services, XML, SOA, and now the cloud—I’ve lived through so many of these I’m beginning to lose track. And truth be told, I’ve jumped on my fair share of bandwagons. But one thing I have learned is that successful technologies move at their own incremental pace, independent of the hype cycle around them. Two well-known commentators, Eric Knorr from InfoWorld and David Linthicum from Blue Mountain Labs, both made posts this week suggesting that this may be the case for cloud computing.

Eric Knorr, in his piece Cloud computing gets a (little) more real, writes:

The business driver for the private cloud is clear: Management wants to press a button and get what it needs, so that IT becomes a kind of service vend-o-matic. The transformation required to deliver on that promise seems absolutely immense to me. While commercial cloud service providers have the luxury of a single service focus, a full private cloud has an entire catalogue to account for — with all the collaboration and governance issues that stopped SOA (service-oriented architecture) in its tracks.

I agree with Eric’s comment about SOA, as long as you interpret this as “big SOA”. The big bang, starting-Monday-everything-is-SOA approach certainly did fail—and in hindsight, this shouldn’t be surprising. SOA, like cloud computing, cuts hard across fiefdoms and challenges existing order. If you move too fast, if your approach is too draconian, of course you will fail. In contrast, if you manage SOA incrementally, continuously building trust and earning mindshare, then SOA will indeed work.

Successful cloud computing will follow the incremental pattern. It just isn’t reasonable to believe that if you build a cloud, they will come—and all at once, as Eric contends. We have not designed our mission critical applications for cloud deployment. Moreover, our people and our processes may not be ready for cloud deployment. Like the applications, these too can change; but this is a journey, not a destination.

Private clouds represent an opportunity for orderly transition. Some would argue that private clouds are not really clouds at all, but I think this view overemphasizes public accessibility at the expense of the technical and operational innovations that better characterize the cloud. Private clouds are important and necessary because they offer an immediate answer to basic governance concerns and a trustworthy transition environment for people, process, and applications.

David Linthicum seems to agree. In his post What’s the Deal With Private Clouds?, Dave writes:

In many instances, organizations leverage private clouds because the CIO wants the architectural benefits of public cloud computing, such as cost efficiencies through virtualization, but is not ready to give up control of data and processes just yet.

Dave sees private clouds as a logical transition step, one that supports an incremental approach to cloud computing. It’s not as radical as jumping right into the public cloud, but for that reason it’s a much easier sell to the business. It pulls staff in, rather than driving them out, and in the modern enterprise this is a much better recipe for success. He continues:

I think that many enterprises will stand up private clouds today, and then at some point learn to leverage public clouds, likely through dynamic use of public cloud resources to support bursts in processing on the private cloud. Many are calling this “cloud bursting,” but it’s a great way to leverage the elastic nature of public cloud computing without giving up complete control.

Dave’s hypothesis struck a chord with me. Only last week I had a discussion with a group of architects from a large investment bank, and this describes their strategy precisely. The bank has an internal, private cloud today; but they anticipate moving select applications into public clouds, leveraging the knowledge and experience they gained from their private cloud. These architects recognize that cloud isn’t just about the technology or a change in data center economics, but represents a fundamental shift in how IT is delivered that must be managed very carefully.

This revolution just doesn’t make good TV. The hype will certainly be there, but the reality will be a slow, measured, and nonetheless inevitable transition.

PS: The title, of course, is from the great Gil Scott-Heron.