Category Archives: Layer 7 Technologies

My Thoughts on Cloud Security in SearchCloudComputing.com

I had a good talk the other day with Carl Brooks, the technology writer for SearchCloudComputing.com. We spoke about why security is different in the cloud, and what you can learn from approaches like SOA about how to secure cloud-based apps. The full interview is the lead story today on SearchCloudComputing.com.

How to Safely Publish Internal Services to the Outside World

So you’ve bought into the idea of service-orientation. Congratulations. You’ve begun to create services throughout your internal corporate network. Some of these run on .NET servers; others are Java services; still others are Ruby-on-Rails—in fact, one day you woke up and discovered you even have a mainframe service to manage. But the question you face now is this: how can all of these services be made available to consumers on the Internet? And more important, how can you do it securely?

Most organizations buffer their contact with the outside world using a DMZ. Externally facing systems, such as web servers, live in the DMZ. They mediate access to internal resources, implementing—well, hopefully implementing—a restrictive security model. The DMZ exists to create a security air gap between protocols. The idea is that any system deployed into the DMZ is hardened, resilient, and publishes a highly constrained API (in most cases, a web form). To access internal resources, you have to go through this DMZ-based system, and this system provides a restricted view of the back-end applications and data that it fronts.

The DMZ represents a challenge for publishing services. If services reside on internal systems, how can external clients get through the DMZ and access the service?

Clearly, you can’t simply start poking holes in firewall #2 to allow external systems to access your internal providers directly; this would defeat the entire purpose of the DMZ security model. But this is exactly what some vendors advocate. They propose that you implement local security agents that integrate into the container of the internal service provider. These agents implement policy-based security, essentially taking on the processing burden of authentication, authorization, audit, confidentiality, integrity and key management. While this may seem attractive, as it does decouple security into a purpose-built policy layer, it has some very significant drawbacks. The agent model essentially argues that once the internal policy layer is in place, the internal service provider is ready for external publication. But this implies poking holes in the DMZ, which is a bad security practice. We have firewalls precisely because we don’t want to harden every internal system to DMZ-class resiliency. An application-layer policy agent does nothing to defeat OS-targeted attacks, which means every service provider would need to be sufficiently locked down and maintained. This becomes unmanageable as the server volume grows, and completely erodes the integrity of firewall #2.

Furthermore, in practice, agents just don’t scale well. Distributing policy among a large number of agents is a difficult problem to solve. Policies rapidly become unsynchronized, and internal security practices are often compromised just to get this ponderous, interdependent system to work.

At Layer 7 we advocate a different approach to publishing services that is both scalable and secure. Our flagship product, the SecureSpan Gateway, is a security proxy for Web services, REST, and arbitrary XML and binary transactions. It is a hardened hardware or virtual appliance that can be safely deployed in the DMZ to govern all access to internal services. It acts as the border guard, ensuring that each transaction going in or out of the internal network conforms to corporate policy.

SecureSpan Gateways act as a policy air-gap that constrains access to back end services through a rich policy-based security model. This integrates consistently with the design philosophy of the DMZ. Appliances are hardened so they can withstand Internet-launched attacks, and optimized so they can scale to enormous traffic loads. We built full clustering into SecureSpan in the first version we released, close to eight years ago. This ensures that there is no single point of failure, and that systems can be added to accommodate increasing loads.

The separate policy layer—and the policy language that defines this—is the key to the security model and is best illustrated using a real example. Suppose I have a warehouse service in my internal network that I would like to make available to my distributors. The warehouse service has a number of simple operations, such as inventory queries and the ability to place an order. I’ll publish this to the outside world through a SecureSpan Gateway residing in the DMZ, exactly as shown in the diagram above.

SecureSpan provides a management console used to build the policies that govern access to each service. Construction of the initial policy is made simple using a wizard that bootstraps the process from the WSDL, the formal service description for my warehouse service. The wizard allows me to create a basic policy in three simple steps. First, I load the WSDL:

Next, I declare a basic security model. I’ll keep this simple, and just use SSL for confidentiality, integrity, and server authentication. HTTP basic authentication will carry the credentials, and I’ll only authorize access to myself:

If this policy sounds familiar, it’s because it’s the security model for most web sites. It turns out that this is a reasonable model for many XML-based Web services as well.

Finally, I’ll define a proxy routing to get to my internal service, and an access control model once there. In this example, I will just use a general account. Under this model, the service trusts the SecureSpan Gateway to authenticate and authorize users on its behalf:

You may have noticed that this assumes the warehouse service doesn’t need to know the identity of the original requester (that is, Scott). If the service did need this, there are a number of ways to communicate my identity claim downstream to the service, using techniques like SAML, IBM’s Trust Association Interceptor (TAI), proxied credentials, or various other tricks that I won’t cover here.

The wizard generates a simple policy for me that articulates my simple, web-oriented security model. Here’s what this policy looks like in the SecureSpan management console:

Policy is made up of individual assertions. These encapsulate all of the parameters that make up that operation. When a message for the warehouse service is identified, SecureSpan loads and executes the assertions in this policy, from top to bottom. Essentially, policy is an algorithm, with all of the classic elements of flow control. SecureSpan represents this graphically to make the policy simple to compose and understand. However, policy can also be rendered as an XML-based WS-Policy document. In fact, if you copy a block of graphical assertions into a text editor, they resolve as XML. Similarly, you can paste XML snippets into the policy composer and they appear as graphical assertion elements.
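
If the “policy is an algorithm” idea seems abstract, here is a minimal Python sketch of top-to-bottom assertion evaluation. This is purely illustrative, not SecureSpan’s internals; the assertion names and message context are hypothetical:

# Minimal sketch of top-to-bottom policy evaluation. Illustrative only;
# not the SecureSpan implementation. Each assertion inspects the message
# context and returns True (pass) or False (fail).

class Assertion:
    def check(self, ctx) -> bool:
        raise NotImplementedError

class RequireSSL(Assertion):
    def check(self, ctx):
        return ctx.get("scheme") == "https"

class HttpBasicCredentials(Assertion):
    def check(self, ctx):
        # Credentials simply need to be present at this stage.
        return "username" in ctx and "password" in ctx

class AuthorizeUser(Assertion):
    def __init__(self, allowed):
        self.allowed = set(allowed)
    def check(self, ctx):
        return ctx.get("username") in self.allowed

def evaluate(policy, ctx):
    """Run assertions in order; the first failure stops the policy."""
    return all(assertion.check(ctx) for assertion in policy)

policy = [RequireSSL(), HttpBasicCredentials(), AuthorizeUser(["scott"])]
print(evaluate(policy, {"scheme": "https", "username": "scott", "password": "x"}))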

This policy is pretty simplistic, but it’s a good foundation to build on. I’ll add some elements that further restrict transactions and thus constrain access to the back end system the SecureSpan Gateway is protecting.

The rate limit assertion allows me to cap the number of transactions getting through to the back end. I can put an absolute quota on the throughput: say, 30,000 transactions/sec, because I know that the warehouse service begins to fail once traffic exceeds this volume. But suppose I was having a problem with individual suppliers overusing particular services. I could limit use by an individual identity (as defined by an authenticated user or originating IP address) to 5,000 transactions/sec, which is still a lot, but leaves headroom for other trading partners. The rate limit assertion gives me this flexibility. Here is its detailed view:

Note that if I get 5,001 transactions from a user in one second, I will buffer the last transaction until the rate drops in a subsequent time window (subject, of course, to resource availability on the gateway). This provides me with application-layer traffic shaping, which is essential in industries like telecom that use this assertion extensively.
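
For intuition, here is a rough sketch of per-identity rate limiting with buffering. It is my own simplified model (fixed one-second windows, an arbitrary buffer cap), not the gateway’s actual algorithm:

import time
from collections import defaultdict, deque

# Simplified per-identity rate limiter that buffers overflow traffic into
# the next one-second window. Illustrative only; not how SecureSpan
# implements its rate limit assertion.
class RateLimiter:
    def __init__(self, limit_per_sec, max_buffered=1000):
        self.limit = limit_per_sec
        self.max_buffered = max_buffered
        self.window = int(time.time())
        self.counts = defaultdict(int)    # identity -> transactions this window
        self.buffer = defaultdict(deque)  # identity -> held transactions

    def submit(self, identity, txn):
        now = int(time.time())
        if now != self.window:
            # New window: reset counters and drain buffered transactions first.
            self.window = now
            self.counts.clear()
            for ident, held in self.buffer.items():
                while held and self.counts[ident] < self.limit:
                    self._forward(ident, held.popleft())
        if self.counts[identity] < self.limit:
            self._forward(identity, txn)
        elif len(self.buffer[identity]) < self.max_buffered:
            self.buffer[identity].append(txn)   # traffic shaping: hold it
        else:
            raise RuntimeError("rate exceeded and buffer full")

    def _forward(self, identity, txn):
        self.counts[identity] += 1
        # ...route the transaction to the back-end service here...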

I would also like to evaluate each new transaction for threats. SecureSpan has assertions that cover a range of familiar threats, such as SQL-injection (which has been around for a long time, but has become newly relevant in the SOA world), as well as a long list of new XML attacks that attempt to exploit parser infrastructure and autogenerated code. For the warehouse service, I’m concerned about code-injection attacks. Fortunately, there’s an assertion for that:
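
At its core, this style of threat detection is signature matching against message content. Here is a toy sketch of the idea; the patterns are my own simplified illustrations and bear no relation to Layer 7’s actual rule set:

import re

# Toy signature scan for common injection patterns. Illustrative only;
# real products ship far more complete and carefully tuned signatures.
INJECTION_SIGNATURES = [
    re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # SQL tautology
    re.compile(r";\s*(drop|delete|insert|update)\s", re.IGNORECASE),   # piggy-backed SQL
    re.compile(r"<\s*script\b", re.IGNORECASE),                        # script injection
]

def scan_for_injection(payload: str) -> bool:
    """Return True if the payload matches any known attack signature."""
    return any(sig.search(payload) for sig in INJECTION_SIGNATURES)

print(scan_for_injection("' OR 1=1 --"))         # True
print(scan_for_injection("quantity=12&sku=A7"))  # False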

Here’s what these two assertions look like dropped into the policy:

This policy was simple to compose (especially since we had the wizard to help us). But it is also very effective. It’s visible and understandable, which is an important and often overlooked aspect of security tooling. SOA security suffers from an almost byzantine complexity. It is much too easy to build a security model that obscures weakness behind its detail. One of the design goals we had at Layer 7 for SecureSpan was to make it easy to do the simple things that challenge us 80% of the time. However, we also wanted to provide the richness to solve the difficult problems that make up the other 20%. These are problems of adaptation: the obscure impedance mismatches between client and server security models, or fast run-time adaptation of message content to accommodate version mismatches.

In this example, it took only seven simple assertions to build a basic security policy for publishing services to the outside world. Fortunately, there are over 100 other assertions—covering everything from message-based security to transports like FTP to orchestration—that are there when you need to solve the tougher problems.

Up is the New Up

As some of you may have already heard, 2009 was a spectacular year for Layer 7. Despite the economic downturn—leading to the “flat is the new up” quip around the Valley—we actually grew more than in any year previous. In fact, we’ve grown steadily year-over-year for the last four years. 2009, however, turned out to be special for a number of reasons.

First, Gartner placed us in the Leaders quadrant of its Magic Quadrant for Integrated SOA Governance. This is important because it recognizes that SecureSpan Gateways offer a comprehensive solution for SOA governance in-a-box. Now, you will never catch me equating governance with technology; but technology is a critical component of governance, and if the technology is well-designed it will give you the tools you need to support your overall SOA governance initiative.

2009 also saw the formal launch of our cloud product line. With the Amazon AMI image of SecureSpan, we released the industry’s first, and only, SOA Governance Gateway for the cloud. We were active contributors to important efforts like the Cloud Security Alliance’s critical guidance for securing the cloud. And we successfully promoted the mantra that “in the cloud, you must protect services, not networks” at watershed industry events like GigaOm Structure, and through ongoing presentations with leading thinkers like David Linthicum and Anne Thomas Manes. These days, everybody seems to be jumping onto the cloud bandwagon; what is much more rare is a company offering something besides talk and vapor. Well, talk is easy. I’m very happy that we made real accomplishments in the cloud space in 2009.

But it’s not all about the cloud. With the Oracle OSB Appliance, which came out last fall, we’re showing the continued value of high-performance, secure appliances in the enterprise. Layer 7’s OSB Appliance gives you a real solution for putting important infrastructure like the OSB into the DMZ. The DMZ is in our DNA, and you should expect further innovation in this area in the near future.

In the end, the measure of all of this is sales success, and we’re very happy with our results. Layer 7 recorded a 35% growth in customers over 2008 and a blow-out year for revenue. This success energizes the team and fosters a great determination to continue to win—it is tangible as you walk the floor. I couldn’t ask for anything better going into 2010, because the products and innovation that are underway will far exceed what came out in ’09. Watch this space.

How to Secure REST and JSON

Here at Layer 7 we get asked a lot about our support for REST. We actually have a lot to offer to secure, monitor and manage REST-style transactions. The truth is, although we really like SOAP and XML here at Layer 7, we also really like REST and alternative data encapsulations like JSON. We use both REST and JSON all the time in our own development.

Suppose you have a REST-based service that you would like to publish to the world, but you are concerned about access control, confidentiality, integrity, and the risk from incoming threats. We have an answer for this: SecureSpan Gateway clusters, deployed in the DMZ, give you the ability to implement run time governance across all of your services:

Pictures are nice, but this scenario is best understood using a concrete example. For the services, Yahoo’s REST-based search API offers us everything we need–it even returns results in JSON format, instead of XML. Yahoo has a great tutorial describing how to use this. The tutorial is a little dated, but it’s simple, to the point, and the REST service is still available. Let’s imagine that I’m deploying a SecureSpan Gateway in front of the servers hosting this API, as I’ve illustrated above. The first thing I will do is create a very simple policy that just implements a reverse proxy. No security yet–just a level of indirection (click on the picture for detail):

This is just about as simple as a policy can get. Notice that the policy validator is warning me about a few potential issues. It’s pointing out that the transaction will pass arbitrary content, not just XML. Since I’m expecting JSON-formatted data in the response, this is exactly the behavior I want. The validator is also warning me that this policy has no authentication at all, leaving the service open to general, anonymous access. We’ll address this in the next step.
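
If the “level of indirection” idea is new to you, the essence of what this policy does is a plain reverse proxy: accept the request, forward it to the real host, and relay the response. A bare-bones sketch, with a placeholder back-end URL and no security or error handling:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Bare-bones reverse proxy: whatever arrives at this listener is forwarded
# to the back-end host and the response is relayed unchanged. Illustrative
# only; the back-end URL is a placeholder.
BACKEND = "http://internal-search-host/imageSearch"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = self.path.split("?", 1)[1] if "?" in self.path else ""
        with urlopen(BACKEND + ("?" + query if query else "")) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "application/json"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ProxyHandler).serve_forever()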

I’ve explicitly attached this policy to the gateway URL:

 http://scottssg/yahooImageSearch

If need be, I could easily add wildcard characters here to cover a range of incoming URLs. For this demonstration, I’m just running a virtual SecureSpan Gateway here on my MacBook; it’s not actually residing in the Yahoo DMZ, as would be the case in a real deployment. But from the perspective of an administrator building policy, the process is exactly the same regardless of where the gateway lives.

I’ve also placed a restriction on the listener to only accept HTTP GET verbs:

Now I can point my web browser to the gateway URL shown above, and get back a JSON formatted response proxied from Yahoo. I’ll use the same example in the Yahoo tutorial, which lists pictures of Madonna indexed by Yahoo:

http://scottssg:8080/yahooImageSearch?appid=YahooDemo&query=Madonna&output=json

This returns a list looking something like this:

{"ResultSet":{"totalResultsAvailable":"1630990", "totalResultsReturned":10, "firstResultPosition":1, "Result":[{"Title":"madonna jpg", ...

which I’ve truncated a lot because the actual list spans thousands of characters. The Yahoo tutorial must be fairly old; when it was written, there were only 631,000 pictures of the Material Girl. Clearly, her popularity continues unabated.

Now let’s add some security. I’d prefer that nobody on the Internet learns that I’m searching for pictures of Madge, so we need to implement some privacy across the transaction. I can drag-and-drop an SSL/TLS assertion into the top of my policy to ensure that the gateway will only accept SSL connections for this RESTful service. Next, I’ll put in an assertion that checks for credentials using HTTP basic authentication. I’ll use the internal identity provider to validate the username/password combination. The internal identity provider is basically a directory hosted on the SecureSpan Gateway. I could just as easily connect to an external LDAP, or just about any commercial or open source IAM system. As for authorization, I will restrict use of the yahooImageSearch REST service to members of the development group:

HTTP basic authentication isn’t very sophisticated, so we could easily swap this out and implement pretty much anything else, including certificate authentication, Kerberos, SAML, or whatever satisfies our security requirements. My colleague here at Layer 7, Francois Lascelles, recently wrote an excellent blog post exploring some of the issues associated with REST authentication schemes.
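
For anyone who hasn’t looked under the hood, HTTP basic authentication really is just a base64-encoded username:password pair in a header, checked against whatever identity store you trust, and authorization is then a group lookup. A minimal sketch with a hypothetical in-memory identity provider (a real deployment would consult LDAP or an IAM system, and only ever over SSL):

import base64

# Minimal HTTP basic authentication and group-based authorization against
# an in-memory identity provider. Hypothetical users and groups, for
# illustration only.
USERS = {"scott": {"password": "s3cret", "groups": {"development"}}}

def authenticate(authorization_header: str):
    """Return the username if the Authorization header carries valid credentials."""
    scheme, _, encoded = authorization_header.partition(" ")
    if scheme.lower() != "basic":
        return None
    username, _, password = base64.b64decode(encoded).decode().partition(":")
    user = USERS.get(username)
    return username if user and user["password"] == password else None

def authorize(username, required_group: str) -> bool:
    """Check membership in the required group, e.g. 'development'."""
    user = USERS.get(username)
    return bool(user and required_group in user["groups"])

header = "Basic " + base64.b64encode(b"scott:s3cret").decode()
user = authenticate(header)
print(user, authorize(user, "development"))  # scott True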

Let’s review what this simple policy has given us:

  1. Confidentiality, integrity, and server (gateway) authentication
  2. Authentication
  3. Authorization
  4. Virtualization of the internal service, and publication to authorized users

This is good, but I’d like to add some more REST-specific constraints, and to filter out potential REST attacks that may be launched against my service. I can do this with two simple assertions: one that validates HTML form fields, and another that scans the content for various code injection signatures:

The form data assertion allows me to impose a series of tight constraints on the content of query parameters. In effect, it lets me put a structural schema on an HTTP query string (or POST parameters). I’m going to be very strict here, and explicitly name every parameter I will accept, to the exclusion of all others. For the Yahoo search API, this includes:

  • appid
  • query
  • output
  • callback

The last of these, callback, wraps the returned result to facilitate processing in JavaScript within a browser:
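
If the callback convention is unfamiliar, the wrapping amounts to nothing more than this (my illustration, not Yahoo’s actual code):

import json

# What 'output=json&callback=ws_results' asks the service for: the JSON
# result wrapped in a call to the named function, so a browser <script>
# tag can execute it directly.
def wrap_jsonp(result: dict, callback: str) -> str:
    return f"{callback}({json.dumps(result)})"

print(wrap_jsonp({"ResultSet": {"totalResultsAvailable": "1630990"}}, "ws_results"))
# ws_results({"ResultSet": {"totalResultsAvailable": "1630990"}})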

Depending on my security requirements, I could also be rigorous with parameter values using regular expressions as a filter. I’ll leave that as an exercise for the reader.
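
As a starting point for that exercise, here is roughly what a whitelist of named parameters plus regular-expression constraints on their values could look like. The patterns are my own illustrations, not the assertion’s real configuration:

import re
from urllib.parse import parse_qs

# Illustrative whitelist of permitted query parameters and value patterns,
# mirroring the four Yahoo search parameters named above. Unknown parameters,
# or values that fail their pattern, cause the request to be rejected.
ALLOWED_PARAMS = {
    "appid":    re.compile(r"[A-Za-z0-9]{1,64}"),
    "query":    re.compile(r"[\w .+-]{1,256}"),
    "output":   re.compile(r"json|xml"),
    "callback": re.compile(r"[A-Za-z_][A-Za-z0-9_]{0,63}"),
}

def validate_query_string(qs: str) -> bool:
    params = parse_qs(qs, keep_blank_values=True)
    for name, values in params.items():
        pattern = ALLOWED_PARAMS.get(name)
        if pattern is None:                                # unknown parameter
            return False
        if not all(pattern.fullmatch(v) for v in values):  # bad value
            return False
    return True

print(validate_query_string("appid=YahooDemo&query=Madonna&output=json"))  # True
print(validate_query_string("query=Madonna&debug=true"))                   # False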

Naturally, I’m concerned about REST-borne threats, so I will configure the code injection assertion to scan for all the usual suspects. This can be tuned so that it’s not doing unnecessary work that might affect performance in a very high volume situation:

That’s it; we’re done. A simple six-assertion policy that handles confidentiality, integrity, authentication, authorization, schema validation, threat detection, and virtualization of RESTful JSON services. To call this, I’ll again borrow directly from the Yahoo tutorial, using their HTML file and simply changing the URL to point to my gateway instead of directly to Yahoo:

<html>
<head>
<title>How Many Pictures Of Madonna Do We Have?</title>
</head>
<body>
<script type="text/javascript">
function ws_results(obj) {
alert(obj.ResultSet.totalResultsAvailable);
}
</script>
<script type="text/javascript" src="https://scottssg:8443/yahooImageSearch?appid=YahooDemo&query=Madonna&output=json&callback=ws_results"></script>
</body>
</html>

Still can’t get over how many pictures of Madonna there are.

I ran it a few times and here’s what it looks like in the dashboard. I threw in some policy failures to liven up the display:

So where can we go from here? Well, I would think about optimization of the policy. Depending on predicted loads and available hardware, we might want to check for code injection and validate the schema before performing authentication, which in the real world would likely call out to an LDAP directory. After all, if we are being fed garbage, there’s no sense in propagating this load to the directory. We can add SLA constraints across the service to insulate back-end hosts from traffic bursts. We could also provide basic load distribution across a farm of multiple service hosts. We might aggregate data from several back-end services using lightweight orchestration, effectively creating new meta-services from existing components. SecureSpan Gateways provide over 100 assertions that can do just about anything you want to an HTTP transaction, regardless of whether it contains XML or JSON data. You can also develop custom assertions that plug into the system and implement new functionality that might be unique to your situation. Remember: when you are an intermediary standing between a client and a service, as is the case with any SecureSpan Gateway, you have complete control over the transaction, and ultimately the use of the service itself. This has implications that go far beyond simple security, access control, and monitoring.

On the Death of Design-Time Service Governance

Practically on the anniversary of Anne Thomas Manes’ now-famous SOA-is-Dead pronouncement, David Linthicum suggests we convene the vigil for design-time service governance. Dave maintains that cloud technology is going to kill this canonical aspect of governance because runtime service governance simply provides much more immediate value. Needless to say, rather than a somber occasion, Dave’s started more of a donnybrook. I guess it’s about time to get off the bench and join in the fun.

The incendiary nature of is-dead statements often conceals the subtle but important ideas behind them. Dave’s declaration is no different. What he’s really suggesting is that cloud will rapidly shift the traditional priorities with respect to services governance. In the big SOA era (before Jan 1, 2009), design-time governance was king. It fit nicely into the plan-manage-control discipline underpinning a large enterprise-scale SOA effort. And to support this, tool vendors rushed in and offered up applications to facilitate the process. Run time services governance, in contrast, was often perceived as something you could ignore until later–after all, on-premise IT has reasonably good security and monitoring in place. The technology might be old and the process a little byzantine, but it could tide you over. And if any internal staff were caught accessing services they shouldn’t be, you could just fire them.

The cloud inverts this priority. If you deploy even one service into the public cloud, you must have runtime service governance in place. That one service needs to be hardened and protected as you would a DMZ-based application. You must have continuous visibility into its operation from the moment you turn it on. Run time governance simply cannot be put off.

The cloud is also about agility. We are attracted to the cloud because it offers rapid deployment, elastic computing, and that hard to refuse pay-for-what-you-need model. This same sense of agility is a leitmotiv throughout the entire development process. When you talk to teams that have been successful with the cloud, they are usually small, they rail against enterprise bureaucracy, and agile development process is in their DNA. They are comfortable and extremely efficient with the tools and process they have, and they don’t see a lot of reason to change this. Do they still need discipline, organization, process, communication, sharing? Absolutely–but they have this; it’s ingrained into their existing process. I see this in my teams developing products every day. It’s the mix of great management, excellent people, process where it’s been shown to be needed, and a basic set of tools we all rely on every day.

Will design-time services governance tools disappear completely? Of course not. But their value becomes apparent with scale, when you begin to approach “a lot of services” (and no, nobody really knows what this means–probably more than a scoville, less than a parsec). The point is, in the cloud nobody starts there, but they do need to focus immediately on the threats.

In the end, governance priorities in the cloud come down to a pretty simple precept: Duplicating a piece of functionality probably won’t get you fired; leaving an open door that was used to compromise corporate data probably will.

Cloud Security Alliance Guidance v2 Released

Last week, the Cloud Security Alliance (CSA) released its Security Guidance for Critical Areas of Focus in Cloud Computing V2.1. This is a follow-on to the first guidance document, released only last April, which gives you a sense of the speed at which cloud technology and techniques are moving. I was one of the contributors to this project.

The guidance explores the issues in cloud security from the perspective of 13 different domains:

Cloud Architecture

  • Domain 1: Cloud Computing Architectural Framework

Governing in the Cloud

  • Domain 2: Governance and Enterprise Risk Management
  • Domain 3: Legal and Electronic Discovery
  • Domain 4: Compliance and Audit
  • Domain 5: Information Lifecycle Management
  • Domain 6: Portability and Interoperability

Operating in the Cloud

  • Domain 7: Traditional Security, Business Continuity, and Disaster Recovery
  • Domain 8: Data Center Operations
  • Domain 9: Incident Response, Notification, and Remediation
  • Domain 10: Application Security
  • Domain 11: Encryption and Key Management
  • Domain 12: Identity and Access Management
  • Domain 13: Virtualization

I thought the domain classification was quite good because it serves to remind people that technology is only a small part of a cloud security strategy. I know that’s become a terrible security cliche, but there’s a difference between saying this and understanding what it really means. The CSA domain structure–even without the benefits of the guidance–at least serves as a concrete reminder of what’s behind the slogan.

Have a close look at the guidance. Read it; think about it; disagree with it; change it–but in the end, make it your own. Then share your experiences with the community. The guidance is an evolving document that is a product of a collective, volunteer effort. It’s less political than a conventional standards effort (look through the contributors and you will find individuals, not companies). The group can move fast, and it doesn’t need to be prescriptive like a standard–it’s more a distillation of considerations and best practices. This one is worth tracking.

eBizQ: SOA in This Year and the Next

It’s that time when we look back on one year and forward to the next. Over at the eBizQ forum Peter Schooff asked about SOA’s past and future:

What Developments in SOA Are You Most Thankful For This Year?

What Do You Think Will be the Biggest Trend or Development for SOA in 2010?

Identity Buzz Podcast with Daniel Raskin Now Available: End-to-End Web Services Security

I recently had a great, freewheeling discussion with Daniel Raskin, Sun’s Chief Identity Strategist. Daniel runs the Identity Buzz podcasts. We talked about issues in identity and entitlement enforcement in SOA, compliance, and the problems you run into as you move into new environments like the cloud.

Daniel’s post about our podcast is on his blog.

You can download the podcast directly right here.

Upcoming Webinar: Controlling Your SOA Across The Global Enterprise and Cloud

I’ll be delivering a Webinar next week about Layer 7’s Enterprise Service Manager (ESM) product. ESM offers a global view of clusters of SecureSpan Gateways and the services under their management. Its functions fall into three main areas:

Enterprise-scale Management

  • Centrally manage and monitor all Gateways and associated services across the extended enterprise and into the cloud

Automated Policy Migration

  • Centrally approve and then push policy to any Gateway across the enterprise, automatically resolving environmental discrepancies

Disaster Recovery

  • Remotely manage, troubleshoot, backup and restore all Gateways, supporting full disaster recovery

ESM is an important tool for managing SOA in the Enterprise, but it also plays a critical role when SOA moves to the cloud. In addition to extending an organization’s visibility and control, ESM provides critical tools for automating the migration of policy to new environments:

I’ll go into this process in detail in the webinar. Hope to see you there. Here is the official abstract:

Organizations have begun extending their SOA initiatives beyond traditional enterprise boundaries to encompass third-party, geographically remote, and even cloud-based resources. As a result, the complexity associated with migrating applications across these environments (for example, from development in India to test in the cloud to production in a hosted data center) has increased exponentially. In this webinar from Layer 7, you will learn how topology and identity issues between environments, geographies and settings (i.e., enterprise vs. cloud) can be easily resolved and even automated, dramatically reducing migration risk.

You can sign up for this webinar at http://www.layer7tech.com/main/media/webinars.html.

Using URI Templates on XML Security Gateways

Earlier this fall, Anil John put out the following Twitter challenge:

“@Vordel, @layer7, @IBM_DataPower If you support REST, implement support for URI templates in XML Security Gateways”

Somebody brought Anil’s tweet to our attention this week, and Jay Thorne, who leads our tactical group, put together a nice example of just how to do this using SecureSpan Gateways.

URI templates are a simple idea to formalize variable expansion inside URI prototypes. A receiving system can then trivially parse out substituted components of the URI and use these as input. There’s an IETF submission here that describes the approach. It turns out that it was co-authored by my old friend and ex-IBM colleague Dave Orchard. Another co-author is Mark Nottingham, who I worked with at the WS-I. I guess I should have looked into this earlier. Sorry guys.

Here’s a real example of how URI templates work. Let’s begin with the template:

http://somesite/Template/{Noun}/{Verb}/{Object}

The braces represent variables for run time substitution. Suppose these variables equaled:

Noun=Dog
Verb=bites
Object=man

The result of the substitution would then be:

http://somesite/Template/Dog/bites/man

This is pretty easy to parse and process by a web application residing at http://somesite/Template.
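
Both halves of that, expanding the template and extracting the variables back out, are trivial to sketch. This is illustrative code only, not tied to any particular gateway:

import re

# Expand a URI template by simple variable substitution, then recover the
# variables from a concrete URI with a regular expression. Illustrative only.
TEMPLATE = "http://somesite/Template/{Noun}/{Verb}/{Object}"

def expand(template: str, **variables) -> str:
    return template.format(**variables)

def extract(uri: str) -> dict:
    match = re.match(r"^http://somesite/Template/([^/]+)/([^/]+)/([^/]+)$", uri)
    if not match:
        raise ValueError("URI does not match the template")
    return dict(zip(("Noun", "Verb", "Object"), match.groups()))

uri = expand(TEMPLATE, Noun="Dog", Verb="bites", Object="man")
print(uri)           # http://somesite/Template/Dog/bites/man
print(extract(uri))  # {'Noun': 'Dog', 'Verb': 'bites', 'Object': 'man'}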

URI templates are also very easy to process on a SecureSpan Gateway. Here’s a policy that demonstrates how to access the variables in the URI. Basically, all this policy does is parse up the URI, audit the results, and echo the content back to the sender:

I’ll bind this policy to a particular URI range on my SecureSpan Gateway virtual instance. This happens to be running with an HTTP listener on port 8080 on my MacBook Pro:

Now let’s walk through the policy in detail. The first assertion uses a regular expression to parse the URI:

It places the content of the expansion variables Noun, Verb, and Object into Vars[1], Vars[2], and Vars[3] respectively. The next two assertions in the policy simply audit the resulting variables into the SSG audit subsystem; this can sink to cluster storage, or go off box. Finally, the policy populates a template XML document with the contents of the variables and echoes this back to the original sender:

When I fire up my browser and hit the REST service http://scottssg:8080/Template/Dog/bites/man I get:

Obviously, in a real policy you would do something more interesting than just echoing parameters, such as routing the message to a particular internal host based on template content. And you can certainly compose outgoing URIs using the existing variable substitution capability in SecureSpan. The takeaway is that this is very simple to implement, and I think it highlights that here at Layer 7, we consider supporting RESTful services just as important as supporting SOAP.