The API Gateway As Endpoint

The US Customs and Immigration station at Point Roberts is a small building set on the edge of the coastal rainforest of Washington State. It is the last customs station in a vast western chain strung out along the 49th parallel. I enjoy crossing the border at Point Roberts. The abrupt change from sprawling Canadian suburbs to American rural countryside always appeals to me.

The customs station, despite its small size, offers a range of services, from granting visas to simply giving advice about where to find a good deal on gas in Point Roberts. It’s not simply a gateway controlling access into the US, but a provider of a broad range of services associated with crossing an international boundary.

There exists a similar duality with API gateways. Although the common deployment is as a border guard, protecting APIs hosted by an organization, an API gateway can also act as an endpoint providing valuable standalone services. Nearly all of my customers first purchase SecureSpan Gateways to fulfill the role the name implies: that is, an API or service gateway. The border guard deployment pattern is now so commonplace that I no longer need to evangelize it as I did in the early days of Layer 7, close to a decade ago. Architects recognize this pattern as an accepted best practice for securing and managing APIs. But like the Point Roberts border station, a SecureSpan Gateway can also provide services that have nothing to do with access control or transaction confidentiality, but provide value on their own, independent of any APIs they may be guarding.

Here’s a very simple example illustrating the endpoint pattern using a SecureSpan Gateway. Normally I might deploy my gateway in front of a web server, acting as an authenticating reverse proxy for my web pages. In this role, the gateway is responsible for access control, SSL management, audit, lightweight load balancing, etc. All classic gateway functions.

Figure 1: Gateway as border guard at the edge of a network. This is classic perimeter security.

But suppose I wanted to use the gateway itself as the web server? In other words, suppose it’s not acting as a gateway, but as the server endpoint itself?

Figure 2: Gateway as service endpoint. Here the gateway is providing valuable APIs on its own without necessarily routing a request to an internal host.

It’s very simple to configure a SecureSpan Gateway as an endpoint. You simply create a policy that never routes a request onward to an internal host, but instead acts on the message content internally. I sometimes think of it as an internal loopback mechanism. I’ll build a policy implementing my web site, and add the assertion Return Template Response to Requester. I will bind this policy to the gateway URL http://localhost:8080/helloworld as shown in the screen snippet below from the SecureSpan management console:

With just one assertion, this is pretty much the simplest policy you can write, and so fitting for the time-honoured Hello World example. Let’s expand the assertion to examine what the template actually contains:

Pretty plain-vanilla HTML. The only departure I’ve made from the deliberate austerity of the Hello World tradition is to add a time stamp to showcase the use of predefined context variables. There are dozens of context variables that are automatically set as the gateway processes a transaction (regardless of whether it’s actually being a gateway or an endpoint—remember we consider this just a policy implementation pattern, not a different capability). Context variables cover everything from accessing individual fields in the X509 certificate used for client-side SSL authentication to storing the IP address of a request sender. And of course, you can set your own context variables at any time within a policy.
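The template-plus-context-variables idea is generic, and easy to sketch outside of any gateway product. Here is a rough Python equivalent, where a `${...}` placeholder in an HTML template is filled from a dictionary of request-scoped variables. The variable name `gateway_time` is invented for illustration; it is not one of SecureSpan's actual predefined context variables.

```python
from datetime import datetime, timezone
from string import Template

# A minimal "Hello World" template with one placeholder, standing in
# for the body of a Return Template Response assertion.
PAGE = Template("""<html>
  <body>
    <h1>Hello World</h1>
    <p>Generated at ${gateway_time}</p>
  </body>
</html>""")

def render_template_response(context: dict) -> str:
    """Fill the template from request-scoped context variables."""
    return PAGE.substitute(context)

# On a gateway, context variables are populated automatically per
# transaction; here we set one by hand.
html = render_template_response(
    {"gateway_time": datetime.now(timezone.utc).isoformat()}
)
print(html)
```

The point of the pattern is that the response is produced entirely from policy-local state, with no onward routing at all.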

Just to be complete, here’s what happens when I point my browser at the gateway-as-endpoint:

Now, before you think I’ve gone mad, I am not advocating that you deploy a web site like this; there are much better tools for that purpose. But the point I’m making is that there are legitimate endpoint services that do indeed lend themselves to implementation on a gateway. These are just a little more complicated than what I’ve shown here, but nevertheless simple to implement using the declarative policy language in SecureSpan. For example:

  • Signing Services: In the notary pattern, a gateway with an on-board Hardware Security Module (HSM) responsible for protecting key material can easily provide authoritative signing services for all kinds of data, from XML documents to arbitrary text.
  • Encryption Services: Same idea as signing, with the goal to avoid insecure proliferation of keys.
  • Validation Services: Ensure data conforms to a particular schema.
  • Threat Detection Services: Scan content for SQL injection, Cross Site Scripting (XSS), XML threats, or custom threat signatures.
  • Security Token Services (STS): Issue security tokens like SAML or OAuth tokens. Validate or exchange tokens.
  • Centralized Session or Caching Management: Cache frequently used data items according to a scheme that is tuned to your application.
  • Policy Decision Point (PDP) Services: Receive and act on SAMLP requests.
  • XACML Services: Act as a centralized XACML engine providing authorization services.
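To make the first bullet concrete, here is a minimal sketch of the notary pattern in plain Python rather than gateway policy. An in-memory key stands in for HSM-protected key material, and an HMAC stands in for what would typically be an X.509-backed digital signature; all names are illustrative.

```python
import hashlib
import hmac

# Stand-in for key material that, on a gateway, would be protected
# by an on-board HSM and never leave the device.
_SIGNING_KEY = b"example-key-material"

def sign(payload: bytes) -> str:
    """Return an authoritative signature over arbitrary data
    (XML documents, plain text, ...)."""
    return hmac.new(_SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Constant-time check of a previously issued signature."""
    return hmac.compare_digest(sign(payload), signature)

sig = sign(b"<doc>example</doc>")
print(verify(b"<doc>example</doc>", sig))   # the original verifies
print(verify(b"<doc>tampered</doc>", sig))  # tampering is detected
```

Because the key never leaves the signing service, clients get authoritative signatures without any proliferation of key material, which is the same argument the encryption-services bullet makes.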

The list above is by no means exhaustive; at Layer 7 we’ve implemented all of these at one time or another using SecureSpan Gateways. It’s an important design point for us that the only real difference between a gateway and an endpoint is how you define your policy.

When Is The Cloud Not A Cloud?

Sometimes I joke that as my kids grow up they won’t see clouds, they’ll just see air—meaning of course that their use of cloud-based services will become so ubiquitous as to make the cloud moniker largely unnecessary. What we so enthusiastically label cloud will just be the way everyone builds and deploys their apps. “Nothing to see here folks; but look at my wonderful new application…”

We won’t arrive at this future until we strip the word cloud of its power. And to do this, we need to go after the things we thought made cloud unique and special in the first place. Today, Amazon took a vicious swipe at the canonical definition by introducing dedicated EC2 instances. Dedicating hardware to a single customer addresses the next logical layer in the hierarchy of security concerns after virtual isolation. Amazon’s VPC product, introduced back in August 2009, provided virtualized isolation in their multi-tenant environment. Essentially, VPC is like a virtual zone housing only your instances. This zone is tied back to your on-premise network using a VPN. The only way in or out of a zone is through your corporate network. Other Amazon-resident applications cannot access your apps directly; in fact, any external app, Amazon-resident or otherwise, must go through your conventional corporate security perimeter and route back to Amazon over the VPN to gain access to a VPC app. The real value of VPC is that it puts instance access back into the hands of the corporate security group.

The problem that the highly security conscious organization has with VPC is that the “V” is for virtual. VPC may implement clever isolation tricks using dynamic VLANs and hypervisor magic known only to a gifted few, but when your critical application loads up you may actually reside on exactly the same hardware as your own worst enemy. In theory, neither of you can exploit this situation. But you need to believe the theory. Completely.

Today’s announcement means that Amazon’s customers can literally have exclusive use of hardware. This is good news for anyone with reservations about hypervisor isolation. However, the networking remains virtualized, and of course you can still ask the classic cloud security questions about where data resides, or about the background of the staff running the infrastructure. So a mini-private cloud it is not; but dedicated instances are an interesting offering nonetheless.

What is more intriguing is that by providing dedicated hardware, Amazon is beginning to erode one of the basic foundations of the canonical cloud definition: multi-tenancy. Purists will argue—as they do with unexpected vehemence with regard to private cloud—that what Amazon is offering is not a cloud at all, but in fact a retrograde step back to simple hosting or co-location. I’m inclined to disagree, and think instead that Amazon offers a logical next step (and certainly not the last) in the evolution of cloud services. In doing so, Amazon amplifies some of the other important aspects that define what the cloud really is: things like self-service, a greatly changed division and scope of operational responsibility, the leverage of economies of scale, elasticity, and the ability to pay for what you actually use.

I don’t think Amazon’s new offering will be wildly successful because it still leaves many security issues unresolved. But I do think it points the way to the future cloud, which will have many different attributes and characteristics that solve different problems. Some may conflict with traditional definitions and expectations. Some may fulfill them. What is important is to choose the service that meets your needs, and don’t worry what it’s called. That’s marketing’s problem.

Upcoming Webinar: Extending Enterprise Security Into The Cloud

On March 15, 2011, Steve Coplan, Security Analyst from the 451 Group, and I will present a webinar describing strategies CIOs and enterprise architects can implement to create a unified security architecture between on-premise IT and the cloud.

I have great respect for Steve’s research. I think he is one of the most cerebral analysts in the business; but what impresses me most is that he is always able to clearly connect the theory to its practical instantiation in the real world. It’s a rare skill. He also has a degree in Zulu, which has little to do with technology, but makes him very interesting nonetheless.

Lately Steve and I have been talking about the shrinking security perimeter in the cloud and what this means to the traditional approaches for managing single sign-on and identity federation. This presentation is a product of these discussions, and I’m anticipating that it will be a very good one.

I hope you can join us for this webinar. It’s on Tuesday, March 15, 2011 9:00 AM PST | 12:00 PM EST | 5:00 PM GMT. You can register here.

Overview:
For years, enterprises have invested in identity, privacy and threat protection technologies to guard their information and communication from attack, theft or compromise. The growth in SaaS and IaaS usage, however, introduces the need to secure information and communication that spans the enterprise and cloud. This webinar will look at approaches for extending existing enterprise security investments into the cloud without significant cost or complexity.

The API Lab

Promotion is a problem faced by every API developer. Long nights of coding have given form to the stroke of genius you had six months ago in the cafe. You’ve just written the API that will serve as the front door into your application. But how do you document this so that your peers will use it—and hopefully make you rich in the process?

Java had Javadoc, an innovation that managed to strike a surprisingly effective balance between ease of use and systematization (three cheers for strong typing and static binding). Web services “solved” the interface definition problem with WSDL, a standard only its authors could love (and some of those won’t admit to participating in its conception). To the registry crowd fell the task of devising clever ways to document a SOAP API around the tortuous abstractions of WSDL so that humans could grok it as effectively as the machines. Few would argue that their solutions were, well, solutions.

RESTful APIs neither benefit nor suffer from formalized approach and automation. Every so often WADL pokes its head up, only to be knocked decisively back into its hole in a community game of Standards-Whac-A-Mole. REST is—and indeed, has always been—a style that resists any formalism that threatens its simplicity and accessibility. For this, above all, the community deserves praise.

But this does leave a hole around documentation. APIs tend to be described-by-wiki, which can be great or just awful, depending on a developer’s ability to write and discipline to update. Good ideas like ProgrammableWeb showcase both of these extremes; however, the site suffers in particular from out-of-date API descriptions, which is too bad.

Outside of the eventual acceptance en masse of some kind of standardized wiki template for APIs (which I think is actually quite likely), we probably won’t see substantial change in API documentation soon. Minor changes, though, can still count for a lot, and it’s worth pointing these out.

Google, always an innovator to watch, creates a lot of APIs. They recently published the Periodic Table of Google APIs, which, as its name suggests, is a clever way to visually categorize their API portfolio into groups like mobile, data, geo, etc. It’s a bit gimmicky, but it is also fun, and it actually inspired me to try some new Google APIs that I wasn’t aware of before.

So, isn’t it about time for more creative solutions around API documentation?

Layer 7 Technologies Joins the Cloud Security Alliance (CSA)

I’m pleased to announce that Layer 7 has joined the Cloud Security Alliance (CSA) as a full corporate member. For the past several years, the CSA has assumed the leadership role in defining the best practices to secure cloud applications, data, and infrastructure.

I believe that when you join a community organization, you are obliged to make a real contribution. Being a member means a lot more than just having your company logo on the sponsor list. I’ve been involved previously with the CSA, as a co-author of version 2 of its Security Guidance for Critical Areas of Focus in Cloud Computing, and as a co-author of the organization’s Top Threats in Cloud Computing document. Now that we are corporate members, Layer 7 will help to drive two important events within the CSA.

First, Layer 7 is a sponsor of the CSA summit at this year’s RSA conference in San Francisco, running Feb 14-18, 2011. I was a participant at the CSA summit last year. This one-day event sold out instantly, and most attendees agree it was one of the highlights of the RSA conference. If you are in San Francisco for the 2011 RSA show, you should try to get into Monday’s CSA event. The CSA has some very special guests lined up to speak—including Vivek Kundra, US CIO—and I can assure you that once again the summit will be the talk of the conference.

I am also fortunate to be co-presenting a CSA-sponsored webinar about Managing API Security in SaaS and Cloud with Liam Lynch, eBay’s Head of Security. The rapidly growing API management space has a number of unique challenges with segmentation of roles, access to usage information, developer on-boarding, user management, and community building. Liam and I will talk about our own experiences in this space, and I will explore several case studies that illustrate each issue and its solution. I hope you can join us on Feb 23, 2011 for this talk.

REST is Simple, But Simple is not REST

Last December, William Vambenepe posted a provocative blog entry about RESTful APIs in the cloud. In it, he states that Amazon proves that REST doesn’t matter for Cloud APIs. Writing over at InfoQ, Mark Little re-ignited the controversy in his recent post asking Is REST important for Cloud? And Vambenepe responded to the re-invigorated commentary in Cloud APIs are like Military Parades. All of these posts are very much worth reviewing, and be sure to read the comments left by members of the community.

Amazon’s query APIs are certainly not textbook examples of the RESTful architectural style, a point the Restafarians are quick to make. But they are simple, understandable, and very easy to implement, which is largely the set of characteristics that attracted developers to REST in the first place. REST does have a satisfying elegance and undeniable philosophical appeal, but accessibility probably counts for more in the popularity stakes. If there is one theme that repeats throughout the history of computing, surely it is the story of simple and good enough winning the day.

Amazon’s APIs are not the reason that AWS is such a resounding success; it is the service offering that was responsible for that. However, the APIs were good enough that they didn’t hinder Amazon’s success, and this is the relevant point. Dig a little deeper, though, and Amazon’s API story offers some interesting insights.

Amazon, it turns out, is a good case study illustrating the evolution of Internet APIs. When AWS first appeared, Amazon engineers faced what at the time was an unresolved question of style: should a cloud publish SOAP services, or RESTful APIs? I’ve heard that question led to some pretty heated internal debates, and I don’t doubt this is true. Amazon lies in Microsoft’s backyard, and .NET in those days was very firmly in the SOAP camp. REST, however, was rapidly gaining mind share, especially due to the growing popularity of AJAX in Web 2.0 apps.

The engineers’ solution was to hedge, producing a mix of SOAP, a “REST-like” query interface (HTTP with query parameters as name-value pairs), and REST APIs, somewhat in parallel. Developers gravitated toward the latter two options so much more than SOAP that AWS left the orbit of WS-* for good. Amazon gave the market choice, and the market spoke decisively. This was an interesting experiment, and in performing it Amazon actually led the way—as they have in so many ways—for future cloud providers. Each new provider had the luxury of an answer to the question of API style: REST won (even if this wasn’t really REST). Now all an emerging cloud provider needed to do was design their API according to current best practices, and this is something for which experience counts. Roy Fielding’s thesis may have come out in 2000, but the appreciation among developers of what constitutes good RESTful design is a much more recent phenomenon, one that has benefited greatly from the growing popularity of the style.
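The stylistic difference is easy to see side by side. Below is a rough sketch of the two request shapes: an AWS-style “query” call, where the operation and its parameters ride as name-value pairs on the URL, versus a resource-oriented RESTful call. The hostname, parameter names, and API version here are invented for illustration and are not taken from any actual AWS reference.

```python
from urllib.parse import urlencode

# Query style: the operation is itself a parameter ("Action"), and
# everything travels as name-value pairs, typically on a GET.
query_call = "https://ec2.example.com/?" + urlencode({
    "Action": "DescribeInstances",
    "InstanceId.1": "i-12345678",
    "Version": "2011-01-01",
})

# RESTful style: the resource lives in the path, and the HTTP verb
# carries the intent.
rest_call = ("GET", "https://ec2.example.com/instances/i-12345678")

print(query_call)
print(rest_call)
```

The query form is trivially easy to construct and debug in a browser bar, which goes a long way toward explaining why developers embraced it even though it is not textbook REST.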

APIs are of course simply an interface between components of a distributed application. Like any interface, properly versioned, they can evolve. One thing I’ve noticed lately is that our customers are increasingly using Layer 7 technology to refine their APIs as they come to better understand their own needs.

For example, we have a customer in the publishing industry that is actually adapting existing SOAP-based APIs to RESTful design principles. They use our SecureSpan Gateways to present a true REST facade, and adapt this to existing SOAP messages on-the-fly. This transformation is fairly mechanical; what is more important is that the exercise also gave their developers the opportunity to refine their API suite to better meet their needs. Many of their new RESTful APIs leverage SecureSpan’s orchestration capabilities to consolidate multiple SOAP calls behind a single REST call. The resulting API 2.0 is a better match to their current business—and best of all, they were able to do all of this without writing a line of new code.
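A facade like the one this customer built can be sketched generically. In the toy example below, a single REST handler fans out to two stubbed legacy SOAP operations and merges their results into one response. The operation names, fields, and book-ID route are all invented for illustration; a real SecureSpan deployment would express this declaratively in policy rather than in application code.

```python
# Stubs standing in for two legacy SOAP operations that the
# facade consolidates behind a single REST call.
def soap_get_book_metadata(book_id: str) -> dict:
    return {"id": book_id, "title": "Example Title"}

def soap_get_book_inventory(book_id: str) -> dict:
    return {"in_stock": 42}

def rest_get_book(book_id: str) -> dict:
    """Handle GET /books/{id}: one RESTful request in,
    two SOAP calls orchestrated behind it."""
    result = soap_get_book_metadata(book_id)
    result.update(soap_get_book_inventory(book_id))
    return result

book = rest_get_book("978-0000000000")
print(book)
```

The client sees one clean resource; the orchestration and the SOAP plumbing stay entirely behind the facade, which is what makes this kind of API 2.0 refactoring possible without touching the back-end services.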

The lesson here is that while we like to think that APIs are perfectly static and never change, the truth is that we rarely get interfaces right the first time. Requirements change, business moves, and our understanding advances. What is important is that your APIs can respond with agility to changing needs, because change, we all know, is inevitable.

A Clear View of Virtualization

I spoke recently with Robyn Weisman from Processor Magazine about the basics of virtualization. This is one of those terms that everyone in the computing industry uses, but very often with different meanings. Her article is now available in the Jan 28, 2011 print issue. It is also accessible online: A Clear View of Virtualization.

Defeating the Facebook Hack

This week, Facebook fell victim to hackers who managed to deface Mark Zuckerberg’s page, no doubt earning the perpetrators tremendous props within their own social community. Facebook quickly closed the door on that particular exploit, but by then of course the Internets were abuzz and the damage was done. The company quickly followed up with some unrelated security distractions: HTTPS, good for countering Firesheep (I love that name); social authentication instead of CAPTCHAs (this is actually interesting and plays to their strengths); and an announcement that this Friday is “Data Privacy Day” (Ouch).

There aren’t many details available on the hack (the Guardian has a great investigation examining some of the clues that were left behind), but it appears that one particular API didn’t perform sufficient authorization on a POST. This is a common problem when you don’t make the configuration of basic security functions like AAA highly visible, auditable, and tunable. Leave security to the developers, and rigor is often overlooked because of a coder’s naturally intense focus on functionality. Decouple security from the API, promoting security management to a first-class citizen administered by internal experts, and you have a much greater chance of avoiding embarrassing gaffes like this one. Consistency is the soul of good security, and the decoupling strategy makes consistency an achievable goal across all of an organization’s APIs. Specialize where necessary, but make everything highly visible to the experts so they can easily move from the big picture to the necessary minutiae. Make security configuration highly declarative, both to advance understanding and to facilitate rapid change in response to evolving threats. This is how API management must be in 2011.
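The decoupling argument can be illustrated with a toy declarative policy: the authorization rules live in data that a security administrator can read, audit, and tune, while a single generic check enforces them for every API. Everything here, routes, roles, and rule shape, is hypothetical; it is not Facebook's mechanism or SecureSpan's policy language.

```python
# Declarative policy: visible, auditable, and tunable without
# touching handler code. Any route with no rule is denied.
POLICY = {
    ("POST", "/pages/{id}"): {"require_auth": True, "roles": {"owner"}},
    ("GET",  "/pages/{id}"): {"require_auth": False, "roles": set()},
}

def authorize(method: str, route: str, user_roles: set) -> bool:
    """Generic enforcement point applied uniformly to every API."""
    rule = POLICY.get((method, route))
    if rule is None:
        return False  # deny by default
    if not rule["require_auth"]:
        return True
    return bool(rule["roles"] & user_roles)

print(authorize("GET", "/pages/{id}", set()))        # public read
print(authorize("POST", "/pages/{id}", set()))       # the missing check
print(authorize("POST", "/pages/{id}", {"owner"}))   # authorized write
```

Because the POST rule is a line of data rather than a check buried in one handler, an auditor can spot a missing authorization requirement by reading the policy, not the code.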

Facebook has its pick of the best and brightest these days—and they still face these problems. As other, less privileged organizations attempt to create APIs—and this is very much the trend of the day—we can expect to see many more such attacks. Having the CEO’s page defaced is certainly embarrassing. But in the long run, it’s a lot less expensive than a real privacy breach, or cleaning up from massive sustained fraud.

The time for do-it-yourself security and management of APIs has passed. This is the time to deploy professional solutions to the problem that have been proven out by the military, intelligence community, and health care—groups that have come to understand the best practices around API security out of absolute necessity.

To learn more, take a look at Layer 7 Technologies solutions for securing and managing APIs.

How to Choose a SOA Gateway

Dana Crane, Product Marketing Manager for Layer 7, is delivering a webinar this Thursday January 27 that tells you what you should consider when choosing a SOA Gateway. The SOA gateway category is a little like an iceberg, in that there is a lot more to it than first meets the eye. Here at Layer 7, we find that customers first become interested in SOA Gateways as a means to solve their security and transaction management problems. But they quickly discover that SOA Gateways can easily function as an Enterprise Service Bus (ESB), a service orchestration engine, a lightweight PKI system, a Security Token Service (STS), and even a service host for applications. We have a number of customers who have been able to discontinue expensive maintenance contracts on basic infrastructure applications simply because their SOA Gateway was able to meet all of their needs.

I hope you can join my colleague Dana later this week and learn more about what you need to look for in an SOA Gateway solution. Dana is one of the best product guys I know. He really understands the needs of the marketplace, and he keeps all of us here at Layer 7 honest. You can sign up for Dana’s webinar on the Layer 7 web site right here.

Hacking the Cloud

I’m not sure who is more excited about the cloud these days: hackers or venture capitalists. But certainly both groups smell opportunity. An interesting article published by CNET a little while back nicely illustrates the growing interest the former have with cloud computing. Fortify Software sponsored a survey of 100 hackers at last month’s Defcon. They discovered that 96% of the respondents think that the cloud creates new opportunities for hacking, and 86% believe that “cloud vendors aren’t doing enough to address cyber-security issues.”

I don’t consider myself a hacker (except maybe in the classical sense of the word, which had nothing to do with cracking systems and more about solving difficult problems with code), but I would agree with this majority opinion. In my experience, although cloud providers are fairly proficient at securing their own basic infrastructure, they usually stop there. This causes a break in the security spectrum for applications residing in the cloud.

Continuity and consistency are important principles in security. In the cloud, continuity breaks down in the hand-off of control between the provider and their customers, and potential exploits often appear at this critical transition. Infrastructure-as-a-Service (IaaS) provides a sobering demonstration of this risk very early in the customer cycle. The pre-built OS images that most IaaS cloud providers offer are often unpatched and out-of-date. Don’t believe me? Prove it to yourself the next time you bring up an OS image in the cloud by running a security scan from a SaaS security evaluation service like CloudScan. You may find the results disturbing.

IaaS customers are faced with a dilemma. Ideally, a fresh but potentially vulnerable OS should first be brought up in a safe and isolated environment. But to effectively administer the image and load patch kits, Internet accessibility may be necessary. Too often, the solution is a race against the bad guys to secure the image before it can be compromised. To be fair, OS installations now come up in a much more resilient state than in the days of Windows XP prior to SP2 (in those days, the OS came up without a firewall enabled, leaving vulnerable system services unprotected). However, it should surprise few people that exploits have evolved in lock step, and these can find and leverage weaknesses astonishingly fast.

The world is full of ex-system administrators who honestly believed that simply having a patched, up-to-date system was an adequate security model. Hardening servers to be resilient when exposed to the open Internet is a discipline that is time-consuming and complex. We create DMZs at our security perimeter precisely so we can concentrate our time and resources on making sure our front-line systems are able to withstand continuous and evolving attacks. Maintaining a low risk profile for these machines demands significant concentrated effort and continual ongoing monitoring.

The point so many customers miss is that cloud is the new DMZ. Every publicly accessible server must address security with the same rigor and diligence of a DMZ-based system. But ironically, the basic allure of the cloud—that it removes barriers to deployment and scales rapidly on demand—actually conspires to work against the detail-oriented process that good security demands. It is this dichotomy that is the opportunity for system crackers. Uneven security is the irresistible low-hanging fruit for the cloud hacker.

CloudProtect is a new product from Layer 7 Technologies that helps reconcile the competing demands of openness and security in the cloud. CloudProtect is a secure, cloud-based virtual appliance based on Red Hat Enterprise Linux (RHEL). Customers use this image as a secure baseline to deploy their own applications. CloudProtect features the hardened OS image that Layer 7 uses in its appliances. It boots in a safe and resilient mode from first use. The RHEL distribution includes a fully functioning SecureSpan Gateway that governs all calls to an application’s APIs hosted on the secured OS. CloudProtect offers a secure console for visual policy authoring and management, allowing application developers, security administrators, and operators to completely customize the API security model to their requirements. For example, need to add certificate-based authentication to your APIs? Simply drag and drop a single assertion into the policy and you are done. CloudProtect also offers the rich auditing features of the SecureSpan engine, which can feed a billing process or be leveraged in a forensic investigation.

More information about the full range of Layer 7 cloud solutions, including Single Sign-On (SSO) using SAML for SaaS applications such as Salesforce.com and Google Apps, can be found here on the Layer 7 cloud solutions page.