The API Lab

Promotion is a problem faced by every API developer. Long nights of coding have given form to the stroke of genius you had six months ago in the cafe. You’ve just written the API that will serve as the front door into your application. But how do you document this so that your peers will use it—and hopefully make you rich in the process?

Java had Javadoc, an innovation that managed to strike a surprisingly effective balance between ease of use and systematization (three cheers for strong typing and static binding). Web services “solved” the interface definition problem with WSDL, a standard only its authors could love (and some of them won’t even admit to having participated in its conception). To the registry crowd fell the task of devising clever ways to document a SOAP API around the torturous abstractions of WSDL so that humans could grok it as effectively as the machines. Few would argue that their solutions were, well, solutions.

RESTful APIs neither benefit nor suffer from a formalized approach to interface description and automation. Every so often WADL pokes its head up, only to be knocked decisively back into its hole in a community game of Standards-Whac-A-Mole. REST is—and indeed, has always been—a style that resists any formalism that threatens its simplicity and accessibility. For this, above all, the community deserves praise.

But this does leave a hole around documentation. APIs tend to be described-by-wiki, which can be great or just awful, depending on a developer’s ability to write and their discipline to keep the content current. Good ideas like ProgrammableWeb showcase both of these extremes; the site suffers in particular from out-of-date API descriptions, which is a shame.

Outside of the eventual acceptance en masse of some kind of standardized wiki template for APIs (which I think is actually quite likely), we probably won’t see substantial change in API documentation soon. Minor changes, though, can still count for a lot, and it’s worth pointing these out.

Google, always an innovator to watch, creates a lot of APIs. The company recently published the Periodic Table of Google APIs, which, as its name suggests, is a clever way to visually categorize its API portfolio into groups like mobile, data, geo, etc. It’s a bit gimmicky, but it is also fun, and it actually inspired me to try some new Google APIs that I wasn’t aware of before.

So, isn’t it about time for more creative solutions around API documentation?

Layer 7 Technologies Joins the Cloud Security Alliance (CSA)

I’m pleased to announce that Layer 7 has joined the Cloud Security Alliance (CSA) as a full corporate member. For the past several years, the CSA has assumed the leadership role in defining the best practices to secure cloud applications, data, and infrastructure.

I believe that when you join a community organization, you are obliged to make a real contribution. Being a member means a lot more than just having your company logo on the sponsor list. I’ve been involved previously with the CSA, as a co-author of version 2 of its Security Guidance for Critical Areas of Focus in Cloud Computing, and as a co-author of the organization’s Top Threats in Cloud Computing document. Now that we are corporate members, Layer 7 will help to drive two important events within the CSA.

First, Layer 7 is a sponsor of the CSA summit at this year’s RSA conference in San Francisco, running Feb 14-18, 2011. I was a participant at the CSA summit last year. This one-day event sold out instantly, and most attendees agree it was one of the highlights of the RSA conference. If you are in San Francisco for the 2011 RSA show, you should try to get into Monday’s CSA event. The CSA has some very special guests lined up to speak—including Vivek Kundra, the US Chief Information Officer—and I can assure you that once again the summit will be the talk of RSA.

I am also fortunate to be co-presenting a CSA-sponsored webinar about Managing API Security in SaaS and Cloud with Liam Lynch, eBay’s Head of Security. The rapidly growing API management space has a number of unique challenges with segmentation of roles, access to usage information, developer on-boarding, user management, and community building. Liam and I will talk about our own experiences in this space, and I will explore several case studies that illustrate each issue and its solution. I hope you can join us on Feb 23, 2011 for this talk.

REST is Simple, But Simple is not REST

Last December, William Vambenepe posted a provocative blog entry about RESTful APIs in the cloud. In it, he states that Amazon proves that REST doesn’t matter for Cloud APIs. Writing over at InfoQ, Mark Little re-ignited the controversy in his recent post asking Is REST important for Cloud? And Vambenepe responded to the re-invigorated commentary in Cloud APIs are like Military Parades. All of these posts are very much worth reviewing, and be sure to read the comments left by members of the community.

Amazon’s query APIs are certainly not textbook examples of the RESTful architectural style, a point the Restafarians are quick to make. But they are simple, understandable, and very easy to implement, which is largely the set of characteristics that attracted developers to REST in the first place. REST does have a satisfying elegance and undeniable philosophical appeal, but accessibility probably counts for more in the popularity stakes. If there is one theme that repeats throughout the history of computing, surely it is the story of simple and good enough winning the day.

Amazon’s APIs are not the reason that AWS is such a resounding success; it is the service offering that was responsible for that. However, the APIs were good enough that they didn’t hinder Amazon’s success, and this is the relevant point. Dig a little deeper, though, and Amazon’s API story offers some interesting insights.

Amazon, it turns out, is a good case study illustrating the evolution of Internet APIs. When AWS first appeared, Amazon engineers faced what at the time was an unresolved question of style: should a cloud publish SOAP services, or RESTful APIs? I’ve heard that question led to some pretty heated internal debates, and I don’t doubt this is true. Amazon lies in Microsoft’s backyard, and .NET in those days was very firmly in the SOAP camp. REST, however, was rapidly gaining mind share, especially due to the growing popularity of AJAX in Web 2.0 apps.

The engineers’ solution was to hedge, producing a mix of SOAP, a “REST-like” query interface (HTTP with query parameters as name/value pairs), and REST APIs, all somewhat in parallel. Developers gravitated toward the latter two options so much more than SOAP that AWS left the orbit of WS-* for good. Amazon gave the market a choice, and the market spoke decisively. This was an interesting experiment, and in performing it Amazon actually led the way—as they have in so many ways—for future cloud providers. Each new provider had the luxury of an answer to the question of API style: REST won (even if this wasn’t really REST). Now all an emerging cloud provider needed to do was design its API according to current best practices, and this is something for which experience counts. Roy Fielding’s thesis may have come out in 2000, but a broad appreciation among developers of what constitutes good RESTful design is a much more recent phenomenon, one that has benefited greatly from the growing popularity of the style.
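
To make the stylistic difference concrete, here is a minimal sketch in Python of the two call styles. The endpoints, parameters, and paths are illustrative placeholders rather than Amazon’s actual interfaces, and request signing is omitted entirely:

    import requests

    # Query style: one endpoint, with the operation packed into
    # name/value pairs on the query string
    requests.get(
        "https://compute.example-cloud.com/",
        params={"Action": "DescribeInstances", "Version": "2011-01-01"},
    )

    # A more RESTful rendering of the same intent: the resource lives in
    # the URI path and the HTTP verb carries the semantics
    requests.get("https://api.example-cloud.com/v1/instances")

Neither call is difficult to make, which is really the point: the query style is perfectly serviceable even if it is not textbook REST.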

APIs are of course simply an interface between components of a distributed application. Like any interface, properly versioned, they can evolve. One thing I’ve noticed lately is that our customers are increasingly using Layer 7 technology to refine their APIs as they come to better understand their own needs.

For example, we have a customer in the publishing industry that is adapting existing SOAP-based APIs to RESTful design principles. They use our SecureSpan Gateways to present a true REST facade, mapping these calls to the existing SOAP messages on the fly. The transformation is fairly mechanical; what is more important is that the exercise gave their developers an opportunity to rethink the API suite itself. Many of their new RESTful APIs leverage SecureSpan’s orchestration capabilities to consolidate multiple SOAP calls behind a single REST call. The resulting API 2.0 is a better match for their current business—and best of all, they were able to do all of this without writing a line of new code.
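
For readers who prefer code to prose, here is a heavily simplified sketch of the facade pattern in Python. The endpoint, operation names, and fields are hypothetical, and in practice the same orchestration is expressed declaratively in SecureSpan policy rather than written by hand:

    from flask import Flask, jsonify
    import requests

    app = Flask(__name__)
    SOAP_ENDPOINT = "https://legacy.example.com/services/CatalogService"

    def soap_call(action, body):
        # Wrap the fragment in a SOAP envelope and post it to the legacy service
        envelope = (
            '<soapenv:Envelope xmlns:soapenv='
            '"http://schemas.xmlsoap.org/soap/envelope/">'
            '<soapenv:Body>' + body + '</soapenv:Body></soapenv:Envelope>'
        )
        return requests.post(
            SOAP_ENDPOINT,
            data=envelope,
            headers={"Content-Type": "text/xml", "SOAPAction": action},
        )

    @app.route("/titles/<title_id>")
    def get_title(title_id):
        # One RESTful resource consolidates two legacy SOAP operations
        details = soap_call("GetTitleDetails", "<TitleId>%s</TitleId>" % title_id)
        stock = soap_call("GetStockLevel", "<TitleId>%s</TitleId>" % title_id)
        return jsonify(details=details.text, stock=stock.text)

The advantage of doing this in a gateway is that the mapping, along with the security and auditing around it, lives in configuration rather than in application code.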

The lesson here is that while we like to think that APIs are perfectly static and never change, the truth is that we rarely get interfaces right the first time. Requirements change, business moves, and our understanding advances. What is important is that your APIs can respond with agility to changing needs, because change, we all know, is inevitable.

A Clear View of Virtualization

I spoke recently with Robyn Weisman from Processor Magazine about the basics of virtualization. This is one of those terms that everyone in the computing industry uses, but very often with different meanings. Her article is now available in the Jan 28, 2011 print issue. It is also accessible online: A Clear View of Virtualization.

Defeating the Facebook Hack

This week, Facebook fell victim to hackers who managed to deface Mark Zuckerberg’s page, no doubt earning the perpetrators tremendous props within their own social community. Facebook quickly closed the door on that particular exploit, but by then of course the Internets were abuzz and the damage was done. The company then followed up with some unrelated security distractions: HTTPS, good for countering Firesheep (I love that name); social authentication instead of CAPTCHAs (this is actually interesting and plays to their strengths); and an announcement that this Friday is “Data Privacy Day” (Ouch).

There aren’t many details available on the hack (the Guardian has a great investigation examining some of the clues that were left behind), but it appears that one particular API didn’t perform sufficient authorization on a POST. This is a common problem when you don’t make the configuration of basic security functions like AAA highly visible, auditable, and tunable. Leave security to the developers, and rigor is often overlooked because of a coder’s naturally intense focus on functionality. Decouple security from the API, promoting security management to a first-class citizen administered by internal experts—well, then you have a much greater chance of avoiding embarrassing gaffes like this one. Consistency is the soul of good security, and the decoupling strategy makes consistency an achievable goal across all of an organization’s APIs. Specialize where necessary, but make everything highly visible to the experts so they can easily move from the big picture to the necessary minutiae. Make security configuration highly declarative, both to advance understanding and to facilitate rapid change in response to evolving threats. This is how API management must be in 2011.
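
The details of the actual exploit are not public, so the following is purely a hypothetical sketch in Python of the bug class. The first handler trusts the caller-supplied profile ID; the second enforces an ownership check before the handler ever runs, which is the kind of rule a gateway can apply declaratively and consistently across every API:

    from functools import wraps
    from flask import Flask, request, abort

    app = Flask(__name__)

    def require_owner(view):
        @wraps(view)
        def wrapper(profile_id):
            # In a real deployment the authenticated identity would come from a
            # verified session or token, not from a header taken on faith.
            if request.headers.get("X-Authenticated-User") != profile_id:
                abort(403)
            return view(profile_id)
        return wrapper

    # Vulnerable pattern: any authenticated caller can post to any profile
    @app.route("/v1/profiles/<profile_id>/posts", methods=["POST"])
    def create_post_unchecked(profile_id):
        return ("posted", 201)

    # Decoupled pattern: the authorization rule sits outside the handler itself
    @app.route("/v2/profiles/<profile_id>/posts", methods=["POST"])
    @require_owner
    def create_post_checked(profile_id):
        return ("posted", 201)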

Facebook has its pick of the best and brightest these days—and they still face these problems. As other, less privileged organizations attempt to create APIs—and this is very much the trend of the day—we can expect to see many more such attacks. Having the CEO’s page defaced is certainly embarrassing. But in the long run, it’s a lot less expensive than a real privacy breach, or than cleaning up after massive, sustained fraud.

The time for do-it-yourself security and management of APIs has passed. This is the time to deploy professional solutions to the problem that have been proven out by the military, intelligence community, and health care—groups that have come to understand the best practices around API security out of absolute necessity.

To learn more, take a look at Layer 7 Technologies solutions for securing and managing APIs.

How to Choose a SOA Gateway

Dana Crane, Product Marketing Manager for Layer 7, is delivering a webinar this Thursday, January 27, covering what you should consider when choosing an SOA Gateway. The SOA Gateway category is a little like an iceberg, in that there is a lot more to it than first meets the eye. Here at Layer 7, we find that customers first become interested in SOA Gateways as a means to solve their security and transaction management problems. But they quickly discover that SOA Gateways can easily function as an Enterprise Service Bus (ESB), a service orchestration engine, a lightweight PKI system, a Security Token Service (STS), and even a service host for applications. We have a number of customers who have been able to discontinue expensive maintenance contracts on basic infrastructure applications simply because their SOA Gateway was able to meet all of their needs.

I hope you can join my colleague Dana later this week and learn more about what you need to look for in an SOA Gateway solution. Dana is one of the best product guys I know. He really understands the needs of the marketplace, and he keeps all of us here at Layer 7 honest. You can sign up for Dana’s webinar on the Layer 7 web site right here.

Hacking the Cloud

I’m not sure who is more excited about the cloud these days: hackers or venture capitalists. But certainly both groups smell opportunity. An interesting article published by CNET a little while back nicely illustrates the growing interest the former have in cloud computing. Fortify Software sponsored a survey of 100 hackers at last month’s Defcon. They discovered that 96% of the respondents think that the cloud creates new opportunities for hacking, and 86% believe that “cloud vendors aren’t doing enough to address cyber-security issues.”

I don’t consider myself a hacker (except maybe in the classical sense of the word, which had less to do with cracking systems and more to do with solving difficult problems with code), but I would agree with this majority opinion. In my experience, although cloud providers are fairly proficient at securing their own basic infrastructure, they usually stop there. This creates a break in the security continuum for applications residing in the cloud.

Continuity and consistency are important principles in security. In the cloud, continuity breaks down in the hand-off of control between the provider and their customers, and potential exploits often appear at this critical transition.  Infrastructure-as-a-Service (IaaS) provides a sobering demonstration of this risk very early in the customer cycle. The pre-built OS images that most IaaS cloud providers offer are often unpatched and out-of-date. Don’t believe me? Prove it to yourself the next time you bring up an OS image in the cloud by running a security scan from a SaaS security evaluation service like CloudScan. You may find the results disturbing.
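
For a crude first approximation of what such a scan will find, a few lines of Python can check a freshly launched image for commonly exposed services. The address below is a placeholder, and a real assessment service tests far more than open ports:

    import socket

    HOST = "203.0.113.10"   # placeholder: the public address of the new image
    COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http",
                    3306: "mysql", 3389: "rdp", 5432: "postgresql"}

    for port, name in sorted(COMMON_PORTS.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            reachable = s.connect_ex((HOST, port)) == 0
        print("%10s (%d): %s" % (name, port, "OPEN" if reachable else "closed"))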

IaaS customers are faced with a dilemma. Ideally, a fresh but potentially vulnerable OS should first be brought up in a safe and isolated environment. But to effectively administer the image and load patch kits, Internet accessibility may be necessary. Too often, the solution is a race against the bad guys to secure the image before it can be compromised. To be fair, OS installations now come up in a much more resilient state than Windows XP did before SP2, when the OS booted without a firewall enabled, leaving vulnerable system services exposed. However, it should surprise few people that exploits have evolved in lockstep, and they can find and leverage weaknesses astonishingly fast.

The world is full of ex-system administrators who honestly believed that simply having a patched, up-to-date system was an adequate security model. Hardening servers to be resilient when exposed to the open Internet is a time-consuming and complex discipline. We create DMZs at our security perimeter precisely so we can concentrate our time and resources on making sure our front-line systems can withstand continuous and evolving attacks. Maintaining a low risk profile for these machines demands significant, concentrated effort and continual monitoring.

The point so many customers miss is that the cloud is the new DMZ. Every publicly accessible server must address security with the same rigor and diligence as a DMZ-based system. But ironically, the basic allure of the cloud—that it removes barriers to deployment and scales rapidly on demand—works against the detail-oriented process that good security demands. It is this dichotomy that creates the opportunity for system crackers. Uneven security is the irresistible low-hanging fruit for the cloud hacker.

CloudProtect is a new product from Layer 7 Technologies that helps reconcile the competing demands of openness and security in the cloud. CloudProtect is a secure, cloud-based virtual appliance built on Red Hat Enterprise Linux (RHEL). Customers use this image as a secure baseline on which to deploy their own applications. CloudProtect features the hardened OS image that Layer 7 uses in its appliances, and it boots in a safe and resilient mode from first use. The distribution includes a fully functioning SecureSpan Gateway that governs all calls to an application’s APIs hosted on the secured OS. CloudProtect offers a secure console for visual policy authoring and management, allowing application developers, security administrators, and operators to completely customize the API security model to their requirements. For example, need to add certificate-based authentication to your APIs? Simply drag and drop a single assertion into the policy and you are done. CloudProtect also offers the rich auditing features of the SecureSpan engine, which can feed a billing process or be leveraged in a forensic investigation.
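
As a generic illustration of what “require certificate-based authentication” means at the transport level, here is a short Python sketch using only the standard library and hypothetical file paths. It is not SecureSpan policy syntax; in CloudProtect the equivalent rule is the single assertion described above:

    import http.server
    import ssl

    server = http.server.HTTPServer(("0.0.0.0", 8443),
                                    http.server.SimpleHTTPRequestHandler)

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="gateway.pem", keyfile="gateway.key")
    context.load_verify_locations(cafile="trusted_clients_ca.pem")
    context.verify_mode = ssl.CERT_REQUIRED  # reject callers without a valid client cert

    server.socket = context.wrap_socket(server.socket, server_side=True)
    server.serve_forever()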

More information about the full range of Layer 7 cloud solutions, including Single Sign-On (SSO) using SAML for SaaS applications such as Salesforce.com and Google Apps, can be found here on the Layer 7 cloud solutions page.

Using SecureSpan Gateways for Traffic Distribution into Oracle OSB

The SERPLand blog talks about deploying Layer 7 as a load balancer in front of Oracle’s OSB product. We introduced a fully integrated OSB+SecureSpan appliance a couple of years ago for customers who want an easy way to deploy OSB securely into the DMZ.

Over to You, Rush

Last week’s New York Times article on hacking the new gadgets, including Web-enabled HDTVs, generated an enormous amount of interest. An AM radio station in Tampa Bay, 970 WFLA AM, invited me onto their morning commuter show, AM Tampa Bay with Tedd Webb and Mark Larsen (standing in for Jack Harris). This was a great opportunity to reach a much wider audience than I usually speak to. So I woke up at 4am to an icy-cold Vancouver morning, and tried my best to imagine the sunshine and palm trees in far away Tampa.

It was a great, fast-paced discussion. Tedd and Mark are the real thing—listening to the interview makes me wish I had been born with that radio voice. Those guys are total pros.

After my five minutes of fame, the station filled the air for the rest of the day with the giants of conservative talk radio, including Glenn Beck, Rush Limbaugh, Sean Hannity, and Mark Levin.

You can listen to the interview here.

Is Web-connected TV the New Power Play for Hackers?

Over the holidays I had the fortune of being quoted in the New York Times. This came out of an interview I had with NYT writer Ashlee Vance about the recent discovery of security weaknesses in Web-connected HDTVs. Researchers at Mocana, a security services company in the Bay Area, identified a number of  vulnerabilities in one of the most popular Internet-enabled televisions. This is the first major security incident for a product category that is very likely to become wildly popular, but I doubt it will be the last.

In the hacking community, cracked systems equal power. Such power may be tangible, such as a botnet available for hire, or simply the social power derived from compromising a particularly high profile target. But as more interesting devices appear on the Internet—such as smart phones, TVs, and even refrigerators—there will be an inevitable shift in focus within the hacking community toward these. This is because these new devices represent enormous potential for the consolidation of new power.

The motivation to attack connected devices isn’t simply to target a new platform that might contain trivial vulnerabilities (though for some, this may be enough). The real attraction here is the sheer number of nodes; this, fundamentally, is about volume. It is estimated that by the end of 2010, Apple had shipped around 75 million iPhones. (To put this number into perspective, by July 2010 Microsoft announced it had shipped 150 million units of Windows 7.) The iPhone alone represents an enormous injection of computing power onto the Internet, delivered over the course of only 3 1/2 years.

Now, the iPhone happens to be a remarkably stable and secure platform, thanks in part to Apple’s rigid curation of the hardware, software, and surrounding app ecosystem. But what is interesting to note is how quickly a new Internet platform can spread, and how much of the total global computing horsepower it can represent. The consumer world, by virtue of its size, its fads and caprice, and its unprecedented spending power, can shift the balance of computing power in months (iPads, anyone?). Today the connected-device explosion centers around mobile phones, but tomorrow it could easily be Web-connected TVs, smart power meters, or iToilets. This radical change to Internet demographics—from servers and desktops to things and mobile devices—will prove irresistible to the hacking community.

What is troubling about the vulnerabilities Mocana found is how basic they were. Device manufacturers must place far greater emphasis on fundamental system security. What will happen when the next wildly popular consumer device is exposed to the full cutting-torch of hacker attention? It is going to be an interesting decade…