Tag Archives: security

The Cloud Security Alliance Introduces The Security, Trust and Assurance Registry

As a vendor of security products, I see a lot of Requests for Proposal (RFPs). More often than not, these consist of an Excel spreadsheet with dozens (sometimes even hundreds) of questions, ranging from how our products address business concerns to security minutiae that only a high-geek can understand. RFPs are a lot of work for any vendor to respond to, but they are an important part of the selling process and we always take them seriously. They are also a tremendous amount of work for the customer to prepare, so it’s not surprising that they vary greatly in sophistication.

I’ve always thought it would be nice if the SOA gateway space had a standardized set of basic questions that focused vendors and customers on the things that matter most in Governance, Risk and Compliance (GRC). In the cloud space, such a framework now exists. The Cloud Security Alliance (CSA) has introduced the Security, Trust and Assurance Registry (STAR), a registry of self-assessments built around a series of questions designed to document the security controls a cloud provider has in place. IaaS, PaaS, and SaaS cloud providers will self-assess their status and publish the results in the CSA’s centralized registry.

Providers report on their compliance with CSA best practices in two different ways. From the CSA STAR announcement:

1. The Consensus Assessments Initiative Questionnaire (CAIQ), which provides industry-accepted ways to document what security controls exist in IaaS, PaaS, and SaaS offerings. The questionnaire (CAIQ) provides a set of over 140 questions a cloud consumer and cloud auditor may wish to ask of a cloud provider. Providers may opt to submit a completed Consensus Assessments Initiative Questionnaire.
2. The Cloud Controls Matrix (CCM), which provides a controls framework that gives detailed understanding of security concepts and principles that are aligned to the Cloud Security Alliance guidance in 13 domains. As a framework, the CSA CCM provides organizations with the needed structure, detail and clarity relating to information security tailored to the cloud industry. Providers may choose to submit a report documenting compliance with Cloud Controls Matrix.

The spreadsheets cover eleven control areas, each subdivided into a number of distinct control specifications. The control areas are:

  1. Compliance
  2. Data Governance
  3. Facility Security
  4. Human Resources
  5. Information Security
  6. Legal
  7. Operations Management
  8. Risk Management
  9. Release Management
  10. Resiliency
  11. Security Architecture

The CSA hopes that STAR will help to shorten purchasing cycles for cloud services because the assessment addresses many of the security concerns that users have today with the cloud. As with any benchmark, over time vendors will refine their product to do well against the test—and as with many benchmarks, this may be to the detriment of other important indicators. But this set of controls has been well thought through by the security professionals in the CSA community, so cramming for this test will be a positive step for security in the cloud.

Introducing Layer 7’s OAuth Toolkit

“If your tools don’t work for you, get rid of them,” is a simple creed I learned from my father in the workshop. Over the years, I have found it just as relevant applied to software, where virtual tools abound but are often of dubious value.

OAuth is an emerging technology that has lately been in need of useful tools, and to fill this gap we are introducing an OAuth toolkit into Layer 7’s SecureSpan and CloudSpan Gateways. OAuth isn’t exactly new to Layer 7; we have done a number of OAuth implementations with our customers over the last two years. But what we’ve discovered is that there is a lot of incompatibility between different OAuth implementations, and this discourages many organizations from making better use of the technology. Our goal with the toolkit was to provide a collection of intelligently parameterized components that developers can mix and match to reduce the friction between implementations. And thanks to the generalization that characterizes the emerging OAuth 2.0 specification, the toolkit helps extend OAuth into interesting new use cases beyond the basic three-legged scenario of version 1.0.
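
To make the simplest of those new use cases concrete, here is a minimal sketch of an OAuth 2.0 client credentials exchange (the “two-legged” case) in Python. The token endpoint, client ID, and secret are placeholders, and parameter names varied across draft 2.0 implementations at the time, which is exactly the friction the toolkit is designed to absorb.

    import requests  # third-party HTTP library

    # Hypothetical token endpoint and client credentials; placeholders only.
    TOKEN_URL = "https://auth.example.com/oauth2/token"
    CLIENT_ID = "my-client-id"
    CLIENT_SECRET = "my-client-secret"

    def get_access_token():
        # Client credentials grant: the client authenticates as itself,
        # with no resource-owner redirect dance as in the three-legged flow.
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials"},
            auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic client authentication
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    def call_api(token):
        # Present the bearer token on each protected API call.
        return requests.get(
            "https://api.example.com/resource",
            headers={"Authorization": "Bearer " + token},
            timeout=10,
        )

    if __name__ == "__main__":
        print(call_api(get_access_token()).status_code)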

I have to admit that I was suspicious of OAuth when it first appeared a few years ago. So much effort had gone into the formal specification of SAML, from core definition to interop profiles, that I didn’t see the need for OAuth’s single-use-case solution and had little faith in the rigor of such a grassroots approach. But in time, OAuth won me over; it fits well with the browser-centric, simple-is-better approach of the modern Internet. The mapping to more generalized, token server-style interactions in the new version of the spec appeals to the architect in me, and the opening up of the security token payload indicates a desire to play well with existing infrastructure, which is a basic enterprise requirement.

However, adding extensibility to OAuth will also bring about this technology’s greatest challenge. The 1.0a specification benefitted enormously from laser focus on a use case so narrow that it was a wonder it gained the mindshare that it did. OAuth in 2011 has no such advantage—generalization being great for architects but hell for standards committees and vendors. It will be interesting to see how well the OAuth community satisfies the oftentimes-conflicting agendas of simple, standard, and interoperable.

Here at Layer 7 we predict a bright future for OAuth. We also think it’s very useful today, which is why we introduced a toolkit instead of a one-size-fits-one approach. We see our customers using OAuth in concert with their existing investments in Identity and Access Management (IAM) products, such as IBM’s Tivoli Access Manager (TAM) or Microsoft’s Active Directory (AD). We see it being used to transport SAML tokens that require sophisticated interpretation to render entitlement decisions. Taking a cue from OAuth itself, the point of our toolkit is to simplify both implementation and integration. And the toolkit’s parameterization helps to insulate the application from specification change.

I’ll be at the Gartner/Burton Catalyst show this week in San Diego where we’ll be demonstrating the toolkit. I hope you can drop by and talk about how it might help you.

LulzSec Disbands

“Live fast, die young, and leave a good-looking corpse” was first uttered by actor John Derek in Knock on Any Door, a 1949 film also starring Humphrey Bogart. This irresistible catchphrase has inspired generations of rebels from film to music to out-of-control teenagers. It also seems to have been taken to heart by the hacker collective LulzSec, which after a spectacular 50-day blitz across the Internet, is dissolving back into the shadowy back alleys from which it appeared. And just as James Dean (another famous adherent to the formula) did for film, so too has LulzSec changed the face of IT security and left an inspirational challenge for hacking’s next generation.

What is interesting about LulzSec isn’t necessarily their technique but their PR. The group appeared on the heels of high-profile hacks by Anonymous and played masterfully into a media-fueled hack-steria, feeding a public imagination over-stimulated by big, audacious exploits that make great copy. LulzSec was the perfectly timed counterpoint to Anonymous: gang fights equal news that writes itself, whether the conflict is between thugs, dancers, graffiti writers, or hackers. And slipping away before being caught (sans one alleged member) ties the story up neatly into a narrative made to entertain. I’ve no doubt the movie rights will be bid sky-high.

If LulzSec can lay claim to a legacy, then surely it is that effective marketing is just as important as the hack itself. LulzSec went from zero to global brand in a scant 50 days, a success most marketing gurus can only dream of. In its wake, the collective leaves a somewhat heightened awareness among the general public of the terrible cost of security breaches. Their means to this end, of course, remain dubious; most hackers offer the same knee-jerk justification for their actions, though few are as wildly successful as LulzSec has been.

Nevertheless, no CEO wants to be subject to the negative publicity endured by Sony, which has suffered wave after wave of successful cyber attacks. It is safe to say that LulzSec has dragged Internet security back into the executive suite, something that seemed almost unthinkable only a few months ago. The intelligent response to this new attention should be an increased emphasis on basic IT security foundations.

Blowing Holes in the Web of Trust

The Register today published an excellent summary of the latest issues with SSL. In the typically blunt and mordant style for which the publication is so famous, Dan Goodin illustrates how the gossamer-thin SSL web of trust is built on a superstructure of astonishingly dubious merit. It’s a wonder the whole thing works at all.

Have a careful read of How is SSL hopelessly broken? Let us count the ways and then re-examine the cartel certs that anchor your own web browsing experience. As you roll out your API strategy, make sure you deploy your SSL endpoints with certificates that were subject to organizational or (much better) extended validation. Encourage—or if you can, demand—that your API clients limit their trust stores to a small subset containing only the most legitimate CAs.
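
As a minimal sketch of that last point in Python, an API client can pin its trust to a deliberately small CA bundle instead of the platform’s default store. The bundle path and API URL below are placeholders:

    import requests  # third-party HTTP library

    # Placeholders: point 'verify' at a PEM file containing only the CA
    # (or small set of CAs) you trust for this API, rather than the
    # hundreds of roots shipped with the OS or browser.
    API_URL = "https://api.example.com/v1/status"
    PINNED_CA_BUNDLE = "/etc/myapp/trusted-ca.pem"

    def fetch_status():
        # TLS verification fails unless the server's certificate chains
        # to a root inside our deliberately small bundle.
        resp = requests.get(API_URL, verify=PINNED_CA_BUNDLE, timeout=10)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(fetch_status())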

The opportunity is largely over in the browser world; effecting massive change there will only happen when individuals personally lose money on a grand scale. But APIs still have a chance to regain some level of trust through rigorous application of SSL best practices, and API providers and developers can take the initiative here.

The API Gateway As Endpoint

The US Customs and Immigration station at Point Roberts is a small building set on the edge of the coastal rainforest of Washington State. It is the last customs station in a vast western chain strung out along the 49th parallel. I enjoy crossing the border at Point Roberts. The abrupt change from sprawling Canadian suburbs to American rural countryside always appeals to me.

The customs station, despite its small size, offers a range of services, from granting a visa to simply giving advice about where to find a good deal on gas in Point Roberts. It’s not simply a gateway controlling access into the US, but a provider of a broad range of services associated with crossing an international boundary.

There exists a similar duality with API gateways. Although the common deployment is as a border guard, protecting APIs hosted by an organization, an API gateway can also act as an endpoint providing valuable standalone services. Nearly all of my customers first purchase SecureSpan Gateways to fulfill the role the name implies: that is, an API or service gateway. The border guard deployment pattern is now so commonplace that I no longer need to evangelize it as I did in the early days of Layer 7, close to a decade ago. Architects recognize this pattern as an accepted best practice for securing and managing APIs. But like the Point Roberts border station, a SecureSpan Gateway can also provide services that have nothing to do with access control or transaction confidentiality, but provide value on their own, independent of any APIs they may be guarding.

Here’s a very simple example illustrating the endpoint pattern using a SecureSpan Gateway. Normally I might deploy my gateway in front of a web server, acting as an authenticating reverse proxy for my web pages. In this role, the gateway is responsible for access control, SSL management, audit, lightweight load balancing, etc. All classic gateway functions.

Figure 1: Gateway as border guard at the edge of a network. This is classic perimeter security.

But suppose I wanted to use the gateway itself as the web server? In other words, it’s not acting as a gateway, but as the server endpoint itself?

Figure 2: Gateway as service endpoint. Here the gateway is providing valuable APIs on its own without necessarily routing a request to an internal host.

It’s very simple to configure a SecureSpan Gateway as an endpoint. You simply create a policy that never routes a request onward to an internal host, but instead acts on the message content internally. I sometimes think of it as an internal loopback mechanism. I’ll build a policy implementing my web site, and add the assertion Return Template Response to Requester. I will bind this policy to the gateway URL http://localhost:8080/helloworld as shown in the screen snippet below from the SecureSpan management console:

With just one assertion, this is pretty much the simplest policy you can write, and so fitting for the time-honoured Hello World example. Let’s expand the assertion to examine what the template actually contains:

Pretty plain-vanilla HTML. The only departure I’ve made from the deliberate austerity of the Hello World tradition is to add a time stamp to showcase the use of predefined context variables. There are dozens of context variables that are automatically set as the gateway processes a transaction (regardless of whether it is acting as a gateway or an endpoint; remember, we consider this just a policy implementation pattern, not a different capability). Context variables cover everything from accessing individual fields in the X.509 certificate used for client-side SSL authentication to storing the IP address of a request sender. And of course, you can set your own context variables at any time within a policy.
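
SecureSpan policies are not written in a general-purpose language, but the shape of the pattern is easy to mimic. Here is a rough Python stand-in for the single-assertion policy above, serving a fixed HTML template at /helloworld and substituting a timestamp the way a context variable would be expanded. This is illustrative only, not Layer 7 code:

    from datetime import datetime, timezone
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Stand-in for a Return Template Response policy: static HTML with one
    # substitution slot playing the role of a gateway context variable.
    TEMPLATE = """<html>
      <body>
        <h1>Hello World</h1>
        <p>Generated at {timestamp}</p>
      </body>
    </html>"""

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/helloworld":
                self.send_error(404)
                return
            body = TEMPLATE.format(
                timestamp=datetime.now(timezone.utc).isoformat()
            ).encode("utf-8")
            # The request is never routed onward; the response is composed
            # right here, which is the essence of the loopback pattern.
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), HelloHandler).serve_forever()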

Just to be complete, here’s what happens when I point my browser at the gateway-as-endpoint:

Now, before you think I’ve gone mad, I am not advocating that you deploy a web site like this; there are much better tools for that purpose. But the point I’m making is that there are legitimate endpoint services that do indeed lend themselves to implementation on a gateway. These are just a little more complicated than what I’ve shown here, but nevertheless simple to implement using the declarative policy language in SecureSpan. For example:

  • Signing Services: In the notary pattern, a gateway with an on-board Hardware Security Module (HSM) responsible for protecting key material can easily provide authoritative signing services for all kinds of data, from XML documents to arbitrary text.
  • Encryption Services: Same idea as signing, with the goal to avoid insecure proliferation of keys.
  • Validation Services: Ensure data conforms to a particular schema.
  • Threat Detection Services: Scan content for SQL injection, Cross Site Scripting (XSS), XML threats, or custom threat signatures.
  • Security Token Services (STS): Issue security tokens like SAML or OAuth tokens. Validate or exchange tokens.
  • Centralized Session or Caching Management: Cache frequently used data items according to a scheme that is tuned to your application.
  • Policy Decision Point (PDP) Services: Receive and act on SAMLP requests.
  • XACML Services: Act as a centralized XACML engine providing authorization services.

The list above is by no means exhaustive; at Layer 7 we have implemented all of these at one time or another using SecureSpan Gateways. It’s an important design point for us that the only real difference between a gateway and an endpoint is how you define your policy.
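
To give one of these a little shape, here is a minimal sketch of the notary/signing pattern in Python, using the third-party cryptography package. On a real gateway the private key would stay inside the on-board HSM; generating a throwaway key in memory, as done here, is purely for illustration:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Illustration only: a real notary service would fetch its signing key
    # from an HSM rather than generate a throwaway key in memory.
    SIGNING_KEY = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    PSS = padding.PSS(
        mgf=padding.MGF1(hashes.SHA256()),
        salt_length=padding.PSS.MAX_LENGTH,
    )

    def sign(document: bytes) -> bytes:
        # Authoritative signature over arbitrary bytes: XML, JSON, plain text.
        return SIGNING_KEY.sign(document, PSS, hashes.SHA256())

    def verify(document: bytes, signature: bytes) -> bool:
        # Clients verify against the service's published public key.
        try:
            SIGNING_KEY.public_key().verify(signature, document, PSS, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    if __name__ == "__main__":
        doc = b"<invoice id='42'>...</invoice>"
        sig = sign(doc)
        print(verify(doc, sig))          # True
        print(verify(b"tampered", sig))  # False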

When Is The Cloud Not A Cloud?

Sometimes I joke that as my kids grow up they won’t see clouds, they’ll just see air—meaning of course that their use of cloud-based services will become so ubiquitous as to make the cloud moniker largely unnecessary. What we so enthusiastically label cloud will just be the way everyone builds and deploys their apps. “Nothing to see here folks; but look at my wonderful new application…”

We won’t arrive at this future until we strip the word cloud of its power. And to do this, we need to go after the things we thought made cloud unique and special in the first place. Today, Amazon took a vicious swipe at the canonical definition by introducing dedicated EC2 instances. Dedicating hardware to a single customer addresses the next logical layer in the hierarchy of security concerns after virtual isolation. Amazon’s VPC product, introduced back in August 2009, provided virtualized isolation in their multi-tenant environment. Essentially, VPC is like a virtual zone housing only your instances. This zone is tied back to your on-premise network using a VPN, and the only way in or out of the zone is through your corporate network. Other Amazon-resident applications cannot access your apps directly; in fact, any external app, Amazon-resident or otherwise, must go through your conventional corporate security perimeter and route back to Amazon over the VPN to gain access to a VPC app. The real value of VPC is that it puts instance access back into the hands of the corporate security group.

The problem that the highly security conscious organization has with VPC is that the “V” is for virtual. VPC may implement clever isolation tricks using dynamic VLANs and hypervisor magic known only to a gifted few, but when your critical application loads up you may actually reside on exactly the same hardware as your own worst enemy. In theory, neither of you can exploit this situation. But you need to believe the theory. Completely.

Today’s announcement means that Amazon’s customers can literally have exclusive use of hardware. This is good news for anyone with reservations about hypervisor isolation. However, the networking remains virtualized, and of course you can still ask the classic cloud security questions about where data resides, or about the background of the staff running the infrastructure. So a mini private cloud it is not; but dedicated instances are an interesting offering, nonetheless.
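
Mechanically, requesting single-tenant hardware is one parameter at launch time. A minimal sketch using today’s boto3 SDK (which postdates this post); the AMI and subnet IDs are placeholders, and dedicated instances must be launched into a VPC subnet:

    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholders: substitute a real AMI and a subnet in your own VPC.
    # Tenancy="dedicated" asks EC2 to place the instance on hardware
    # reserved for this account alone.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",
        Placement={"Tenancy": "dedicated"},
    )
    print(response["Instances"][0]["InstanceId"])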

What is more intriguing is that by providing dedicated hardware, Amazon is beginning to erode one of the basic foundations of the canonical cloud definition: multi-tenancy. Purists will argue (as they do, with unexpected vehemence, about private cloud) that what Amazon is offering is not a cloud at all, but a retrograde step back to simple hosting or co-location. I’m inclined to disagree; I think Amazon is offering a logical next step (and certainly not the last) in the evolution of cloud services. In doing so, Amazon amplifies some of the other important attributes that define what the cloud really is: self-service, a greatly changed division and scope of operational responsibility, the leveraging of economies of scale, elasticity, and the ability to pay for what you actually use.

I don’t think Amazon’s new offering will be wildly successful because it still leaves many security issues unresolved. But I do think it points the way to the future cloud, which will have many different attributes and characteristics that solve different problems. Some may conflict with traditional definitions and expectations. Some may fulfill them. What is important is to choose the service that meets your needs, and don’t worry what it’s called. That’s marketing’s problem.

Upcoming Webinar: Extending Enterprise Security Into The Cloud

On March 15, 2011, Steve Coplan, Security Analyst at The 451 Group, and I will present a webinar describing strategies CIOs and enterprise architects can implement to create a unified security architecture between on-premise IT and the cloud.

I have great respect for Steve’s research. I think he is one of the most cerebral analysts in the business; but what impresses me most is that he is always able to clearly connect the theory to its practical instantiation in the real world. It’s a rare skill. He also has a degree in Zulu, which has little to do with technology, but makes him very interesting nonetheless.

Lately Steve and I have been talking about the shrinking security perimeter in the cloud and what this means to the traditional approaches for managing single sign-on and identity federation. This presentation is a product of these discussions, and I’m anticipating that it will be a very good one.

I hope you can join us for this webinar. It’s on Tuesday, March 15, 2011, at 9:00 AM PST | 12:00 PM EST | 5:00 PM GMT. You can register here.

Overview:
For years, enterprises have invested in identity, privacy, and threat protection technologies to guard their information and communication from attack, theft, or compromise. The growth in SaaS and IaaS usage, however, introduces the need to secure information and communication that spans the enterprise and cloud. This webinar will look at approaches for extending existing enterprise security investments into the cloud without significant cost or complexity.

Layer 7 Technologies Joins the Cloud Security Alliance (CSA)

I’m pleased to announce that Layer 7 has joined the Cloud Security Alliance (CSA) as a full corporate member. For the past several years, the CSA has assumed the leadership role in defining the best practices to secure cloud applications, data, and infrastructure.

I believe that when you join a community organization, you are obliged to make a real contribution. Being a member means a lot more than just having your company logo on the sponsor list. I’ve been involved previously with the CSA, as a co-author of version 2 of its Security Guidance for Critical Areas of Focus in Cloud Computing, and as a co-author of the organization’s Top Threats to Cloud Computing document. Now that we are corporate members, Layer 7 will help to drive two important events within the CSA.

First, Layer 7 is a sponsor of the CSA Summit at this year’s RSA Conference in San Francisco, running Feb 14-18, 2011. I was a participant at the CSA Summit last year. This one-day event sold out instantly, and most attendees agree it was one of the highlights of the RSA Conference. If you are in San Francisco for the 2011 RSA show, you should try to get into Monday’s CSA event. The CSA has some very special guests lined up to speak, including Vivek Kundra, the US CIO, and I can assure you that once again the summit will be the talk of RSA.

I am also fortunate to be co-presenting a CSA-sponsored webinar about Managing API Security in SaaS and Cloud with Liam Lynch, eBay’s Head of Security. The rapidly growing API management space has a number of unique challenges with segmentation of roles, access to usage information, developer on-boarding, user management, and community building. Liam and I will talk about our own experiences in this space, and I will explore several case studies that illustrate each issue and its solution. I hope you can join us on Feb 23, 2011 for this talk.

Defeating the Facebook Hack

This week, Facebook fell victim to hackers who managed to deface Mark Zuckerberg’s page, no doubt earning the perpetrators tremendous props within their own social community. Facebook quickly closed the door on that particular exploit, but by then of course the Internets were abuzz and the damage was done. The company then followed up with some unrelated security distractions: HTTPS, good for countering Firesheep (I love that name); social authentication instead of CAPTCHAs (this is actually interesting and plays to their strengths); and an announcement that this Friday is “Data Privacy Day” (Ouch).

There aren’t many details available on the hack (the Guardian has a great investigation examining some of the clues that were left behind), but it appears that one particular API didn’t perform sufficient authorization on a POST. This is a common problem when the configuration of basic security functions like AAA is not highly visible, auditable, and tunable. Leave security to the developers, and rigor is often overlooked because of a coder’s naturally intense focus on functionality. Decouple security from the API, promoting security management to a first-class citizen administered by internal experts, and you have a much greater chance of avoiding embarrassing gaffes like this one. Consistency is the soul of good security, and the decoupling strategy makes consistency an achievable goal across all of an organization’s APIs. Specialize where necessary, but make everything highly visible to the experts so they can easily move from the big picture to the necessary minutiae. Make security configuration highly declarative, both to advance understanding and to facilitate rapid change in response to evolving threats. This is how API management must be in 2011.
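
As a toy sketch of that decoupling in Python, the WSGI wrapper below enforces authorization before any handler runs, so a forgetful coder cannot skip the check; the token comparison is a placeholder for whatever policy your security team administers centrally:

    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # Pure functionality: the handler never reasons about security.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"profile updated\n"]

    def is_authorized(environ):
        # Placeholder policy check, administered separately from the app.
        # A real deployment would validate a session or token and consult
        # a centrally managed, auditable rule set.
        return environ.get("HTTP_AUTHORIZATION") == "Bearer expected-token"

    def require_authz(app):
        # Every request, including POSTs, passes this gate before any
        # handler code executes.
        def gated(environ, start_response):
            if not is_authorized(environ):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"forbidden\n"]
            return app(environ, start_response)
        return gated

    if __name__ == "__main__":
        make_server("localhost", 8000, require_authz(application)).serve_forever()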

Facebook has its pick of the best and brightest these days, and they still face these problems. As other, less privileged organizations attempt to create APIs (very much the trend of the day), we can expect to see many more such attacks. Having the CEO’s page defaced is certainly embarrassing. But in the long run, it’s a lot less expensive than a real privacy breach, or cleaning up after massive, sustained fraud.

The time for do-it-yourself security and management of APIs has passed. It is time to deploy professional solutions that have been proven out by the military, the intelligence community, and health care: groups that have come to understand best practices around API security out of absolute necessity.

To learn more, take a look at Layer 7 Technologies solutions for securing and managing APIs.

Hacking the Cloud

I’m not sure who is more excited about the cloud these days: hackers or venture capitalists. But certainly both groups smell opportunity. An interesting article published by CNET a little while back nicely illustrates the growing interest the former have in cloud computing. Fortify Software sponsored a survey of 100 hackers at last month’s Defcon. They discovered that 96% of the respondents think that the cloud creates new opportunities for hacking, and 86% believe that “cloud vendors aren’t doing enough to address cyber-security issues.”

I don’t consider myself a hacker (except maybe in the classical sense of the word, which had less to do with cracking systems and more to do with solving difficult problems with code), but I would agree with this majority opinion. In my experience, although cloud providers are fairly proficient at securing their own basic infrastructure, they usually stop there. This causes a break in the security spectrum for applications residing in the cloud.

Continuity and consistency are important principles in security. In the cloud, continuity breaks down in the hand-off of control between the provider and their customers, and potential exploits often appear at this critical transition. Infrastructure-as-a-Service (IaaS) provides a sobering demonstration of this risk very early in the customer cycle. The pre-built OS images that most IaaS cloud providers offer are often unpatched and out-of-date. Don’t believe me? Prove it to yourself the next time you bring up an OS image in the cloud by running a security scan from a SaaS security evaluation service like CloudScan. You may find the results disturbing.
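
If you would rather check without a third-party service, even a crude probe makes the point. The Python sketch below, run from a second machine against a freshly launched image (the address is a placeholder), reports which well-known ports already answer to the open Internet; a real scan should of course go much deeper:

    import socket

    # Placeholder: the public address of your freshly launched image.
    TARGET = "203.0.113.10"
    COMMON_PORTS = {
        21: "ftp", 22: "ssh", 23: "telnet", 25: "smtp", 80: "http",
        110: "pop3", 143: "imap", 443: "https", 3306: "mysql", 3389: "rdp",
    }

    def probe(host, port, timeout=1.0):
        # A successful TCP connect means something is listening and
        # reachable before you have hardened anything.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for port, name in sorted(COMMON_PORTS.items()):
            if probe(TARGET, port):
                print(f"open: {port}/tcp ({name})")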

IaaS customers are faced with a dilemma. Ideally, a fresh but potentially vulnerable OS should first be brought up in a safe and isolated environment. But to effectively administer the image and load patch kits, Internet accessibility may be necessary. Too often, the solution is a race against the bad guys to secure the image before it can be compromised. To be fair, OS installations now come up in a much more resilient state than in the days of Windows XP prior to SP2 (when the OS came up without a firewall enabled, leaving vulnerable system services unprotected). However, it should surprise few people that exploits have evolved in lockstep, and these can find and leverage weaknesses astonishingly fast.

The world is full of ex-system administrators who honestly believed that simply having a patched, up-to-date system was an adequate security model. Hardening servers to be resilient when exposed to the open Internet is a discipline that is time-consuming and complex. We create DMZs at our security perimeter precisely so we can concentrate our time and resources on making sure our front-line systems are able to withstand continuous and evolving attacks. Maintaining a low risk profile for these machines demands significant concentrated effort and continual ongoing monitoring.

The point so many customers miss is that cloud is the new DMZ. Every publicly accessible server must address security with the same rigor and diligence of a DMZ-based system. But ironically, the basic allure of the cloud—that it removes barriers to deployment and scales rapidly on demand—actually conspires to work against the detail-oriented process that good security demands. It is this dichotomy that is the opportunity for system crackers. Uneven security is the irresistible low-hanging fruit for the cloud hacker.

CloudProtect is a new product from Layer 7 Technologies that helps reconcile the conflicting demands of openness and security in the cloud. CloudProtect is a secure, cloud-based virtual appliance based on Red Hat Enterprise Linux (RHEL). Customers use this image as a secure baseline on which to deploy their own applications. CloudProtect features the hardened OS image that Layer 7 uses in its appliances, and it boots in a safe and resilient mode from first use. The distribution includes a fully functioning SecureSpan Gateway that governs all calls to an application’s APIs hosted on the secured OS. CloudProtect offers a secure console for visual policy authoring and management, allowing application developers, security administrators, and operators to completely customize the API security model to their requirements. For example, need to add certificate-based authentication to your APIs? Simply drag and drop a single assertion into the policy and you are done. CloudProtect also offers the rich auditing features of the SecureSpan engine, which can feed a billing process or be leveraged in a forensic investigation.

More information about the full range of Layer 7 cloud solutions, including Single Sign-On (SSO) using SAML for SaaS applications such as Salesforce.com and Google Apps, can be found here on the Layer 7 cloud solutions page.