Upcoming Webinar: Security in the Cloud vs Security for the Cloud

I was speaking recently to Steve Coplan, Senior Analyst, Enterprise Security Practice at the 451 Group. I always enjoy talking to Steve. He has a deep understanding of technology and our business, but it’s his training as a journalist that I think sets him apart from the other analysts. His work comes through as erudite but accessible, and it is always very well written.

In our discussion, Steve was careful to make a clear distinction between security in the cloud and security for the cloud. This intrigued me, because I think the differences are too often lost when people talk about cloud in the abstract. Steve’s point became the topic of a webinar that he and I will deliver together this Thursday, March 25, 2010 at 12:00pm EDT/9:00am PDT/4:00pm GMT.

I hope you can join us to learn why this distinction is so important. You can sign up for this webinar at the Layer 7 Technologies web site.

Why Intermediaries Matter in SOA

Last week, Joe McKendrick from ZDNet asked the question: are SOA anti-principles more important than success principles? The idea of anti-principles came from Steve Jones, who a few years back did some nice work documenting SOA anti-patterns. In a post published last fall, Steve builds on his ideas, observing:

The problem is that there is another concept that is rarely listed, what are your anti-principles?

which is one of those good questions that should give you pause.

Steve continues:

In the same way as Anti-Patterns give you pointers when its all gone wrong then Anti-Principles are the things that you will actively aim to avoid during the programme.

I found this interesting because one of the anti-principles the post lists is direct calling. Steve describes this bad practice as follows:

This anti-principle is all about where people just get a WSDL and consume it directly without any proxy or intermediary. It’s programme suicide and it shouldn’t be done.

Naturally, because I’m in the business of building intermediaries, this seemed perfectly reasonable to me. But on reflection, I think that the argument as to why direct calling is an anti-principle needs further explanation.

Indirection is one of the great principles of computer science. Indirection lets us decouple layers, allowing them to change independently as long as they honour the interface contract. Intermediary layers in SOA, a good example being a proxy like Layer 7’s SecureSpan Gateway, build on this concept, allowing architects to decouple service providers from consumers—much as Steve advocates in his post. This layer of indirection means that we can tease out certain highly parameterizable aspects of communication—aspects such as security, monitoring, protocol adaptation, routing, etc.—into a separate policy layer that promotes consistency, affords the opportunity for reuse, and insulates clients (and servers) from change.

This is best illustrated by example. Suppose I have two services: foo and bar. Both services have a number of clients that access them. To explore the issues with direct connection, let’s consider the scenario where all of these clients establish direct connections with my services:

The first thing you should notice is that the firewall is open to allow the external clients to make their direct connections with the service hosts. In other words, these hosts are, for all intents and purposes, deployed in the DMZ and must be hardened under that assumption. For many applications, this is a non-trivial exercise. Hopefully, your internal alarm bells are going off already.

Few applications remain completely static over their lifetime. Patches become necessary; hardware fails and must be replaced—all of this is part of the natural life cycle of software. My services foo and bar are no exception. One day, the server hosting foo starts to fail, and I find myself in the position that I need to quickly move the foo service onto a new host. Suddenly, all of my clients are broken:

Now I have a very significant problem. I need to update the URLs on every client I serve, and to do it quickly. Every minute I’m down, I’m losing business. Welcome to the pressure cooker.

This potential problem would be easy to manage if I had an intermediary, operating as a policy-driven proxy that is placed in the DMZ between my clients and my services:

This proxy now handles URL-based routing on the fly. If foo moves, it’s a simple matter of modifying the internal routing on the intermediary and voila: no client ever has a problem. My client base is completely insulated from a major structural change to my back end service hosts.
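
To make this concrete, here is a minimal sketch, in Python, of the kind of routing table an intermediary maintains. The host names are hypothetical; the point is that when foo moves, exactly one entry changes and no client configuration is touched:

    # Minimal sketch of intermediary URL routing. Host names are hypothetical.
    # Clients always call the proxy; only this table changes when a service moves.
    ROUTES = {
        "/foo": "https://host-a.internal.example.com:8443/foo",
        "/bar": "https://host-b.internal.example.com:8443/bar",
    }

    def resolve(request_path: str) -> str:
        """Map an externally visible path to the current internal endpoint."""
        for prefix, backend in ROUTES.items():
            if request_path.startswith(prefix):
                return backend + request_path[len(prefix):]
        raise LookupError("no route for " + request_path)

    # foo's host fails? Repoint one entry and every client keeps working:
    ROUTES["/foo"] = "https://host-c.internal.example.com:8443/foo"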

Of course there are tricks we could use employing HTTP redirects, common NAT, or more dynamic bindings to URLs to avoid such a contrived problem in the first place. But what if the change was to something less configurable using conventional means, such as the basic security model for communication? Suppose that, as a corporation, we decide to mandate that all clients must now authenticate using client-side certificates under SSL. Foo is running on a Java application server, bar is on .NET; both are capable of accommodating this new model, but their administration is radically different. And to make matters worse, I have a dozen or so additional apps implemented in everything from Ruby on Rails to PHP that I also need to change. That’s a lot of work.

An intermediary would make this task trivial by insulating services from this change to policy. The strategy here is to terminate the SSL connection and authenticate the client on the intermediary instead of on the service hosts. A few clicks of a mouse later, and my job is complete for every service.
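
To give a feel for what this looks like in practice, here is a minimal sketch of SSL termination with mandatory client certificates at a generic intermediary, using Python’s standard ssl module. The certificate file names are placeholders, and a real gateway does a great deal more:

    import socket
    import ssl

    # Minimal sketch: terminate SSL and demand client certificates at the
    # intermediary, so no back-end service has to implement this itself.
    # The certificate and key file names are placeholders.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="proxy.crt", keyfile="proxy.key")
    ctx.load_verify_locations(cafile="trusted-client-ca.crt")
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid certificate

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()   # TLS handshake happens here
            client_cert = conn.getpeercert()     # the authenticated client identity
            # ...route the decrypted request to foo or bar on the internal network...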

This certainly saves time and adds consistency, but the real value is in the change of responsibility. The task of implementing this security model now falls under the jurisdiction of a professional security administrator, not the developers of each separate application. In fact, no code or configuration needs to change on foo, bar, or any of my services. The security model is decoupled from the application, taken out of the hands of each developer and centralized. This is the basic value proposition of intermediaries in SOA, and this value is never realized effectively if you allow direct connections between clients and servers. This is why architectural patterns are sometimes necessary to allow us to be consistent with our principles—or our anti-principles, as the case may be.

Interested in trying an intermediary? You can get a Layer 7 SecureSpan virtual appliance to try out at http://www.layer7tech.com. Alternatively, do your evaluation completely in the cloud: check out the SecureSpan virtual appliance gateway on the Amazon marketplace. This virtual appliance AMI runs in the EC2 cloud on Amazon Web Services. It is the first and only SOA gateway to run in the cloud.

REST Security Does Exist—You Just Need To Apply It

On the eve of the RSA conference this year, Chris Comerford and Pete Soderling published a provocative article in Computerworld titled Why REST security doesn’t exist. It’s a prelude to a talk the authors are delivering at the conference. Their premise is that while good REST security best practices do indeed exist, developers just don’t seem to follow them.

Comerford and Soderling attribute this state of affairs to a combination of two things. First, REST lacks a well-articulated security model. Few would argue with this—REST, by virtue of its grassroots origins, suffers from a security just-do-it-like-the-web nonchalance that’s certainly done it no favors.

The second issue concerns developers, who tend to rush implementation without giving due consideration to security. Truthfully, this is the story of security across all of IT, but I might suggest that with REST, the problem is especially acute. The REST style owes much of its popularity to being simple and fast to implement, particularly when faced with the interest-crushing complexity and tooling demands of the WS-* stack. It’s reasonable to think that in the enthusiastic dash to cross the working application finish line, security is conveniently de-emphasized or forgotten altogether.

REST, of course, can be secured, and the authors offer sound advice to accomplish this deceptively simple task. They recommend that API developers:

  • “Do employ the same security mechanisms for your APIs as any web application your organization deploys. For example, if you are filtering for XSS on the web front-end, you must do it for your APIs, preferably with the same tools.
  • Don’t roll your own security. Use a framework or existing library that has been peer-reviewed and tested. Developers not familiar with designing secure systems often produce flawed security implementations when they try to do it themselves, and they leave their APIs vulnerable to attack.
  • Unless your API is a free, read-only public API, don’t use single key-based authentication. It’s not enough. Add a password requirement.
  • Don’t pass unencrypted static keys. If you’re using HTTP Basic and sending it across the wire, encrypt it.
  • Ideally, use hash-based message authentication code (HMAC) because it’s the most secure. (Use SHA-2 and up, avoid SHA & MD5 because of vulnerabilities.)”
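
To make the HMAC recommendation concrete, here is a minimal sketch of signing and verifying a REST request in Python. The header names, canonical string, and key handling are illustrative assumptions, not a prescribed scheme; production schemes (the AWS request signature, for example) pin down these details precisely:

    import hashlib
    import hmac
    import time

    # Minimal sketch of HMAC request signing with SHA-256, per the advice above.
    # The shared secret, headers, and canonical string are illustrative only.
    SECRET_KEY = b"shared-secret-provisioned-out-of-band"

    def sign_request(method: str, path: str, body: bytes) -> dict:
        timestamp = str(int(time.time()))  # limits the replay window
        canonical = "\n".join([method, path, timestamp]).encode() + b"\n" + body
        signature = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
        return {"X-Timestamp": timestamp, "X-Signature": signature}

    def verify_request(method: str, path: str, body: bytes, headers: dict) -> bool:
        canonical = "\n".join([method, path, headers["X-Timestamp"]]).encode() + b"\n" + body
        expected = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(expected, headers["X-Signature"])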

I agree with this advice. And just to demonstrate how easy it is to implement, I’ve constructed a simple policy for the Layer 7 Technologies SecureSpan Gateway that puts their directives into practice:

In this policy, I’m ensuring that the REST client is using SSL for three things: confidentiality, integrity, and server authentication. I could require client-side certificate authentication here, but instead I’m using HTTP digest to emphasize the requirement to avoid plain text HTTP basic or simple user keys. I’m authorizing access based on group membership, restricting it to members of the sales group.

Finally, I’ve added a scan for cross site scripting attacks.

In the interest of deeper vigilance, I’m also searching for PHP and shell injection signatures. This is admittedly broad, but it covers me in case the developer of the service changes implementation without warning.
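
In spirit, such a scan amounts to matching incoming content against known attack signatures. Here is a minimal, purely illustrative Python sketch; the patterns are toy examples, and a production scanner relies on far larger, regularly updated signature sets:

    import re

    # Minimal sketch of signature-based scanning for XSS and code injection.
    # These patterns are toy examples, nowhere near exhaustive.
    SIGNATURES = [
        re.compile(r"<\s*script", re.IGNORECASE),   # cross-site scripting
        re.compile(r"<\?php", re.IGNORECASE),       # PHP injection
        re.compile(r";\s*(rm|cat|wget|curl)\b"),    # shell command injection
    ]

    def scan(payload: str) -> bool:
        """Return True if the payload matches any known attack signature."""
        return any(sig.search(payload) for sig in SIGNATURES)

    assert scan("name=<script>alert(1)</script>")
    assert not scan("name=alice&order=42")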

This last point—that there is an explicit separation made between developers and the security administrators writing and enforcing policy—is an important one. Developers will be developers: some will be rigorous about implementing security best practices; others won’t be. The only way to manage this is to assume a defensive posture in service policy, not only from the perspective of incoming transactions, but also around the services themselves. The best practice here is to externalize policy enforcement and assign dedicated security professionals to administer policy.

This defensive approach to securing REST services fits well with the spirit of Comerford and Soderling’s directives. It addresses, in particular, their point about leveraging peer-reviewed frameworks. This is precisely what Layer 7’s SecureSpan Gateway is—a peer-reviewed security framework offering great depth of functionality. SecureSpan is undergoing Common Criteria review of its implementation, as well as of the entire development process for the product. We’re certifying to EAL4+, which is particularly rigorous. This provides assurance that the technology is sufficiently robust for deployment at the highest levels of the military and government. Common Criteria is an arduous process, and going through it demonstrates Layer 7’s deep commitment to security. You should never consider a security gateway—for REST, or for XML messaging—that isn’t undergoing Common Criteria evaluation. Remember, Common Criteria is a necessary stamp of approval for governments around the world; it should also be a basic requirement for you.

Try SecureSpan yourself, and see how you can implement robust application security and monitoring without changing code. Download an evaluation of the SecureSpan virtual appliance right here.

The Seven Deadly Sins: The Cloud Security Alliance Identifies Top Cloud Security Threats

Today marks the beginning of the RSA conference in San Francisco, and the Cloud Security Alliance (CSA) has been quick out of the gate with the release of its Top Threats to Cloud Computing Report. This peer-reviewed paper characterizes the top seven threats to cloud computing, offering examples and remediation steps.

The seven threats identified by the CSA are:

  1. Abuse and Nefarious Use of Cloud Computing
  2. Insecure Application Programming Interfaces
  3. Malicious Insiders
  4. Shared Technology Vulnerabilities
  5. Data Loss/Leakage
  6. Account, Service, and Traffic Hijacking
  7. Unknown Risk Profile

Some of these will certainly sound familiar, but the point is to highlight threats that may be amplified in the cloud, as well as those that are unique to the cloud environment.

This CSA threats report is a true community effort. The working group had representatives from a broad range of cloud providers, infrastructure vendors, and cloud customers, including:

  • HP
  • Oracle
  • Bank of America
  • Microsoft
  • Rackspace
  • Verizon
  • Cigital
  • Qualys
  • Trend Micro
  • Websense
  • Zscaler
  • CloudSecurity.org
  • Cloud Security Alliance
  • Layer 7 Technologies

I represented Layer 7. I tackled Data Loss/Leakage, and performed some editorial work on the paper as a whole. As working groups go, I can tell you that this one simply worked well. I’ve been involved with a number of standards groups in the past, but this time we seemed to have all of the right people involved. The group converged on the key issues quickly and decisively. It was a good process, and I’m happy with the results.

One thing we did debate was how best to rate each threat. We finally agreed that the best approach was to let the community decide. You may recall that last week I wrote a blog entry soliciting your input to help classify threat severity. Well, the results are in, and they are certainly interesting. Perhaps not surprisingly, the threat of Data Loss/Leakage leads the community’s list of concerns, at around 28%. But what is more intriguing is that there really isn’t much of a difference between the perceived impact of any threat on the list (all fall between roughly 8% and 28%). This is encouraging, as it suggests that we nailed the current zeitgeist in our list. It is just a little disconcerting that there remain seven significant threats to consider.

The latest survey results, and the threats paper itself, are available from the CSA web site. Bear in mind that this is evolving work. The working group intends to update the list regularly, so if you would like to make a contribution to the cloud community, please do get involved. And remember: CSA membership is free to individuals; all you need to give us is your time and expertise.

You Can Help the Cloud Security Alliance Classify the Top Threats in the Cloud

The Cloud Security Alliance (CSA) needs your help to better understand the risk associated with cloud threats. Earlier this year, the CSA convened a working group with the mandate to identify the top threats in the cloud. This group brought together a diverse set of security and cloud experts, including myself representing Layer 7. Our group identified seven major threats that exist in the cloud, but now we would like to gauge how the community as a whole perceives the risk these threats pose.

I would like to invite you to participate in a short survey so we can get your input. This should only take you about 5 minutes to complete. We intend to work the results of this survey into the CSA Top Threats to Cloud Computing document. This will be formally unveiled at the Cloud Security Alliance Summit, which is part of next week’s RSA conference in San Francisco.

Help us to make the cloud a safer place by identifying and characterizing its greatest threats. Share this survey link with your colleagues. The more participation we can get, the better our results will be, and the stronger the work will become.

You will find our survey here.

The Revolution Will Not Be Televised

Technology loves a good fad. Agile development, Web 2.0, patterns, Web services, XML, SOA, and now the cloud—I’ve lived through so many of these I’m beginning to lose track. And truth be told, I’ve jumped on my fair share of bandwagons. But one thing I have learned is that the successful technologies move at their own incremental pace, independent of the hype cycle around them. Two well-known commentators, Eric Knorr from InfoWorld and David Linthicum from Blue Mountain Labs, both published posts this week suggesting that this may be the case for cloud computing.

Eric Knorr, in his piece Cloud computing gets a (little) more real, writes:

The business driver for the private cloud is clear: Management wants to press a button and get what it needs, so that IT becomes a kind of service vend-o-matic. The transformation required to deliver on that promise seems absolutely immense to me. While commercial cloud service providers have the luxury of a single service focus, a full private cloud has an entire catalogue to account for — with all the collaboration and governance issues that stopped SOA (service-oriented architecture) in its tracks.

I agree with Eric’s comment about SOA, as long as you interpret this as “big SOA”. The big bang, starting-Monday-everything-is-SOA approach certainly did fail—and in hindsight, this shouldn’t be surprising. SOA, like cloud computing, cuts hard across fiefdoms and challenges existing order. If you move too fast, if your approach is too draconian, of course you will fail. In contrast, if you manage SOA incrementally, continuously building trust and earning mindshare, then SOA will indeed work.

Successful cloud computing will follow the incremental pattern. It just isn’t reasonable to believe that if you build a cloud, they will come—and all at once, as Eric contends. We have not designed our mission critical applications for cloud deployment. Moreover, our people and our processes may not be ready for cloud deployment. Like the applications, these too can change; but this is a journey, not a destination.

Private clouds represent an opportunity for orderly transition. Some would argue that private clouds are not really clouds at all, but I think this overstates public accessibility at the expense of the technical and operational innovations that better characterize the cloud. Private clouds are important and necessary because they offer an immediate solution to basic governance concerns and offer a trustworthy transition environment for people, process and applications.

David Linthicum seems to agree. In his posting What’s the Deal With Private Clouds? Dave writes:

In many instances, organizations leverage private clouds because the CIO wants the architectural benefits of public cloud computing, such as cost efficiencies through virtualization, but is not ready to give up control of data and processes just yet.

Dave sees private clouds as a logical transition step, one that supports an incremental approach to cloud computing. It’s not as radical as jumping right into the public cloud, but for that reason it’s a much easier sell to the business. It pulls staff in, rather than driving them out, and in the modern enterprise this is a much better recipe for success. He continues:

I think that many enterprises will stand up private clouds today, and then at some point learn to leverage public clouds, likely through dynamic use of public cloud resources to support bursts in processing on the private cloud. Many are calling this “cloud bursting,” but it’s a great way to leverage the elastic nature of public cloud computing without giving up complete control.

Dave’s hypothesis struck a chord with me. Only last week I had a discussion with a group of architects from a large investment bank, and this describes their strategy precisely. The bank has an internal, private cloud today; but they anticipate moving select applications into public clouds, leveraging the knowledge and experience they gained from their private cloud. These architects recognize that cloud isn’t just about the technology or a change in data center economics, but represents a fundamental shift in how IT is delivered that must be managed very carefully.

This revolution just doesn’t make good TV. The hype will certainly be there, but the reality will be a slow, measured, but nonetheless inevitable transition.

PS: The title, of course, is from the great Gil Scott-Heron.

My Thoughts on Cloud Security in SearchCloudComputing.com

I had a good talk the other day with Carl Brooks, the technology writer for SearchCloudComputing.com. We spoke about why security is different in the cloud, and what you can learn from approaches like SOA about how to secure cloud-based apps. The full interview is the lead story today on SearchCloudComputing.com.

How to Safely Publish Internal Services to the Outside World

So you’ve bought into the idea of service-orientation. Congratulations. You’ve begun to create services throughout your internal corporate network. Some of these run on .NET servers; others are Java services; still others are Ruby-on-Rails—in fact, one day you woke up and discovered you even have a mainframe service to manage. But the question you face now is this: how can all of these services be made available to consumers on the Internet? And more important, how can you do it securely?

Most organizations buffer their contact with the outside world using a DMZ. Externally facing systems, such as web servers, live in the DMZ. They mediate access to internal resources, implementing—well, hopefully implementing—a restrictive security model. The DMZ exists to create a security air gap between protocols. The idea is that any system deployed into the DMZ is hardened, resilient, and publishes a highly constrained API (in most cases, a web form). To access internal resources, you have to go through this DMZ-based system, and this system provides a restricted view of the back-end applications and data that it fronts.

The DMZ represents a challenge for publishing services. If services reside on internal systems, how can external clients get through the DMZ and access the service?

Clearly, you can’t simply start poking holes in firewall #2 to allow external systems to access your internal providers directly; this would defeat the entire purpose of the DMZ security model. But this is exactly what some vendors advocate. They propose that you implement local security agents that integrate into the container of the internal service provider. These agents implement policy-based security—essentially taking on the processing burden of authentication, authorization, audit, confidentiality, integrity and key management. While this may seem attractive, as it does decouple security into a purpose-built policy layer, it has some very significant drawbacks. The agent model essentially argues that once the internal policy layer is in place, the internal service provider is ready for external publication. But this implies poking holes in the DMZ, which is a bad security practice. We have firewalls precisely because we don’t want to harden every internal system to DMZ-class resiliency. An application-layer policy agent does nothing to defeat OS-targeted attacks, which means every service provider would need to be sufficiently locked down and maintained. This becomes unmanageable as the server volume grows, and completely erodes the integrity of firewall #2.

Furthermore, in practice, agents just don’t scale well. Distribution of policy among a large number of distributed agents is a difficult problem to solve. Policies rapidly become unsynchronized, and internal security practices are often compromised just to get this ponderous and dependent system to work.

At Layer 7 we advocate a different approach to publishing services that is both scalable and secure. Our flagship product, the SecureSpan Gateway, is a security proxy for Web services, REST, and arbitrary XML and binary transactions. It is a hardened hardware or virtual appliance that can be safely deployed in the DMZ to govern all access to internal services. It acts as the border guard, ensuring that each transaction going in or out of the internal network conforms to corporate policy.

SecureSpan Gateways act as a policy air-gap that constrains access to back end services through a rich policy-based security model. This integrates consistently with the design philosophy of the DMZ. Appliances are hardened so they can withstand Internet-launched attacks, and optimized so they can scale to enormous traffic loads. We built full clustering into SecureSpan in the first version we released, close to eight years ago. This ensures that there is no single point of failure, and that systems can be added to accommodate increasing loads.

The separate policy layer—and the policy language that defines this—is the key to the security model and is best illustrated using a real example. Suppose I have a warehouse service in my internal network that I would like to make available to my distributors. The warehouse service has a number of simple operations, such as inventory queries and the ability to place an order. I’ll publish this to the outside world through a SecureSpan Gateway residing in the DMZ, exactly as shown in the diagram above.

SecureSpan provides a management console used to build the policies that govern access to each service. Construction of the initial policy is made simple using a wizard that bootstraps the process using the WSDL, which is a formal service description for my warehouse service. The wizard allows me to create a basic policy in three simple steps. First, I load the WSDL:

Next, I declare a basic security model. I’ll keep this simple, and just use SSL for confidentiality, integrity, and server authentication. HTTP basic authentication will carry the credentials, and I’ll only authorize access to myself:

If this policy sounds familiar, it’s because it’s the security model for most web sites. It turns out that this is a reasonable model for many XML-based Web services as well.

Finally, I’ll define a proxy routing to get to my internal service, and an access control model once there. In this example, I will just use a general account. Under this model, the service trusts the SecureSpan Gateway to authenticate and authorize users on its behalf:
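
This arrangement is sometimes called the trusted-subsystem pattern: the gateway authenticates and authorizes the caller, then connects to the back end under a single service account. As a rough illustration, here is a Python sketch of the forwarding step, with placeholder credentials, URLs, and header names:

    import base64
    import urllib.request

    # Rough sketch of the trusted-subsystem pattern. The gateway has already
    # authenticated the caller; it forwards the request to the internal service
    # under one general service account. All names here are placeholders.
    GATEWAY_ACCOUNT = ("gateway-svc", "service-account-password")
    WAREHOUSE_URL = "http://warehouse.internal.example.com:8080/warehouse"

    def forward(body: bytes, authenticated_user: str) -> bytes:
        token = base64.b64encode(":".join(GATEWAY_ACCOUNT).encode()).decode()
        req = urllib.request.Request(WAREHOUSE_URL, data=body, method="POST")
        req.add_header("Authorization", "Basic " + token)  # the gateway's own credentials
        req.add_header("X-Authenticated-User", authenticated_user)  # optional, for audit
        with urllib.request.urlopen(req) as resp:
            return resp.read()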

You may have noticed that this assumes that the warehouse service doesn’t need to know the identity of the original requester—that is, Scott. If the service did need this, there are a number of ways to communicate my identity claim downstream to the service, using techniques like SAML, IBM’s Trust Association Interceptor (TAI), proxied credentials, or various other tricks that I won’t cover here.

The wizard generates a simple policy for me that articulates my simple, web-oriented security model. Here’s what this policy looks like in the SecureSpan management console:

Policy is made up of individual assertions. These encapsulate all of the parameters that make up that operation. When a message for the warehouse service is identified, SecureSpan loads and executes the assertions in this policy, from top to bottom. Essentially, policy is an algorithm, with all of the classic elements of flow control. SecureSpan represents this graphically to make the policy simple to compose and understand. However, policy can also be rendered as an XML-based WS-Policy document. In fact, if you copy a block of graphical assertions into a text editor, they resolve as XML. Similarly, you can paste XML snippets into the policy composer and they appear as graphical assertion elements.
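
Stripped to its essentials, this execution model is easy to picture in code. The following Python sketch is purely illustrative (the real policy language is far richer, with branching, composite assertions, and WS-Policy serialization), but it captures the idea of a policy as an ordered list of assertions evaluated top to bottom:

    # Purely illustrative sketch of the assertion execution model: a policy
    # is an ordered list of checks, run top to bottom against each message.
    def require_ssl(msg):      return msg.get("scheme") == "https"
    def authenticate(msg):     return msg.get("user") is not None
    def authorize_sales(msg):  return "sales" in msg.get("groups", ())

    POLICY = [require_ssl, authenticate, authorize_sales]

    def evaluate(policy, message) -> bool:
        """The message passes only if every assertion succeeds, in order."""
        return all(assertion(message) for assertion in policy)

    msg = {"scheme": "https", "user": "alice", "groups": ("sales",)}
    print(evaluate(POLICY, msg))  # True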

This policy is pretty simplistic, but it’s a good foundation to build on. I’ll add some elements that further restrict transactions and thus constrain access to the back end system the SecureSpan Gateway is protecting.

The rate limit assertion allows me to cap the number of transactions getting through to the back end. I can put an absolute quota on the throughput: say, 30,000 transactions/sec, because I know that the warehouse service begins to fail once traffic exceeds this volume. But suppose I was having a problem with individual suppliers overusing particular services. I could limit use by an individual identity (as defined by an authenticated user or originating IP address) to 5,000 transactions/sec—still a lot, but leaving headroom for other trading partners. The rate limit assertion gives me this flexibility. Here is its detailed view:

Note that if I get 5,001 transactions from a user in one second, I will buffer the last transaction until the rate drops in a subsequent time window (subject, of course, to resource availability on the gateway). This provides me with application-layer traffic shaping that is essential in industries like telecommunications, which use this assertion extensively.
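
A sliding-window counter per identity captures the essence of what this assertion does. Here is a minimal Python sketch using the limit from the example; unlike the real assertion, this sketch simply rejects rather than buffers the excess transaction:

    import time
    from collections import defaultdict, deque

    # Minimal sketch of per-identity rate limiting over a one-second sliding
    # window: 5,000 transactions/sec per identity, as in the example above.
    LIMIT_PER_IDENTITY = 5000
    WINDOW_SECONDS = 1.0

    _history = defaultdict(deque)  # identity -> timestamps of recent requests

    def admit(identity: str) -> bool:
        """Admit the request if this identity is under its per-second limit.
        A real gateway would buffer, not reject, the excess transaction."""
        now = time.monotonic()
        window = _history[identity]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()                 # discard requests outside the window
        if len(window) >= LIMIT_PER_IDENTITY:
            return False                     # over the limit for this window
        window.append(now)
        return True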

I would also like to evaluate each new transaction for threats. SecureSpan has assertions that cover a range of familiar threats, such as SQL injection (which has been around for a long time, but has become newly relevant in the SOA world), as well as a long list of new XML attacks that attempt to exploit parser infrastructure and autogenerated code. For the warehouse service, I’m concerned about code-injection attacks. Fortunately, there’s an assertion for that:

Here’s what these two assertions look like dropped into the policy:

This policy was simple to compose (especially since we had the wizard to help us). But it is also very effective. It’s visible and understandable, which is an important and often overlooked aspect of security tooling. SOA security suffers from an almost byzantine complexity. It is much too easy to build a security model that obscures weakness behind its detail. One of the design goals we had at Layer 7 for SecureSpan was to make it easy to do the simple things that challenge us 80% of the time. However, we also wanted to provide the richness to solve the difficult problems that make up the other 20%. These are problems such as adaptation: the obscure impedance mismatches between client and server security models, or fast run-time adaptation of message content to accommodate version mismatches.

In this example, it took only seven simple assertions to build a basic security policy for publishing services to the outside world. Fortunately, there are over 100 other assertions—covering everything from message-based security to transports like FTP to orchestration—that are there when you need to solve the tougher problems.

The Dust of Haiti

Yesterday I was asking Jim Brasset, our IT Manager, a few questions about monitors when I noticed the laptop open on his desk.

“What did you do to that laptop? It’s a mess.”

“It was in Haiti. I’m doing data recovery on it,” he replied.

The laptop belonged to his cousin, a nurse working in Port-au-Prince. She was evacuated on a C-130 to Montreal, and the laptop came with her as she found her way back to Vancouver. Now, only a week after the earthquake, it connects Jim to a terrible human drama.

I ran my finger across the keyboard, picking up the powdery residue of concrete smashed to ruin.

“The dust of Haiti.”

Despite all of the advances we have made in communication, there is a poignancy to physical objects that technology will never capture. Find a charity you respect, and give to help the people of Haiti.

Up is the New Up

As some of you may have already heard, 2009 was a spectacular year for Layer 7. Despite the economic downturn—leading to the “flat is the new up” quip around the Valley—we actually grew more than in any year previous. In fact, we’ve grown steadily year-over-year for the last four years. 2009, however, turned out to be special for a number of reasons.

First, Gartner placed us in the Leaders quadrant in the MQ for Integrated SOA Governance. This is important because it recognizes that SecureSpan Gateways offer a comprehensive solution for SOA governance in-a-box. Now, you will never catch me equating governance with technology; but technology is a critical component of governance, and if the technology is well-designed it will give you the tools you need to support your overall SOA governance initiative.

2009 also saw the formal launch of our cloud product line. With the Amazon Machine Image (AMI) of SecureSpan, we released the industry’s first—and only—SOA Governance Gateway for the cloud. We were active contributors to important efforts like the Cloud Security Alliance’s critical guidance for securing the cloud. And we successfully promoted the mantra that “in the cloud, you must protect services, not networks” at watershed industry events like GigaOm Structure and through ongoing presentations with leading thinkers like David Linthicum and Anne Thomas Manes. These days, everybody seems to be jumping onto the cloud bandwagon; what is much more rare is a company offering something besides talk and vapor. Well, talk is easy. I’m very happy that we made real accomplishments in the cloud space in 2009.

But it’s not all about the cloud. With the Oracle OSB Appliance, which came out last fall, we’re showing the continued value of high-performance, secure appliances in the enterprise. Layer 7’s OSB Appliance gives you a real solution for putting important infrastructure like OSB into the DMZ. The DMZ is in our DNA, and you should expect further innovation in this area in the near future.

In the end, the measure of all of this is sales success, and we’re very happy with our results. Layer 7 recorded a 35% growth in customers over 2008 and a blow-out year for revenue. This success energizes the team and fosters a great determination to continue to win—it is tangible as you walk the floor. I couldn’t ask for anything better going into 2010, because the products and innovation that are underway will far exceed what came out in ’09. Watch this space.