Tag Archives: IaaS

Amazon’s Mensis Horribilis

Hot on the heels of Amazon Web Services’ prolonged outage late last month, Bloomberg has revealed that hackers used AWS as a launch pad for their high-profile attack against Sony. In a thousand blogs and a million tweets, the Internets have been set abuzz with attention-seeking speculation about reliability and trust in the cloud. It’s a shame, because while these events are noteworthy, in the greater scheme of things they don’t mean much.

Few technologies are spared a difficult birth. But over time, with continuous refinement, they can become tremendously safe and reliable, something I’m reminded of every time I step on an airplane. It never ceases to amaze me how well the global aviation system operates. Yes, it has its failures—and these can be devastating—but overall the system works and we can place our trust in it. This is governance and management and engineering working at the highest levels.

Amazon has been remarkably candid about what happened during their service disruption, and it’s clear they have learned much from the incident. They are changing process, refining technology, and being uncharacteristically transparent about the event. This is the right thing to do, and it should actually give us confidence. The Amazon disruption won’t be the last service failure in the cloud, and I still believe that any enterprise with reliability concerns should deploy Cloud Service Broker (CSB) technologies. But the cloud needs failure to get better—and it is getting better.

In a similar vein, to overreact to the Sony incident is to miss what actually took place. The only cloud attribute the hackers leveraged on Amazon was convenience. This attack could have been launched from anywhere; Amazon simply provided barrier-free access to a compute platform, which is the point of cloud computing. It would be unfortunate if organizations began to blacklist general connections originating from the Amazon AWS IP range, as they already do for email originating from that range because of a historical association with spam. In truth, this is another example of refinement by cloud providers, as effective policy control in Amazon’s data centers has now largely brought spam under control.

Negative impressions come easily in technology, and they are hard to reverse. Let’s hope that these incidents are recognized for what they are, rather than as indicators of a fundamental flaw in cloud computing.


NIST Seeks Public Input On New Cloud Computing Guide

What is the cloud, really? Never before have we had a technology that suffers so greatly from such a completely ambiguous name. Gartner Research VP Paolo Malinverno has observed that most organizations define cloud as any application operating outside their own data centre. This is probably as lucid a definition as any I’ve heard.

More formalized attempts to describe cloud rapidly turn into essays that try to bridge the abstract with the very specific, and in doing so seem to miss the cloud for the clouds. Certainly the most effective comprehensive definition has come from the National Institute of Standards and Technology (NIST), and most of us in the cloud community have fallen back on this most authoritative reference when clarity is important.

Now is our chance to give back to NIST. To define cloud is to accept a task that will likely never end, and the standards boffins have been working hard to continually refine their work. They’ve asked for public comment, and I would encourage everyone to review their latest draft of the Cloud Computing Synopsis and Recommendations. This new publication builds on the basic definitions offered by NIST in the past, and at around 84 pages, it dives deep into the opportunities and issues surrounding SaaS, IaaS, and PaaS. There is good material here, and with community input it can become even better.

You have until June 13, 2011 to respond.

Layer 7 to Demonstrate Cloud Network Elasticity at TMForum Management World in Dublin

I’ll be at the TMForum Management World show this May 23-26, 2011 in Dublin, Ireland to participate in the catalyst demonstrating cloud network elasticity, which is sponsored by Deutsche Telekom and the Commonwealth Bank of Australia. For those of you not yet familiar with TMForum, it is (from their web site) “the world’s leading industry association focused on enabling best-in-class IT for service providers in the communications, media, defense and cloud service markets.” We’ve been involved with the TMForum for a couple of years, and this show in Dublin is going to showcase some major breakthroughs in practical cloud computing.

TMForum offers catalysts as solution proofs-of-concept. A catalyst brings together a number of vendors to demonstrate an end-to-end solution to a real problem faced by telco providers or the defense industry. This year, we’re working closely with Infonova, Zimory, and Ciena to showcase a cloud-in-a-box environment that features elastic scaling of compute resources and network bandwidth on demand, all fully integrated with an automated billing system. We think this solution will be a significant game-changer in the cloud infrastructure marketplace, and Layer 7’s CloudControl product is a part of it. CloudControl plays a crucial role in managing the RESTful APIs that tie together each vendor’s components.

What excites me about this catalyst is that it assembles best-of-breed vendors from the telco sector to create a truly practical elastic cloud. Zimory contributes the management layer that transforms simple virtualized environments into clouds. We couple this with Ciena’s on-demand network bandwidth solutions, allowing users to acquire guaranteed communications capacity when they need it. Too often, cloud elasticity starts and stops with CPU. Ciena’s technology ensures that the network resource factors into the elastic value proposition.

The front end is driven by Infonova’s BSS system, ensuring that all user actions are managed under a provider-grade billing framework. And finally, Layer 7’s CloudControl operates as the glue in the middle to add security and auditing, integrate disparate APIs, and provide application-layer visibility into all of the communications between different infrastructure components.

[Figure: Layer 7’s CloudControl acts as API glue between cloud infrastructure components.]

I hope you can join me at TMForum Management World this month. We will be giving live demonstrations of the elastic cloud under real-world scenarios given to us by Deutsche Telekom and Commonwealth Bank. This promises to be a very interesting show.

Why Cloud Brokers Are The Foundation For The Resilient API Network

Amazon Web Services crashed spectacularly, and with it the illusion that cloud is reliable-by-design and ready for mission-critical applications. Now everyone knows that cloud SLAs fade like the phosphor glow in a monitor when someone pulls the plug from the wall. Amazon’s failure is an unfortunate event, and the cloud will never be the same.

So what is the enterprise to do if it can’t trust its provider? The answer is to take a page from good web architecture and double up. Nobody would deploy an important web site without at least two identical web servers and a load balancer to spray traffic between them. If one server dies, its partner handles the full load until operators can restore the failed system. Sometimes the simplest patterns are the most effective.

Now take a step back and expand this model to the macro level. Instead of a pair of web servers, imagine two different cloud providers, ideally residing on separate power grids and different Internet backbones. Rather than a web server, imagine a replicated enterprise application hosting important APIs. Now replace the load balancer with a Cloud Broker—essentially an intelligent API switch that can distribute traffic between the providers based both on provider performance and on a deep understanding of the nature of each API.

It is this API-centricity that makes a Cloud Broker more than just a new deployment pattern for a conventional load balancer. Engineers design load balancers to direct traffic to web sites, and their designs excel at this task. But while load balancers do provide rudimentary access to API parameters in a message stream, the rule languages used to articulate distribution policy are simply not designed to make effective decisions about application protocols. In a pinch, you might be able to implement simple HTTP failover between clouds, but this isn’t a very satisfactory solution.

In contrast, we design cloud brokers from the beginning to interpret application-layer protocols and to use this insight to optimize API traffic management between clouds. A well-designed cloud broker abstracts existing APIs that may differ between hosts, offering clients a common view decoupled from local dependencies. Furthermore, Cloud Brokers implement sophisticated orchestration capabilities, so they can interact with cloud infrastructure through a provider’s APIs. This allows the broker to take command of the applications the provider hosts. Leveraging these APIs, the broker can automatically spin up a new application instance on demand, or release under-utilized capacity. Automation of processes is one of the more important value propositions of cloud, and Cloud Brokers are a means to realize this goal.
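To make this concrete, here is a minimal sketch in Python of the routing core of such a broker. The provider endpoints and the health-check path are hypothetical placeholders, and a real broker (ours included) layers protocol mediation, security, and orchestration on top of this skeleton; treat it as an illustration of the pattern, not an implementation.

```python
# Minimal sketch of Cloud Broker routing: prefer the first healthy
# provider. Endpoints and paths below are hypothetical placeholders.
import urllib.request


class Provider:
    def __init__(self, name, base_url):
        self.name = name
        self.base_url = base_url

    def is_healthy(self, timeout=2.0):
        # Probe a (hypothetical) health endpoint; any network error or
        # non-200 response marks the provider as down.
        try:
            with urllib.request.urlopen(self.base_url + "/health",
                                        timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False


class CloudBroker:
    """Routes an API call to the first healthy provider, in priority order."""

    def __init__(self, providers):
        self.providers = providers

    def route(self, api_path):
        for provider in self.providers:
            if provider.is_healthy():
                # A real broker would also mediate here, rewriting the
                # request to match the provider's local flavor of the API.
                return provider.base_url + api_path
        raise RuntimeError("no healthy provider available")


broker = CloudBroker([
    Provider("cloud-a", "https://api.cloud-a.example.com"),
    Provider("cloud-b", "https://api.cloud-b.example.com"),
])
# broker.route("/orders/v1/submit") -> URL on whichever provider is up
```

In production you would cache health state and decide on observed latency and API semantics rather than a single probe, but the shape of the decision is the same.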

For more information about Cloud Brokers, have a look at the Cloud Broker product page at Layer 7 Technologies.

Hacking the Cloud

I’m not sure who is more excited about the cloud these days: hackers or venture capitalists. But certainly both groups smell opportunity. An interesting article published by CNET a little while back nicely illustrates the growing interest the former have in cloud computing. Fortify Software sponsored a survey of 100 hackers at last month’s Defcon. They discovered that 96% of the respondents think that the cloud creates new opportunities for hacking, and 86% believe that “cloud vendors aren’t doing enough to address cyber-security issues.”

I don’t consider myself a hacker (except maybe in the classical sense of the word, which had nothing to do with cracking systems and everything to do with solving difficult problems with code), but I would agree with this majority opinion. In my experience, although cloud providers are fairly proficient at securing their own basic infrastructure, they usually stop there. This causes a break in the security continuum for applications residing in the cloud.

Continuity and consistency are important principles in security. In the cloud, continuity breaks down in the hand-off of control between the provider and their customers, and potential exploits often appear at this critical transition. Infrastructure-as-a-Service (IaaS) provides a sobering demonstration of this risk very early in the customer life cycle. The pre-built OS images that most IaaS cloud providers offer are often unpatched and out of date. Don’t believe me? Prove it to yourself the next time you bring up an OS image in the cloud by running a security scan from a SaaS security evaluation service like CloudScan. You may find the results disturbing.

IaaS customers are faced with a dilemma. Ideally, a fresh but potentially vulnerable OS should first be brought up in a safe and isolated environment. But to effectively administer the image and load patch kits, Internet accessibility may be necessary. Too often, the solution is a race against the bad guys to secure the image before it can be compromised. To be fair, OS installations now come up in a much more resilient state than in the days of Windows XP prior to SP2 (in those days, the OS came up without a firewall enabled, leaving vulnerable system services unprotected). However, it should surprise few people that exploits have evolved in lock step, and these can find and leverage weaknesses astonishingly fast.
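To illustrate the safer ordering, here is a sketch of the bootstrap sequence in Python. The client object and every method on it are hypothetical stand-ins, not any real provider’s SDK; the point is the sequence: quarantine first, patch second, expose last.

```python
def bootstrap_hardened_instance(client, admin_ip="203.0.113.10/32"):
    """Launch a stock IaaS image safely: quarantine, patch, then expose.

    `client` is a stand-in for whatever IaaS SDK you use; all method
    names here are hypothetical, not a real provider's API.
    """
    # 1. Create a firewall group admitting nothing but SSH from the
    #    administrator's address.
    group = client.create_security_group(
        name="quarantine",
        inbound=[("tcp", 22, admin_ip)],
    )
    # 2. Launch the stock image into quarantine, not the open Internet.
    instance = client.launch_instance(image="stock-linux", group=group)
    instance.wait_until_running()
    # 3. Patch while the attack surface is one locked-down port.
    instance.run_command("yum update -y")  # or apt-get, per distribution
    # 4. Only after patching (and ideally a scan) open the service port.
    client.update_security_group(group, inbound=[("tcp", 443, "0.0.0.0/0")])
    return instance
```

Most IaaS providers can express every step above through their management APIs, which is exactly what makes this discipline automatable rather than a race.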

The world is full of ex-system administrators who honestly believed that simply having a patched, up-to-date system was an adequate security model. Hardening servers to be resilient when exposed to the open Internet is a time-consuming and complex discipline. We create DMZs at our security perimeter precisely so we can concentrate our time and resources on making sure our front-line systems are able to withstand continuous and evolving attacks. Maintaining a low risk profile for these machines demands significant, concentrated effort and ongoing monitoring.

The point so many customers miss is that cloud is the new DMZ. Every publicly accessible server must address security with the same rigor and diligence of a DMZ-based system. But ironically, the basic allure of the cloud—that it removes barriers to deployment and scales rapidly on demand—actually conspires to work against the detail-oriented process that good security demands. It is this dichotomy that is the opportunity for system crackers. Uneven security is the irresistible low-hanging fruit for the cloud hacker.

CloudProtect is a new product from Layer 7 Technologies that helps reconcile the competing demands of openness and security in the cloud. CloudProtect is a secure, cloud-based virtual appliance built on Red Hat Enterprise Linux (RHEL). Customers use this image as a secure baseline on which to deploy their own applications. CloudProtect features the hardened OS image that Layer 7 uses in its appliances, and it boots in a safe and resilient mode from first use. The RHEL distribution includes a fully functioning SecureSpan Gateway that governs all calls to an application’s APIs hosted on the secured OS. CloudProtect offers a secure console for visual policy authoring and management, allowing application developers, security administrators, and operators to completely customize the API security model to their requirements. For example, need to add certificate-based authentication to your APIs? Simply drag and drop a single assertion into the policy and you are done. CloudProtect also offers the rich auditing features of the SecureSpan engine, which can feed a billing process or be leveraged in a forensic investigation.

More information about the full range of Layer 7 cloud solutions, including Single Sign-On (SSO) using SAML for SaaS applications such as Salesforce.com and Google Apps, can be found here on the Layer 7 cloud solutions page.

What, Me Worry?

According to Yahoo News, “Infrastructure Services Users Worry Less About Security.” The article references a Yankee Group study which found that although security remains a top barrier slowing the adoption of cloud services in the enterprise, most companies that have adopted Infrastructure-as-a-Service (IaaS) worry less about security once they begin using the technology.

Once they’ve made the leap into the cloud, the article suggests, users conclude that the security issues aren’t as significant as they had been led to believe. I find myself in partial agreement with this; the industry has created a level of hysteria around cloud security that isn’t necessarily productive. Taking pot shots at the security model in the cloud is pretty easy, and so many do—regardless of whether their aim is true (and for many, their aim is not).

Nevertheless, my interpretation of these results is that they are uncovering less a phenomenon of confidence genuinely earned and more a case of misplaced trust. The article makes an interesting observation about the source of this trust:

Twenty-nine percent of the companies in the survey viewed system integrators as their most trusted suppliers of cloud computing. But among early adopters of IaaS, 33 percent said they turn to telecom companies first.

Do you remember, back around the turn of the century, when large-scale PKI was first emerging? The prevailing wisdom was that state-sponsored PKI should be administered by national post offices, because these organizations above all were perceived as trustworthy (as well as being centralized and a national responsibility). Hong Kong, for instance, adopted this model. But in general, the postal-run PKI model didn’t take hold, and today few federal post services are in the business of administering national identity. Trust doesn’t transfer well, and trust with letters and packages doesn’t easily extend to trust with identity.

Investing generalized trust in the telcos reminds me of the early PKI experience. The market is immature, and because of this, so too are our impressions. Truthfully, I think that the telcos will be good cloud providers—not because I have an inherent trust in them (I actively dislike my cell provider on most days), but because the telcos I’ve spoken to that have engaged in cloud initiatives are actually executing extremely well. Nevertheless, I don’t think I should trust them to secure my applications. This is ultimately my responsibility as a cloud customer, and because I can’t reasonably trust any provider entirely, I must assume a highly defensive stance in how I secure my cloud-resident applications.

I hope my provider is good at security; but I need to assume he is not, and prepare myself accordingly.

How To Get Rich Quick With Cloud Computing

You know that a technology has hit the mainstream when it appears in PCWorld. Such is the case for cloud computing, a topic PCWorld considers in its recent piece Amazon Web Services Sees Infrastructure as Commodity. Despite the rather banal title, this article makes some interesting points about the nature of commoditization and the effect this will have on the pricing of services in the cloud. It’s a good article, but I would argue that it misses an important point about the evolution of cloud services.

Of the three common models of cloud—SaaS, PaaS, and IaaS—it’s the last, Infrastructure-as-a-Service (IaaS), that most captivates me. I can’t help but begin planning my next great start-up built on the virtualization infrastructure of EC2, Terremark, or Rackspace. But despite a deep personal interest in the technology underpinning the service, I think what really captures my imagination with IaaS is that it removes a long-standing barrier to application deployment. If my killer app is done but my production network and servers aren’t ready, I simply can’t deploy. All of the momentum that I’ve built up during the powerful acceleration phase of a startup’s early days is sucked away—kind of like a sports car driving into a lake. For too many years, deployment has been the icy cold water that saps the energy out of agile just as the application nears GA.

IaaS drains the lake. It makes production-ready servers instantly available so that teams can deploy faster and more often. This is the real stuff of revolution. It’s not the cool technology; it’s the radical change to the software development life cycle that makes cloud click.

The irony, however, is that Infrastructure-as-a-Service is itself becoming much easier to deploy. It turns out that building data centers is something people are pretty good at. Bear in mind, though, that a data center is not a cloud—it takes some very sophisticated software layered over commodity hardware to deploy and manage virtualization effectively. But this management layer is rapidly becoming as simple to deploy as the hardware underlying it. Efforts such as Eucalyptus, along with commercial offerings from vendors like VMware, Citrix, and 3Tera (now CA), are removing the barriers that have until recently stood in the way of the general-purpose co-location facility becoming an IaaS provider.

Lowering this barrier to entry will have a profound effect on the IaaS market. Business is compelled to change whenever products, processes, or services edge toward commodity. IaaS, itself a textbook example of a product made possible by the process of commoditization, is set to become simply another commodity service, operating in a world of downward price pressure, ruthless competition, and razor-thin margins. Amazon may have had this space to itself for several years, but the simple virtualization-by-the-hour marketplace is set to change forever.

The PCWorld article misses this point. It maintains that Amazon will always dominate the IaaS marketplace by virtue of its early entry and the application of scale. On this last point I disagree, and suggest that the real story of IaaS is one of competition and a future of continued change.

In tomorrow’s cloud, the money will be made by those offering higher-value services, which is a story as old as commerce itself. Amazon, to its credit, has always recognized that this is the case. The company is a market leader not just because of its EC2 IaaS service, but because the scope of its offering includes such higher-level services as databases (SimpleDB, RDS), queuing (SQS), and content delivery (CloudFront). These services, like basic virtualization, are the building blocks of highly scalable cloud applications. They also drive communications, which is the other axis where scale matters and money can be made.

The key to the economic equation is to possess a deep understanding of just who-is-doing-what-and-when. Armed with this knowledge, a provider can bill. Some services—virtualization being an excellent example—lend themselves to simple instrumentation and measurement. However, many other services are more difficult to sample effectively. The best approach here is to bill by the transaction, and measure this by acquiring a deep understanding of all of the traffic going in or out of the service in real time. By decoupling measurement from the service, you gain the benefit of avoiding difficult and repetitive instrumentation of individual services and can increase agility through reuse.
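As a toy illustration of that decoupling, consider a transaction meter that the management layer updates on every call it proxies, keyed by customer and service; billing simply reads the totals. The names and the flat per-call rate below are purely illustrative.

```python
# Toy transaction meter: the management layer counts every proxied API
# call per (api_key, service), and billing reads the totals. Individual
# services never need their own billing instrumentation.
from collections import Counter, defaultdict


class TransactionMeter:
    def __init__(self):
        self.counts = defaultdict(Counter)  # api_key -> Counter of services

    def record(self, api_key, service):
        self.counts[api_key][service] += 1

    def invoice_lines(self, rate_per_call):
        for api_key, services in self.counts.items():
            for service, calls in services.items():
                yield api_key, service, calls, calls * rate_per_call


meter = TransactionMeter()
meter.record("cust-42", "queuing")
meter.record("cust-42", "queuing")
meter.record("cust-42", "database")

for line in meter.invoice_lines(rate_per_call=0.0001):
    print(line)  # e.g. ('cust-42', 'queuing', 2, 0.0002)
```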

Measuring transactions in real time demands a lot from the API management layer. In addition to scaling to many thousands of transactions per second, this layer needs to provide sufficient flexibility to accommodate the tremendous diversity in APIs. One of the things we’ve noticed here at Layer 7 with our cloud provider customers is that most of the APIs they are publishing use fairly simple REST-style interfaces with equally basic security requirements. I can’t help but feel a nagging sense of déjà vu. Cloud APIs today are still in the classic green-field stage of any young technology. But we all know that things never stay so simple for long. Green fields always grow toward integration, and that’s when the field becomes muddy indeed. Once APIs trend toward complexity—when it’s not just usernames and passwords you have to worry about, but also SAML, OAuth variations, Kerberos, and certificates—that’s when the API management layer can either work for you or against you, and this is one area where experience counts for a lot. Rolling out new and innovative value-added services is set to become the basis of every cloud provider’s competitive edge, so agility, breadth of coverage, and maturity are essential requirements in the management layer.
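A small sketch shows why this diversity belongs in the management layer rather than in every service: authentication becomes a dispatch problem, one pluggable handler per scheme. The handlers below are stubs standing in for real SAML, OAuth, or Kerberos validation.

```python
# Toy scheme dispatch for an API management layer. Handlers are stubs,
# not real credential validation.
def check_basic(credentials):
    # Stand-in for a username/password check against a directory.
    return credentials.get("user") == "alice"


def check_oauth(credentials):
    # Stand-in for real token validation/introspection.
    return bool(credentials.get("token"))


AUTH_HANDLERS = {
    "basic": check_basic,
    "oauth": check_oauth,
    # "saml", "kerberos", "x509": each plugs in as its own handler,
    # without touching the services behind the gateway.
}


def authenticate(scheme, credentials):
    handler = AUTH_HANDLERS.get(scheme)
    if handler is None:
        raise ValueError("unsupported auth scheme: " + scheme)
    return handler(credentials)


print(authenticate("basic", {"user": "alice"}))  # True
```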

So to answer my original question: if you are a cloud provider, you will make money on the higher-level services. But where does that leave the rest of us? There is certainly money to be made around these services themselves, by building them or by providing the means to manage them. Cloud software vendors will make money by providing the crucial access control, monitoring, billing, and management tools that cloud providers need to run their business. That happens to be my business, and this infrastructure is exactly what Layer 7 Technologies is offering with its new CloudControl product, part of the CloudSpan family. CloudControl is a provider-scale solution for managing the APIs and services that are destined to become a significant revenue stream for cloud providers—regardless of whether they are building public or private clouds.

You can learn more about CloudControl on Layer 7’s web site.

All Things Considered About Cloud Computing Risks and Challenges

Last month during the RSA show, I met with Rob Westervelt from ITKnowledgeExchange in the Starbucks across from Moscone Center. Rob recorded our discussion about the challenges of security in the cloud and turned this into a podcast. I’m quite pleased with the results. You can pick up a little Miles Davis in the background, the odd note of an espresso being drawn. Alison thinks that I sound very NPR. Having been raised on CBC Radio, I take this as a great compliment.

Pour yourself a coffee and have a listen.

Economist: Unlocking the Cloud

The Economist is now reporting on the cloud. They’ve picked up on the very real concern of vendor lock-in through proprietary standards. The article focuses on data portability between SaaS apps, but the same issue arises in IaaS (different proprietary virtualization formats, despite what the Open Virtualization Format (OVF) promises) and in PaaS (Google App Engine’s extensions to Python, which are hard to ignore and which lock you into the platform).

NIST Perspective on Cloud Computing

Earlier this month, NIST went public with their perspective on cloud computing. This is important because NIST is a well-respected public standards organization, and there is still a lot of confusion about what cloud really is (the *-aaS effect).

They released only a short, two-page document and a longer, 72-slide PowerPoint deck (you can find their main page here). They are offering a broad definition based on essential characteristics, delivery models, and deployment models:

Essential Characteristics:

On-demand self-service

Ubiquitous network access

Location independent resource pooling

Rapid elasticity

Measured Service

Delivery Models:

Cloud Software as a Service (SaaS)

Cloud Platform as a Service (PaaS)

Cloud Infrastructure as a Service (IaaS)

Deployment Models:

Private cloud

Community cloud

Public cloud

Hybrid cloud

There’s not really anything new here, but that’s fine; it’s more important as a validation of the emerging models and ideas from a public standards body. I do like the three-pronged approach, acknowledging that cloud is a lot of things while at the same time keeping it simple and concise. Brevity is the soul of great standards.

It’s worth keeping an eye on this effort.