LulzSec Disbands

“Live fast, die young, and leave a good-looking corpse” was first uttered by actor John Derek in Knock on Any Door, a 1949 film also starring Humphrey Bogart. This irresistible catchphrase has inspired generations of rebels in film, in music, and among out-of-control teenagers. It also seems to have been taken to heart by the hacker collective LulzSec, which, after a spectacular 50-day blitz across the Internet, is dissolving back into the shadowy back alleys from which it appeared. And just as James Dean—another famous adherent to the formula—did for film, so too has LulzSec changed the face of IT security and left an inspirational challenge for hacking’s next generation.

What is interesting about LulzSec isn’t necessarily their technique but their PR. The group appeared on the heels of high-profile hacks by Anonymous and played masterfully into a media-fueled hack-steria, feeding a public imagination over-stimulated with big, audacious exploits that make great copy. LulzSec was the perfectly timed counterpoint to Anonymous—gang fights make news that writes itself, whether the combatants are thugs, dancers, graffiti writers, or hackers. And slipping away before being caught (sans one alleged member) ties the story up neatly into a narrative made to entertain. I’ve no doubt the movie rights will be bid sky-high.

If LulzSec can lay claim to a legacy, then surely it is that effective marketing is just as important as the hack itself. LulzSec went from zero to global brand in a scant 50 days—a success most marketing gurus can only dream of. In its wake, the collective leaves a somewhat heightened awareness among the general public of the terrible cost of security breaches. Their means to this end, of course, remain dubious; most hackers make the same claim as a knee-jerk justification of their actions, though few have been as wildly successful as LulzSec.

Nevertheless, no CEO wants to be subject to the negative publicity endured by Sony, which has suffered wave after wave of successful cyber attacks. It is safe to say that LulzSec has dragged Internet security back into the executive suite, something that seemed almost unthinkable only a few months ago. The intelligent response to this new attention should be an increased emphasis on basic IT security foundations.

Upcoming Cloud Identity Talk At TMForum Management World

I’ll be delivering a presentation at TMForum Management World on Wednesday, May 25, 2011 in Dublin, Ireland. My talk is the second presentation in the Carrier Grade Cloud: Secure, Robust and Billable session. I’m scheduled to speak between 5:00 and 5:30pm, which makes it a perfect way to end the day before retiring to a fine Irish pub. This talk suffers from the rather prosaic title Implementing Identity and Access Control and Management in the Cloud, but the actual content is great, and I promise to deliver a very entertaining show. This is actually a new talk, and I was fortunate enough to have an opportunity to rehearse it last weekend for the Western Canadian Engineering Students’ Society Team (WESST), which met at Simon Fraser University. Students can be a surprisingly tough crowd, but we all had a good time and I was able to work out some of the bugs in the flow. I’m sure it will play well in Dublin.

I hope you can join me next week at TMForum. We can all sneak out afterward for a pint of the Guinness.

Amazon’s Mensis Horribilis

Hot on the heels of Amazon Web Services’ prolonged outage late last month, Bloomberg has revealed that hackers used AWS as a launch pad for their high-profile attack against Sony. In a thousand blogs and a million tweets, the Internets have been set abuzz with attention-seeking speculation about reliability and trust in the cloud. It’s a shame, because while these events are noteworthy, in the greater scheme of things they don’t mean much.

Few technologies are spared a difficult birth. But over time, with continuous refinement, they can become tremendously safe and reliable, something I’m reminded of every time I step on an airplane. It never ceases to amaze me how well the global aviation system operates. Yes, it has its failures—and these can be devastating—but overall the system works and we can place our trust in it. This is governance, management, and engineering working at the highest levels.

Amazon has been remarkably candid about what happened during their service disruption, and it’s clear they have learned much from the incident. They are changing process, refining technology, and being uncharacteristically transparent about the event. This is the right thing to do, and it should actually give us confidence. The Amazon disruption won’t be the last service failure in the cloud, and I still believe that any enterprise with reliability concerns should deploy Cloud Service Broker (CSB) technologies. But the cloud needs failure to get better—and it is getting better.

In a similar vein, to overreact to the Sony incident is to miss what actually took place. The only cloud attribute the hackers leveraged on Amazon was convenience. This attack could have been launched from anywhere; Amazon simply provided barrier-free access to a compute platform, which is the point of cloud computing. It would be unfortunate if organizations began to blacklist general connections originating from the Amazon AWS IP range, as they already do for email originating from this range because of an historical association with spam. In truth, this is another example of refinement by cloud providers, as effective policy control in Amazon’s data centers has now largely brought spam under control.

Negative impressions come easy in technology, and these are hard to reverse. Let’s hope that these incidents are recognized for what they are, rather than indicators of a fundamental flaw in cloud computing.

NIST Seeks Public Input On New Cloud Computing Guide

What is the cloud, really? Never before have we had a technology that suffers so greatly from such a completely ambiguous name. Gartner Research VP Paolo Malinverno has observed that most organizations define cloud as any application operating outside their own data centre. This is probably as lucid a definition as any I’ve heard.

More formalized attempts to describe cloud rapidly turn into essays that attempt to bridge the abstract with the very specific, and in doing so seem to miss the cloud for the clouds. Certainly the most effective comprehensive definition has come from the National Institute of Standards and Technology (NIST), and most of us in the cloud community have fallen back on this most authoritative reference when clarity is important.

Now is our chance to give back to NIST. To define cloud is to accept a task that will likely never end, and the standards boffins have been working hard to continually refine their work. They’ve asked for public comment, and I would encourage everyone to review their latest draft of the Cloud Computing Synopsis and Recommendations. This new publication builds on the basic definitions offered by NIST in the past, and at around 84 pages, it dives deep into the opportunities and issues surrounding SaaS, IaaS, and PaaS. There is good material here, and with community input it can become even better.

You have until June 13, 2011 to respond.

Layer 7 to Demonstrate Cloud Network Elasticity at TMForum Management World in Dublin

I’ll be at the TMForum Management World show this May 23-26, 2011 in Dublin, Ireland to participate in the catalyst demonstrating cloud network elasticity, which is sponsored by Deutsche Telekom and the Commonwealth Bank of Australia. For those of you not yet familiar with TMForum, it is (from their web site) “the world’s leading industry association focused on enabling best-in-class IT for service providers in the communications, media, defense and cloud service markets.” We’ve been involved with the TMForum for a couple of years, and this show in Dublin is going to showcase some major breakthroughs in practical cloud computing.

TMForum offers catalysts as solution proofs-of-concept. A catalyst brings together a number of vendors that partner to demonstrate an end-to-end solution to a real problem faced by telco providers or the defense industry. This year, we’re working closely with Infonova, Zimory, and Ciena to showcase a cloud-in-a-box environment that features elastic scaling of compute resources and network bandwidth on demand, all of which is fully integrated with an automated billing system. We think this solution will be a significant game-changer in the cloud infrastructure marketplace, and Layer 7’s CloudControl product is a part of this solution. CloudControl plays a crucial role in managing the RESTful APIs that tie together each vendor’s components.

What excites me about this catalyst is that it assembles best-of-breed vendors from the telco sector to create a truly practical elastic cloud. Zimory contributes the management layer that transforms simple virtualized environments into clouds. We couple this with Ciena’s on-demand network bandwidth solutions, allowing users to acquire guaranteed communications capacity when they need it. Too often, cloud elasticity starts and stops with CPU. Ciena’s technology ensures that the network resource factors into the elastic value proposition.

The front end is driven by Infonova’s BSS system, ensuring that all user actions are managed under a provider-grade billing framework. And finally, Layer 7’s CloudControl operates as the glue in the middle to add security and auditing, integrate disparate APIs, and provide application-layer visibility into all of the communications between different infrastructure components.

[Figure: Layer 7’s CloudControl acts as API glue between cloud infrastructure components.]

I hope you can join me at TMForum Management World this month. We will be giving live demonstrations of the elastic cloud under real-world scenarios given to us by Deutsche Telekom and Commonwealth Bank. This promises to be a very interesting show.

VMware’s Cloud Foundry Ushers In The Era Of Open PaaS

Mention VMware to anyone in IT and their immediate thought is virtualization. So dominant is the company in this space that the very word VM has a sense of ambiguity about it: does it refer specifically to a vmdk, or to another hypervisor image, like Xen? As with Kool-Aid and Band-Aid, there is nothing better for a company than to contribute a word to the English lexicon, and while VMware may not completely own virtual machine, they command enough association to get past the doorman of that enviable club.

Strong associations, however, may not translate directly into revenue. From open source Xen to Microsoft’s Hyper-V, virtualization technology is rapidly commoditizing, a threat not lost on VMware. Hypervisors are now largely free, and much of the company’s continued success derives from the sophisticated management products that make mass virtualization a tractable challenge in the enterprise. But for every OpenView, there is ultimately a Nagios to contend with, so the successful company is always innovating. VMware, a very successful company, is innovating by continuing its push up the stack.

Last week VMware introduced Cloud Foundry, an open Platform-as-a-Service product that represents an important step to transform the company into a dominant PaaS player. You don’t have to read any tea leaves to see this has been their focused strategy for some time; you just have to look at their acquisitions. SpringSource for Java frameworks; RabbitMQ for queuing; Gemstone for scalable, distributed persistence; and Hyperic to manage it all—it’s basically the modern developer’s shopping list of necessary application infrastructure. The only thing they are still missing is security.

Cloud Foundry assembles some components of this technology in a package that enables developers to skip the once-necessary evil of infrastructure integration and instead concentrate fully on the business problems they’ve been tasked to solve. It is a carefully curated stack of cloud-centric frameworks and infrastructure made available by a cloud provider as a service. Right now, you can use Cloud Foundry in a VMware-managed cloud; but the basic offering is available for any cloud, public or private. Applications should be easily portable between any two instances of Cloud Foundry. VMware even promises a forthcoming micro-cloud VM, which turns any developer’s laptop into a cloud development environment.
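
To give a sense of how little friction remains, deploying to the early Cloud Foundry service with the vmc command-line gem goes roughly like this—myapp is a placeholder for your own application, and the exact prompts may change as the product evolves:

    gem install vmc
    vmc target api.cloudfoundry.com
    vmc login
    vmc push myapp   # vmc prompts for a URL, memory, and any bound services

That is the whole deployment story: no application server to stand up, no database to wire in by hand.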

All of this reduces friction in application development. Computing is full of barriers, and we often fall into the psychological trap of perceiving these to be bigger than they actually are. Barriers are the enemy of agile, and basic infrastructure is a barrier that too often saps the energy out of a new idea before it has a chance to grow. Make the plumbing available, make it simple to use, and half the battle for new apps is over. What’s left is just fun.

Cloud Foundry is important because it’s like a more open Azure. Microsoft deserves credit for keeping the PaaS dream alive with their own offering, but Azure suffers from a sense of lock-in, and it really only speaks to the Microsoft community. Plus the Microsoft ad campaign for cloud is so nauseating it might as well be bottled as a developer repellant for people who hate geeks.

Cloud Foundry, in contrast, goes far to establish its claim to openness. It references the recently announced Cloud Developer’s Bill of Rights, another initiative spearheaded by VMware. Despite being a Java-head myself, I was encouraged to learn that Cloud Foundry offers not just Spring, but Ruby on Rails, Sinatra for Ruby, and Node.js. They also support Grails, as well as other frameworks based on the JVM. Persistence is handled by MySQL, MongoDB, or the Redis database, which is a decent range of options. So while VMware hasn’t quite opened up all of their acquisition portfolio to the cloud community, they have assembled the critical pieces and seem genuine in their goal of erasing the stigma of lock-in that has tarnished previous commercial PaaS offerings.

I’m a fan of PaaS; I’m even a member of the club that believes that of the big three *-as-a-Services, PaaS is destined to be the dominant pattern. Managing and configuring infrastructure is, in my mind, pretty much on par with actually managing systems—a task I consider even less rewarding than shoveling manure. And I’m not alone in this opinion either. Once PaaS becomes open and trustworthy, it will be an automatic choice for most development. PaaS is the future of cloud, and VMware knows this.

Why Cloud Brokers Are The Foundation For The Resilient API Network

Amazon Web Services crashed spectacularly, and with it the illusion that cloud is reliable-by-design and ready for mission-critical applications. Now everyone knows that cloud SLAs fade like the phosphor glow in a monitor when someone pulls the plug from the wall. Amazon’s failure is an unfortunate event, and the cloud will never be the same.

So what is the enterprise to do if it can’t trust its provider? The answer is to take a page from good web architecture and double up. Nobody would deploy an important web site without at least two identical web servers and a load balancer to spray traffic between them. If one server dies, its partner handles the full load until operators can restore the failed system. Sometimes the simplest patterns are the most effective.

Now take a step back and expand this model to the macro level. Instead of a pair of web servers, imagine two different cloud providers, ideally residing on separate power grids and different Internet backbones. Rather than a web server, imagine a replicated enterprise application hosting important APIs. Now replace the load balancer with a Cloud Broker—essentially an intelligent API switch that can distribute traffic between the providers based both on provider performance and on a deep understanding of the nature of each API.
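
To make the pattern concrete, here is a minimal Python sketch of the failover half of the idea—nothing more than a loop over two providers. The endpoints are hypothetical placeholders; a real broker layers health checks, weighting, and API-aware routing on top of this skeleton:

    # Failover across two independent cloud providers. The endpoints
    # below are placeholders, not real services.
    import urllib.request

    PROVIDERS = [
        "https://api.cloud-east.example.com",  # primary provider
        "https://api.cloud-west.example.com",  # independent second provider
    ]

    def forward(path, timeout=5):
        """Try each provider in turn; return the first healthy response."""
        last_error = None
        for base in PROVIDERS:
            try:
                with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:  # timeouts, refused connections, DNS failures
                last_error = err    # fall through to the next provider
        raise RuntimeError("all providers failed") from last_error

If the first provider goes dark, traffic simply shifts to its twin: the familiar load balancer pattern, lifted to the level of entire clouds.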

It is this API-centricity that makes a Cloud Broker more than just a new deployment pattern for a conventional load balancer. Engineers design load balancers to direct traffic to web sites, and their designs excel at this task. But while load balancers do provide rudimentary access to API parameters in a message stream, the rule languages used to articulate distribution policy are simply not designed to make effective decisions about application protocols. In a pinch, you might be able to implement simple HTTP failover between clouds, but this isn’t a very satisfactory solution.

In contrast, we design Cloud Brokers from the beginning to interpret application-layer protocols and to use this insight to optimize API traffic management between clouds. A well-designed Cloud Broker abstracts existing APIs that may differ between hosts, offering clients a common view decoupled from local dependencies. Furthermore, Cloud Brokers implement sophisticated orchestration capabilities, so they can interact with cloud infrastructure through a provider’s APIs. This allows the broker to take command of the applications the provider hosts: leveraging these APIs, the broker can automatically spin up a new application instance on demand, or release under-utilized capacity, as sketched below. Automation of processes is one of the more important value propositions of cloud, and Cloud Brokers are a means to realize this goal.
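
Here is a hedged sketch of what that scaling decision might look like. The provider object and its list_instances, launch_instance, and terminate_instance calls are hypothetical stand-ins for a real provider’s management API, not any particular vendor’s SDK:

    # Hypothetical elastic-scaling logic inside a Cloud Broker.
    HIGH_WATER = 0.80  # average utilization that triggers a new instance
    LOW_WATER = 0.25   # utilization below which capacity is released

    def rebalance(provider, app_name):
        instances = provider.list_instances(app_name)
        if not instances:
            provider.launch_instance(app_name)  # nothing running: bootstrap
            return
        load = sum(i.cpu_utilization for i in instances) / len(instances)
        if load > HIGH_WATER:
            provider.launch_instance(app_name)  # spin up on demand
        elif load < LOW_WATER and len(instances) > 1:
            provider.terminate_instance(instances[-1].id)  # release capacity

Run on a schedule against each provider, a loop like this keeps capacity tracking demand without an operator in the middle.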

For more information about Cloud Brokers, have a look at the Cloud Broker product page at Layer 7 Technologies.

Space Exploration and the Trough of Disillusionment

Hype cycles may be largely a marketing construct, but it’s easy to forget that a lot of important engineering work gets done in the heady days of an emerging technology. I was reminded of this when I noticed these two news items side by side on CNET this morning:

Rocket science just isn’t what it used to be. Must have been an amazing 20 years…

No More Iron in the Cloud

Iron Mountain, the well-known information management company, is exiting the cloud storage business. The company announced yesterday that it will be phasing out its basic cloud storage services by 2013. Iron Mountain isn’t the first provider to turn its back on the cloud just as the space is getting off the ground; but it is probably the most high-profile company to exit this business.

I’ve always liked Iron Mountain because the name makes me think of The Hobbit (remember Dain of the Iron Hills?). In fact, I think Iron Mountain is one of the all-time great company names, and their marketing group deserves credit for leveraging it to build a very strong brand around what is arguably a pretty dull and conventional service—records management. The extension of this brand into the cloud seemed obvious and fitting, so at first blush it’s disappointing that they’ve decided to reverse course.

In reality, though, it seems that Iron Mountain is performing more of a realignment of its cloud strategy. Simple cloud-based storage is just not very hard to do, and so the field is rapidly becoming as crowded as the Battle of Five Armies. Differentiation is the key to great brands, and it’s hard to stand out from S3 or Carbonite or Mozy or any of the dozens of providers peddling mass storage services in the cloud. Iron Mountain seems to have recognized that its brand could be better served—that is, both leveraged and protected—by ducking out of the commodity bazaar and moving up the street to provide a more specialized and business-aligned service.

This is all very interesting because over the next few years we will see that brand—that most mysterious response in the consumer’s mind—is going to be the deciding factor that makes or breaks a cloud provider’s success. And as Amazon has demonstrated, cloud branding can come out of the most unlikely places.

Blowing Holes in the Web of Trust

The Register today published an excellent summary of the latest issues with SSL. In the typically blunt and mordant style for which the publication is so famous, Dan Goodin illustrates how the gossamer-thin SSL web of trust is built on a superstructure of astonishingly dubious merit. It’s a wonder the whole thing works at all.

Have a careful read of “How is SSL hopelessly broken? Let us count the ways” and then re-examine the cartel certs that anchor your own web browsing experience. As you roll out your API strategy, make sure you deploy your SSL endpoints with certificates that were subject to organizational or (much better) extended validation. Encourage—or if you can, demand—that your API clients limit their trust stores to a small subset containing only the most legitimate CAs.
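
As one illustration, here is a minimal Python sketch of the client side of that advice. The CA bundle path is a placeholder for wherever you keep your short list of vetted CAs, and api.example.com stands in for your own endpoint:

    # An API client that trusts only a hand-picked CA bundle rather than
    # the sprawling default trust store shipped with the platform.
    import http.client
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies cert and hostname
    ctx.load_verify_locations(cafile="/etc/pki/trusted-api-cas.pem")

    conn = http.client.HTTPSConnection("api.example.com", context=ctx)
    conn.request("GET", "/status")
    print(conn.getresponse().status)

Everything else about the connection stays the same; the only difference is that a certificate chained to an unvetted CA is rejected out of hand.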

The opportunity is largely over in the browser world; effecting massive change there will only happen when individuals personally lose money on a grand scale. But APIs still have a chance to regain some level of trust through rigorous application of SSL best practices, and API providers and developers can take the initiative here.