How To Get Rich Quick With Cloud Computing

You know that a technology has hit the mainstream when it appears in PCWorld. Such is the case for cloud computing, a topic PCWorld considers in its recent piece Amazon Web Services Sees Infrastructure as Commodity. Despite the rather banal title, this article makes some interesting points about the nature of commoditization and the effect this will have on the pricing of services in the cloud. It’s a good article, but I would argue that it misses an important point about the evolution of cloud services.

Of the three common models of cloud–SaaS, PaaS, and IaaS–it’s the latter, Infrastructure-as-a-Service (IaaS), that most captivates me. I can’t help but begin planning my next great start-up built on the virtualization infrastructure of EC2, Terremark, or Rackspace. But despite a deep personal interest in the technology underpinning the service, I think what really captures my imagination about IaaS is that it removes a long-standing barrier to application deployment. If my killer app is done, but my production network and servers aren’t ready, I simply can’t deploy. All of the momentum that I’ve built up during the powerful acceleration phase of a startup’s early days is sucked away—kind of like a sports car driving into a lake. For too many years, deployment has been the icy cold water that saps the energy out of agile just as the application nears GA.

IaaS drains the lake. It makes production-ready servers instantly available so that teams can deploy faster and more often. This is the real stuff of revolution. It’s not the cool technology; it’s the radical change to the software development life cycle that makes the cloud click.

The irony, however, is that Infrastructure-as-a-Service is itself becoming much easier to deploy. It turns out that building data centers is something people are pretty good at. Bear in mind, though, that a data center is not a cloud—it takes some very sophisticated software layered over commodity hardware to deploy and manage virtualization effectively. But this management layer is rapidly becoming as simple to deploy as the hardware underlying it. Efforts such as Eucalyptus, along with commercial offerings from vendors like VMware, Citrix, and 3Tera (now CA), are removing the barriers that until recently stood in the way of the general-purpose co-location facility becoming an IaaS provider.

Lowering this barrier to entry will have a profound effect on the IaaS market. Business is compelled to change whenever products, processes, or services edge toward commodity. IaaS, itself a textbook example of a product made possible by the process of commoditization, is set to become simply another commodity service, operating in a world of downward price pressure, ruthless competition, and razor-thin margins. Amazon may have had this space to itself for several years, but the simple virtualization-by-the-hour marketplace is set to change forever.

The PCWorld article misses this point. It maintains that Amazon will always dominate the IaaS marketplace by virtue of its early entry and the application of scale. On this last point I disagree, and suggest that the real story of IaaS is one of competition and a future of continued change.

In tomorrow’s cloud, the money will be made by those offering higher-value services, which is a story as old as commerce itself. Amazon, to its credit, has always recognized this. The company is a market leader not just because of its IaaS EC2 service, but because the scope of its offering includes such higher-level services as databases (SimpleDB, RDS), queuing (SQS), and content delivery (CloudFront). These services, like basic virtualization, are the building blocks of highly scalable cloud applications. They also drive communications, which is the other axis where scale matters and money can be made.

The key to the economic equation is to possess a deep understanding of just who-is-doing-what-and-when. Armed with this knowledge, a provider can bill. Some services—virtualization being an excellent example—lend themselves to simple instrumentation and measurement. However, many other services are more difficult to sample effectively. The best approach here is to bill by the transaction, and measure this by acquiring a deep understanding of all of the traffic going in or out of the service in real time. By decoupling measurement from the service, you gain the benefit of avoiding difficult and repetitive instrumentation of individual services and can increase agility through reuse.
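The decoupled-measurement idea above can be sketched as a gateway-side meter that counts transactions per tenant and service, so individual services need no billing instrumentation of their own. The class, tenant names, and per-call price below are hypothetical illustrations, not any provider's actual billing model.

```python
from collections import defaultdict

class TransactionMeter:
    """Counts billable API transactions per tenant and service,
    decoupled from the services themselves."""

    def __init__(self, price_per_call):
        self.price_per_call = price_per_call
        self.counts = defaultdict(int)

    def record(self, tenant_id, service):
        # Called by the management layer for every request passing
        # through, so services stay free of billing code.
        self.counts[(tenant_id, service)] += 1

    def invoice(self, tenant_id):
        # Sum usage across all services for one tenant.
        calls = sum(n for (t, _), n in self.counts.items() if t == tenant_id)
        return calls * self.price_per_call

meter = TransactionMeter(price_per_call=0.001)
for _ in range(3):
    meter.record("acme", "queue-service")
meter.record("acme", "db-service")
print(meter.invoice("acme"))  # 4 calls at $0.001 each
```

Because the meter sits in the traffic path rather than inside each service, adding a new billable service requires no new instrumentation—exactly the reuse argument made above.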

Measuring transactions in real time demands a lot from the API management layer. In addition to scaling to many thousands of transactions per second, this layer needs to provide sufficient flexibility to accommodate the tremendous diversity of APIs. One of the things we’ve noticed here at Layer 7 with our cloud provider customers is that most of the APIs they publish use fairly simple REST-style interfaces with equally basic security requirements. I can’t help but feel a nagging sense of déjà vu. Cloud APIs today are still in the classic green-field stage of any young technology. But we all know that things never stay so simple for long. Green fields always grow toward integration, and that’s when the field becomes muddy indeed. Once APIs trend toward complexity—when it’s not just usernames and passwords you have to worry about, but also SAML, OAuth variations, Kerberos, and certificates—that’s when the API management layer can either work for you or against you, and this is one area where experience counts for a lot. Rolling out new and innovative value-added services is set to become the basis of every cloud provider’s competitive edge, so agility, breadth of coverage, and maturity are essential requirements in the management layer.
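A minimal sketch of what that flexibility implies: if authentication is dispatched by scheme, a SAML, Kerberos, or certificate handler can be registered later without rewriting callers. The handler registry and credential store here are hypothetical stand-ins, not Layer 7's implementation.

```python
import base64

def check_basic(credentials, users):
    """Decode an HTTP Basic credential and check it against a user table."""
    try:
        user, _, password = base64.b64decode(credentials).decode().partition(":")
    except Exception:
        return False
    return users.get(user) == password

def check_bearer(token, valid_tokens):
    """Validate an OAuth-style bearer token against issued tokens."""
    return token in valid_tokens

HANDLERS = {
    "Basic": lambda cred, ctx: check_basic(cred, ctx["users"]),
    "Bearer": lambda cred, ctx: check_bearer(cred, ctx["tokens"]),
    # SAML, Kerberos, and certificate handlers would register here too.
}

def authenticate(authorization_header, ctx):
    """Dispatch on the scheme so new schemes plug in without touching callers."""
    scheme, _, credential = authorization_header.partition(" ")
    handler = HANDLERS.get(scheme)
    return bool(handler and handler(credential, ctx))

ctx = {"users": {"alice": "s3cret"}, "tokens": {"tok-123"}}
print(authenticate("Basic " + base64.b64encode(b"alice:s3cret").decode(), ctx))  # True
print(authenticate("Bearer tok-123", ctx))  # True
print(authenticate("Bearer bogus", ctx))    # False
```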

So, to answer the question posed in this post’s title: if you are a cloud provider, you will make money on the higher-level services. But where does that leave the rest of us? There is certainly money to be made around these services themselves, by building them or by providing the means to manage them. The cloud software vendors will make money by providing the crucial access control, monitoring, billing, and management tools that cloud providers need to run their business. That happens to be my business, and this infrastructure is exactly what Layer 7 Technologies is offering with its new CloudControl product, part of the CloudSpan family of products. CloudControl is a provider-scale solution for managing the APIs and services that are destined to become a significant revenue stream for cloud providers—regardless of whether they are building public or private clouds.

You can learn more about CloudControl on Layer 7’s web site.

Fight Night at Interop

As CTO of Layer 7 Technologies, I attend a lot of conferences. There was a time when this was all exciting and new, but I find now I’m rarely surprised by anything on the show floor. By mid-spring, I’ve collected samples of all the swag that’s new for the season, and digested all of the latest product offerings. My kids have enough flashing balls and foam thumb rockets to open a daycare.

Xirrus, a maker of high-performance Wi-Fi equipment, broke the cycle of conference ennui spectacularly this year at Interop Las Vegas. They hosted boxing matches in a ring set up in the middle of the show floor. Vegas may be the home of glitz, gambling, and excess, but it’s also an important center for boxing, and the city is full of fighters. Xirrus pulled in two clubs and squared off their fighters in three-round matches, hosting several fights a day for the duration of the show.

The fights were so engaging that I found myself setting the alarm on my phone so I could drift back in time for the next bout. Years ago I was a member of the boxing club at the University of British Columbia, and these sessions really reminded me how much I loved the sport.

Xirrus hosted a great event. It really demonstrated how doing something just a little out of the ordinary can make your company stand out. This was definitely the highlight of Interop for me—and I’ll admit that it was even better than my own session at the Enterprise Cloud Summit!

Melee at the Mandalay 2010

Azure Broke My Booth

“Get outta the way—it’s coming through.”

I love the New York accent. I think it is at its most characteristic when roared by an irritated teamster, struggling with a near-undeliverable load that was late even before the scheduled pick-up time. In this instance, the package is a self-contained Microsoft Azure Compute Center, on its way to its temporary home in the middle of the show floor during April’s Cloud Computing Expo at the Javits Center. Normally this wouldn’t be a problem, but by this point it was the 11th hour of vendor setup, and just about everyone on the show floor was done, leaving very little room for heavy machinery to deliver a package the size of a modest RV.

The coming of Azure.

Small vendors in the tech industry have few options when a juggernaut like Microsoft moves into their space. Maneuverability is always the best defense. The same strategy is recommended when Azure, well, drives down the main hallway of the show floor. Not surprisingly, it left in its wake a volatile combination of consternation, amusement, disorganization—and a healthy determination to still win on the new business front opened up in the cloud.

The wake of Azure.

Everyone says that cloud is disruptive, but this was a little too literal for my taste.

Once delivered, an army of Microsoft staff swarmed over the box and quickly packed it with a dense array of Dell servers connected by a thick tangle of red patch cables. When all was said and done, it was hard not to be impressed with this rapid marshalling of technological firepower.

Azure data center.

Techs who work in the cloud.

Microsoft designed the Azure data center to be modular, self-contained, and very green. The trick the company has employed is to run outside air through the unit for cooling, avoiding expensive conventional air conditioning systems, which can typically account for half of the power consumption in a traditional data center.

The Azure center has three rooms. Air flows through each one in turn, cooling the racks of equipment that separate the second and third rooms. If the ambient air temperature rises too high for this to be effective, conventional HVAC takes up the slack; even so, the overall power consumption is considerably reduced.
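The fallback logic described above amounts to a simple control decision, sketched below. The setpoint and margin are illustrative assumptions, not Microsoft's actual figures.

```python
def cooling_mode(outside_temp_c, setpoint_c=24.0, margin_c=4.0):
    """Choose free-air cooling when outside air is cool enough to hold
    the server inlet temperature at the setpoint; otherwise fall back
    to mechanical HVAC. Thresholds here are made-up examples."""
    if outside_temp_c <= setpoint_c - margin_c:
        return "free-air"  # fans only: outside air flows through the racks
    return "hvac"          # mechanical cooling takes up the slack

print(cooling_mode(12.0))  # free-air
print(cooling_mode(30.0))  # hvac
```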

Air intake zone.

Middle zone, showing server racks.

I’m not sure that it was a wise choice to light the middle zone in blue.

Each data center is weather hardened because Microsoft intends it to be deployed out-of-doors, and ideally in a location offering a naturally cool climate. Each unit is small enough so that it can be easily deployed in farms that integrate vast numbers of commodity servers. This is as close to cloud-in-a-box as you are ever likely to see.

Beatniks in the Cloud

I’ve always been a fan of the Beats. Back when I was young and cool, I played bass guitar in a band called the Subterraneans, inspired of course by Kerouac’s novella of his relationship in decay, set inside the jazz underworld of San Francisco. Just as punk rock was to the music of the 70s, the Beats were a necessary reaction to society and the literature of the time. They had an influence, though sadly their image has been reduced to little more than the media-drawn caricature of the Beatnik.

Beatniks, however, are a great vehicle for satire. I was greatly flattered when David Linthicum sent me a link to this video, which riffs off a blog post I did titled Visualizing the Boundaries of Control in the Cloud.

This video is one of a series that Novell has put together looking at real issues in cloud computing. There’s another great episode that picks up on a post Linthicum wrote considering the weighty topic of fear of multi-tenancy.

Well done, Novell. You have redeemed the Beatniks for me.

Live From New York, It’s… The Cloud Power Panel

Well, not really live, but definitely from New York. Just before the recent Cloud Computing Expo, Sys-Con asked me to join their 2010 Cloud Computing Power Panel, hosted by the multi-talented Jeremy Geelan. The panel consisted of me; Greg O’Connor, CEO of AppZero; Tony Bishop, CEO of Adaptivity; and Marty Gauvin, CEO of Virtual Ark. We did in fact film right above Times Square, using the Reuters studio. The facility was amazing, the crew was top notch, and the resulting video looks great.

You can watch the Cloud Power Panel here. We covered a range of topics, from why enterprises will inevitably end up using the cloud, to how they must think differently to be successful out there. We even found time to consider something called Father-as-a-Service (FaaS).

The Top 5 Mistakes People Make When Moving to the Cloud

Cloud is now mature enough that we can begin to identify anti-patterns associated with using these services. Keith Shaw from Network World and I spoke about worst practices in the cloud last week, and our conversation is now available as a podcast.

Come and learn how to avoid making critical mistakes as you move into the cloud.

The Swimming Pool Model of Public and Private Clouds

This morning, I recorded a podcast with Keith Shaw from Network World. Our discussion was about the 5 mistakes people make when moving out into the cloud. The podcast should be available next week, but in the meantime, I thought I would share a nice analogy that Keith came up with illustrating the difference between public and private clouds.

Clouds are like swimming pools. Private clouds are like a pool in your backyard. Every pool has a fence for reasons of practicality and liability. Since this is your pool, you get to decide who is allowed to go for a dip. Sometimes there is only one person in the pool; sometimes there are ten—but anybody going for a swim is your responsibility. Each day you add chlorine and keep up with the cleaning. But more likely, you hire someone to do this for you.

Public clouds are like public pools. Someone else—probably the city—builds the pool and maintains it. Anyone who can pay the admission is welcome, as long as they agree to follow a few simple rules. There are lifeguards to watch over you and your kids, and you trust that the pool management has checked them out to make sure they are trustworthy and possess the proper credentials. Often the public pool is crowded, and there is this annoying fat kid who keeps doing cannonballs close to where you are swimming, but overall it provides good value. True, once you came home with a strange itch, but the local public pool is certainly cheaper and a lot less work than maintaining your own.

It’s just too bad they don’t serve daiquiris.

XML Acceleration Using Virtual Appliances

I have never met Gordon Moore. But his law? Well, Moore’s Law is my best friend. Throughout my career, it has helped make my work run faster. On more than one occasion, I think it may have saved my job. I like Moore’s Law a lot.

Moore’s Law has had a big impact on XML acceleration. XML processing—specifically schema validation, XSLT transformation, and XPath query—is one of those problems that lends itself well to acceleration using specialized silicon. Tarari (now part of LSI) is the leader in developing specialized chip sets for accelerating basic XML functions. We’ve leveraged their technology for years here at Layer 7 in our hardware appliance line, and we will continue to do so in the future. But like all specialized chip designers, Tarari’s engineers are engaged in a protracted battle with the ever-increasing capacity of general-purpose processors. Moore’s Law pursues them at a relentless pace, driving Tarari toward new breakthroughs that leave general-purpose CPUs far behind each time those CPUs begin to nip at Tarari’s heels.

Silicon, however, is not the only approach to accelerating XML. Tremendous gains in XML processing can also be realized using highly optimized, pure-software algorithms that run on generic CPUs. These, of course, effortlessly ride the wave of Moore’s Law. We use such algorithms at Layer 7 to provide very real XML acceleration in our virtual appliances, which obviously have no access to dedicated acceleration boards.
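As one illustrative example of a software-side XML optimization (not Layer 7's proprietary algorithms), streaming a large document with Python's standard-library iterparse and discarding elements as they complete keeps memory flat in a single pass, instead of building a full in-memory tree. The message shape below is made up.

```python
import io
import xml.etree.ElementTree as ET

def count_orders_over(xml_stream, threshold):
    """Count <order> elements whose total exceeds a threshold, in one
    streaming pass with constant memory."""
    count = 0
    for event, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == "order":
            if float(elem.get("total", "0")) > threshold:
                count += 1
            elem.clear()  # free the subtree we no longer need
    return count

doc = b"<orders>" + b"".join(
    b'<order total="%d"/>' % t for t in (50, 150, 300)
) + b"</orders>"
print(count_orders_over(io.BytesIO(doc), 100))  # 2
```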

The fact that virtual appliances can accelerate XML processing is often missed. I was reminded of this when reading Joe McKendrick’s latest blog entry, The Case for Considering XML Appliances. Joe’s entry builds off a recent piece published by Thomas Rischbeck from IPT concerning SOA intermediaries (I’m particularly fond of the last diagram in Thomas’ article). As with all of Joe’s work, today’s article is very perceptive, but I disagree with one statement he makes:

And, as is the case with appliances these days, they also come in virtual form as well. The only catch is that virtual XML appliances cannot provide XML acceleration.

I see XML acceleration as a continuum: conventional software processing at one end, optimized software algorithms in the middle, and dedicated silicon at the other.

Virtual appliances may not be as fast as hardware appliances for XML acceleration, but they do accelerate processing over conventional approaches. And one of the value propositions of virtual appliances is that they provide a simple means to scale horizontally (in clouds, or in conventional virtualization farms) instead of vertically. Moore’s Law is their friend too.

Layer 7 Technologies is the only SOA Gateway vendor that offers a product line that features both hardware and virtual appliances. You can buy the hardware SecureSpan Gateway appliance that includes silicon for XML acceleration. Or you can buy the virtualized SecureSpan Gateway appliance that includes our highly tuned algorithms for XML acceleration. These products offer identical functionality—so choose the one that makes the most sense in your architecture.

And don’t ever let a form factor dictate your architecture.

All Things Considered About Cloud Computing Risks and Challenges

Last month during the RSA show, I met with Rob Westervelt from ITKnowledgeExchange in the Starbucks across from Moscone Center. Rob recorded our discussion about the challenges of security in the cloud and turned this into a podcast. I’m quite pleased with the results. You can pick up a little Miles Davis in the background, the odd note of an espresso being drawn. Alison thinks that I sound very NPR. Having been raised on CBC Radio, I take this as a great compliment.

Pour yourself a coffee and have a listen.

WS-I Publishes Basic Security Profile (BSP) 1.1

This morning the Web Services Interoperability Organization (WS-I) published the Basic Security Profile (BSP), version 1.1. This is a very significant milestone, because BSP is the definitive reference that we will all use to create secure and interoperable Web services for the foreseeable future.

I have a close personal connection to Basic Security Profile. I was one of the editors of both this specification and its predecessor, BSP 1.0. It took a lot of hard work to get this far, but the results of our working group’s labours are important for the community. We all use Web services because of their potential for interoperability, but interoperability is only practical with the formalization that WS-I offers.

People have questioned the need for BSP in a world that already has OASIS WS-Security and the handful of WS-* standards that relate to it. The truth is, formal standards like WS-Security are so broad, complex, and new that it just isn’t realistic to expect vendor implementations to integrate perfectly. The WS-I approach differs from conventional standards efforts because the focus is strictly on promoting much needed interoperability. WS-I does not define new functionality, nor does it attempt to show people the “correct” way to use existing standards; it exists to profile existing standards by refining messages, amplifying important statements, and clarifying ambiguities so that systems from different vendors can communicate securely.

WS-I promotes interoperability by providing three important components: a profile of an existing standard (or set of standards), a suite of test tools to validate conformance, and finally sample applications. Microsoft’s Paul Cotton, the chair for the BSP working group, likens this to a three-legged stool—the effort can only stand when all three legs are present. This holistic approach taken by the WS-I distinguishes it from most standards efforts by including both specification and reference.

In the case of BSP 1.1, a very important component of the effort was the vendor interop. Six companies participated, and together they form your short list of the vendors who are serious about Web services security. Obviously, we are the odd man out in terms of size and market reach, but we believe it’s important to drive the standards, not just implement them. And this is something we will continue to do. The days of big WS-* standards efforts are over. With the release of BSP, we have what we need to secure SOA. The next big challenge will be standardization of the cloud, and Layer 7 will be there.