Category Archives: Layer 7 Technologies

Public vs. Private Clouds

Christian Perry has an article in Processor Magazine to which I contributed some quotes. The article covers the ongoing debate over the merits of public and private clouds in the enterprise.

One of the assertions that VMware made at last week’s VMworld conference is that secure hybrid clouds are the future for enterprise IT. This is a sentiment I agree with. But I also see the private part of the hybrid cloud as an excellent stepping stone to public clouds. Most future enterprise cloud apps will reside in the hybrid cloud; however, there will always be some applications, such as bursty web apps, that can benefit tremendously from the basic economics of public clouds.

The Top 50 Cloud Bloggers

I’m happy to learn that I’ve made Cloud Computing Journal’s list of the Top 50 Bloggers in Cloud Computing.

What, Me Worry?

According to Yahoo News, Infrastructure Services Users Worry Less About Security. The article references a Yankee Group study, which found that although security remains a top barrier slowing the adoption of cloud services in the enterprise, most companies that have adopted Infrastructure-as-a-Service (IaaS) worry less about security once they begin using the technology.

Once they’ve made the leap into the cloud, the article suggests, users conclude that the security issues aren’t as significant as they had been led to believe. I find myself in partial agreement with this; the industry has created a level of hysteria around cloud security that isn’t necessarily productive. Taking pot shots at the security model in the cloud is pretty easy, and so many do—regardless of whether their aim is true (and for many, their aim is not).

Nevertheless, my interpretation of these results is that they are uncovering less a phenomenon of confidence genuinely earned and more a case of misplaced trust. The article makes an interesting observation about the source of this trust:

Twenty-nine percent of the companies in the survey viewed system integrators as their most trusted suppliers of cloud computing. But among early adopters of IaaS, 33 percent said they turn to telecom companies first.

Do you remember, back around the turn of the century, when large-scale PKI was first emerging? The prevailing wisdom was that state-sponsored PKI should be administered by national post offices, because these organizations above all were perceived as trustworthy (as well as being centralized and a national responsibility). Hong Kong, for instance, adopted this model. But in general, the postal-run PKI model didn’t take hold, and today few national postal services are in the business of administering identity. Trust doesn’t transfer well, and trust with letters and packages doesn’t easily extend to trust with identity.

Investing generalized trust in the telcos reminds me of the early PKI experience. The market is immature, and because of this so too are our impressions. Truthfully, I think the telcos will be good cloud providers—not because I have an inherent trust in them (I actively dislike my cell provider on most days), but because the telcos I’ve spoken to that have engaged in cloud initiatives are actually executing extremely well. Nevertheless, I don’t think I should trust them to secure my applications. This is ultimately my responsibility as a cloud customer, and because I can’t reasonably trust any provider entirely, I must assume a highly defensive stance in how I secure my cloud-resident applications.

I hope my provider is good at security, but I need to assume otherwise and prepare myself accordingly.

My New Book, Cloud Computing: Principles, Systems and Applications, is Now Available

I’m happy to announce that I have a paper in a new cloud computing textbook from Springer. The book is called Cloud Computing: Principles, Systems and Applications, and my contribution is titled Technologies for Enforcement and Distribution of Policy in Cloud Architectures. If you click on the link, you should be able to preview the abstract and the first few pages online.

The editors of the book are Dr. Nick Antonopoulos, Professor and Head of the School of Computing at the University of Derby, UK, and Dr. Lee Gillam, Lecturer in the Department of Computing at the University of Surrey, UK. I served on the review committee for the text, and Drs. Antonopoulos and Gillam have pulled together an excellent compilation of work. Although this book is intended as an academic work of primary interest to researchers and students, the content is nevertheless very timely and relevant for IT professionals such as architects and CTOs.

Lately much of my writing has been for a commercial audience, so it was nice to return to a more academic style for this chapter. I’ve carefully avoided book commitments for the last few years, but the opportunity to publish in a Springer book, a publisher I’ve always considered synonymous with serious scientific media, was just too good to pass up. Every book project proves to be more work than the author first imagines, and this was no exception (something to which my family will attest). But I’m very happy with the results, and I hope that this text proves its value to the community.

Why Health Care Needs SOA

A recent article in DarkReading offers a powerful argument as to why the health care sector desperately needs to consider Service Oriented Architecture (SOA). In her piece Healthcare Suffers More Data Breaches Than Financial Services So Far This Year, Erika Chickowski cites a report indicating that security breaches in health care appear to be on the rise this year, with that sector reporting over three times more security incidents than the financial services industry.

I worked for many years in a busy research hospital, and frankly this statistic doesn’t surprise me; health care has all of the elements that lead to the perfect storm of IT risk. If there is one sector that could surely benefit from adopting SOA as a pretext to re-evaluate security as a whole, it is health care.

Hospitals, and the health care ecosystem that surrounds them, are burdened with some of the most heavily siloed IT I have ever seen. There are a number of reasons why this is so, not the least of which is politics that often appear inspired by the House of Borgia. But the greatest contributing factor is the proliferation of single-purpose, closed, and proprietary systems. Even the simplest portable x-ray machine has a tremendously sophisticated computer system inside of it. The Positron Emission Tomography (PET) systems that I worked on included racks of what at the time were state-of-the-art vector processors, used to reconstruct massive raw data sets into understandable images. Most hospitals have been collecting systems like this for years, and are left with a curiosity cabinet spanning different brands and nearly every technological fad since the 1970s.

I’m actually sympathetic to the vendors here, because their products have to serve two competing interests. The vendors need to package the entire system into a cohesive whole with minimal ins and outs, to ensure it can reasonably pass the rigorous compliance process required for new equipment. The more open a system is, the harder it is to control the potential variables, which is a truism in the security industry as well. Even something as simple as OS patching needs to go through extensive validation because the stakes are so high. The best way to manage this is to close up the system as much as reasonably possible.

In the early days of medical electronics, diagnostic systems were very much standalone, and this strategy was perfectly sound. Today, however, there is a need to share and consolidate data to potentially improve diagnosis. This means opening systems up, at least to allow access to the data, which, when approached from the perspective of traditional standalone systems, usually means a pretty rudimentary export. While medical informatics has benefited somewhat from standardization efforts, medical systems still generally reduce to islands of data connected by awkward bridges—and it is out of this reality that security issues arise.

Chickowski’s article echoes this, stating:

To prevent these kinds of glaring oversights, organizations need to find a better way to track data as it flows between the database and other systems.

SOA makes sense in health care because it allows for effective compartmentalizing of services—be these MRI scanners, lab results, or admission records—that are governed in a manner consistent with an overall security architecture. Good SOA puts security and governance upfront. It provides a consistent framework that avoids the patchwork solutions that too easily mask significant security holes.

A number of forward-looking health providers have adopted a SOA strategy with very positive results. Layer 7 Technologies partnered with the University of Chicago Medical Center (UCMC) to build a SOA for securely sharing clinical patient data with its research community. One of the great challenges in any medical research is to gather sample populations that are statistically significant. Hospitals collect enormous volumes of clinical data each day, but often these data cannot be shared with research groups because of issues in compliance, control of collection, patient consent, etc. UCMC uses Layer 7’s SecureSpan Gateways as part of its secure SOA fabric to isolate patient data into zones of trust. SecureSpan enforces a boundary between clinical and research zones. In situations where protocols allow clinical data to be shared with researchers, SecureSpan authorizes its use. SecureSpan even scrubs personally identifiable information from clinical data—effectively anonymizing the data set—so that it can be ethically incorporated into research protocols.
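To make the idea concrete, here is a minimal sketch, in Python, of the kind of scrubbing a gateway can enforce at the boundary between zones. This is not how SecureSpan itself implements anonymization (that logic lives in gateway policy, not application code), and the field names and key handling are hypothetical; it simply illustrates dropping direct identifiers and pseudonymizing the patient ID so records remain linkable within a protocol without revealing identity.

```python
import hashlib
import hmac

# Hypothetical set of direct identifiers to remove outright.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

# Secret used to pseudonymize the patient ID so records can still be linked
# within a research protocol without revealing the real medical record number.
PSEUDONYM_KEY = b"research-zone-secret"  # placeholder; a real gateway would manage this securely


def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def scrub_clinical_record(record: dict) -> dict:
    """Return a copy of the record that is safe to pass into the research zone."""
    scrubbed = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        if field == "patient_id":
            scrubbed[field] = pseudonymize(value)  # keep linkability, hide identity
        else:
            scrubbed[field] = value  # clinical observations pass through
    return scrubbed


if __name__ == "__main__":
    clinical = {
        "patient_id": "MRN-004217",
        "name": "Jane Doe",
        "phone": "555-0100",
        "diagnosis": "C71.9",
        "lab_glucose_mg_dl": 92,
    }
    print(scrub_clinical_record(clinical))
```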

The UCMC use case is a great example of how SOA can be a protector of information, promoting the valuable use of data while ensuring that only the right people have access to the right views of that information.

To learn more about this use case, take a look at the detailed description available on the Layer 7 Technologies web site.

Timing Side Channel Attacks

I had an interesting discussion with Bob McMillan of IDG yesterday about the potential for timing attacks in the cloud. Timing attacks are a kind of side channel attack based on the observed behavior of a cryptographic system when it is fed certain inputs. Given enough determinism in the response time of the system, it may be possible to crack the cryptosystem based on a statistical sampling of its response times taken over many transactions.
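As a toy illustration of the principle (it models no real cryptosystem), consider a server that checks a secret token with a naive byte-by-byte comparison that bails out at the first mismatch. Sampled often enough, the rejection time alone tells an attacker how many leading bytes of a guess are correct:

```python
import secrets
import statistics
import time

SECRET = b"k3y-9f2a"  # hypothetical secret token


def naive_equal(a: bytes, b: bytes) -> bool:
    """Leaky comparison: returns at the first mismatching byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True


def timed_check(guess: bytes, samples: int = 2000) -> float:
    """Median time (ns) spent rejecting a guess, over many samples."""
    times = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        naive_equal(SECRET, guess)
        times.append(time.perf_counter_ns() - t0)
    return statistics.median(times)


# A guess that matches the first byte takes measurably longer to reject
# than one that fails immediately; iterating byte by byte recovers the key.
print(timed_check(b"kXXXXXXX"), "vs", timed_check(b"XXXXXXXX"))

# The textbook defense: a constant-time comparison whose running time
# does not depend on the data being compared.
print(secrets.compare_digest(SECRET, b"kXXXXXXX"))
```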

Bob was interested in my thoughts about the threat this attack vector represents to cloud-resident applications. It’s an interesting question, because I think that the very characteristics of the cloud that people so often criticize when discussing security—that is, multi-tenancy and the obfuscation of actual physical resources by providers—actually work to mitigate this attack because they add so much non-deterministic jitter to the system.

Bob’s excellent article got picked up by a number of sources, including ComputerWorld, LinuxSecurity, and InfoWorld. It’s also been picked up by the mainstream media, including both the San Francisco Chronicle and the New York Times.

How To Get Rich Quick With Cloud Computing

You know that a technology has hit the mainstream when it appears in PCWorld. Such is the case for cloud computing, a topic PCWorld considers in its recent piece Amazon Web Services Sees Infrastructure as Commodity. Despite the rather banal title, this article makes some interesting points about the nature of commoditization and the effect this will have on the pricing of services in the cloud. It’s a good article, but I would argue that it misses an important point about the evolution of cloud services.

Of the three common models of cloud (SaaS, PaaS, and IaaS), it’s the last, Infrastructure-as-a-Service, that most captivates me. I can’t help but begin planning my next great start-up built using the virtualization infrastructure of EC2, Terremark, or Rackspace. But despite a deep personal interest in the technology underpinning the service, I think what really captures my imagination with IaaS is that it removes a long-standing barrier to application deployment. If my killer app is done, but my production network and servers aren’t ready, I simply can’t deploy. All of the momentum that I’ve built up during the powerful acceleration phase of a startup’s early days is sucked away—kind of like a sports car driving into a lake. For too many years, deployment has been that icy cold water that saps the energy out of agile just when the application is nearing GA.

IaaS drains the lake. It makes production-ready servers instantly available so that teams can deploy faster, and more often. This is the real stuff of revolution. It’s not the cool technology; it’s the radical change to the software development life cycle that makes cloud seem to click.

The irony, however, is that Infrastructure-as-a-Service is itself becoming much easier to deploy. It turns out that building data centers is something people are pretty good at. Bear in mind, though, that a data center is not a cloud—it takes some very sophisticated software layered over commodity hardware to deploy and manage virtualization effectively. But this management layer is rapidly becoming as simple to deploy as the hardware underlying it. Efforts such as Eucalyptus, along with commercial offerings from vendors like VMware, Citrix, and 3Tera (now CA), are removing the barriers that until recently stood in the way of a general-purpose co-location facility becoming an IaaS provider.

Lowering this barrier to entry will have a profound effect on the IaaS market. Business is compelled to change whenever products, processes, or services edge toward commodity. IaaS, itself a textbook example of a product made possible by the process of commoditization, is set to become simply another commodity service, operating in a world of downward price pressure, ruthless competition, and razor-thin margins. Amazon may have had this space to itself for several years, but the simple virtualization-by-the-hour marketplace is set to change forever.

The PCWorld article misses this point. It maintains that Amazon will always dominate the IaaS marketplace by virtue of its early entry and the application of scale. On this last point I disagree, and suggest that the real story of IaaS is one of competition and a future of continued change.

In tomorrow’s cloud, the money will be made by those offering higher-value services, which is a story as old as commerce itself. Amazon, to its credit, has always recognized that this is the case. The company is a market leader not just because of its IaaS EC2 service, but because the scope of its offering includes such higher-level services as databases (SimpleDB, RDS), queuing (SQS), and Content Delivery Networks (CloudFront). These services, like basic virtualization, are the building blocks of highly scalable cloud applications. They also drive communications, which is the other axis where scale matters and money can be made.

The key to the economic equation is to possess a deep understanding of just who-is-doing-what-and-when. Armed with this knowledge, a provider can bill. Some services—virtualization being an excellent example—lend themselves to simple instrumentation and measurement. However, many other services are more difficult to sample effectively. The best approach here is to bill by the transaction, and measure this by acquiring a deep understanding of all of the traffic going in or out of the service in real time. By decoupling measurement from the service, you gain the benefit of avoiding difficult and repetitive instrumentation of individual services and can increase agility through reuse.
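As a sketch of what that decoupling might look like (hypothetical names, Python purely for illustration), a metering layer can wrap any service and record one billing event per transaction, keyed by the caller’s API key, without instrumenting the service itself:

```python
import time
from collections import Counter
from wsgiref.simple_server import make_server


class MeteringMiddleware:
    """Wraps any WSGI app and records one billing event per request,
    keyed by the caller's API key, without touching the service code."""

    def __init__(self, app):
        self.app = app
        self.usage = Counter()  # (api_key, path) -> call count
        self.events = []        # raw events handed off to downstream billing

    def __call__(self, environ, start_response):
        api_key = environ.get("HTTP_X_API_KEY", "anonymous")
        path = environ.get("PATH_INFO", "/")
        self.usage[(api_key, path)] += 1
        self.events.append({"key": api_key, "path": path, "ts": time.time()})
        return self.app(environ, start_response)


def service(environ, start_response):
    """Stand-in for the actual cloud service being metered."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]


app = MeteringMiddleware(service)

if __name__ == "__main__":
    with make_server("", 8000, app) as httpd:
        httpd.serve_forever()  # bill asynchronously from app.usage / app.events
```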

Measuring transactions in real time demands a lot from the API management layer. In addition to needing to scale to many thousands of transactions per second, this layer needs to provide sufficient flexibility to accommodate the tremendous diversity in APIs. One of the things we’ve noticed here at Layer 7 with our cloud provider customers is that most of the APIs they are publishing use fairly simple REST-style interfaces with equally basic security requirements. I can’t help but feel a nagging sense of déjà vu. Cloud APIs today are still in the classic green field stage of any young technology. But we all know that things never stay so simple for long. Green fields always grow toward integration, and that’s when the field becomes muddy indeed. Once APIs trend toward complexity—when it’s not just usernames and passwords you have to worry about, but also SAML, OAuth variations, Kerberos, and certificates—that’s when the API management layer can either work for you or against you, and this is one area where experience counts for a lot. Rolling out new and innovative value-added services is set to become the basis of every cloud provider’s competitive edge, so agility, breadth of coverage, and maturity are essential requirements in the management layer.
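To make the point about API diversity concrete, here is a hypothetical sketch (not Layer 7’s implementation) of the kind of dispatch an API management layer ends up doing once callers arrive with more than usernames and passwords; each validator below is just a placeholder for real SAML, OAuth, or Kerberos handling:

```python
import base64


def check_basic(credentials: str) -> bool:
    """Hypothetical username/password check."""
    user, _, password = base64.b64decode(credentials).decode().partition(":")
    return user == "demo" and password == "demo"  # placeholder validation


def check_bearer(token: str) -> bool:
    """Hypothetical OAuth 2.0 bearer-token validation (introspection, JWT, ...)."""
    return token.startswith("valid-")  # placeholder validation


def check_saml(assertion: str) -> bool:
    """Hypothetical SAML assertion validation (signature, conditions, ...)."""
    return bool(assertion)  # placeholder validation


# The gateway maps each scheme to a validator; adding Kerberos or certificate
# authentication means registering another entry, not changing the services.
VALIDATORS = {
    "basic": check_basic,
    "bearer": check_bearer,
    "saml": check_saml,
}


def authenticate(authorization_header: str) -> bool:
    scheme, _, credentials = authorization_header.partition(" ")
    validator = VALIDATORS.get(scheme.lower())
    return validator(credentials) if validator else False


if __name__ == "__main__":
    print(authenticate("Basic " + base64.b64encode(b"demo:demo").decode()))  # True
    print(authenticate("Bearer valid-abc123"))                               # True
    print(authenticate("Negotiate blob"))                                    # False: no Kerberos handler registered
```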

So to answer my original question, if you are a cloud provider, you will make money on the higher-level services. But where does that leave the rest of us? There is certainly money to be made around these services themselves, by building them, or providing the means to manage them. The cloud software vendors will make money by providing the crucial access control, monitoring, billing, and management tools that cloud providers need to run their business. That happens to be my business, and this infrastructure is exactly what Layer 7 Technologies is offering with its new CloudControl product, a part of the CloudSpan family of products. CloudControl is a provider-scale solution for managing the APIs and services that are destined to become the significant revenue stream for cloud providers—regardless of whether they are building public or private clouds.

You can learn more about CloudControl on Layer 7’s web site.

Beatniks in the Cloud

I’ve always been a fan of the Beats. Back when I was young and cool, I played bass guitar in a band called the Subterraneans, inspired of course by Kerouac’s novella of his relationship in decay, set inside the jazz underworld of San Francisco. Just as punk rock was to the music of the 70s, the Beats were a necessary reaction to society and the literature of the time. They had an influence, though sadly their image has been reduced to little more than the media-drawn caricature of the Beatnik.

Beatniks, however, are a great vehicle for satire. I was greatly flattered when David Linthicum sent me a link to this video, which riffs off a blog post I did titled Visualizing the Boundaries of Control in the Cloud.

This video is one of a series that Novell has put together looking at real issues in cloud computing. There’s another great episode that picks up on a post Linthicum wrote considering the weighty topic of fear of multi-tenancy.

Well done, Novell. You have redeemed the Beatniks for me.

Live From New York, It’s… The Cloud Power Panel

Well, not really live, but definitely from New York. Just before the recent Cloud Computing Expo, Sys-Con asked me to join their 2010 Cloud Computing Power Panel, hosted by the multi-talented Jeremy Geelan. The panel consisted of me; Greg O’Connor, CEO of AppZero; Tony Bishop, CEO of Adaptivity; and Marty Gauvin, CEO of Virtual Ark. We did in fact film right above Times Square, using the Reuters studio. The facility was amazing, the crew was top notch, and the resulting video looks great.

You can watch the Cloud Power Panel here. We covered a range of topics, from why enterprises will inevitably end up using the cloud, to how they must think differently to be successful out there. We even found time to consider something called Father-as-a-Service (FaaS).

The Top 5 Mistakes People Make When Moving to the Cloud

Cloud is now mature enough that we can begin to identify anti-patterns associated with using these services. Keith Shaw from Network World and I spoke about worst practices in the cloud last week, and our conversation is now available as a podcast.

Come and learn how to avoid making critical mistakes as you move into the cloud.