James Urquhart, from Cisco, published a review of the US Government’s recently announced cloud initiative. I had the pleasure of sharing a panel with James recently at GigaOM Structure, and his CNET column should be on your must-read list if clouds are of interest to you.
In this article, James makes an interesting point: the government is really following an “adopt at your own pace” mentality with respect to cloud. Obviously this isn’t about moving IT completely into the cloud; let’s face it, governments, of all organizations, hold data that will always be inappropriate for public cloud deployment. But it does demonstrate that a perfectly reasonable strategy is to create the opportunity to move select applications into the cloud (such as blogs, as the article mentions) and to provide a mechanism so that these can coexist with existing internal IT. This is the so-called hybrid approach (particularly if a private cloud is part of the “internal” deployment).
But hybrid clouds face a big problem. To be useful, there must be secure communications between internal applications and new services deployed in the public (or semi-public) cloud. Amazon recently announced its Virtual Private Cloud initiative to address this issue. I was encouraged by their efforts; clearly, Amazon is taking the hybrid model very seriously, and no doubt they’ve had a lot of customers asking them to solve this problem. However, I do question the strategy of deploying a VPN tunnel between internal IT and a public cloud. Despite Amazon’s efforts to secure the operating environment of the public cloud and make it more private, the VPN solution remains a risky proposition.
The trouble with VPNs is that they are indiscriminate about the traffic they carry. The VPN trust model assumes that both ends are equally secure. A VPN makes sense when you integrate a branch office into your central corporate network, because the latter is subject to the same corporate security standards and policies. It becomes dangerous when you have less control over the remote site’s security model, as is the case in the cloud. Any imbalance in security implementation is an opportunity for attack: if a single application on the cloud side is compromised, a system cracker can leverage the VPN tunnel to gain full access to the internal network. (The same problem exists with conventional VPNs and laptops, and believe me, it keeps security people up at night.)
A better solution is to constrain communications on a service-by-service basis, managed under policy control. That way, a compromised system offers only limited opportunity to launch a further attack. This approach creates zones of trust between services that are much more finely grained and deliberately constrained. The Layer 7 version of the secure hybrid model looks like this:
Here, virtual and physical SecureSpan appliances coordinate communications between internal applications and services residing in the cloud. All transactions are managed under policy control: they are rigorously monitored, scrubbed for threats, and constrained to the appropriate parties. Architectures like this allow organizations of any size to move into the cloud at their own pace. It’s a model we’ve been advocating for some time; SecureSpan is already the security foundation of what is arguably the largest private cloud in the world, an existing government initiative that predates this latest announcement.
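To make the zones-of-trust idea concrete, here is a minimal sketch of default-deny, service-by-service policy enforcement. The service names and operations are hypothetical, and this illustrates only the model, not the SecureSpan policy language itself:

```python
# Illustrative sketch only: policy constrains each service interaction,
# in contrast to a VPN, which admits all traffic once the tunnel is up.
# Service names and operations below are hypothetical.

# Each entry grants one calling service specific operations on one
# target service -- a narrowly scoped zone of trust (default deny).
TRUST_ZONES = {
    ("blog-frontend", "comments-service"): {"POST /comments", "GET /comments"},
    ("blog-frontend", "auth-service"): {"POST /token"},
    # Nothing here grants the cloud-hosted blog access to, say, an
    # internal HR system, so a compromise of the blog stays contained.
}

def is_permitted(caller, target, operation):
    """Allow a transaction only if policy explicitly grants it."""
    return operation in TRUST_ZONES.get((caller, target), set())

assert is_permitted("blog-frontend", "comments-service", "POST /comments")
assert not is_permitted("blog-frontend", "hr-database", "GET /salaries")
```

The point is that the unit of trust becomes a single service interaction rather than an entire network segment; a compromised application can reach only what policy explicitly grants it.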
Hello,
Nice, but when I deploy the virtual SecureSpan appliance in the Amazon cloud (with the recently announced AMI), what is the pricing model? I was not able to find this information on the website.
Thanks
Hi Jacques:
Great question; licensing remains one of the great open issues in the world of cloud computing. At present, we’ve adopted a model where you buy our standard license (or apply to us for a 30-day eval). Then you run the AMI and pay the standard Amazon rates. We’re looking hard at models using Amazon’s DevPay, which I really believe is the future of licensing for infrastructure like this.
Regards
Scott
Scott,
Thanks for your answer.
I can envision several billing models:
– your current proposal, buy a license: not very exciting 🙂
– pay per month: this is what Zeus proposes for their ZXTM; as soon as you load-balance one request in the month, you pay for that month. Better, but still not so good.
– pay per hour: the obvious offer, first because Amazon decided for us that the hour is the right granularity, but also because it makes some sense economically; you pay per hour for your VM and per hour for your RDBMS (well, as soon as Oracle and DB2 move …), so everybody must bill per hour.
So we want to pay for SecureSpan “per hour”, especially if L7Tech tries to push the “one SecureSpan in front of each app” model, where the current license scheme would be unrealistic. I launch my app, and I start to pay for my security proxy. I stop it, and I stop paying once the hour has passed.
For a “one SecureSpan per VPC” approach, usage is more static, and pay-per-month is possibly acceptable.
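A quick back-of-the-envelope comparison (all rates hypothetical, just to illustrate the shape of the problem) shows why the granularity matters:

```python
# Hypothetical rates, purely for illustration -- not actual L7Tech,
# Zeus, or Amazon pricing.
HOURLY_RATE = 0.50     # $ per proxy instance-hour
MONTHLY_RATE = 300.00  # $ flat per month

def monthly_cost(hours_used):
    """Cost of one security proxy for a month under each billing model."""
    return (hours_used * HOURLY_RATE, MONTHLY_RATE)

# An app run only during business hours (8 h/day x 22 days = 176 h):
print(monthly_cost(176))  # (88.0, 300.0): pay-per-hour wins
# A proxy fronting a VPC around the clock (~720 h):
print(monthly_cost(720))  # (360.0, 300.0): pay-per-month wins
```

With per-hour billing, the proxy’s cost tracks the application’s lifecycle exactly, which is what the “one SecureSpan in front of each app” pattern needs.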
What’s your view?
Speaking personally, I do think that pay-per-hour is the future for all cloud software. It aligns with the emerging models for basic services in the cloud, and it fits the expectations of people in computing. We’ve become accustomed to instant gratification in development: today, if we need a desktop app, we download it and use it under a trial. If it proves its value, we buy it so we can use it again. Even the simplest barriers, like complicated asynchronous licensing models, are often enough to send us elsewhere and crush the whim. We need to make core infrastructure just as easy to try out, and nothing is easier than spinning up a pre-configured cloud instance, paying a little extra to use it, and letting it prove its value. Good software generally will, assuming you have a real recurring problem to solve.
Besides, if I can monetize even the tire kickers, that’s a good thing for me.
Good article… $0.12 per CPU cannot be the way clouds are sold to business users, nor can $0.15 per GB transferred and stored, as this makes it hard to compute the cost, and the related value, to the organization.
Monetized resources are a great model for big companies for test and dev environments, outsourcing peak load from certain types of processes, and so on, but not for Joe Six-Pack. The application architectures are not mature enough to turn over to the rank and file at this time.
The business model needs to evolve with the technology, and the technology needs to slow down until the applications can catch up!