Tag Archives: cloud computing

CES 2013 Panel: Privacy and Security in the Cloud

The Consumer Electronics Show (CES) 2013 is starting next week and cloud computing is on the agenda. You can be sure that a technology has moved out of the hype cycle and into everyday use when it shows up at a show like CES, known more for the latest TVs and phones than computing infrastructure. People don’t really need to talk about cloud any more; it’s just there, and we are using it.

Of course there will always be a place for a little more talk, and I’ll be doing some of this myself as part of the CES panel “Privacy and Security in the Cloud”. This discussion takes place Monday Jan 7 11:00am-12:00, in LVCC, North Hall N259. The panel is chaired by my good friend Jeremy Geelan, founder of Cloud Computing Expo, who honed his considerable moderation skills at the BBC.

I’m planning on exploring the intersection between the cloud and our increasingly ubiquitous consumer devices. We will highlight the opportunities created by this technological convergence, but we will also consider the implications this has for our personal privacy.

I hope you can join us.


Platform Comes To Washington

Everyone wants his or her government to be better. We want more services, better services, and we want them delivered more cheaply. Politicians come and go, policies change, new budgets are tabled, but in the end we are left with a haunting and largely unanswerable question: are things better or worse than they were before?

One thing that is encouraging and has the potential to trigger disruptive change to the delivery of government services in the US is the recent publication Digital Government: Building a 21st Century Platform to Better Serve the American People. The word to note here is platform; it seems that government has taken a page from Facebook, Twitter, and the others and embraced the idea that efficient information delivery is not about a carefully rendered Web page, but instead is really a logical consequence of developing an open platform.

I confess to some dread on my first encounter with this report. These publications are usually a disheartening product of weaselly management consultant speak refined through the cloudy lens of a professional bureaucrat (“we will be more agile”). But in this instance, the reverse was true: this report is accessible and surprisingly insightful. The authors understand that mobility+cloud+Web API+decentralized identity is an equation of highly interrelated parts that in summation is the catalyst for the new Internet renaissance. The work is not without its platitudes, but even these it bolsters with a pragmatic road map identifying actions, parties responsible, and (gasp) even deadlines. It’s actually better than most business plans I’ve read.

Consider this paragraph clarifying just what the report means when it calls for an information-centric approach to architecture:

An information-centric approach decouples information from its presentation. It means beginning with the data or content, describing that information clearly, and then exposing it to other computers in a machine-readable format—commonly known as providing web APIs. In describing the information, we need to ensure it has sound taxonomy (making it searchable) and adequate metadata (making it authoritative). Once the structure of the information is sound, various mechanisms can be built to present it to customers (e.g., websites, mobile applications, and internal tools) or raw data can be released directly to developers and entrepreneurs outside the organization. This approach to opening data and content means organizations can consume the same web APIs to conduct their day-to-day business and operations as they do to provide services to their customers.

See what I mean? It’s well done.
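To make the quoted approach concrete, here is a minimal sketch in Python of what decoupling information from its presentation can look like: one data layer with taxonomy and metadata, consumed unchanged by both a website renderer and a raw machine-readable feed. The dataset, agency name, and field names are all invented for illustration.

```python
import json

def get_air_quality_readings():
    """Return content decoupled from any presentation, as a web API would."""
    return {
        "metadata": {                       # makes the data authoritative
            "source": "example-agency",     # hypothetical publisher
            "updated": "2012-06-01",
            "taxonomy": ["environment", "air-quality"],  # makes it searchable
        },
        "data": [
            {"city": "Denver", "aqi": 42},
            {"city": "Seattle", "aqi": 31},
        ],
    }

def render_website(payload):
    """One of many possible presentations built on the same data."""
    return "\n".join(f"{r['city']}: AQI {r['aqi']}" for r in payload["data"])

def raw_api_response(payload):
    """The same information released directly to outside developers."""
    return json.dumps(payload)

payload = get_air_quality_readings()
print(render_website(payload))
print(raw_api_response(payload))
```

The point of the sketch is that the organization and the outside developer consume the identical data layer; only the presentation differs.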

The overall goal is to outline an information delivery strategy that is fundamentally device agnostic. Its authors fully recognize the growing importance of mobility, and concede that mobility means much more than the mobile platforms—iOS and Android, among others—that have commandeered the word today. Tomorrow’s mobility will describe a significant shift in the interaction pattern between producers and consumers of information. Mobility is not frozen at any one technological moment, least of all today’s.

But what really distinguishes this report from just another well-researched paper echoing the zeitgeist of computing’s cool kids is how prescriptive it is in declaring how government will achieve these goals. The demand that agencies adopt Web APIs echoes the directive Jeff Bezos issued within Amazon a decade ago (as relayed in Steve Yegge’s now infamous rant):

1) All teams will henceforth expose their data and functionality through service interfaces.

It was visionary advice then and it is even more valid now. It recognizes that the commercial successes attributed to the Web API approach suggest that just maybe we have finally hit upon a truth in how system integration should occur.

Of course, memos are easy to ignore—unless they demand concrete actions within a limited time. Here, the time frames are aggressive (and that’s a good thing). Within 6 months, the Office of Management and Budget (OMB) must “Issue government-wide open data, content, and web API policy and identify standards and best practices for improved interoperability.” Within 12 months, each government agency must “Ensure all new IT systems follow the open data, content, and web API policy and operationalize agency gov/developer pages” as well as “optimize at least two existing priority customer-facing services for mobile use and publish a plan for improving additional existing services.”

If the recent allegations regarding the origins of the Stuxnet worm are accurate, then the President clearly understands the strategic potential of the modern Internet. I would say this report is a sign his administration also clearly understands the transformational potential of APIs and mobility, when applied to government.

APIs, Cloud and Identity Tour 2012: Three Cities, Two Talks, Two Panels and a Catalyst

On May 15-16 2012, I will be at the Privacy Identity Innovation (pii2012) conference held at the Bell Harbor International Conference Center in Seattle. I will be participating on a panel moderated by Eve Maler from Forrester, titled Privacy, Zero Trust and the API Economy. It will take place at 2:55pm on Tuesday, May 15th:

The Facebook Connect model is real, it’s powerful, and now it’s everywhere. Large volumes of accurate information about individuals can now flow easily through user-authorized API calls. Zero Trust requires initial perfect distrust between disparate networked systems, but are we encouraging users to add back too much trust, too readily? What are the ways this new model can be used for “good” and “evil”, and how can we mitigate the risks?

On Thursday May 17 at 9am Pacific Time, I will be delivering a webinar on API identity technologies, once again with Eve Maler from Forrester. We are going to talk about the idea of zero trust with APIs, an important stance to adopt as we approach what Eve often calls the coming identity singularity–that is, the time when identity technologies and standards will finally line up with real and immediate need in the industry. Here is the abstract for this webinar:

Identity, Access & Privacy in the New Hybrid Enterprise

Making sense of OAuth, OpenID Connect and UMA

In the new hybrid enterprise, organizations need to manage business functions that flow across their domain boundaries in all directions: partners accessing internal applications; employees using mobile devices; internal developers mashing up Cloud services; internal business owners working with third-party app developers. Integration increasingly happens via APIs and native apps, not browsers. Zero Trust is the new starting point for security and access control, and it demands Internet scale and technical simplicity – requirements that the go-to Web services solutions of the past decade, like SAML and WS-Trust, struggle to meet. This webinar from Layer 7 Technologies, featuring special guest Eve Maler of Forrester Research, Inc., will:

  • Discuss emerging trends for access control inside the enterprise
  • Provide a blueprint for understanding adoption considerations
You Will Learn

  • Why access control is evolving to support mobile, Cloud and API-based interactions
  • How the new standards (OAuth, OpenID Connect and UMA) compare to technologies like SAML
  • How to implement OAuth and OpenID Connect, based on case study examples
  • Futures around UMA and enterprise-scale API access

You can sign up for this talk at the Layer 7 Technologies web site.
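Since the webinar centers on zero trust and OAuth for APIs, a minimal sketch may help ground the pattern: the client starts with no implicit trust, obtains a scoped access token via the OAuth 2.0 client-credentials grant (RFC 6749), and presents it as a bearer token (RFC 6750) on every call. The endpoint URLs, credentials, and token value below are hypothetical, and no request is actually sent.

```python
from urllib.parse import urlencode

def build_token_request(client_id, client_secret, scope):
    """Client-credentials grant (RFC 6749, section 4.4): trade app
    credentials for a scoped access token."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return ("POST", "https://auth.example.com/token", body)

def build_api_request(access_token):
    """Bearer token usage (RFC 6750): the token rides in a header
    on every subsequent API call."""
    headers = {"Authorization": f"Bearer {access_token}"}
    return ("GET", "https://api.example.com/accounts", headers)

method, url, body = build_token_request("my-app", "s3cret", "accounts:read")
print(method, url, body)
_, _, headers = build_api_request("2YotnFZFEjr1zCsicMWpAA")
print(headers["Authorization"])
```

The zero-trust stance is visible in the shape of the flow: every interaction carries explicit, revocable, scoped proof of authorization rather than relying on network location or prior trust.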

Next week I’m off to Dublin to participate in the TMForum Management World 2012. I wrote earlier about the defense catalyst Layer 7 is participating in that explores the problem of how to manage clouds in the face of developing physical threats. If you are at the show, you must drop by the Forumville section on the show floor and have a look. The project results are very encouraging.

I’m also doing both a presentation and participating on a panel. The presentation title is API Management: What Defense and Service Providers Need to Know. Here is the abstract:

APIs promise to revolutionize the integration of mobile devices, on-premise computing and the cloud. They are the secret sauce that allows developers to bring any systems together quickly and efficiently. Within a few years, every service provider will need a dedicated API group responsible for management, promotion, and even monetization of this important new channel to market. And in the defense arena, where agile integration is an absolute necessity, APIs cannot be overlooked.

In this talk, you will learn:

  • Why APIs are revolutionizing Internet communications – and making them more secure
  • Why this is an important opportunity for you
  • How you can successfully manage an API program
  • Why developer outreach matters
  • What tools and technologies you must put in place

This talk takes place at the Dublin Conference Centre on Wed May 23 at 11:30am GMT.

Finally, I’m also on a panel organized by my friend Nava Levy from Cvidya. This panel is titled Cloud adoption – resolving the trust vs. uptake paradox: Understanding and addressing customers’ security and data portability concerns to drive uptake.

Here is the panel abstract:

As cloud services continue to grow five times faster than traditional IT, concerns about security and data portability also seem to be on the rise. In this session we will explain the roots of this paradox and the opportunities that arise from resolving these trust issues. By examining the approaches other cloud providers use to address these issues, we will see how service providers, by properly understanding and addressing these concerns, can turn trust into a competitive advantage over the many cloud providers that do not count carrier-grade trust among their core competencies. We will see that by addressing fraud, security, data portability and governance risks head on, not only will the uptake of cloud services rise to include mainstream customers and conservative verticals, but the type of data and processes that migrate to the cloud will become more critical to customers.

The panel is on Thursday, May 24 at 9:50am GMT.

Developers, Developers, Developers – Why API Management Should be Important To You Featuring RedMonk

It’s about developers again.

Everything in technology goes through cycles. If you stick around long enough, you begin to see patterns emerge with an almost predictable regularity. I actually find this comforting; it suggests we’re on a path of refinement of fundamental truths that date back in a continuous line through Alan Kay to Turing and beyond.

The wrong way to react to technology cycles is with the defensive-and-crusty “this is nothing new kid—we did it back in ’99 when you were stuck in the womb.” Thanks for nothing, Grandpa. A better approach is to recognize the importance of new energy and momentum to make great things happen.

The cycle that really excites me now is the new rise of the developer. Trying my best not to be crusty, I sense a palpable excitement and energy out there that really does feel like it did in 1999. After years of outsourcing, after years of commoditization, developers matter again. A lot. It’s like the world has rediscovered the critical importance of this fundamentally creative endeavor.

This is a golden age of technology and possibility, one that is being driven by new blood and newer technology. The catalyst is the achingly perfect collision of cloud, mobility and social discovery with APIs, node.js, Git, NoSQL, HTML5, massive scalability… (I really could go on and on here).

Most of all, I’m excited by movements like Codecademy. This simple idea perfectly reflects the tenor of the time in which we live. People are no longer afraid of making things easy. The priesthood is gone; coding is now confident and mature.

I’ll be talking more about these topics and the important role APIs play in an upcoming webinar I will be delivering with James Governor, co-founder of RedMonk. This is the analyst firm that truly is at the heart of the new developer movement. I hope you can join us Thursday, April 19 at 9am Pacific. This one is going to be good.

The Resilient Cloud for Defense: Maintaining Service in the Face of Developing Threats

Skill at computing comes naturally to those who are adept at abstraction. The best developers can instantly change focus—one moment they are orchestrating high level connections between abstract entities; the next they are sweating through the side effects of each individual line of code. Abstraction in computing not only provides necessary containment, but also offers clear boundaries. There is also something very liberating about that line you don’t need to cross. When I write Java code I’m happy to never think about byte code (unless something is going terribly wrong). And when I did board-level digital design, I could stop at the chip and not think much about individual gates or even transistors. It is undeniably important to understand the entire stack; but nothing would ever get done without sustained focus applied to a narrow segment.

Cloud is the latest in a long line of valuable abstractions that extend the computing stack. It pushes down complex details of systems and their management under a view that promotes self-service and elastic computing. In this way, cloud is as liberating for developers as objects were over assembler.

The physical location of resources is one of the first and most important casualties of such a model. Cloud means you should never have to worry about the day a power failure hits the data center. Of course the truth is that as you move down the stack from cloud to system through transistor to electron, physical location matters a lot. So any cloud is only as good as its ability to accommodate any failure of the real systems that underpin the resource abstraction.

Layer 7 has recently become involved in an interesting project that will showcase how cloud providers (public or private) can manage cloud workloads in the face of threats to their underlying infrastructure. The inspiration for this project is the following display from ESRI, one of the world’s leading GIS vendors:

ESRI developed this display to illustrate wireless outages as a storm rips through central Florida. But suppose now that instead of a wireless base station, each green diamond represents a data center that contributes its hardware resources to a cloud. As the storm moves through the state, it may affect power, communications, and even physical premises. Workloads in the cloud, which ultimately could map to hardware hosted inside at-risk sites, must be shifted transparently to locations at less risk of a catastrophic failure.
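The scenario above can be sketched as a toy scheduling problem: workloads pinned to physical sites get reassigned once a site's assessed risk crosses a threshold. The site names, risk scores, and capacity model below are all invented for illustration; a real implementation would be far richer.

```python
THRESHOLD = 0.5  # hypothetical cut-off above which a site is evacuated

sites = {
    "orlando": {"risk": 0.9, "capacity": 2},  # in the storm's path
    "tampa":   {"risk": 0.7, "capacity": 2},
    "atlanta": {"risk": 0.1, "capacity": 4},  # well clear of the threat
}

workloads = {"billing": "orlando", "portal": "tampa", "batch": "atlanta"}

def migrate(workloads, sites, threshold=THRESHOLD):
    """Move each workload off any site whose risk exceeds the threshold."""
    # safe sites, lowest assessed risk first
    safe = sorted((s for s, v in sites.items() if v["risk"] <= threshold),
                  key=lambda s: sites[s]["risk"])
    placement = {}
    used = {s: 0 for s in sites}
    for name, site in workloads.items():
        if sites[site]["risk"] > threshold:
            # pick the lowest-risk safe site with spare capacity
            site = next(s for s in safe if used[s] < sites[s]["capacity"])
        used[site] += 1
        placement[name] = site
    return placement

print(migrate(workloads, sites))
# billing and portal shift off the at-risk sites; batch stays put
```

Even this toy version shows why the abstraction matters: the workloads themselves never learn that their physical home changed.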

Today, few clouds offer the mass physical dispersion of compute hardware suggested by this display. Amazon Web Services, for instance, divides each region (such as US East, in the Dulles area, or EU, hosted in Ireland) into availability zones, each backed by massive, interconnected data centers. Their cloud is designed to leverage this regional redundancy to provide continuous service in the event of a site failure.

This big data center approach makes perfect sense for a service like Amazon. There will always be a place for the large data center that leverages commodity hardware deployed on a breathtaking scale. But there is an alternative that I think is set to become increasingly important. This is the cloud composed of many smaller compute facilities. We will increasingly see large clouds coalesce out of multiple small independent hardware sites—more SETI@home than supercomputer. This is where our initiative provides real value.

These highly mobile, micro-clouds make particular sense in the defense sector. Here, compute resources can be highly mobile, and face threats more diverse and much less predictable than hurricanes. This is an arena in which the physical shape of the cloud may be in continuous change.

This project is being done as a catalyst within the TM Forum, and we will show it at the TM Forum Management World 2012 show in Dublin this May. Catalysts are projects that showcase new technology for executives in the telecommunications and defense industries. This catalyst is sponsored by Telstra, and brings together a number of important contributors, including:

Keep an eye on my blog for more information. Hope to see you in Dublin.

Security in the Clouds: The IPT Swiss IT Challenge

Probably the best part of my job as CTO of Layer 7 Technologies is having the opportunity to spend time with our customers. They challenge my assumptions, push me for commitments, and take me to task for any issues; but they also flatter the whole Layer 7 team for the many things we do right as a company. And for every good idea I think I have, I probably get two or three great ones out of each and every meeting with the people who use SecureSpan to solve real problems on a daily basis.

All of that is good, but I’ve learned that if you add skiing into the mix, it becomes even better. Layer 7 is fortunate to have an excellent partnership with IPT, a very successful IT services company out of Zug, Switzerland. Each year they hold a customer meeting up in Gstaad, which I think surely gives them an unfair advantage over their competitors in countries less naturally blessed. I finally managed to draw the long straw in our company and was able to join my colleagues from IPT at their annual event earlier this January.

Growing up in Vancouver, with Whistler practically looming in my backyard, I learned to ski early and ski well. Or so I thought, until I had to try to keep up with a crew of Swiss who surely were born with skis on their feet. But being challenged is always good, and I can say the same for what I learned from my Swiss friends about technology and its impact on the local market.

The Swiss IT market is much more diverse than people from outside of it may think. Yes, there are the famous banks; but it is also an interesting microcosm of the greater European market—albeit run with a natural attention to detail and extraordinary efficiency. It’s the different local challenges that shape technology needs and lead to different emphases.

SOA and Web services are very mature and indeed are pushed to their limits, but the API market is still in its very early stages. The informal, wild west character of RESTful services doesn’t seem to resonate in the corridors of power in Zurich. Cloud appears in patches, but it is hampered by very real privacy concerns, and this of course represents a great opportunity. Secure private clouds are made for this place.

I always find Switzerland very compelling and difficult to leave. Perhaps it’s the miniscule drop of Swiss ancestry I can claim. But more likely it’s just that I think that the Swiss have got this life thing all worked out.

Looking forward to going back.

Clouds Down Under

When I was young I was fascinated with the idea that the Coriolis effect—the concept in physics which explains why hurricanes rotate in opposite directions in the southern and northern hemispheres—could similarly be applied to common phenomena like water disappearing down a bathtub drain. On my first trip to Cape Town many years ago I couldn’t wait to try this out, only to realize in my hotel bathroom that I had never actually got around to checking what direction water drains in the northern hemisphere before I left. So much for the considered rigor of science.

It turns out of course that the Coriolis effect, when applied on such a small scale, becomes negligible in the presence of more important factors such as the shape of your toilet bowl. And so, yet another one of popular culture’s most cherished myths is busted, and civilization advances ever so slightly.

Something that definitely does not run in the opposite direction south of the equator turns out to be cloud computing, though to my surprise the conferences down under do take a turn for the better. I’ve just returned from a trip to Australia, where I attended the 2nd Annual Future of Cloud Computing in the Financial Services, held last week in both Melbourne and Sydney. What impressed me is that most of the speakers were far beyond the blah-blah-blah-cloud rhetoric we still seem to hear so much, and focused instead on their real, day-to-day experiences with using cloud in the enterprise. It was as refreshing as a spring day in Sydney.

Greg Booker, CIO of ANZ Wealth, opened the conference with a provocative question. He simply asked who in the audience was in the finance or legal departments. Not a hand came up in the room. Now bear in mind this wasn’t Microsoft BUILD—most of the audience consisted of senior management types drawn from the banking and insurance community. But obviously cloud is still not front of mind for some very critical stakeholders that we need to engage.

Booker went on to illustrate why cross-department engagement is so vital to making the cloud a success in the enterprise. ANZ uses a commercial cloud provider to serve up most of its virtual desktops. Periodically, users would complain that their displays appeared rendered in foreign languages. Upon investigation, they discovered that although the provider had deployed storage in-country, some desktop processing took place on a node in Japan, making this something of a grey area in terms of compliance with export restrictions on customer data. To complicate matters further, the provider would not be able to make any changes until the next maintenance window—an event that happened to be weeks away. IT cannot meet this kind of challenge alone. As Randy Fennel, General Manager of Engineering and Sustainability at Westpac, put it succinctly, “(cloud) is a team sport.”

I was also struck by a number of insightful comments made by the participants concerning security. Rather than being shut down by the challenges, they adopted a very pragmatic approach and got things done. Fennel remarked that Westpac’s two most popular APIs happen to be balance inquiry, followed by their ATM locator service. You would be hard pressed to think of a pair of services with more radically different security demands; this underscores the need for highly configurable API security and governance before such services go into production. He added that security must be a built-in attribute, one that must evolve with a constantly changing threat landscape or be left behind. This thought was echoed by Scott Watters, CIO of Zurich Financial Services, who added that we need to put more thought into moving security into applications. On all of these points I would agree, with the addition that security should sit close to apps but remain loosely coupled in a configurable policy layer, so that over time you can easily address evolving risks and ever-changing business requirements.
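The balance-inquiry versus ATM-locator contrast can be illustrated with a small sketch of policy-driven enforcement, where security lives in configuration rather than in application code. The paths, policy fields, and rules here are hypothetical, not any vendor's actual policy language.

```python
# Declarative per-API policy: a high-sensitivity banking call and a
# deliberately open public utility, governed by the same enforcement code.
POLICIES = {
    "/balance":     {"auth": "oauth2", "transport": "tls", "rate_per_min": 30},
    "/atm-locator": {"auth": "none",   "transport": "any", "rate_per_min": 600},
}

def authorize(path, request):
    """Enforce whichever policy is configured for the path."""
    policy = POLICIES[path]
    if policy["transport"] == "tls" and not request.get("tls"):
        return False, "TLS required"
    if policy["auth"] == "oauth2" and "token" not in request:
        return False, "access token required"
    return True, "ok"

print(authorize("/balance", {"tls": True, "token": "abc"}))  # authenticated call
print(authorize("/balance", {"tls": True}))                  # missing token
print(authorize("/atm-locator", {}))                         # open API
```

Because the rules are data rather than code, tightening the balance API (or loosening the locator) is a configuration change, which is the loose coupling argued for above.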

The entire day was probably best summed up by Fennel, who observed that “you can’t outsource responsibility and accountability.” Truer words have not been said in any conference, north or south.

Clouds On A Plane: VMware’s Micro Cloud Foundry Brings PaaS To My Laptop

On the eve of this week’s VMworld conference in Las Vegas, VMware announced that Micro Cloud Foundry is finally available for general distribution. This new offering is a completely self-contained instantiation of the company’s Cloud Foundry PaaS solution, which I wrote about earlier this spring. Micro Cloud Foundry comes packaged as a virtual machine, easily distributable on a USB key (as they proved at today’s session on this topic at VMworld), or as a quick download. The distribution is designed to run locally on your laptop without any external dependencies. This allows developers to code and test Cloud Foundry apps offline, and deploy these to the cloud with little more than some simple scripting. This may be the killer app PaaS needs to be taken seriously by the development community.

The reason Micro Cloud Foundry appeals to me is that it fits well with my own coding style (at least for the small amount of development I still find time to do). My work seems to divide into two different buckets consisting of those things I do locally, and the things I do in the cloud. More often than not, things find themselves in one bucket or the other because of how well the tooling supports my work style for the task at hand.

As a case in point, I always build presentations locally using PowerPoint. If you’ve ever seen one of my presentations, you hopefully remember a lot of pictures and illustrations, and not a lot of bullet points. I’m something of a frustrated graphic designer. I lack any formal training, but I suppose that I share some of the work style of a real designer—notably intense focus, iterative development, and lots of experimentation.

Developing a highly graphic presentation is the kind of work that relies as much on tool capability as it does on user expertise. But most of all, it demands a highly responsive experience. Nothing kills my design cycle like latency. I have never seen a cloud-based tool for presentations that meets all of my needs, so for the foreseeable future, local PowerPoint will remain my illustration tool of choice.

I find that software development is a little like presentation design. It responds well to intense focus and enjoys a very iterative style. And like graphic design, coding is a discipline that demands instantaneous feedback. Sometimes I write applications in a simple text editor, but when I can, I much prefer the power of a full IDE. Sometimes I think that IntelliJ IDEA is the smartest guy in the room. So for many of the same reasons I prefer local PowerPoint for presentations, so too I prefer a local IDE with few if any external dependencies for software development.

What I’ve discovered is that I don’t want to develop in the cloud; but I do want to use cloud services and probably deploy my application into the cloud. I want a local cloud I can work on offline without any external dependency. (In truth, I really do code on airplanes—indeed some of my best work takes place at 35,000 feet.) Once I’m ready to deploy, I want to migrate my app into the cloud without modifying the underlying code.

Until recently, this was hard to do. But it sounds like Micro Cloud Foundry is just what I have been looking for. More on this topic once I’ve had a chance to dig deeply into it.

Certificate Program in Cloud Computing

This fall, the Professional and Continuing Education division at the University of Washington is introducing a new certificate program in cloud computing. It consists of three consecutive courses taken on Monday nights throughout the fall, winter and spring terms. In keeping with the cloud theme, you can attend either in person at the UW campus, or online. The cost is US $2577 for the program.

The organizers invited me on to a call this morning to learn about this new program. The curriculum looks good, covering everything from cloud fundamentals to big data. The instructors are taking a very project-based approach to teaching, which I always find is the best way to learn any technology.

It is encouraging to see continuing ed departments address the cloud space. Clearly they’ve noted a demand for more structured education in cloud technology. No doubt we will see many programs similar to this one appear in the future.

Can’t See The SOA For The Clouds

It has been quite a week for SOA. First, TheServerSide published a presentation delivered at their recent Java conference by Rod Johnson from VMware in which he essentially accused SOA of being a fad. Normally this is the kind of comment people would overlook; however, Rod, who is SVP, Middleware and GM of the SpringSource division at VMware, is very well regarded in the Java community, so his comments certainly carry weight.

First to cry foul was Dave Linthicum, writing in InfoWorld, who made the important point that “SOA is something you do” whereas “cloud computing is a computing model.” Joe McKendrick at ZDNet quickly followed up, adding that “too much work has gone into SOA over too many years at companies to relegate it to ‘artificial fad’ status.”

To be fair to Rod, his actual statement is as follows:

If you look at the industry over the past few years, the way in which cloud computing is spoken of today is the way in which SOA was spoken of four or five years ago. I think with respect to SOA, it really was a fad. It was something that is very sound at an architectural practice level, but in terms of selling product, it was really an artificial, marketing created, concept.

And in many ways, it is hard to disagree with him. As a SOA vendor, I’m as guilty as anyone of… um… perhaps being overly enthusiastic in my support of SOA. So it’s perhaps not surprising that it would all lead to an eventual backlash. Anne Thomas Manes was certainly the most effective at calling us all out a few years ago.

Putting hype cycles behind us though, it would be a shame to miss the real impact that SOA has had in the enterprise. I would argue that SOA is in fact a great success, because while the term may have gone out of fashion, we have absorbed the ideas it described. I don’t need to write about the vision of SOA anymore; my customers seem to know it and practice the concepts without calling it such. And I don’t seem to need to evangelize my own SOA products the way I once did, simply because people accept SOA Gateways as the architectural best practice for run time governance.

This seems to be supported by an article Forrester analyst Randy Heffner published in CIO later in the week. In it, he describes the results of a survey they conducted earlier in the year. Randy writes:

organizations that use SOA for strategic business transformation must be on to something because they are much more satisfied with SOA than those that do not use SOA for strategic business transformation.

Randy’s report examines the use of SOA—apparently in its full three letter glory—as a tool to transform business. It’s a good article because he manages to distill so much of the theory and hand waving that turned people off into some concrete, prescriptive actions that just make sense.

He closes with this insight:

SOA policy management is an advanced area of architecture design, and policy-based control of services is the business-focused SOA practice that takes the greatest amount of SOA experience and expertise. Forrester first published its vision for SOA policy management three years ago, knowing it would take a while to mature in the industry, and indications are that interest in SOA policy increased significantly this year over previous years.

Given my own experience over the last year, I would agree entirely, and suggest that we are there.