Tag Archives: APIs

Developers, Developers, Developers – Why API Management Should Be Important To You, Featuring RedMonk

It’s about developers again.

Everything in technology goes through cycles. If you stick around long enough, you begin to see patterns emerge with an almost predictable regularity. I actually find this comforting; it suggests we’re on a path of refinement of fundamental truths that date back in a continuous line through Alan Kay to Turing and beyond.

The wrong way to react to technology cycles is with the defensive-and-crusty “this is nothing new kid—we did it back in ’99 when you were stuck in the womb.” Thanks for nothing, Grandpa. A better approach is to recognize the importance of new energy and momentum to make great things happen.

The cycle that really excites me now is the new rise of the developer. Trying my best not to be crusty, there is a palpable excitement and energy out there that really does feel like it did in 1999. After years of outsourcing, after years of commoditization, developers matter again. A lot. It’s like the world has rediscovered the critical importance of this fundamentally creative endeavor.

This is a golden age of technology and possibility, one that is being driven by new blood and newer technology. The catalyst is the achingly perfect collision of cloud, mobility and social discovery with APIs, node.js, Git, NoSQL, HTML5, massive scalability… (I really could go on and on here).

Most of all, I’m excited by movements like Codecademy. This simple idea perfectly reflects the tenor of the time in which we live. People are no longer afraid of making things easy. The priesthood is gone; coding is now confident and mature.

I’ll be talking more about these topics and the important role APIs play in an upcoming webinar I will be delivering with James Governor, co-founder of RedMonk, the analyst firm that truly is at the heart of the new developer movement. I hope you can join us Thursday, April 19 at 9am Pacific. This one is going to be good.


QCon London 2012 Is The Place To Be This Week

I’m off to London for QCon 2012, the 6th International Software Development Conference. I am one of the track chairs for this meeting. I’ve just learned that the show is now sold out, but there is a wait list if you have not already registered. All indications are that this is going to be an outstanding conference, so if there is any way you can possibly attend, you should make the effort.

I’m hosting a track this Friday called Industrial-Strength Architecture for Integration and Web Computing. Here is how I described the track to potential speakers:

The enterprise is demanding more from the Web than ever before. No longer content with simple web application delivery, the new enterprise web has become an integration point between mobile devices, browsers, legacy systems, and third-party web apps. It is a difficult balancing act. The new enterprise web is highly scalable, but can also reconcile the different service-level expectations of each participant. At its core, it enables agile product delivery while maintaining extreme reliability. In this track, we will study the architectural challenges faced by the enterprise that needs to harness the web as a rich delivery channel—and highlight the real-world solutions that address these. We will explore the intersection where trends such as virtualization, NoSQL, JSON, OAuth, APIs and mobile apps meet. Join us to understand the fine tuning between milliseconds and dollars that can make the difference between wild success and disappointing mediocrity.

I’m fortunate to have a great roster of speakers, including Theo Schlossnagle from OmniTI, Paul Fremantle from WSO2, John Davies from Incept5, and both Marcus Kern and David Dawson from Mobile Interactive Group.

I’m also going to chair a panel titled Integration At Scale: Lessons Learned From The New Enterprise Web. This one promises to be a very interesting discussion:

The mobile device revolution has upended our traditional view of the world wide web. The enterprise web is now about integration: connecting any device to any data, reliably and under wildly fluctuating load. How has this affected web architecture, and what has changed in the day-to-day operation of web resources? Join us for this panel of senior enterprise architects, each of whom has met the challenge of the new enterprise web.

The panel lineup consists of David Laing from CityIndex, Neels Burger from MoneySuperMarket.com, Neil Pellinacci from Tanzarine Technology, and Parand Tony Darugar from Xpenser. Each brings tremendous experience to the panel, and bringing them all together is going to make for a lively and informative debate. I’m looking forward to it.

Hope to see you in London.

Gartner AADI 2011 Presentation Video: API Management, Governance & OAuth

I delivered a talk all about API governance at last week’s Gartner Application Architecture, Development and Integration (AADI) summit in Las Vegas. I was the lunchtime entertainment on Wednesday. The session was packed—in fact, a large number of people were turned away because we ran out of place settings. Fortunately, a video of the session is now available, so if you were not able to attend, you can now watch it online.

In this talk I explore how governance is changing in the API world. I even do a live OAuth demonstration using people instead of computers. Unlike the classic “swim lane” diagrams that only show how OAuth works, this one also teaches you why the protocol operates as it does. (If you want to skip directly to the OAuth component, it begins at around 22 minutes.)
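If you prefer code to swim lanes (or people), here is a minimal sketch of an authorization-code exchange in the style of the emerging OAuth 2.0 drafts; the 1.0a three-legged dance has the same overall shape, with signed requests in place of a client secret. The endpoints, client credentials and redirect URI below are hypothetical placeholders, not anything from the talk itself.

```python
# A minimal, hypothetical sketch of the authorization-code flow.
import requests

AUTHZ_ENDPOINT = "https://idp.example.com/oauth/authorize"   # hypothetical
TOKEN_ENDPOINT = "https://idp.example.com/oauth/token"       # hypothetical
CLIENT_ID = "demo-client"
CLIENT_SECRET = "demo-secret"
REDIRECT_URI = "https://app.example.com/callback"

def authorization_url(state):
    # Leg 1: send the user's browser to the authorization server, which
    # authenticates the user and asks for consent to the requested access.
    return (f"{AUTHZ_ENDPOINT}?response_type=code&client_id={CLIENT_ID}"
            f"&redirect_uri={REDIRECT_URI}&state={state}")

def exchange_code(code):
    # Leg 3: trade the short-lived code (returned to the redirect URI in
    # leg 2) for an access token. The client authenticates with its own
    # credentials, so the resulting token is bound to both parties.
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
    }, auth=(CLIENT_ID, CLIENT_SECRET))
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The reason the protocol operates this way is visible right in the code: the user only ever hands credentials to the authorization server, and the client only ever sees an opaque code and token.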

Clouds Down Under

When I was young I was fascinated with the idea that the Coriolis effect—the concept in physics which explains why hurricanes rotate in opposite directions in the southern and northern hemispheres—could similarly be applied to common phenomena like water disappearing down a bathtub drain. On my first trip to Cape Town many years ago I couldn’t wait to try this out, only to realize in my hotel bathroom that I had never actually gotten around to checking which direction water drains in the northern hemisphere before I left. So much for the considered rigor of science.

It turns out of course that the Coriolis effect, when applied on such a small scale, becomes negligible in the presence of more important factors such as the shape of your toilet bowl. And so, yet another one of popular culture’s most cherished myths is busted, and civilization advances ever so slightly.

Something that definitely does not run in the opposite direction south of the equator turns out to be cloud computing, though to my pleasant surprise the conferences down under do take a positive turn. I’ve just returned from a trip to Australia, where I attended the 2nd Annual Future of Cloud Computing in the Financial Services, held last week in both Melbourne and Sydney. What impressed me is that most of the speakers were far beyond the blah-blah-blah-cloud rhetoric we still seem to hear so much, and focused instead on their real, day-to-day experiences with using cloud in the enterprise. It was as refreshing as a spring day in Sydney.

Greg Booker, CIO of ANZ Wealth, opened the conference with a provocative question. He simply asked who in the audience was in the finance or legal departments. Not a hand came up in the room. Now bear in mind this wasn’t Microsoft BUILD—most of the audience consisted of senior management types drawn from the banking and insurance community. But obviously cloud is still not front of mind for some very critical stakeholders that we need to engage.

Booker went on to illustrate why cross-department engagement is so vital to making the cloud a success in the enterprise. ANZ uses a commercial cloud provider to serve up most of its virtual desktops. Periodically, users would complain that their displays would appear rendered in foreign languages. Upon investigation, they discovered that although the provider had deployed storage in-country, some desktop processing took place on a node in Japan, making this something of a grey area in terms of compliance with export restrictions on customer data. To complicate matters further, the provider would not be able to make any changes until the next maintenance window—an event that happened to be weeks away. IT cannot meet this kind of challenge alone. As Randy Fennel, General Manager of Engineering and Sustainability at Westpac, put it succinctly, “(cloud) is a team sport.”

I was also struck by a number of insightful comments made by the participants concerning security. Rather than being shut down by the challenges, they adopted a very pragmatic approach and got things done. Fennel remarked that Westpac’s two most popular APIs happen to be balance inquiry, followed by their ATM locator service. You would be hard-pressed to think of a pair of services with more radically different security demands; this underscores the need for highly configurable API security and governance before these services go into production. He added that security must be a built-in attribute, one that must evolve with a constantly changing threat landscape or be left behind. This thought was echoed by Scott Watters, CIO of Zurich Financial Services, who added that we need to put more thought into moving security into applications. On all of these points I would agree, with the addition that security should be close to apps and loosely coupled in a configurable policy layer so that, over time, you can easily address evolving risks and ever-changing business requirements.
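To make “highly configurable API security” a little more concrete, here is a purely hypothetical sketch of what per-API policy might look like when it lives in a loosely coupled configuration layer rather than in application code. The paths, fields and limits are illustrative only, not Westpac’s (or anyone else’s) actual policies.

```python
# Hypothetical per-API governance policy: strong authentication for the
# balance-inquiry API, effectively public access for the ATM locator.
POLICIES = {
    "/accounts/balance": {
        "require_tls": True,
        "auth": "oauth2",            # caller must present a valid access token
        "scopes": ["account.read"],
        "rate_limit_per_min": 60,
    },
    "/atm/locator": {
        "require_tls": True,
        "auth": "none",              # open data, but still throttled
        "scopes": [],
        "rate_limit_per_min": 600,
    },
}

def policy_for(path):
    """Look up the governance policy for an inbound API call."""
    try:
        return POLICIES[path]
    except KeyError:
        raise LookupError(f"no policy registered for {path}")

def is_allowed(path, presented_scopes, authenticated):
    """Coarse-grained check a gateway might apply before routing a call."""
    policy = policy_for(path)
    if policy["auth"] != "none" and not authenticated:
        return False
    return set(policy["scopes"]).issubset(presented_scopes)
```

The point of keeping this outside the application is that when the threat landscape or the business requirement changes, you edit policy, not code.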

The entire day was probably best summed up by Fennel, who observed that “you can’t outsource responsibility and accountability.” Truer words have not been said in any conference, north or south.

Introducing Layer 7’s OAuth Toolkit

“If your tools don’t work for you, get rid of them,” is a simple creed I learned from my father in the workshop. Over the years, I have found it is just as relevant when applied to software, where virtual tools abound, but with often-dubious value.

OAuth is an emerging technology that has lately been in need of useful tools, and to fill this gap we are introducing an OAuth toolkit into Layer 7’s SecureSpan and CloudSpan Gateways. OAuth isn’t exactly new to Layer 7; we have actually done a number of OAuth implementations with our customers over the last two years. But what we’ve discovered is that there is a lot of incompatibility between different OAuth implementations, and this is discouraging many organizations from making better use of this technology. Our goal with the toolkit was to provide a collection of intelligently parameterized components that developers can mix and match to reduce the friction between different implementations. And thanks to the generalization that characterizes the emerging OAuth 2.0 specification, this toolkit helps to extend OAuth into interesting new use cases beyond the basic three-legged scenario of version one.

I have to admit that I was suspicious of OAuth when it first appeared a few years ago. So much effort had gone into the formal specification of SAML, from core definition to interop profiles, that I didn’t see the need for OAuth’s one-use-case solution and had little faith in the rigor of such a grassroots approach. But in time, OAuth won me over; it fits well with the browser-centric, simple-is-better approach of the modern Internet. The mapping to more generalized, token server-style interactions in the new version of the spec appeals to the architect in me, and the opening up of the security token payload indicates a desire to play well with existing infrastructure, which is a basic enterprise requirement.

However, adding extensibility to OAuth will also bring about this technology’s greatest challenge. The 1.0a specification benefitted enormously from laser focus on a use case so narrow that it was a wonder it gained the mindshare that it did. OAuth in 2011 has no such advantage—generalization being great for architects but hell for standards committees and vendors. It will be interesting to see how well the OAuth community satisfies the oftentimes-conflicting agendas of simple, standard, and interoperable.

Here at Layer 7 we predict a bright future for OAuth. We also think it’s very useful today, which is why we introduced a toolkit instead of a one-size-fits-one approach. We see our customers using OAuth in concert with their existing investments in Identity and Access Management (IAM) products, such as IBM’s Tivoli Access Manager (TAM) or Microsoft’s Active Directory (AD). We see it being used to transport SAML tokens that require sophisticated interpretation to render entitlement decisions. Taking a cue from OAuth itself, the point of our toolkit is to simplify both implementation and integration. And the toolkit’s parameterization helps to insulate the application from specification change.
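To make that IAM story a little more concrete, here is a hedged sketch of the kind of exchange the emerging OAuth 2.0 assertion drafts describe: a SAML assertion issued by existing identity infrastructure is traded at a token endpoint for an OAuth access token, so entitlement decisions stay where they already live. The endpoint and identifiers below are hypothetical placeholders, not our product’s API.

```python
# A rough, hypothetical sketch of bridging OAuth and existing IAM by
# exchanging a SAML assertion for an OAuth access token.
import base64
import requests

TOKEN_ENDPOINT = "https://gateway.example.com/oauth/token"  # hypothetical

def exchange_saml_for_token(saml_assertion_xml, client_id):
    # The assertion travels base64url-encoded in the token request.
    assertion = base64.urlsafe_b64encode(saml_assertion_xml.encode()).decode()
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:saml2-bearer",
        "assertion": assertion,
        "client_id": client_id,
    })
    resp.raise_for_status()
    # The authorization server validates the assertion against the IAM
    # system before minting a token, so existing entitlements still apply.
    return resp.json()["access_token"]
```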

I’ll be at the Gartner/Burton Catalyst show this week in San Diego where we’ll be demonstrating the toolkit. I hope you can drop by and talk about how it might help you.

Why Cloud Brokers Are The Foundation For The Resilient API Network

Amazon Web Services crashed spectacularly, and with it the illusion that cloud is reliable-by-design and ready for mission-critical applications. Now everyone knows that cloud SLAs fade like the phosphor glow in a monitor when someone pulls the plug from the wall. Amazon’s failure is an unfortunate event, and the cloud will never be the same.

So what is the enterprise to do if it can’t trust its provider? The answer is to take a page from good web architecture and double up. Nobody would deploy an important web site without at least two identical web servers and a load balancer to spray traffic between them. If one server dies, its partner handles the full load until operators can restore the failed system. Sometimes the simplest patterns are the most effective.
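In code, the pattern really is that simple. The sketch below (with hypothetical hosts, and no substitute for a real load balancer’s health checks and connection draining) just round-robins across two identical backends and retries on the survivor when one fails.

```python
# A toy sketch of spraying traffic across two identical backends and
# failing over to the partner when one dies. Hosts are hypothetical.
import itertools
import requests

BACKENDS = ["https://web1.example.com", "https://web2.example.com"]
_rr = itertools.cycle(BACKENDS)

def fetch(path):
    last_error = None
    for _ in range(len(BACKENDS)):
        backend = next(_rr)
        try:
            resp = requests.get(backend + path, timeout=2)
            resp.raise_for_status()
            return resp
        except requests.RequestException as err:
            last_error = err      # dead or unhealthy node: try the partner
    raise RuntimeError("all backends failed") from last_error
```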

Now take a step back and expand this model to the macro level. Instead of a pair of web servers, imagine two different cloud providers, ideally residing on separate power grids and different Internet backbones. Rather than a web server, imagine a replicated enterprise application hosting important APIs. Now replace the load balancer with a Cloud Broker—essentially an intelligent API switch that can distribute traffic between the providers based both on provider performance and a deep understanding of the nature of each API.

It is this API-centricity that makes a Cloud Broker more than just a new deployment pattern for a conventional load balancer. Engineers design load balancers to direct traffic to web sites, and their designs excel at this task. But while load balancers do provide rudimentary access to API parameters in a message stream, the rules languages used to articulate distribution policy are just not designed to make effective decisions about application protocols. In a pinch, you might be able to implement simple HTTP failover between clouds, but this isn’t a very satisfactory solution.

In contrast, we design Cloud Brokers from the beginning to interpret application-layer protocols and to use this insight to optimize API traffic management between clouds. A well-designed Cloud Broker abstracts existing APIs that may differ between hosts, offering a common view to clients decoupled from local dependencies. Furthermore, Cloud Brokers implement sophisticated orchestration capabilities so they can interact with cloud infrastructure through a provider’s APIs. This allows the broker to take command of the applications the provider hosts. Leveraging these APIs, the broker can automatically spin up a new application instance on demand, or release under-utilized capacity. Automation of processes is one of the more important value propositions of cloud, and Cloud Brokers are a means to realize this goal.
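To illustrate those two responsibilities (API-aware routing between providers, and elasticity driven through each provider’s own management APIs), here is a rough, hypothetical sketch. The ProviderClient interface is an assumption made for the example; a real broker would speak each cloud’s actual control API.

```python
# An illustrative sketch of broker-style behaviour: route calls to the
# healthier provider and adjust capacity through provider management APIs.
class ProviderClient:
    """Hypothetical adapter over one cloud provider's management API."""
    def __init__(self, name, endpoint):
        self.name, self.endpoint = name, endpoint

    def latency_ms(self):         # measured by active probes in practice
        raise NotImplementedError

    def instance_count(self):
        raise NotImplementedError

    def scale_to(self, n):        # call the provider's management API
        raise NotImplementedError

class CloudBroker:
    def __init__(self, providers, max_latency_ms=250):
        self.providers = providers
        self.max_latency_ms = max_latency_ms

    def choose_provider(self):
        # API-aware routing: prefer the provider currently answering fastest.
        return min(self.providers, key=lambda p: p.latency_ms())

    def rebalance(self):
        # Rudimentary elasticity: add an instance where latency is poor,
        # release one where there is comfortable headroom.
        for p in self.providers:
            if p.latency_ms() > self.max_latency_ms:
                p.scale_to(p.instance_count() + 1)
            elif p.latency_ms() < self.max_latency_ms / 4 and p.instance_count() > 1:
                p.scale_to(p.instance_count() - 1)
```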

For more information about Cloud Brokers, have a look at the Cloud Broker product page at Layer 7 Technologies.

Blowing Holes in the Web of Trust

The Register today published an excellent summary of the latest issues with SSL. In the typically blunt and mordant style for which the publication is so famous, Dan Goodin illustrates how the gossamer-thin SSL web of trust is built on a superstructure of astonishingly dubious merit. It’s a wonder the whole thing works at all.

Have a careful read of “How is SSL hopelessly broken? Let us count the ways” and then re-examine the cartel certs that anchor your own web browsing experience. As you roll out your API strategy, make sure you deploy your SSL endpoints with certificates that were subject to organizational or (much better) extended validation. Encourage—or if you can, demand—that your API clients limit their trust stores to a small subset containing only the most legitimate CAs.
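For API clients you control, limiting the trust store can be as simple as pointing TLS verification at a small, curated CA bundle instead of the system default. A minimal sketch, with a hypothetical bundle path and URL:

```python
# Verify TLS against a curated bundle containing only the CAs you actually
# rely on, rather than the hundreds shipped with the operating system.
import requests

TRUSTED_CA_BUNDLE = "/etc/myapp/trusted-cas.pem"   # a handful of CAs, not hundreds

def call_api(url, token):
    resp = requests.get(
        url,
        headers={"Authorization": f"Bearer {token}"},
        verify=TRUSTED_CA_BUNDLE,   # reject certs chained to any other CA
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```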

The opportunity is largely over in the browser world; massive change there will only happen when individuals personally lose money on a grand scale. But APIs still have a chance to regain some level of trust through rigorous application of SSL best practices, and API providers and developers can take the initiative here.