Monthly Archives: November 2010

Securing OData

One emerging technology that has recently caught our attention here at Layer 7 is the Open Data Protocol, or OData for short. You can think of OData as JDBC/ODBC for the web. Using OData, web developers can query data sources in much the same way they would use SQL. It builds on the basic CRUD constructs of REST, adding the Atom Publishing Protocol to help flesh out the vision under a very standards-oriented approach. OData’s major promoter is Microsoft, but the company is clearly intent on making this protocol an important community-driven initiative that provides value to all platforms.
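
To make the SQL analogy concrete, here is a minimal sketch of an OData query next to its SQL equivalent. The service URL and the Products entity are hypothetical stand-ins; the point is simply that the query is an ordinary HTTP GET that any web client can issue.

```python
# Hypothetical OData service; any AtomPub/OData producer would respond similarly.
import urllib.request

# SQL:   SELECT * FROM Products WHERE Price < 20 ORDER BY Name
# OData: GET /OData.svc/Products?$filter=Price lt 20&$orderby=Name
url = ("https://example.com/OData.svc/Products"
       "?$filter=Price%20lt%2020&$orderby=Name")

with urllib.request.urlopen(url) as response:
    print(response.read()[:500])   # results arrive as an Atom feed (or JSON via $format)
```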

I like OData, but as with any power tool, I approach it with care and some suspicion. I definitely agree that we need a formalized approach to interacting with Web data sources. OData is not SQL, but it brings enough familiar constructs with it to make the protocol easy to pick up and tremendously versatile. But OData also raises some significant security issues that need to be carefully considered before it is deployed.

Most applications are designed to constrain a user’s view of data. Any modern relational database has the ability to apply access control and limit a user’s view to the records to which they are entitled. More often than not, however, the enforcement of these entitlements is a task delegated not to the database, but to the application that interacts with it.

Consider, for example, a scenario where a database makes a JDBC or ODBC connection directly available to clients outside of the corporate firewall:

It can be extremely risky to permit direct outside connections into a database.

People avoid doing this for a good reason. It is true that you can secure the connection with SSL and force the incoming user to authenticate. However, if an attacker were able to compromise this connection (perhaps by stealing a password), they could explore or alter the database at will. This is a gangster’s paradise.

A simple web application is really a security buffer zone between the outside world and the database. It restricts the capabilities of the user through the constraints imposed by elements that make up each input form. Ultimately, the application tier maps user interactions to explicit SQL statements, but a well-designed system must strictly validate any incoming parameters before populating any SQL templates. From this perspective, web applications are fundamentally a highly managed buffer between the outside world and the data—a buffer that can apply a much more customizable and rigorous model of access control than an RDBMS could.

The Web application tier as security buffer between the database and the Internet.

However, this is also why SQL injection can be such an effective vector of attack. An application that fails to take the necessary precautions to validate incoming data can, in effect, extend the SQL connection right outside to the user. And unconstrained SQL can open up the entire database to examination or alteration. This attack vector was very popular back in the PowerBuilder days, but lately it has made a startling resurgence because of its effectiveness against badly designed web apps.
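
To make the difference concrete, here is a minimal, self-contained sketch of the validation gap that injection exploits, using an in-memory SQLite database purely for illustration: pasting raw input into a SQL template hands the connection to the attacker, while a parameterized query keeps the input as data.

```python
import sqlite3

# Toy database standing in for the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"   # hostile input arriving from a web form

# Vulnerable: raw input is pasted straight into the SQL template.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())                 # returns every row in the table

# Safer: a parameterized query treats the input strictly as data.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())    # returns nothing
```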

With OData, of course, the connection is the data source, so injection isn’t the issue—simply getting hold of the connection in the first place is enough. So what is critically important with OData is to strictly manage what this connection is capable of doing. OData servers need to provide not just authentication, authorization, and audit of the connection, but wholesale constraint of protocol function and data views as well. Web security demands that you assume the worst—and in this case, the worst is certainly compromise of the connection. The best way to manage this risk is to limit your exposure to what an attacker can do.

In SQL terms, this is like limiting the functions that a user can access, and restricting them to the views to which they are entitled (and they shouldn’t be entitled to much). The danger with OData is that some of the tools make it much too easy to simply open a connection to the data (“click here to make the database available using OData”); this can have widespread negative consequences if an attacker is able to compromise a legitimate user’s account. If the data source cannot itself impose the necessary constraints on the connection, then an intermediate security layer is mandatory.

This is where Layer 7 can help. CloudSpan is fully compatible with OData, and can act as an independent security layer between the OData client (which may be a browser-resident application) and the OData server. It can offer not just AAA on the connection, but can narrow the OData API or mask query results based on an individual user’s entitlement.

CloudSpan Gateways managing access to OData data sources.

Here’s a real example that Julian Phillips, one of Layer 7’s architects, put together. Jules constructed a policy using the Netflix OData API, which is an experimental service the company has made available on the net. The Netflix API allows you to browse selections in their catalog. It has its own constraints built in—it’s already read-only, for example—but we are going to show how CloudSpan could be deployed to further constrain the API, implementing stricter security controls and even enforcing business rules governing access.

Jules’ policy is activated on all URIs that match the /Catalog* pattern, the entry point into the Netflix OData API. This shows up in CloudSpan under the service browser:

What we are going to do here is add some security constraints, and then a business rule that restricts minors to viewing only movie titles rated G or PG-13. Minors can build perfectly valid Netflix OData queries and submit them to the API; however, these will be trapped by the CloudSpan gateway before they get to the actual OData server.
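
What the gateway does to a minor’s query is easy to sketch in ordinary code. The following is illustrative only: it is not the Layer 7 policy language, and the catalog URL, the Rating property, and the group name are assumptions made for this example. It simply shows how a gateway could append a rating restriction to whatever $filter the client supplied before forwarding the query.

```python
from urllib.parse import parse_qsl, quote

# Assumed rating restriction for minors; the property name is illustrative.
MINOR_FILTER = "(Rating eq 'G' or Rating eq 'PG-13')"

def constrain_for_minor(odata_url: str) -> str:
    """AND the rating restriction onto whatever $filter the client sent."""
    base, _, query = odata_url.partition("?")
    params = dict(parse_qsl(query))
    existing = params.get("$filter")
    params["$filter"] = f"({existing}) and {MINOR_FILTER}" if existing else MINOR_FILTER
    rebuilt = "&".join(f"{key}={quote(value)}" for key, value in params.items())
    return f"{base}?{rebuilt}"

def route(user_groups, odata_url):
    """Forward the query unchanged for adults; narrow it for minors."""
    if "minors" in user_groups:                 # placeholder group name
        return constrain_for_minor(odata_url)
    return odata_url

# A minor asks for recent titles; the gateway forwards a narrower query.
print(route({"minors"},
            "http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear gt 2005"))
```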

Jules’ basic policy is quite simple. We’ve collapsed some details into folders to make the basic flow easier to understand:

First off, the policy throws an explicit audit to capture both the URI and the query string for debugging purposes. We then ensure that the connection uses SSL (subject to the cipher suite constraints currently in effect), and we mine HTTP basic credentials from the connection. Need Kerberos or SSL client-side certificate authentication instead? Just drag the assertions implementing either of these into the policy and you are good to go.

The gateway then authenticates the user against a directory, and from this interaction we determine whether this user is an adult or a minor based on their group membership. If the user is indeed an adult, the gateway passes their OData query to the server unchanged. However, if the user is a minor, the gateway adds constraints to the query to ensure that the server will only return G or PG-13 movies. For reference, the full policy is below:

This example is somewhat contrived, but you should be able to see how an intermediate security gateway can add critical constraints to the scope of the OData protocol. OData shows a lot of promise. But like many new technologies, it needs to be managed with care. If deployed securely, OData can become an essential tool in any web developer’s toolkit.

Virtualization vs. the Auditors

Virtualization, in the twilight of 2010, has become at last a mainstream technology. It enjoys widespread adoption across every sector—except those, that is, in which audit plays a primary role. Fortunately, 2011 may see some of these last bastions of technological conservatism finally accept what is arguably the most important development in computing in the 21st century.

The problem with virtualization, of course, is that it discards the simple and comfortable division between physical systems, replacing this with an abstraction that few of us fully understand. We can read all we want about the theory of virtualization and the design of a modern hypervisor, but in the end, accepting that there exists provable isolation between running images requires a leap of faith that has never sat well with the auditing community.

To be fair to the auditors, when an auditor’s career depends on making a judgment of the relative security risk associated with a system, there is a natural and understandable tendency toward caution. Change may be fast in matters of technology, but human process is tangled up with complexities that just take time to work through.

The payment card industry may be leading the way toward legitimizing virtualization in architectures subject to serious compliance concerns. PCI 2.0, the updated set of requirements for securing payment card processing, comes into effect Jan 1, 2011, and provides specific guidance to the auditing community about how to approach virtualization without simply rejecting it outright.

Charles Babcock, in InformationWeek, published a good overview of the issue. He writes:

Interpretation of the (PCI) version currently in force, 1.2.1, has prompted implementers to keep distinct functions on physically separate systems, each with its own random access memory, CPUs, and storage, thus imposing a tangible separation of parts. PCI didn’t require this, because the first regulation was written before the notion of virtualization became prevalent. But the auditors of PCI, who pass your system as PCI-compliant, chose to interpret the regulation as meaning physically separate.

The word that should pique interest here is interpretation. As Babcock notes, existing (pre-2.0) PCI didn’t require physical separation between systems, but because this had been the prevailing means to implement provable isolation between applications, it became the sine qua non condition in every audit.

The issue of physical isolation compared to virtual isolation isn’t restricted to financial services. The military advocates similar design patterns, with tangible isolation between entire networks carrying classified and unclassified data. I attended a very interesting panel at VMworld back in September that discussed virtualization and compliance at length. One observation that really struck me came from Michael Berman, CTO of Catbird. Michael is a veteran of many ethical hacking engagements. In his experience, nearly every official inventory of applications his clients possessed was incomplete, and it is the overlooked applications that are most often the weakness attackers exploit. The virtualized environment, in contrast, offers an inventory of running images under control of the hypervisor that is necessarily complete. There may still be weak (or unaccounted for) applications residing on the images, but this is nonetheless a major step forward through provision of an accurate big picture for security and operations. Ironically, virtualization may be a means to more secure architectures.

I’m encouraged by the steps taken with PCI 2.0. Fighting audits is nearly always a losing proposition, or at best a Pyrrhic victory. Changing the rules of engagement is the way to win this war.

How to Fail with Web Services

I’ve been asked to deliver a keynote presentation at the 8th European Conference on Web Services (ECOWS) 2010, to be held in Ayia Napa, Cyprus this Dec 1-3. My topic is an exploration of the anti-patterns that often appear in Web services projects.

Here’s the abstract in full:

How to Fail with Web Services

Enterprise computing has finally woken up to the value of Web services. This technology has become a basic foundation of Service Oriented Architecture (SOA), which despite recent controversy is still very much the architectural approach favored by sectors as diverse as corporate IT, health care, and the military. But despite strong vision, excellent technology, and very good intentions, commercial success with SOA remains rare. Successful SOA starts with success in an actual implementation; for most organizations, this means a small proof-of-concept or a modest suite of Web services applications. This is an important first step, but it is here where most groups stumble. When SOA initiatives fail on their first real implementation, they disillusion participants, erode the confidence of stakeholders, and leave even the best-designed architecture perceived as just another failed IT initiative. For over six years, Layer 7 has been building real Web services-based architectures for government clients and some of the world’s largest corporations. In this time, we have seen repeated patterns of bad practice, pitfalls, misinterpretations, and gaps in technology. This talk is about what happens when Web services move out of the lab and into general use. By understanding this, we are better able to meet tomorrow’s challenges, when Web services move into the cloud.

Talk at Upcoming Gartner AADI 2010 in LA: Bridging the Enterprise and the Cloud

I’ll be speaking this Tuesday, Nov 16 at the Gartner Application Architecture, Development and Integration Summit in Los Angeles. My talk is during lunch, so if you’re at the conference and hungry, you should definitely come by and see the show. I’ll be exploring the issues architects face when integrating cloud services—including not just SaaS, but also PaaS and IaaS—with on-premise data and applications. I’ll also cover the challenges the enterprise faces when leveraging existing identity and access management systems in the cloud. I’ll even talk about the thinking behind Daryl Plummer’s Cloudstreams idea, which I wrote about last week.

Come by, say hello, and learn not just about the issues with cloud integration, but real solutions that will allow the enterprise to safely and securely integrate this resource into their IT strategy.

 

There’s a Cloudstream For That

Earlier today, Daryl Plummer introduced a new word into the cloud lexicon: the Cloudstream. Anyone who knows Daryl would agree he is one of the great taxonomists of modern computing. As Group VP and a Gartner Fellow, Daryl is in a unique position to spot trends early. But he’s also sharp enough to recognize when an emerging trend needs classification to bring it to a wider audience. Such is the case with Cloudstream.

In Daryl’s own words:

A Cloudstream is a packaged integration template that provides a description of everything necessary to govern, secure, and manage the interaction between two services at the API level.

A Cloudstream encapsulates all of the details necessary to integrate services—wherever these reside, in the enterprise or in the cloud—and manage these subject to the needs of the business. This means that a Cloudstream describes not just the mechanics of integrating data and applications (which is a muddy slog no matter how effective your integration tools are), but also the aspects of security, governance, SLA, visibility, and so on that underpin service integration. These are the less obvious, but nonetheless critical, components of a real integration exercise. A Cloudstream articulates all of this detail in a way that abstracts its complexity, but at the same time keeps it available for fine-tuning when necessary.

Cloudstream captures integration configuration for cloud brokers, an architectural model that is becoming increasingly popular. Cloud broker technology exists to add value to cloud services, and a Cloudstream neatly packages up the configuration details into something that people can appreciate outside of the narrow hallways of IT. If I interpret Daryl correctly, Cloudstreams may help IT integrate, but it is the business that is the real audience for a Cloudstream.

This implies that Cloudstream is more than simple configuration management. Really, Cloudstream is a logical step in the continuing evolution of IT that began with cloud computing. Cloud is successful precisely because it is not about technology; it is about a better model for delivery of services. We technologists may spend our days arguing about the characteristics and merits of different cloud platforms, but at the end of the day cloud will win because it comes with an economic argument that resonates throughout the C-Suite with the power of a Mozart violin concerto played on a Stradivarius.

The problem Daryl identifies is that so many companies—and he names Layer 7 specifically in his list—lead with technology to solve what is fundamentally a business problem. Tech is a game of detail—and I’ve made a career out of being good at the detail. But when faced with seemingly endless lists of features, most customers have a hard time distinguishing between this vendor and that. This one has Kerberos according to the WS-Security Kerberos Token Profile—but that one has an extra cipher suite for SSL. Comparing feature lists alone, it’s natural to lose sight of the fact that the real problem to be solved was simple integration with Salesforce.com. Daryl intends Cloudstream to elevate the integration discussion, but not at the cost of losing the configuration details that the techies may ultimately need.

I like Daryl’s thinking, and I think he may be on to something with his Cloudstream idea. Here at Layer 7 we’ve been thinking about ways to better package and market integration profiles using our CloudSpan appliances. Appliances, of course, are the ideal platform for cloud broker technology. Daryl’s Cloudstream model might be the right approach to bundle all of the details underlying service integration into an easily deployable package for a Layer 7 CloudSpan appliance. Consider this:

The Problem: I need single sign-on to Salesforce.com.

The Old Solution: Layer 7 offers a Security Token Service (STS) as an on-premise, 1U rackmount or virtual appliance. It supports the OASIS SAML browser POST profile for SSO to SaaS applications such as Salesforce.com, Google Docs, etc. This product, called CloudConnect, supports initial authentication using username/password, Kerberos tickets, SAML tokens, X.509 v3 certificates, or proprietary SSO tokens. It features an on-board identity provider, integration into any LDAP, as well as vendor-specific connectors into Microsoft Active Directory, IBM Tivoli Access Manager, Oracle Access Manager, OpenSSO, Novell Access Manager, RSA ClearTrust, CA Netegrity…. (and so on for at least another page of excruciating detail)

The Cloudstream Solution: Layer 7 offers a Cloudstream integrating the enterprise with Salesforce.com.

Which one resonates with the business?

Photo: Jonathan Ogilvie, stock.xchng