Category Archives: Layer 7 Technologies

Securing OData

One emerging technology that has recently caught our attention here at Layer 7 is the Open Data Protocol, or OData for short. You can think of OData as JDBC/ODBC for the web. Using OData, web developers can query data sources in much the same way they would use SQL. It builds on the basic CRUD constructs of REST, adding the Atom Publishing Protocol to help flesh out the vision under a very standards-oriented approach. OData’s major promoter is Microsoft, but the company is clearly intent on making this protocol an important community-driven initiative that provides value to all platforms.
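
To make the SQL analogy concrete, here are a few OData queries as they appear on the wire. The service root and entity names are illustrative (patterned after a Netflix-style catalog), not a specific live endpoint:

```
# Return all titles (roughly SELECT * FROM Titles)
GET /Catalog/Titles

# Filter and project, much like a WHERE clause and a column list
GET /Catalog/Titles?$filter=Rating eq 'PG-13'&$select=Name,ReleaseYear

# Page through results
GET /Catalog/Titles?$top=10&$skip=20
```

Everything a SQL developer would recognize—predicates, projection, paging—is expressed as system query options on a plain HTTP GET.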

I like OData, but as with any power tool, I approach it with care and some suspicion. I definitely agree that we need a formalized approach to interacting with Web data sources. OData is not SQL, but it brings enough familiar constructs with it to make the protocol easy to pick up and tremendously versatile. But OData also raises some significant security issues that need to be carefully considered before it is deployed.

Most applications are designed to constrain a user’s view of data. Any modern relational database has the ability to apply access control and limit a user’s view to the records to which they are entitled. More often than not, however, the enforcement of these entitlements is a task delegated not to the database, but to the application that interacts with it.

Consider, for example, a scenario where a database makes a JDBC or ODBC connection directly available to clients outside of the corporate firewall:

It can be extremely risky to permit direct outside connections into a database.

People avoid doing this for a good reason. It is true that you can secure the connection with SSL and force the incoming user to authenticate. However, if an attacker were able to compromise this connection (perhaps by stealing a password), they could explore or alter the database at will. This is a gangster’s paradise.

A simple web application is really a security buffer zone between the outside world and the database. It restricts the capabilities of the user through the constraints imposed by elements that make up each input form. Ultimately, the application tier maps user interactions to explicit SQL statements, but a well-designed system must strictly validate any incoming parameters before populating any SQL templates. From this perspective, web applications are fundamentally a highly managed buffer between the outside world and the data—a buffer that can apply a much more customizable and rigorous model of access control than an RDBMS could.
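
To make the buffer idea concrete, here is a minimal sketch of the validate-then-bind discipline, using Python’s sqlite3 module. The table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 'alice', 100.0), (2, 'bob', 50.0)")

def get_balance(conn, owner):
    # Whitelist-style validation: reject anything that is not a simple name.
    if not owner.isalnum():
        raise ValueError("invalid owner name")
    # The parameter is bound by the driver, never concatenated into the SQL
    # text, so input like "alice' OR '1'='1" can never alter the statement.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE owner = ?", (owner,)
    ).fetchone()
    return row[0] if row else None

print(get_balance(conn, "alice"))
try:
    get_balance(conn, "alice' OR '1'='1")  # classic injection string
except ValueError:
    print("rejected")  # the buffer refuses it before any SQL is built
```

The application tier, not the database, decides what a user may ask for; the database only ever sees the narrow, pre-approved statement.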

The Web application tier as security buffer between the database and the Internet.

However, this is also why SQL injection can be such an effective vector of attack. An application that fails to take the necessary precautions to validate incoming data can, in effect, extend the SQL connection right out to the user. And unconstrained SQL can open up the entire database to examination or alteration. This attack vector was very popular back in the PowerBuilder days, but lately it has made a startling resurgence because of its effectiveness when applied to badly designed web apps.

OData, of course, is the data source connection, so injection isn’t an issue—simply getting hold of the connection in the first place is enough. So what is critically important with OData is to strictly manage what this connection is capable of doing. OData servers need to provide not just authentication, authorization, and audit of the connection, but wholesale constraint of protocol function and data views as well. Web security demands that you assume the worst—and in this case, the worst is certainly compromise of the connection. The best way to manage this risk is to limit your exposure to what an attacker can do.

In SQL-terms, this is like limiting the functions that a user can access, and restricting them to the views to which they are entitled (and they shouldn’t be entitled to much). The danger with OData is that some of the tools make it much too easy to simply open a connection to the data (“click here to make the database available using OData”); this can have widespread negative consequences if an attacker is able to compromise a legitimate user’s account. If the data source cannot itself impose the necessary constraints on the connection, then an intermediate security layer is mandatory.

This is where Layer 7 can help. CloudSpan is fully compatible with OData, and can act as an independent security layer between the OData client (which may be a browser-resident application) and the OData server. It can offer not just AAA on the connection, but can narrow the OData API or mask query results based on an individual user’s entitlement.

CloudSpan Gateways managing access to OData data sources.

Here’s a real example that Julian Phillips, one of Layer 7’s architects, put together. Jules constructed a policy using the Netflix OData API, which is an experimental service the company has made available on the net. The Netflix API allows you to browse selections in their catalog. It has its own constraints built in—it’s already read-only, for example—but we are going to show how CloudSpan could be deployed to further constrain the API, implement stricter security, and even enforce business rules governing access.

Jules’ policy is activated on all URIs that match the /Catalog* pattern, the entry point into the Netflix OData API. This shows up in CloudSpan under the service browser:

What we are going to do here is add some security constraints, and then a business rule that restricts minors to viewing only movie titles rated G or PG-13. Minors can build perfectly valid Netflix OData queries and submit them to the API; however, these will be trapped by the CloudSpan gateway before they reach the actual OData server.

Jules’ basic policy is quite simple. We’ve collapsed some details into folders to make the basic flow easier to understand:

First off, the policy throws an explicit audit to capture both the URI and the query string for debugging purposes. We then ensure that the connection uses SSL (subject to the cipher suite constraints currently in effect), and we mine HTTP basic credentials from the connection. Need Kerberos or SSL client-side certificate authentication instead? Just drag the assertions implementing either of these into the policy and you are good to go.

The gateway then authenticates the user against a directory, and from this interaction we determine whether this user is an adult or a minor based on their group membership. If the user is indeed an adult, the gateway passes their OData query to the server unchanged. However, if the user is a minor, the gateway adds constraints to the query to ensure that the server will only return G or PG-13 movies. For reference, the full policy is below (click to expand):
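
This query-rewriting step can be sketched in a few lines. The snippet below is not CloudSpan’s policy language; it is a hypothetical Python illustration of how a gateway could conjoin a rating constraint onto a minor’s incoming OData query string (the Rating property name is modeled on the Netflix catalog, so treat it as illustrative):

```python
from urllib.parse import parse_qs, urlencode

RATING_CONSTRAINT = "(Rating eq 'G' or Rating eq 'PG-13')"

def constrain_query(query_string, is_minor):
    """Return the OData query string, AND-ing in a rating filter for minors."""
    if not is_minor:
        return query_string  # adults pass through unchanged
    params = {k: v[0] for k, v in parse_qs(query_string).items()}
    existing = params.get("$filter")
    # Conjoin rather than replace, so the user's own filter still applies,
    # only narrowed by the business rule.
    params["$filter"] = (
        f"{RATING_CONSTRAINT} and ({existing})" if existing else RATING_CONSTRAINT
    )
    return urlencode(params)

print(constrain_query("$filter=ReleaseYear gt 2000", is_minor=True))
```

Because the constraint is ANDed onto whatever the client sent, a minor’s perfectly valid query still executes—it just can never return a title outside the approved ratings.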

This example is somewhat contrived, but you should be able to see how an intermediate security gateway can add critical constraints to the scope of the OData protocol. OData shows a lot of promise. But like many new technologies, it needs to be managed with care. If deployed securely, OData can become an essential tool in any web developer’s toolkit.

Virtualization vs. the Auditors

Virtualization, in the twilight of 2010, has become at last a mainstream technology. It enjoys widespread adoption across every sector—except those, that is, in which audit plays a primary role. Fortunately, 2011 may see some of these last bastions of technological conservatism finally accept what is arguably the most important development in computing in the 21st century.

The problem with virtualization, of course, is that it discards the simple and comfortable division between physical systems, replacing this with an abstraction that few of us fully understand. We can read all we want about the theory of virtualization and the design of a modern hypervisor, but in the end, accepting that there exists provable isolation between running images requires a leap of faith that has never sat well with the auditing community.

To be fair to the auditors, when an auditor’s career depends on making a judgment of the relative security risk associated with a system, there is a natural and understandable tendency toward caution. Change may be fast in matters of technology, but human process is tangled up with complexities that just take time to work through.

The payment card industry may be leading the way toward legitimizing virtualization in architectures subject to serious compliance concerns. PCI 2.0, the updated set of requirements for securing payment card processing, comes into effect Jan 1, 2011, and provides specific guidance to the auditing community about how to approach virtualization without simply rejecting it outright.

Charles Babcock, in InformationWeek, published a good overview of the issue. He writes:

Interpretation of the (PCI) version currently in force, 1.2.1, has prompted implementers to keep distinct functions on physically separate systems, each with its own random access memory, CPUs, and storage, thus imposing a tangible separation of parts. PCI didn’t require this, because the first regulation was written before the notion of virtualization became prevalent. But the auditors of PCI, who pass your system as PCI-compliant, chose to interpret the regulation as meaning physically separate.

The word that should pique interest here is interpretation. As Babcock notes, existing (pre-2.0) PCI didn’t require physical separation between systems, but because this had been the prevailing means to implement provable isolation between applications, it became the sine qua non condition in every audit.

The issue of physical isolation compared to virtual isolation isn’t restricted to financial services. The military advocates similar design patterns, with tangible isolation between entire networks that carry classified or unclassified data. I attended a very interesting panel at VMworld back in September that discussed virtualization and compliance at length. One anecdote that really struck me came from Michael Berman, CTO of Catbird. Michael is a veteran of many ethical hacking jobs. In his experience, nearly every official inventory of applications his clients possessed was incomplete, and it is the overlooked applications that most often provide the weakness to exploit. The virtualized environment, in contrast, offers an inventory of running images under control of the hypervisor that is necessarily complete. There may still be weak (or unaccounted for) applications residing on the images, but this is nonetheless a major step forward through provision of an accurate big picture for security and operations. Ironically, virtualization may be a means to more secure architectures.

I’m encouraged by the steps taken with PCI 2.0. Fighting audits is nearly always a losing proposition, or at best a Pyrrhic victory. Changing the rules of engagement is the way to win this war.

How to Fail with Web Services

I’ve been asked to deliver a keynote presentation at the 8th European Conference on Web Services (ECOWS) 2010, to be held in Ayia Napa, Cyprus, this Dec 1-3. My topic is an exploration of the anti-patterns that often appear in Web services projects.

Here’s the abstract in full:

How to Fail with Web Services

Enterprise computing has finally woken up to the value of Web services. This technology has become a basic foundation of Service Oriented Architecture (SOA), which despite recent controversy is still very much the architectural approach favored by sectors as diverse as corporate IT, health care, and the military. But despite strong vision, excellent technology, and very good intentions, commercial success with SOA remains rare. Successful SOA starts with success in an actual implementation; for most organizations, this means a small proof-of-concept or a modest suite of Web services applications. This is an important first step, but it is here where most groups stumble. When SOA initiatives fail on their first real implementation, it disillusions participants and erodes the confidence of stakeholders, and even the best-designed architecture will be perceived as just another failed IT initiative. For over six years, Layer 7 has been building real Web services-based architectures for government clients and some of the world’s largest corporations. In this time, we have seen repeated patterns of bad practice, pitfalls, misinterpretations, and gaps in technology. This talk is about what happens when Web services move out of the lab and into general use. By understanding this, we are better able to meet tomorrow’s challenges, when Web services move into the cloud.

Talk at Upcoming Gartner AADI 2010 in LA: Bridging the Enterprise and the Cloud

I’ll be speaking this Tuesday, Nov 16 at the Gartner Application Architecture, Development and Integration Summit in Los Angeles. My talk is during lunch, so if you’re at the conference and hungry, you should definitely come by and see the show. I’ll be exploring the issues architects face when integrating cloud services—including not just SaaS, but also PaaS and IaaS—with on-premise data and applications. I’ll also cover the challenges the enterprise faces when leveraging existing identity and access management systems in the cloud. I’ll even talk about the thinking behind Daryl Plummer’s Cloudstreams idea, which I wrote about last week.

Come by, say hello, and learn not just about the issues with cloud integration, but real solutions that will allow the enterprise to safely and securely integrate this resource into their IT strategy.


There’s a Cloudstream For That

Earlier today, Daryl Plummer introduced a new word into the cloud lexicon: the Cloudstream. Anyone who knows Daryl would agree he is one of the great taxonomists of modern computing. As Group VP and a Gartner Fellow, Daryl is in a unique position to spot trends early. But he’s also sharp enough to recognize when an emerging trend needs classification to bring it to a wider audience. Such is the case with Cloudstream.

In Daryl’s own words:

A Cloudstream is a packaged integration template that provides a description of everything necessary to govern, secure, and manage the interaction between two services at the API level.

A Cloudstream encapsulates all of the details necessary to integrate services—wherever these reside, in the enterprise or in the cloud—and manage these subject to the needs of the business. This means that a Cloudstream describes not just the mechanics of integrating data and applications (which is a muddy slog no matter how effective your integration tools are), but also the aspects of security, governance, SLA, visibility, and so on that underpin service integration. These are the less obvious, but nonetheless critical, components of a real integration exercise. A Cloudstream is an articulation of all this detail in a way that abstracts its complexity while at the same time keeping it available for fine-tuning when necessary.
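
Since the definition is abstract, a purely hypothetical sketch may help. If a Cloudstream were serialized as a descriptor, it might bundle fields along these lines (every name and value below is invented for illustration; Daryl’s definition prescribes no format):

```
{
  "cloudstream": "enterprise-to-salesforce-sso",
  "endpoints": {
    "source": "ldap://corp-directory",
    "target": "https://login.salesforce.com"
  },
  "security": { "tokenType": "SAML", "profile": "browser POST" },
  "governance": { "sla": "99.9%", "audit": "full", "rateLimit": "100/min" }
}
```

The point is not the syntax but the packaging: mechanics, security, and governance travel together as one deployable unit.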

Cloudstream captures integration configuration for cloud brokers, an architectural model that is becoming increasingly popular. Cloud broker technology exists to add value to cloud services, and a Cloudstream neatly packages up the configuration details into something that people can appreciate outside of the narrow hallways of IT. If I interpret Daryl correctly, Cloudstreams may help IT integrate, but it is the business that is the real audience for a Cloudstream.

This implies that Cloudstream is more than simple configuration management. Really, Cloudstream is a logical step in the continuing evolution of IT that began with cloud computing. Cloud is successful precisely because it is not about technology; it is about a better model for delivery of services. We technologists may spend our days arguing about the characteristics and merits of different cloud platforms, but at the end of the day cloud will win because it comes with an economic argument that resonates throughout the C-suite with the power of a Mozart violin concerto played on a Stradivarius.

The problem Daryl identifies is that so many companies—and he names Layer 7 specifically in his list—lead with technology to solve what is fundamentally a business problem. Tech is a game of detail—and I’ve made a career out of being good at the detail. But when faced with seemingly endless lists of features, most customers have a hard time distinguishing between this vendor and that. This one has Kerberos according to the WS-Security Kerberos Token Profile—but that one has an extra cipher suite for SSL. Comparing feature lists alone, it’s natural to lose sight of the fact that the real problem to be solved was simple integration with Salesforce.com. Daryl intends Cloudstream to up-level the integration discussion, but not at the cost of losing the configuration details that the techies may ultimately need.

I like Daryl’s thinking, and I think he may be on to something with his Cloudstream idea. Here at Layer 7 we’ve been thinking about ways to better package and market integration profiles using our CloudSpan appliances. Appliances, of course, are the ideal platform for cloud broker technology. Daryl’s Cloudstream model might be the right approach to bundle all of the details underlying service integration into an easily deployable package for a Layer 7 CloudSpan appliance. Consider this:

The Problem: I need single sign-on to Salesforce.com.

The Old Solution: Layer 7 offers a Security Token Service (STS) as an on-premise, 1U rackmount or virtual appliance. It supports the OASIS SAML browser POST profile for SSO to SaaS applications such as Salesforce.com, Google docs, etc. This product, called CloudConnect, supports initial authentication using username/password, Kerberos tickets, SAML tokens, X.509 v3 certificates, or proprietary SSO tokens. It features an on-board identity provider, integration with any LDAP, as well as vendor-specific connectors into Microsoft Active Directory, IBM Tivoli Access Manager, Oracle Access Manager, OpenSSO, Novell Access Manager, RSA ClearTrust, CA Netegrity…. (and so on for at least another page of excruciating detail)

The Cloudstream Solution: Layer 7 offers a Cloudstream integrating the enterprise with Salesforce.com.

Which one resonates with the business?

Photo: Jonathan Ogilvie, stock.xchng

BI is Dead. Long Live BI. The Future of Business Intelligence in the Cloud

I’ll be delivering a keynote presentation in Sydney, Australia on Oct 18 at the Mastering Business Intelligence with SAP conference. I’ll also be doing a roadshow around the country with our local partner First Point Global, who really understand the business of IAM. The Australian market is very forward-looking these days, and I’ve been impressed with the vision behind the projects we’ve been involved in. If you’re in Australia, come by the conference or send me an email if you would like to meet.

Here’s the abstract in full:

BI is Dead. Long Live BI. The Future of Business Intelligence in the Cloud

Will cloud computing really change IT? Despite all of the attention that cloud computing commands, this deceptively simple question has been largely overlooked. The promise of shifting capex dollars to lower opex is certainly compelling, and the overnight success of some of the large Software-as-a-Service (SaaS) vendors, such as Salesforce.com, is undeniably impressive. But once the hype dies down, what will be the real impact of cloud computing on mission-critical applications such as BI?

Cloud will transform BI, much as it is currently transforming CRM. Cloud isn’t only about a cheaper new delivery model; when done right, cloud also radically changes how applications are composed and where data can reside. These changes are driven not only by necessity (acknowledging the realities of latency, privacy, and compliance) but also by opportunity and the rapidly evolving best practices that show us how to build applications better and deliver them faster. BI must change to be successful in the cloud, and cloud is an irresistible forcing function that will make this change inevitable. If your career is centered around BI, you need to be ready for this revolution.

Virtualization’s Second Act

I was quite disappointed with the coverage and analysis of VMware’s new vCloud Director (VCD) product, which the company introduced at its annual VMworld conference earlier this month in San Francisco. I think people focused too much on the superficial message of vCD being yet another new cloud platform, but missed the more important insight into what makes this product different from the virtualization we all know so well.

I wrote up my own take on the real change vCD represents in terms of organizational behavior, work flows, and approaches to managing mass virtualization. It was published this week on the VMware blog, so I must have been at least partially right. Go have a look and tell me what you think.

Upcoming Webinar: How To Implement Enterprise-scale API Management: The secret to making your business into a platform.

Jeffery Hammond, Principal Analyst with Forrester Research, and I will be jointly delivering a webinar on Tuesday, Sept 28th at 9am Pacific time. The topic we are discussing is API management and security. We’ll look at why APIs are important, and discuss best practices for effectively leveraging them in your business.

Figure 1: The role of gateways in API management.

This promises to be a very good presentation, and I’d urge you to attend. We’re doing something a little different this time and delivering a much more interactive discussion than some of my past webinars. Since Jeffery and I are both traveling over the next few weeks, we’ve run through our rehearsals early. The material is top notch; Jeffery absolutely understands the issues organizations face as they attempt to expose core business applications using APIs. We are very much on the same page, and I have a strong feeling that this is going to be a very good show. I’m looking forward to it, and I hope you can join us.

You can register for this webinar here.

The Increasing Importance of Cloud Governance

David Linthicum published a recent article in eBizQ noting the Rise of Cloud Governance. As CTO of Blue Mountain Labs, Dave is in a good position to see industry trends take shape. Lately he’s been noticing a growing interest in cloud management and governance tools. In his own words:

This is a huge hole that cloud computing has had.  Indeed, without strong governance and management strategy, and enabling technology, the path to cloud computing won’t be possible.

It’s nice to see that he explicitly names Layer 7 Technologies as one of the companies that is offering solutions today for Cloud Governance.

It turns out that cloud governance, while a logical evolution of SOA governance, has a number of unique characteristics all its own. One of these is the re-distribution of roles and responsibilities around provisioning, security, and operations. Self-service is a defining attribute of cloud computing. Cloud governance solutions need to embrace this and provide value not just for administrators, but for the users who take on a much more active role in the full life cycle of their applications.

Effective cloud governance promotes agility, not bureaucracy. And by extension, good cloud governance solutions should acknowledge the new roles and solve the new problems cloud users face.

How to Secure vCloud Director and the vCloud API

This year’s VMworld conference saw the announcement of VMware’s new vCloud Director product, a culmination of the vision for cloud computing the company articulated last year and a significant step forward in providing a true enterprise-grade cloud. This is virtualization 2.0—a major rethink of how IT should deliver infrastructure services. VMware believes that the secure hybrid cloud is the future of enterprise IT, and given their success of late it is hard to argue against them.

vCloud Director (vCD) is interesting because it avoids the classic virtualization metaphors rooted in the physical world—hosts, SANs, and networks—and instead promotes a resource-centric view contained within the virtual datacenter (VDC). vCD pools resources into logical groupings that carry an associated cost. This ability to monetize is important not just in public clouds, but for private clouds that implement chargeback to enterprise business units.

Multi-tenancy is a basic assumption in the vCD universe, and the product leverages the new vShield suite to enforce isolation. Management of vCD is through the vCloud API, a technology VMware introduced a year ago, but which has now matured to version 1.0.

The product vision and implementation are impressive; however, a number of security professionals I spoke with expressed disappointment in the rudimentary security and management model for the vCloud API. vCloud is a RESTful API. It makes use of SSL, basic credentials, and cookie-based session tokens as its security model. While this is adequate for some applications, many organizations demand a more sophisticated approach to governance, buttressed with customized audit for compliance purposes. This is where Layer 7 can help.
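
To see why this model counts as rudimentary, consider what the session bootstrap involves. The sketch below only builds the HTTP Basic credentials header in Python; the login path and the x-vcloud-authorization session header mentioned in the comments reflect my reading of the vCloud API 1.0 documentation and should be treated as illustrative rather than authoritative:

```python
import base64

def basic_auth_header(user, org, password):
    """Build the HTTP Basic Authorization header for a vCloud-style login.

    vCloud identifies the tenant by appending the organization to the
    user name ("user@org"); beyond that, this is ordinary HTTP Basic auth.
    """
    token = base64.b64encode(f"{user}@{org}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# A client would POST this header to the provider's login URL, e.g.
# https://vcloud.example.com/api/v1.0/login (illustrative host), and the
# session token comes back in an x-vcloud-authorization response header.
print(basic_auth_header("alice", "AcmeOrg", "s3cret"))
```

From then on, a single bearer-style token over SSL protects every call—which is precisely the gap an intermediary can fill with stronger tokens, fine-grained authorization, and audit.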

Layer 7’s CloudSpan virtual gateways are the ideal solution for protecting and managing the vCloud API, vSphere, and vCloud Director. CloudSpan provides an intuitive, drag-and-drop interface for securing vCloud services and providing the visibility the modern enterprise demands. Do you need to protect the interface with 2-factor authentication? A few clicks and you add this capability instantly—to a single API, or across a group of similar services. The CloudSpan policy language gives administrators the power to customize the access control and management of vCloud to incorporate:

  • Authentication against virtually any security token (SAML, Kerberos, X.509 certificates, OAuth, etc.).
  • Cloud single sign-on (SSO).
  • Fine-grained authorization to individual APIs.
  • Fully customizable audit.
  • Virtualization and masking of APIs.
  • Versioning of REST and SOAP APIs beyond basic vCloud versioning.
  • Augmentation and extension of existing vCloud functions.
  • Transformation of any GET, POST, DELETE, and PUT content.
  • Orchestration to create new APIs.
  • Validation of XML structures such as OVF containers.
  • Threat detection, including threats embedded in XML OVF files.
  • Automatic fail-over between hosts.
  • Mapping between SOAP and REST.
  • JSON schema validation.
  • Management of federated relationships.
  • Live dashboard monitoring of API usage.
  • And more.

Figure 1: vCloud Director API management and security with CloudSpan from Layer 7.

CloudSpan is the basis of real cloud governance. In contrast to other solutions that run as third-party services or attempt to broker security from your own local data center, CloudSpan runs as an integral part of the vCloud Director environment. CloudSpan runs as a VMware virtual image that is easily incorporated into any VMware virtual infrastructure. At Layer 7, we fundamentally believe that the security, monitoring, and visibility solution for cloud APIs must reside inside the cloud it is protecting—not off at some other location where the transactions it proxies are subject to attack as they traverse the open Internet. Local integration of the security solution as an integral part of the cloud infrastructure is the only way to properly secure cloud APIs with sophisticated access control and to offer protection against denial-of-service (DoS) attacks.

For more information about how to secure and manage the vCloud API and vCloud Director, please see the cloud solutions page at Layer 7 Technologies.