Tag Archives: security

Over to You, Rush

Last week’s New York Times article on hacking the new gadgets, including Web-enabled HDTVs, generated an enormous amount of interest. An AM radio station in Tampa Bay, 970 WFLA AM, invited me onto their morning commuter show, AM Tampa Bay with Tedd Webb and Mark Larsen (standing in for Jack Harris). This was a great opportunity to reach a much wider audience than I usually speak to. So I woke up at 4am to an icy-cold Vancouver morning and tried my best to imagine the sunshine and palm trees in faraway Tampa.

It was a great, fast-paced discussion. Tedd and Mark are the real thing—listening to the interview makes me wish I had been born with that radio voice. Those guys are total pros.

After my five minutes of fame, the station filled the air for the rest of the day with the giants of conservative talk radio, including Glenn Beck, Rush Limbaugh, Sean Hannity, and Mark Levin.

You can listen to the interview here.

Is Web-connected TV the New Power Play for Hackers?

Over the holidays I had the good fortune of being quoted in the New York Times. This came out of an interview I had with NYT writer Ashlee Vance about the recent discovery of security weaknesses in Web-connected HDTVs. Researchers at Mocana, a security services company in the Bay Area, identified a number of vulnerabilities in one of the most popular Internet-enabled televisions. This is the first major security incident for a product category that is very likely to become wildly popular, but I doubt it will be the last.

In the hacking community, cracked systems equal power. Such power may be tangible, such as a botnet available for hire, or simply the social power derived from compromising a particularly high-profile target. But as more interesting devices appear on the Internet—such as smart phones, TVs, and even refrigerators—there will be an inevitable shift in focus within the hacking community toward them, because these new devices represent enormous potential for the consolidation of new power.

The motivation to attack connected devices isn’t simply to target a new platform that might contain trivial vulnerabilities (though for some, this may be enough). The real attraction here is the sheer number of nodes; this, fundamentally, is about volume. It is estimated that by the end of 2010, Apple will have shipped around 75 million iPhones. (To put this number into perspective, by July 2010 Microsoft announced it had shipped 150 million units of Windows 7.) The iPhone alone represents an enormous injection of computing power onto the Internet, delivered over the course of only three and a half years.

Now, the iPhone happens to be a remarkably stable and secure platform, thanks in part to Apple’s rigid curation of the hardware, software, and surrounding app ecosystem. But what is interesting to note is how quickly a new Internet platform can spread, and how much of the total global computing horsepower it can represent. The consumer world, by virtue of its size, its fads and caprice, and its unprecedented spending power, can shift the balance of computing power in months (iPads, anyone?). Today the connected-device explosion centers on mobile phones, but tomorrow it could easily be web-connected TVs, smart power meters, or iToilets. This radical change to Internet demographics—from servers and desktops to things and mobile devices—will prove irresistible to the hacking community.

What is troubling about the vulnerabilities Mocana found is how simplistic they were. Device manufacturers must place far greater emphasis on basic system security. What will happen when the next wildly popular consumer device is exposed to the full cutting-torch of hacker attention? It is going to be an interesting decade…

Securing OData

One emerging technology that has recently caught our attention here at Layer 7 is the Open Data Protocol, or OData for short. You can think of OData as JDBC/ODBC for the web. Using OData, web developers can query data sources in much the same way they would use SQL. It builds on the basic CRUD constructs of REST, adding the Atom Publishing Protocol to help flesh out the vision under a very standards-oriented approach. OData’s major promoter is Microsoft, but the company is clearly intent on making this protocol an important community-driven initiative that provides value to all platforms.
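To make this concrete, here’s a minimal sketch of what an OData query looks like from the client side. The service root, entity set, and property names are hypothetical, but $filter, $select, and $top are standard OData query options; the whole thing reads like a SQL SELECT spelled out in a URL.

```python
import requests  # assumes the requests library is available

SERVICE_ROOT = "https://example.com/odata.svc"  # hypothetical OData endpoint

# Roughly equivalent to:
#   SELECT Name, City FROM Customers WHERE City = 'London' LIMIT 10
response = requests.get(
    f"{SERVICE_ROOT}/Customers",
    params={
        "$filter": "City eq 'London'",  # server-side predicate
        "$select": "Name,City",         # project just these properties
        "$top": "10",                   # cap the result set
    },
    headers={"Accept": "application/json"},
)
response.raise_for_status()

# OData v2-style JSON wraps results in a "d" object; the exact payload
# shape varies by protocol version and service.
for customer in response.json()["d"]["results"]:
    print(customer["Name"], customer["City"])
```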

I like OData, but as with any power tool, I approach it with care and some suspicion. I definitely agree that we need a formalized approach to interacting with Web data sources. OData is not SQL, but it brings enough familiar constructs to make the protocol easy to pick up and tremendously versatile. However, OData also raises some significant security issues that need to be carefully considered before it is deployed.

Most applications are designed to constrain a user’s view of data. Any modern relational database has the ability to apply access control and limit a user’s view to the records to which they are entitled. More often than not, however, the enforcement of these entitlements is a task delegated not to the database, but to the application that interacts with it.

Consider, for example, a scenario where a database makes a JDBC or ODBC connection directly available to clients outside of the corporate firewall:

It can be extremely risky to permit direct outside connections into a database.

People avoid doing this for good reason. It is true that you can secure the connection with SSL and force the incoming user to authenticate. However, if an attacker were able to compromise this connection (perhaps by stealing a password), they could explore or alter the database at will. This is a gangster’s paradise.

A simple web application is really a security buffer zone between the outside world and the database. It restricts the capabilities of the user through the constraints imposed by the elements that make up each input form. Ultimately, the application tier maps user interactions to explicit SQL statements, but a well-designed system must strictly validate any incoming parameters before populating its SQL templates. From this perspective, web applications are fundamentally a highly managed buffer between the outside world and the data—a buffer that can apply a much more customizable and rigorous model of access control than an RDBMS could.

The Web application tier as security buffer between the database and the Internet.

However, this is also why SQL injection can be such an effective vector of attack. An application that fails to take the necessary precautions to validate incoming data can, in effect, extend the SQL connection right out to the user. And unconstrained SQL can open up the entire database to examination or alteration. This attack vector was very popular back in the PowerBuilder days, but lately it has made a startling resurgence because of its effectiveness when applied to badly designed web apps.
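To illustrate (a generic sketch, not code from any particular application): the first function below splices user input straight into a SQL template, while the second uses a parameterized query, so the driver can never interpret the input as SQL. The second form is exactly the discipline the application buffer must enforce on every input.

```python
import sqlite3  # stand-in for any relational database driver

conn = sqlite3.connect("example.db")

def lookup_user_vulnerable(username: str):
    # DANGEROUS: input is concatenated into the SQL template. Passing
    # "' OR '1'='1" as the username returns every row in the table.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def lookup_user_safe(username: str):
    # Parameterized query: the value is bound separately from the SQL
    # text, so it is treated as data no matter what it contains.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```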

With OData, of course, the protocol is the data source connection, so injection isn’t the issue—just getting hold of the connection in the first place is enough. What is critically important with OData, then, is to strictly manage what this connection is capable of doing. OData servers need to provide not just authentication, authorization, and audit of the connection, but wholesale constraint of protocol function and data views as well. Web security demands that you assume the worst—and in this case, the worst is certainly compromise of the connection. The best way to manage this risk is to limit your exposure by constraining what an attacker can do with the connection.

In SQL terms, this is like limiting the functions a user can access and restricting them to the views to which they are entitled (and they shouldn’t be entitled to much). The danger with OData is that some of the tools make it much too easy to simply open a connection to the data (“click here to make the database available using OData”); this can have widespread negative consequences if an attacker compromises a legitimate user’s account. If the data source cannot itself impose the necessary constraints on the connection, then an intermediate security layer is mandatory.
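As a rough sketch of what wholesale constraint of protocol function might look like at an intermediary (the entity sets, query options, and policy choices below are invented for illustration, not taken from any product’s configuration):

```python
from urllib.parse import urlparse, parse_qs

# Illustrative policy: what an authenticated caller may touch.
ALLOWED_METHODS = {"GET"}  # read-only: no POST/PUT/DELETE reach the data
ALLOWED_ENTITY_SETS = {"Products", "Categories"}
ALLOWED_QUERY_OPTIONS = {"$filter", "$select", "$top", "$orderby"}

def gateway_allows(method: str, url: str) -> bool:
    """Pass an OData request only if it falls inside the policy."""
    if method.upper() not in ALLOWED_METHODS:
        return False
    parsed = urlparse(url)
    entity_set = parsed.path.rstrip("/").split("/")[-1].split("(")[0]
    if entity_set not in ALLOWED_ENTITY_SETS:
        return False
    # Reject any query option outside the whitelist ($expand, etc.).
    return all(opt in ALLOWED_QUERY_OPTIONS for opt in parse_qs(parsed.query))

# gateway_allows("GET", "https://host/odata.svc/Products?$top=5")  -> True
# gateway_allows("DELETE", "https://host/odata.svc/Products(1)")   -> False
```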

This is where Layer 7 can help. CloudSpan is fully compatible with OData, and can act as an independent security layer between the OData client (which may be a browser-resident application) and the OData server. It can offer not just AAA on the connection, but can narrow the OData API or mask query results based on an individual user’s entitlement.

CloudSpan Gateways managing access to OData data sources.

Here’s a real example that Julian Phillips, one of Layer 7’s architects, put together. Jules constructed a policy using the Netflix OData API, an experimental service the company has made available on the net. The Netflix API allows you to browse selections in their catalog. It has its own constraints built in—it’s already read-only, for example—but we are going to show how CloudSpan could be deployed to further constrain the API, implementing stricter security protocols and even enforcing business rules governing access.

Jules’ policy is activated on all URIs that match the /Catalog* pattern, the entry point into the Netflix OData API. This shows up in CloudSpan under the service browser:

What we are going to do here is add some security constraints, and then a business rule that restricts the ability of minors to only view movie titles with a rating of G or PG-13. Minors can build perfectly valid Netflix OData queries and submit them to the API; however, these will be trapped by the CloudSpan gateway before they get to the actual OData server.

Jules’ basic policy is quite simple. We’ve collapsed some details into folders to make the basic flow easier to understand:

First off, the policy throws an explicit audit to capture both the URI and the query string for debugging purposes. We then ensure that the connection uses SSL (subject to the cipher suite constraints currently in effect), and we mine HTTP basic credentials from the connection. Need Kerberos or SSL client-side certificate authentication instead? Just drag the assertion implementing either one into the policy and you are good to go.

The gateway then authenticates the user against a directory, and from this interaction we determine whether this user is an adult or a minor based on their group membership. If the user is indeed an adult, the gateway passes their OData query to the server unchanged. However, if the user is a minor, the gateway adds constraints to the query to ensure that the server will only return G or PG-13 movies. For reference, the full policy is below (click to expand):
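The gist of the rewrite step can be sketched in a few lines. This is my own illustration of the idea rather than Jules’ actual policy; the Rating property and its values follow the Netflix catalog schema, but treat the details as assumptions.

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

# Constraint appended for minors: only G or PG-13 titles come back.
MINOR_FILTER = "(Rating eq 'G' or Rating eq 'PG-13')"

def constrain_for_minor(url: str) -> str:
    """Rewrite an OData query so the server only returns permitted titles."""
    parts = urlparse(url)
    params = dict(parse_qsl(parts.query))
    existing = params.get("$filter")
    # AND the policy constraint onto whatever the client asked for.
    params["$filter"] = f"{existing} and {MINOR_FILTER}" if existing else MINOR_FILTER
    return urlunparse(parts._replace(query=urlencode(params)))

# constrain_for_minor("http://odata.netflix.com/Catalog/Titles?$top=10")
# appends $filter=(Rating eq 'G' or Rating eq 'PG-13') before the query
# is forwarded to the OData server (urlencode percent-escapes the result).
```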

This example is somewhat contrived, but you should be able to see how an intermediate security gateway can add critical constraints to the scope of the OData protocol. OData shows a lot of promise. But like many new technologies, it needs to be managed with care. If deployed securely, OData can become an essential tool in any web developer’s toolkit.

Upcoming Webinar: How to Implement Enterprise-Scale API Management: The Secret to Making Your Business into a Platform

Jeffrey Hammond, Principal Analyst with Forrester Research, and I will jointly deliver a webinar on Tuesday, Sept 28th at 9am Pacific time. The topic we are discussing is API management and security. We’ll look at why APIs are important, and discuss best practices for effectively leveraging them in your business.

Figure 1: The role of gateways in API management.

This promises to be a very good presentation, and I’d urge you to attend. We’re doing something a little different this time, delivering a much more interactive discussion than some of my past webinars. Since Jeffrey and I are both traveling over the next few weeks, we’ve run through our rehearsals early. The material is top-notch; Jeffrey absolutely understands the issues organizations face as they attempt to expose core business applications using APIs. We are very much on the same page, and I have a strong feeling that this is going to be a very good show. I’m looking forward to it, and I hope you can join us.

You can register for this webinar here.

How to Secure vCloud Director and the vCloud API

This year’s VMworld conference saw the announcement of VMware’s new vCloud Director product, a culmination of the vision for cloud computing the company articulated last year and a significant step forward in providing a true enterprise-grade cloud. This is virtualization 2.0—a major rethink of how IT should deliver infrastructure services. VMware believes that the secure hybrid cloud is the future of enterprise IT, and given their success of late it is hard to argue against them.

vCloud Director (vCD) is interesting because it avoids the classic virtualization metaphors rooted in the physical world—hosts, SANs, and networks—and instead promotes a resource-centric view contained within the virtual datacenter (VDC). vCD pools resources into logical groupings that carry an associated cost. This ability to monetize is important not just in public clouds, but also in private clouds that implement chargeback to enterprise business units.

Multi-tenancy is a basic assumption in the vCD universe, and the product leverages the new vShield suite to enforce isolation. Management of vCD is through the vCloud API, a technology VMware introduced a year ago, but which has now matured to version 1.0.

The product vision and implementation are impressive; however, a number of security professionals I spoke with expressed disappointment in the rudimentary security and management model for the vCloud API. vCloud is a RESTful API. It makes use of SSL, basic credentials, and cookie-based session tokens as its security model. While this is adequate for some applications, many organizations demand a more sophisticated approach to governance, buttressed with customized audit for compliance purposes. This is where Layer 7 can help.
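For reference, the login flow works roughly like the sketch below: the client presents basic credentials once, then replays a session token on every subsequent call. I’m paraphrasing the API from memory here, so treat the exact path and header name as assumptions to verify against the vCloud API 1.0 documentation.

```python
import requests

VCLOUD = "https://vcloud.example.com"  # hypothetical vCloud Director cell

# Authenticate once over SSL with HTTP basic credentials (user@org).
login = requests.post(f"{VCLOUD}/api/v1.0/login", auth=("alice@MyOrg", "s3cret"))
login.raise_for_status()

# The session token comes back in a response header and is replayed on
# every call -- whoever holds the token holds the session.
token = login.headers["x-vcloud-authorization"]
orgs = requests.get(
    f"{VCLOUD}/api/v1.0/org",
    headers={"x-vcloud-authorization": token},
)
```

That is essentially the whole model: anyone who can read the token off the wire or out of a log owns the session until it expires, which is why many organizations want stronger governance layered on top.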

Layer 7’s CloudSpan virtual gateways are the ideal solution for protecting and managing the vCloud API, vSphere, and vCloud Director. CloudSpan provides an intuitive, drag-and-drop interface for securing vCloud services and providing the visibility the modern enterprise demands. Do you need to protect the interface with two-factor authentication? A few simple clicks and you add this capability instantly—to a single API, or across a group of similar services. The CloudSpan policy language gives administrators the power to customize the access control and management of vCloud to incorporate:

  • Authentication against virtually any security token (SAML, Kerberos, X.509 certificates, OAuth, etc.)
  • Cloud single sign-on (SSO)
  • Fine-grained authorization to individual APIs
  • Fully customizable audit
  • Virtualization and masking of APIs
  • Versioning of REST and SOAP APIs beyond vCloud’s basic versioning
  • Augmentation and extension of existing vCloud functions
  • Transformation of any GET, POST, DELETE, and PUT content
  • Orchestration to create new APIs
  • Validation of XML structures such as OVF containers
  • Threat detection, including threats embedded in XML OVF files
  • Automatic failover between hosts
  • Mapping between SOAP and REST
  • JSON schema validation
  • Management of federated relationships
  • Live dashboard monitoring of API usage
  • …and more

Figure 1: vCloud Director API management and security with CloudSpan from Layer 7.

CloudSpan is the basis of real cloud governance. In contrast to other solutions that run as third-party services or attempt to broker security from your own local data center, CloudSpan runs as an integral part of the vCloud Director environment. CloudSpan runs as a VMware virtual image that is easily incorporated into any VMware virtual infrastructure. At Layer 7, we fundamentally believe that the security, monitoring, and visibility solution for cloud APIs must reside inside the cloud it is protecting—not off at some other location where the transactions it proxies are subject to attack as they traverse the open Internet. Local integration of the security solution as an integral part of the cloud infrastructure is the only way to properly secure cloud APIs with sophisticated access control and to offer protection against denial-of-service (DoS) attacks.

For more information about how to secure and manage the vCloud API and vCloud Director, please see the cloud solutions page at Layer 7 Technologies.

Why Health Care Needs SOA

A recent article in DarkReading offers a powerful argument as to why the health care sector desperately needs to consider Service Oriented Architecture (SOA). In her piece Healthcare Suffers More Data Breaches Than Financial Services So Far This Year, Ericka Chickowski cites a report indicating that security breaches in health care appear to be on the rise this year, with that sector reporting over three times more security incidents than the financial services industry.

I worked for many years in a busy research hospital, and frankly this statistic doesn’t surprise me; health care has all of the elements that lead to a perfect storm of IT risk. If there is one sector that could surely benefit from adopting SOA as a pretext to re-evaluate security as a whole, it is health care.

Hospitals and the health care ecosystem that surrounds them are burdened with some of the most heavily siloed IT I have ever seen. There are a number of reasons why this is so, not the least of which is politics that often appear inspired by the House of Borgia. But the greatest contributing factor is the proliferation of single-purpose, closed, and proprietary systems. Even the simplest portable x-ray machine has a tremendously sophisticated computer system inside of it. The Positron Emission Tomography (PET) systems that I worked on included racks of what were at the time state-of-the-art vector processors, used to reconstruct massive raw data sets into understandable images. Most hospitals have been collecting systems like this for years, and are left with a curiosity cabinet of samples representing different brands and extant examples of nearly every technological fad since the 1970s.

I’m actually sympathetic to the vendors here, because their products have to serve two competing interests. The vendors need to package the entire system into a cohesive whole with minimal ins and outs, to ensure it can reasonably pass the rigorous compliance testing necessary for new equipment. The more open a system is, the harder it is to control the potential variables, which is a truism in the security industry as well. Even something as simple as OS patching needs to go through extensive validation because the stakes are so high. The best way to manage this is to close up the system as much as reasonably possible.

In the early days of medical electronics, diagnostic systems were very much standalone, and this strategy was perfectly sound. Today, however, there is a need to share and consolidate data to potentially improve diagnosis. This means opening systems up—at least to allow access to the data, which, when approached from the perspective of traditional, standalone systems, usually means a pretty rudimentary export. While medical informatics has benefited somewhat from standardization efforts, medical systems still generally reduce to islands of data connected by awkward bridges—and it is out of this reality that security issues arise.

Chickowski’s article echoes this, stating:

To prevent these kinds of glaring oversights, organizations need to find a better way to track data as it flows between the database and other systems.

SOA makes sense in health care because it allows for the effective compartmentalization of services—be they MRI scanners, lab results, or admission records—governed in a manner consistent with an overall security architecture. Good SOA puts security and governance up front. It provides a consistent framework that avoids the patchwork solutions that too easily mask significant security holes.

A number of forward-looking health providers have adopted a SOA strategy with very positive results. Layer 7 Technologies partnered with the University of Chicago Medical Center (UCMC) to build a SOA architecture for securely sharing clinical patient data with their research community. One of the great challenges in any medical research is to gather sample populations that are statistically significant. Hospitals collect enormous volumes of clinical data each day, but often these data cannot be shared with research groups because of issues around compliance, control of collection, patient consent, etc. UCMC uses Layer 7’s SecureSpan Gateways as part of its secure SOA fabric to isolate patient data into zones of trust. SecureSpan enforces a boundary between clinical and research zones. In situations where protocols allow clinical data to be shared with researchers, SecureSpan authorizes its use. SecureSpan even scrubs personally identifiable information from clinical data—effectively anonymizing the data set—so that it can be ethically incorporated into research protocols.
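Anonymization of this kind boils down to stripping or masking identifying fields before a record crosses the trust boundary. Here is a toy sketch of the idea (the field names are invented, and a real policy is far more involved, since quasi-identifiers such as birth dates can re-identify patients):

```python
import hashlib

# Fields that directly identify a patient must never cross the boundary.
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "address", "phone"}

def anonymize(record: dict, salt: str) -> dict:
    """Strip direct identifiers, replacing the MRN with a salted hash so
    researchers can still correlate records from the same (unknown) patient."""
    scrubbed = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    scrubbed["subject_id"] = hashlib.sha256(
        (salt + record["mrn"]).encode()
    ).hexdigest()[:16]
    return scrubbed

clinical = {"mrn": "12345", "name": "Jane Doe", "diagnosis": "C34.1", "age": 62}
print(anonymize(clinical, salt="per-study-secret"))
# {'diagnosis': 'C34.1', 'age': 62, 'subject_id': '...'}
```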

The UCMC use case is a great example of how SOA can be a protector of information, promoting the valuable use of data while ensuring that only the right people have access to the right views of that information.

To learn more about this use case, take a look at the detailed description available on the Layer 7 Technologies web site.

Timing Side Channel Attacks

I had an interesting discussion with Bob McMillan of IDG yesterday about the potential for timing attacks in the cloud. Timing attacks are a kind of side channel attack based on the observed behavior of a cryptographic system when fed certain inputs. Given enough determinism in the response time of the system, it may be possible to crack the cryptosystem using a statistical sampling of its response times taken over many transactions.
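The classic illustration is a comparison routine that returns early on the first mismatching byte: response time then leaks how much of a guess is correct, and an attacker can recover a secret one byte at a time from enough samples. A minimal sketch of the leak, and the standard mitigation:

```python
import hmac

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    # Returns on the FIRST mismatching byte, so a guess with a longer
    # correct prefix takes measurably longer to reject.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # mismatch occurs, removing the timing signal.
    return hmac.compare_digest(secret, guess)
```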

Bob was interested in my thoughts about the threat this attack vector represents to cloud-resident applications. It’s an interesting question, because I think that the very characteristics of the cloud that people so often criticize when discussing security—that is, multi-tenancy and the obfuscation of actual physical resources by providers—actually work to mitigate this attack because they add so much non-deterministic jitter to the system.

Bob’s excellent article got picked up by a number of sources, including ComputerWorld, LinuxSecurity, and InfoWorld. It’s also been picked up by the mainstream media, including both the San Francisco Chronicle and the New York Times.

WS-I Publishes Basic Security Profile (BSP) 1.1

This morning the Web Services Interoperability Organization (WS-I) published the Basic Security Profile (BSP), version 1.1. This is a very significant milestone, because BSP is the definitive reference that we will all use to create secure and interoperable Web services for the foreseeable future.

I have a close personal connection to Basic Security Profile. I was one of the editors of both this specification and its predecessor, BSP 1.0. It took a lot of hard work to get this far, but the results of our working group’s labours are important for the community. We all use Web services because of their potential for interoperability, but interoperability is only practical with the formalization that WS-I offers.

People have questioned the need for BSP in a world that already has OASIS WS-Security and the handful of WS-* standards that relate to it. The truth is, formal standards like WS-Security are so broad, complex, and new that it just isn’t realistic to expect vendor implementations to integrate perfectly. The WS-I approach differs from conventional standards efforts because the focus is strictly on promoting much-needed interoperability. WS-I does not define new functionality, nor does it attempt to show people the “correct” way to use existing standards; it exists to profile existing standards by refining messages, amplifying important statements, and clarifying ambiguities so that systems from different vendors can communicate securely.

WS-I promotes interoperability by providing three important components: a profile of an existing standard (or set of standards), a suite of test tools to validate conformance, and finally sample applications. Microsoft’s Paul Cotton, the chair for the BSP working group, likens this to a three-legged stool—the effort can only stand when all three legs are present. This holistic approach taken by the WS-I distinguishes it from most standards efforts by including both specification and reference.

In the case of BSP 1.1, a very important component of the effort was the vendor interop. Six companies participated in this:

This is your short list of the vendors who are serious about Web services security. Obviously, we are the odd man out in terms of size and market reach, but we believe it’s important to drive the standards, not just implement them. And this is something we will continue to do. The days of big WS-* standards efforts are over. With the release of BSP, we have what we need to secure SOA. The next big challenge will be standardization of the cloud, and Layer 7 will be there.

Upcoming Webinar: Security in the Cloud vs Security for the Cloud

I was speaking recently to Steve Coplan, Senior Analyst, Enterprise Security Practice at the 451 Group. I always enjoy talking to Steve. He has a deep understanding of technology and our business, but it’s his training as a journalist that I think sets him apart from the other analysts. His work comes through as erudite but accessible, and it is always very well written.

In our discussion, Steve was careful to make a clear distinction between security in the cloud and security for the cloud. This intrigued me, because I think the difference is too often lost when people talk about cloud in the abstract. Steve’s point became the topic of a webinar that he and I will deliver together this Thursday, March 25, 2010 at 12:00pm EDT/9:00am PDT/4:00pm GMT.

I hope you can join us to learn why this distinction is so important. You can sign up for this webinar at the Layer 7 Technologies web site.

Why Intermediaries Matter in SOA

Last week Joe McKendrick from ZDNet asked the question: are SOA anti-principles more important than success principles? The idea of anti-principles came from Steve Jones, who a few years back did some nice work documenting SOA anti-patterns. In a post published last fall, Steve builds on his ideas, observing:

The problem is that there is another concept that is rarely listed, what are your anti-principles?

which is one of those good questions that should give you pause.

Steve continues:

In the same way as Anti-Patterns give you pointers when its all gone wrong then Anti-Principles are the things that you will actively aim to avoid during the programme.

I found this interesting because one of the anti-principles the post lists is direct calling. Steve describes this bad practice as follows:

This anti-principle is all about where people just get a WSDL and consume it directly without any proxy or intermediary. It’s programme suicide and it shouldn’t be done.

Naturally, because I’m in the business of building intermediaries, this seemed perfectly reasonable to me. But on reflection, I think that the argument as to why direct calling is an anti-principle needs further explanation.

Indirection is one of the great principles of computer science. Indirection lets us decouple layers, allowing them to change independently as long as they honour the interface contract. Intermediary layers in SOA, a good example being a proxy like Layer 7’s SecureSpan Gateway, build on this concept, allowing architects to decouple service providers from consumers—much as Steve advocates in his post. This layer of indirection means that we can tease out certain highly parameterizable aspects of communication—aspects such as security, monitoring, protocol adaptation, and routing—into a separate policy layer that promotes consistency, affords the opportunity for reuse, and insulates clients (and servers) from change.

This is best illustrated by example. Suppose I have two services: foo and bar. Both services have a number of clients that access them. To explore the issues with direct connection, let’s consider the scenario where all of these clients establish direct connections with my services:

The first thing you should notice is that the firewall is open to allow the external clients to make their direct connections with the service hosts. In other words, these hosts are, for all intents and purposes, deployed in the DMZ and must be hardened under that assumption. For many applications, this is a non-trivial exercise. Hopefully, your internal alarm bells are going off already.

Few applications remain completely static over their lifetime. Patches become necessary; hardware fails and must be replaced—all of this is part of the natural life cycle of software. My services foo and bar are no exception. One day, the server hosting foo starts to fail, and I find myself needing to move the foo service onto a new host quickly. Suddenly, all of my clients are broken:

Now I have a very significant problem. I need to update the URLs on every client I serve, and to do it quickly. Every minute I’m down, I’m losing business. Welcome to the pressure cooker.

This potential problem would be easy to manage if I had an intermediary, operating as a policy-driven proxy that is placed in the DMZ between my clients and my services:

This proxy now handles URL-based routing on the fly. If foo moves, it’s a simple matter of modifying the internal routing on the intermediary and voilà: no client ever has a problem. My client base is completely insulated from a major structural change to my back-end service hosts.
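At its core, this routing layer is little more than a lookup table that can change while the published URLs stay fixed. A minimal sketch (hostnames invented):

```python
# The URLs clients are given never change; only this table does.
ROUTES = {
    "/foo": "http://app-host-1.internal:8080",
    "/bar": "http://app-host-2.internal:8080",
}

def resolve(path: str) -> str:
    """Map an inbound request path to the current back-end host."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"no route for {path}")

# When foo's hardware fails, one entry changes on the intermediary...
ROUTES["/foo"] = "http://app-host-3.internal:8080"
# ...and every client keeps calling https://gateway.example.com/foo as before.
```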

Of course, there are tricks employing HTTP redirects, common NAT, or more dynamic bindings to URLs that we could use to avoid such a contrived problem in the first place. But what if the change were to something less configurable by conventional means, such as the basic security model for communication? Suppose that, as a corporation, we decide to mandate that all clients must now authenticate using client-side certificates under SSL. Foo is running on a Java application server, bar is on .NET; both are capable of accommodating this new model, but their administration is radically different. And to make matters worse, I have a dozen or so additional apps implemented in everything from Ruby on Rails to PHP that I also need to change. That’s a lot of work.

An intermediary would make this task trivial by insulating services from this change to policy. The strategy here is to terminate the SSL connection and authenticate the client on the intermediary instead of on the service hosts. A few clicks of a mouse later, and my job is complete for every service.
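In infrastructure terms, terminating mutual SSL at the intermediary looks something like the sketch below; the file paths and port are placeholders, and Python’s ssl module stands in for whatever the gateway actually uses.

```python
import socket
import ssl

# The intermediary -- not foo or bar -- demands the client certificate.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("/etc/gateway/server.pem")           # gateway's identity
context.load_verify_locations("/etc/gateway/client-ca.pem")  # trusted client CA
context.verify_mode = ssl.CERT_REQUIRED  # mutual SSL: no cert, no connection

listener = socket.create_server(("0.0.0.0", 443))
with context.wrap_socket(listener, server_side=True) as tls:
    conn, addr = tls.accept()  # handshake fails here without a valid client cert
    # Authenticated traffic is then proxied to foo or bar over the internal
    # network; the services never need to know this policy exists.
```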

This certainly saves time and adds consistency, but the real value is in the change of responsibility. The task of implementing this security model now falls under the jurisdiction of a professional security administrator, not the developers of each separate application. In fact, no code or configuration needs to change on foo, bar, or any of my services. The security model is decoupled from the application, taken out of the hands of each developer and centralized. This is the basic value proposition of intermediaries in SOA, and this value is never realized effectively if you allow direct connections between clients and servers. This is why architectural patterns are sometimes necessary to allow us to be consistent with our principles—or our anti-principles, as the case may be.

Interested in trying an intermediary? You can get a Layer 7 SecureSpan virtual appliance to try out at http://www.layer7tech.com. Alternatively, do your evaluation completely in the cloud: check out the SecureSpan virtual appliance gateway on the Amazon marketplace. This virtual appliance AMI runs in the EC2 cloud on Amazon Web Services. It is the first and only SOA gateway to run in the cloud.