Tag Archives: Threats

What We Should Learn From the Apple SSL Bug

Two years ago a paper appeared with the provocative title “The Most Dangerous Code in the World.” Its subject? SSL, the foundation for secure e-commerce. The world’s most dangerous software, it turns out, is a technology we all use on a more or less daily basis.

The problem the paper described wasn’t an issue with the SSL protocol, which is a solid and mature technology, but with the client libraries developers use to start a session. SSL is easy to use, but you must be careful to set it up properly. The authors found that many developers aren’t so careful, leaving their applications open to exploit. Most of these mistakes are elementary, such as failing to fully validate server certificates and trust chains.
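To make the failure concrete, here is a minimal sketch (in Python, simply because it is compact; the paper surveyed many languages and libraries) of what careful client-side setup looks like, alongside the shortcut that quietly disables validation. The host name is just an example.

```python
import socket
import ssl

# A minimal sketch of careful SSL/TLS client setup using Python's standard
# library. create_default_context() turns on certificate chain validation
# and hostname checking by default.
def open_secure_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    context = ssl.create_default_context()  # CERT_REQUIRED, check_hostname=True

    # The elementary mistakes described above amount to doing this instead:
    #   context.check_hostname = False
    #   context.verify_mode = ssl.CERT_NONE
    # which silently accepts any certificate, including an attacker's.

    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)

if __name__ == "__main__":
    with open_secure_connection("example.com") as tls:
        print(tls.getpeercert()["subject"])
```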

Another dramatic example of the pitfalls of SSL emerged this past weekend as Apple issued a warning about an issue discovered in its own SSL libraries on iOS. The problem appears to stem from a spurious “goto fail” statement that crept into the source code, likely the result of a bad copy and paste. Ironically, fail is exactly what this extra code did: clients using the library failed to completely validate server certificates, leaving them vulnerable to exploit.

The problem should have been caught in QA; obviously, it wasn’t. The lesson to take away here is not that Apple is bad—they responded quickly and efficiently, the way they should—but that even the best of the best sometimes make mistakes. Security is just hard.

So if security is this hard, and people will always make mistakes, how should we protect ourselves? The answer is to simplify. Complexity is the enemy of good security because complexity masks problems. We need to build our security architectures on basic principles that promote peer-reviewed validation of configuration as well as continuous audit of operation.

Despite this very public failure, it is safe to rely on SSL as a security solution, but only if you configure it correctly. SSL is a mature technology, and it is unusual for problems to appear in libraries. But this weekend’s event does highlight the uncomfortable degree of trust we necessarily place in third-party code. Obviously, we need to invest our trust carefully. But we also must recognize that bugs happen, and the real test is how effectively we respond when exploits appear and patches become available. Simple architectures work in our favour when the zero-day clock starts ticking.

On Monday at the RSA Conference, CA Technologies announced the general availability of our new SDK for securing mobile transactions. We designed this SDK with one goal: to make API security simpler for mobile developers. We do this by automating authentication and the setup of secure connections with API servers. If developers are freed from tedious security programming, they are less likely to do something wrong—however simple the configuration may appear. In this way, developers can focus on building great apps instead of worrying about security minutiae.

In addition to offering secure authentication and communications, the SDK also provides secure single sign on (SSO) across mobile apps. Say the word SSO and most people instinctively picture one browser authenticating across many web servers. This common use case defined the term. But SSO can also be applied to the client apps on a mobile device. Apps are very independent in iOS and Android, and sharing information between them, such as an authentication context, is challenging. Our SDK does this automatically, and securely, providing a VPN-like experience for apps without the very negative user experience of mobile VPNs.

Let me assure you that this is not yet another opaque, proprietary security solution. Peel back the layers of this onion and you will find a standards-based OAuth+OpenID Connect implementation. We built this solution on top of the SecureSpan Gateway’s underlying PKI system and we leveraged this to provide increased levels of trust.

If you see me in the halls of the RSA Conference, don’t hesitate to stop me and ask for a demo. Or drop by the CA Technologies booth where we can show you this exciting new technology in action.


Over to You, Rush

Last week’s New York Times article on hacking the new gadgets, including Web-enabled HDTVs, generated an enormous amount of interest. An AM radio station in Tampa Bay, 970 WFLA AM, invited me onto their morning commuter show, AM Tampa Bay with Tedd Webb and Mark Larsen (standing in for Jack Harris). This was a great opportunity to reach a much wider audience than I usually speak to. So I woke up at 4am to an icy-cold Vancouver morning, and tried my best to imagine the sunshine and palm trees in far away Tampa.

It was a great, fast-paced discussion. Tedd and Mark are the real thing—listening to the interview makes me wish I had been born with that radio voice. Those guys are total pros.

After my five minutes of fame, the station filled the air for the rest of the day with the giants of conservative talk radio, including Glenn Beck, Rush Limbaugh, Sean Hannity, and Mark Levin.

You can listen to the interview here.

Is Web-connected TV the New Power Play for Hackers?

Over the holidays I had the good fortune of being quoted in the New York Times. This came out of an interview I had with NYT writer Ashlee Vance about the recent discovery of security weaknesses in Web-connected HDTVs. Researchers at Mocana, a security services company in the Bay Area, identified a number of vulnerabilities in one of the most popular Internet-enabled televisions. This is the first major security incident for a product category that is very likely to become wildly popular, but I doubt it will be the last.

In the hacking community, cracked systems equal power. Such power may be tangible, such as a botnet available for hire, or simply the social power derived from compromising a particularly high-profile target. But as more interesting devices appear on the Internet—such as smart phones, TVs, and even refrigerators—there will be an inevitable shift in focus within the hacking community toward them, because these new devices represent enormous potential for the consolidation of new power.

The motivation to attack connected devices isn’t simply to target a new platform that might contain trivial vulnerabilities (though for some, this may be enough). The real attraction here is the sheer number of nodes; this, fundamentally, is about volume. It is estimated that  by the end of 2010, Apple will have shipped around 75 million iPhones. (To put this number into perspective, by July 2010, Microsoft announced it had shipped 150M units of Windows 7.) The iPhone alone represents an enormous injection of computing power onto the Internet, delivered over the course of only 3 1/2 years.

Now, the iPhone happens to be a remarkably stable and secure platform, thanks in part to Apple’s rigid curation of the hardware, software, and surrounding app ecosystem. But what is interesting to note is how quickly a new Internet platform can spread, and how much of the total global computing horsepower it can come to represent. The consumer world, by virtue of its size, its fads and caprice, and its unprecedented spending power, can shift the balance of computing power in months (iPads, anyone?). Today the connected-device explosion centers on mobile phones, but tomorrow it could easily be web-connected TVs, smart power meters, or iToilets. This radical change in Internet demographics—from servers and desktops to things and mobile—will prove irresistible to the hacking community.

What is troubling about the vulnerabilities Mocana found is how simple they were. Device manufacturers must place far greater emphasis on basic system security. What will happen when the next wildly popular consumer device is exposed to the full cutting-torch of hacker attention? It is going to be an interesting decade…

Virtualization vs. the Auditors

Virtualization, in the twilight of 2010, has become at last a mainstream technology. It enjoys widespread adoption across every sector—except those, that is, in which audit plays a primary role. Fortunately, 2011 may see some of these last bastions of technological conservatism finally accept what is arguably the most important development in computing in the 21st century.

The problem with virtualization, of course, is that it discards the simple and comfortable division between physical systems, replacing this with an abstraction that few of us fully understand. We can read all we want about the theory of virtualization and the design of a modern hypervisor, but in the end, accepting that there exists provable isolation between running images requires a leap of faith that has never sat well with the auditing community.

To be fair to the auditors, when your career depends on making a judgment about the relative security risk of a system, there is a natural and understandable tendency toward caution. Change may be fast in matters of technology, but human process is tangled up with complexities that just take time to work through.

The payment card industry may be leading the way toward legitimizing virtualization in architectures subject to serious compliance concerns. PCI 2.0, the latest version of the security requirements governing payment card processing, comes into effect on Jan 1, 2011, and provides specific guidance to the auditing community about how to approach virtualization without simply rejecting it outright.

Charles Babcock, in InformationWeek, published a good overview of the issue. He writes:

Interpretation of the (PCI) version currently in force, 1.2.1, has prompted implementers to keep distinct functions on physically separate systems, each with its own random access memory, CPUs, and storage, thus imposing a tangible separation of parts. PCI didn’t require this, because the first regulation was written before the notion of virtualization became prevalent. But the auditors of PCI, who pass your system as PCI-compliant, chose to interpret the regulation as meaning physically separate.

The word that should pique interest here is interpretation. As Babcock notes, existing (pre-2.0) PCI didn’t require physical separation between systems, but because this had been the prevailing means to implement provable isolation between applications, it became the sine qua non of every audit.

The issue of physical isolation compared to virtual isolation isn’t restricted to financial services. The military advocates similar design patterns, with tangible isolation between entire networks that carry classified and unclassified data. I attended a very interesting panel at VMworld back in September that discussed virtualization and compliance at length. One anecdote that really struck me came from Michael Berman, CTO of Catbird. Michael is a veteran of many ethical hacking engagements. In his experience, nearly every official inventory of applications his clients possessed was incomplete, and it is the overlooked applications that are most often the weakness to exploit. The virtualized environment, in contrast, offers an inventory of running images under control of the hypervisor that is necessarily complete. There may still be weak (or unaccounted for) applications residing on the images, but this is nonetheless a major step forward because it provides an accurate big picture for security and operations. Ironically, virtualization may be a means to more secure architectures.

I’m encouraged by the steps taken with PCI 2.0. Fighting audits is nearly always a losing proposition, or is at best a Pyrrhic victory. Changing the rules of engagement is the way to win this war.

Why Health Care Needs SOA

A recent article in DarkReading offers a powerful argument as to why the health care sector desperately needs to consider Service Oriented Architecture (SOA). In her piece Healthcare Suffers More Data Breaches Than Financial Services So Far This Year, Erika Chickowski cites a report indicating that security breaches in health care appear to be on the rise this year, with that sector reporting over three times more security incidents than the financial services industry.

I worked for many years in a busy research hospital, and frankly this statistic doesn’t surprise me; health care has all of the elements that lead to the perfect storm of IT risk. If there is one sector that could surely benefit from adopting SOA as a pretext to re-evaluate security as a whole, it is health care.

Hospitals and the health care ecosystem that surrounds them are burdened with some of the most heavily siloed IT I have ever seen. There are a number of reasons why this is so, not the least of which is politics that often appear inspired by the House of Borgia. But the greatest contributing factor is the proliferation of single-purpose, closed, and proprietary systems. Even the simplest portable x-ray machine has a tremendously sophisticated computer system inside it. The Positron Emission Tomography (PET) systems that I worked on included racks of what at the time were state-of-the-art vector processors used to reconstruct massive raw data sets into understandable images. Most hospitals have been collecting systems like this for years, and are left with a curiosity cabinet of different brands and extant examples of nearly every technological fad since the 1970s.

I’m actually sympathetic to the vendors here because their products have to serve two competing interests. The vendors need to package the entire system into a cohesive whole with minimal ins and outs, to ensure it can reasonably pass the rigorous compliance testing required for new equipment. The more open a system is, the harder it is to control the potential variables, which is a truism in the security industry as well. Even something as simple as OS patching needs to go through extensive validation because the stakes are so high. The best way to manage this is to close up the system as much as reasonably possible.

In the early days of medical electronics, diagnostic systems were very much standalone, and this strategy was perfectly sound. Today, however, there is a need to share and consolidate data to potentially improve diagnosis. This means opening systems up—at least to allow access to the data—which, when approached from the perspective of traditional, standalone systems, usually means a fairly rudimentary export. While medical informatics has benefited somewhat from standardization efforts, medical systems still generally reduce to islands of data connected by awkward bridges—and it is out of this reality that security issues arise.

Chickowski’s article echoes this, stating:

To prevent these kinds of glaring oversights, organizations need to find a better way to track data as it flows between the database and other systems.

SOA makes sense in health care because it allows for effective compartmentalizing of services—be these MRI scanners, lab results, or admission records—that are governed in a manner consistent with an overall security architecture. Good SOA puts security and governance upfront. It provides a consistent framework that avoids the patchwork solutions that too easily mask significant security holes.

A number of forward-looking health providers have adopted a SOA strategy with very positive results. Layer 7 Technologies partnered with the University of Chicago Medical Center (UCMC) to build an SOA for securely sharing clinical patient data with their research community. One of the great challenges in any medical research is to gather sample populations that are statistically significant. Hospitals collect enormous volumes of clinical data each day, but often these data cannot be shared with research groups because of issues in compliance, control of collection, patient consent, etc. UCMC uses Layer 7’s SecureSpan Gateways as part of its secure SOA fabric to isolate patient data into zones of trust. SecureSpan enforces a boundary between clinical and research zones. In situations where protocols allow clinical data to be shared with researchers, SecureSpan authorizes its use. SecureSpan even scrubs personally identifiable information from clinical data—effectively anonymizing the data set—so that it can be ethically incorporated into research protocols.
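To give a sense of what that scrubbing step involves, here is a small, purely illustrative Python sketch. The field names are hypothetical, and this is not how SecureSpan implements it; in the product the transformation is driven by policy.

```python
import copy

# Hypothetical identifying fields, for illustration only; a real deployment
# would be driven by policy and the applicable privacy regulations.
IDENTIFYING_FIELDS = {"patient_name", "ssn", "address", "phone", "mrn"}

def anonymize(record: dict) -> dict:
    """Return a copy of a clinical record with direct identifiers removed and
    the date of birth coarsened to a year, suitable for the research zone."""
    scrubbed = copy.deepcopy(record)
    for field in IDENTIFYING_FIELDS:
        scrubbed.pop(field, None)
    if "date_of_birth" in scrubbed:
        scrubbed["birth_year"] = scrubbed.pop("date_of_birth")[:4]  # keep year only
    return scrubbed

record = {"patient_name": "Jane Doe", "mrn": "12345",
          "date_of_birth": "1971-03-09", "diagnosis": "C34.1"}
print(anonymize(record))  # {'diagnosis': 'C34.1', 'birth_year': '1971'}
```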

The UCMC use case is a great example of how SOA can be a protector of information, promoting the valuable use of data while ensuring that only the right people have access to the right views of that information.

To learn more about this use case, take a look at the detailed description available on the Layer 7 Technologies web site.

Timing Side Channel Attacks

I had an interesting discussion with Bob McMillan of IDG yesterday about the potential for timing attacks in the cloud. Timing attacks are a kind of side channel attack based on the observed behavior of a cryptographic system when it is fed certain inputs. Given enough determinism in the response time of the system, it may be possible to crack the cryptosystem based on a statistical sampling of its response times taken over many transactions.
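The canonical example is a secret comparison that exits at the first mismatched byte: the more of the guess that is correct, the longer the check takes. A minimal Python sketch of the leaky pattern and the constant-time alternative:

```python
import hmac

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    # Returns at the first mismatch, so running time reveals how many leading
    # bytes of the guess are correct; sampled over many transactions, that is
    # enough to recover the secret byte by byte.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the first
    # mismatch occurs, removing the timing signal.
    return hmac.compare_digest(secret, guess)
```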

Bob was interested in my thoughts about the threat this attack vector represents to cloud-resident applications. It’s an interesting question, because I think that the very characteristics of the cloud that people so often criticize when discussing security—that is, multi-tenancy and the obfuscation of actual physical resources by providers—actually work to mitigate this attack because they add so much non-deterministic jitter to the system.

Bob’s excellent article got picked up by a number of sources, including ComputerWorld, LinuxSecurity, and InfoWorld. It’s also been picked up by the mainstream media, including both the San Francisco Chronicle and the New York Times.

The Seven Deadly Sins: The Cloud Security Alliance Identifies Top Cloud Security Threats

Today marks the beginning of the RSA Conference in San Francisco, and the Cloud Security Alliance (CSA) has been quick out of the gate with the release of its Top Threats to Cloud Computing Report. This peer-reviewed paper characterizes the top seven threats to cloud computing, offering examples and remediation steps.

The seven threats identified by the CSA are:

  1. Abuse and Nefarious Use of Cloud Computing
  2. Insecure Application Programming Interfaces
  3. Malicious Insiders
  4. Shared Technology Vulnerabilities
  5. Data Loss/Leakage
  6. Account, Service, and Traffic Hijacking
  7. Unknown Risk Profile

Some of these will certainly sound familiar, but the point is to highlight threats that may be amplified in the cloud, as well as those that are unique to the cloud environment.

This CSA threats report is a true community effort. The working group had representatives from a broad range of cloud providers, infrastructure vendors, and cloud customers, including:

  • HP
  • Oracle
  • Bank of America
  • Microsoft
  • Rackspace
  • Verizon
  • Cigital
  • Qualys
  • Trend Micro
  • Websense
  • Zscaler
  • CloudSecurity.org
  • Cloud Security Alliance
  • Layer 7 Technologies

I represented Layer 7. I tackled Data Loss/Leakage, and performed some editorial work on the paper as a whole. As working groups go, I can tell you that this one simply worked well. I’ve been involved with a number of standards groups in the past, but this time we seemed to have all of the right people involved. The group converged on the key issues quickly and decisively. It was a good process, and I’m happy with the results.

One thing we did debate was how best to rate each threat. We finally agreed that the best approach was to let the community decide. You may recall that last week I wrote a blog entry soliciting your input to help classify threat severity. Well, the results are in, and they are certainly interesting. Perhaps not surprisingly, the threat of Data Loss/Leakage leads the community’s list of concerns, at around 28%. But what is more intriguing is that there really isn’t much difference between the perceived impact of any threat on the list (all fall between roughly 8% and 28%). This is encouraging, as it suggests that we nailed the current zeitgeist in our list. It is just a little disconcerting that there remain seven significant threats to consider.

The latest survey results, and the threats paper itself, are available from the CSA web site. Bear in mind that this is evolving work. The working group intends to update the list regularly, so if you would like to make a contribution to the cloud community, please do get involved. And remember: CSA membership is free to individuals; all you need to give us is your time and expertise.

You Can Help the Cloud Security Alliance Classify the Top Threats in the Cloud

The Cloud Security Alliance (CSA) needs your help to better understand the risk associated with cloud threats. Earlier this year, the CSA convened a working group with the mandate to identify the top threats in the cloud. This group brought together a diverse set of security and cloud experts, including myself representing Layer 7. Our group identified seven major threats that exist in the cloud, but now we would like to gauge how the community as a whole perceives the risk these threats pose.

I would like to invite you to participate in a short survey so we can get your input. This should only take you about 5 minutes to complete. We intend to work the results of this survey into the CSA Top Threats to Cloud Computing document. This will be formally unveiled at the Cloud Security Alliance Summit, which is part of next week’s RSA conference in San Francisco.

Help us to make the cloud a safer place by identifying and characterizing its greatest threats. Share this survey link with your colleagues. The more participation we can get, the better our results will be, and the stronger the work will become.

You will find our survey here.

How to Safely Publish Internal Services to the Outside World

So you’ve bought into the idea of service-orientation. Congratulations. You’ve begun to create services throughout your internal corporate network. Some of these run on .NET servers; others are Java services; still others are Ruby-on-Rails—in fact, one day you woke up and discovered you even have a mainframe service to manage. But the question you face now is this: how can all of these services be made available to consumers on the Internet? And more important, how can you do it securely?

Most organizations buffer their contact with the outside world using a DMZ. Externally facing systems, such as web servers, live in the DMZ. They mediate access to internal resources, implementing—well, hopefully implementing—a restrictive security model. The DMZ exists to create a security air gap between protocols. The idea is that any system deployed into the DMZ is hardened, resilient, and publishes a highly constrained API (in most cases, a web form). To access internal resources, you have to go through this DMZ-based system, and this system provides a restricted view of the back-end applications and data that it fronts.

The DMZ represents a challenge for publishing services. If services reside on internal systems, how can external clients get through the DMZ and access the service?

Clearly, you can’t simply start poking holes in firewall #2 to allow external systems to access your internal providers directly; this would defeat the entire purpose of the DMZ security model. But this is exactly what some vendors advocate. They propose that you implement local security agents that integrate into the container of the internal service provider. These agents implement policy-based security—essentially taking on the processing burden of authentication, authorization, audit, confidentiality, integrity, and key management. While this may seem attractive, as it does decouple security into a purpose-built policy layer, it has some very significant drawbacks. The agent model essentially argues that once the internal policy layer is in place, the internal service provider is ready for external publication. But this implies poking holes in the DMZ, which is a bad security practice. We have firewalls precisely because we don’t want to harden every internal system to DMZ-class resiliency. An application-layer policy agent does nothing to defeat OS-targeted attacks, which means every service provider would need to be sufficiently locked down and maintained. This becomes unmanageable as the server volume grows, and completely erodes the integrity of firewall #2.

Furthermore, in practice, agents just don’t scale well. Distribution of policy among a large number of distributed agents is a difficult problem to solve. Policies rapidly become unsynchronized, and internal security practices are often compromised just to get this ponderous, interdependent system to work.

At Layer 7 we advocate a different approach to publishing services that is both scalable and secure. Our flagship product, the SecureSpan Gateway, is a security proxy for Web services, REST, and arbitrary XML and binary transactions. It is a hardened hardware or virtual appliance that can be safely deployed in the DMZ to govern all access to internal services. It acts as the border guard, ensuring that each transaction going in or out of the internal network conforms to corporate policy.

SecureSpan Gateways act as a policy air-gap that constrains access to back end services through a rich policy-based security model. This integrates consistently with the design philosophy of the DMZ. Appliances are hardened so they can withstand Internet-launched attacks, and optimized so they can scale to enormous traffic loads. We built full clustering into SecureSpan in the first version we released, close to eight years ago. This ensures that there is no single point of failure, and that systems can be added to accommodate increasing loads.

The separate policy layer—and the policy language that defines this—is the key to the security model and is best illustrated using a real example. Suppose I have a warehouse service in my internal network that I would like to make available to my distributors. The warehouse service has a number of simple operations, such as inventory queries and the ability to place an order. I’ll publish this to the outside world through a SecureSpan Gateway residing in the DMZ, exactly as shown in the diagram above.

SecureSpan provides a management console used to build the policies that govern access to each service. Construction of the initial policy is made simple using a wizard that bootstraps the process using the WSDL, which is a formal service description for my warehouse service. The wizard allows me to create a basic policy in three simple steps. First, I load the WSDL:

Next, I declare a basic security model. I’ll keep this simple, and just use SSL for confidentiality, integrity, and server authentication. HTTP basic authentication will carry the credentials, and I’ll only authorize access to myself:

If this policy sounds familiar, it’s because it’s the security model for most web sites. It turns out that this is a reasonable model for many XML-based Web services as well.
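From the client’s point of view the model really is that plain. As a rough sketch only (the endpoint URL, namespace, and credentials are all hypothetical), a consumer of the published service might do something like this with Python’s requests library:

```python
import requests

# Hypothetical gateway endpoint: SSL is terminated at the gateway, which checks
# the HTTP Basic credentials before routing to the internal warehouse service.
GATEWAY_URL = "https://api.example.com/warehouse"

INVENTORY_REQUEST = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <listProducts xmlns="http://warehouse.example.com/ws"/>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    GATEWAY_URL,
    data=INVENTORY_REQUEST,
    headers={"Content-Type": "text/xml"},
    auth=("scott", "s3cret"),   # HTTP Basic credentials, protected by SSL
    verify=True,                # validate the gateway's server certificate
    timeout=10,
)
response.raise_for_status()
print(response.text)
```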

Finally, I’ll define a proxy routing to get to my internal service, and an access control model once there. In this example, I will just use a general account. Under this model, the service trusts the SecureSpan Gateway to authenticate and authorize users on its behalf:

You may have noticed that this assumes that the warehouse service doesn’t need to know the identity of the original requester—that is, Scott. If the service did need this, there are a number of ways to communicate my identity claim downstream to the service, using techniques like SAML, IBM’s Trust Association Interceptor (TAI), proxied credentials, or various other tricks that I won’t cover here.

The wizard generates a simple policy for me that articulates my simple, web-oriented security model. Here’s what this policy looks like in the SecureSpan management console:

Policy is made up of individual assertions. These encapsulate all of the parameters that make up that operation. When a message for the warehouse service is identified, SecureSpan loads and executes the assertions in this policy, from top to bottom. Essentially, policy is an algorithm, with all of the classic elements of flow control. SecureSpan represents this graphically to make the policy simple to compose and understand. However, policy can also be rendered as an XML-based WS-Policy document. In fact, if you copy a block of graphical assertions into a text editor, they resolve as XML. Similarly, you can paste XML snippets into the policy composer and they appear as graphical assertion elements.
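That “policy is an algorithm” idea is easy to picture. The following sketch is purely conceptual (it is not how SecureSpan is implemented, and the route and credentials are made up): assertions are just steps evaluated in order against the message context, and the first failure stops the transaction.

```python
from typing import Callable, Dict, List

# Conceptual sketch only: a policy as an ordered list of assertions, each of
# which inspects the request context and passes or fails. Evaluation is top
# to bottom, and the first failing assertion rejects the message.
Assertion = Callable[[Dict], bool]

def require_ssl(ctx: Dict) -> bool:
    return ctx.get("transport") == "https"

def authenticate_basic(ctx: Dict) -> bool:
    return ctx.get("credentials") == ("scott", "s3cret")  # illustrative only

def route_to_backend(ctx: Dict) -> bool:
    ctx["route"] = "http://warehouse.internal/ws"          # hypothetical host
    return True

POLICY: List[Assertion] = [require_ssl, authenticate_basic, route_to_backend]

def evaluate(policy: List[Assertion], ctx: Dict) -> bool:
    return all(assertion(ctx) for assertion in policy)     # all() short-circuits
```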

This policy is pretty simplistic, but it’s a good foundation to build on. I’ll add some elements that further restrict transactions and thus constrain access to the back end system the SecureSpan Gateway is protecting.

The rate limit assertion allows me to cap the number of transactions getting through to the back end. I can put an absolute quota on the throughput: say, 30,000 transactions/sec, because I know that the warehouse service begins to fail once traffic exceeds this volume. But suppose I was having a problem with individual suppliers overusing particular services. I could limit use by an individual identity (as defined by an authenticated user or originating IP address) to 5,000 transactions/sec—still a lot, but leaving headroom for other trading partners. The rate limit assertion gives me this flexibility. Here is its detailed view:

Note that if I get 5,001 transactions from a user in one second, I will buffer the last transaction until the rate drops in a subsequent time window (subject, of course, to resource availability on the gateway). This provides me with application-layer traffic shaping, which is essential in industries like telecom that use this assertion extensively.
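The mechanics behind that sort of throttling are easy to sketch. Here is a rough, fixed-window illustration in Python (not the gateway’s implementation) of counting transactions per identity per second and flagging the ones that should be buffered rather than forwarded:

```python
import time
from collections import defaultdict

class PerIdentityRateLimiter:
    """Illustrative fixed-window limiter: at most `limit` transactions per
    identity per one-second window; anything over the limit should be
    buffered or shed rather than forwarded to the back end."""

    def __init__(self, limit: int = 5000):
        self.limit = limit
        self.window = None              # current one-second window
        self.counts = defaultdict(int)  # identity -> count in this window

    def allow(self, identity: str) -> bool:
        now = int(time.time())
        if now != self.window:          # new window: reset all counters
            self.window = now
            self.counts.clear()
        self.counts[identity] += 1
        return self.counts[identity] <= self.limit

limiter = PerIdentityRateLimiter(limit=5000)
if not limiter.allow("partner-42"):
    pass  # buffer the transaction and retry in a later window
```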

I would also like to evaluate each new transaction for threats. SecureSpan has assertions that cover a range of familiar threats, such as SQL injection (which has been around for a long time, but has become newly relevant in the SOA world), as well as a long list of new XML attacks that attempt to exploit parser infrastructure and autogenerated code. For the warehouse service, I’m concerned about code-injection attacks. Fortunately, there’s an assertion for that:

Here’s what these two assertions look like dropped into the policy:

This policy was simple to compose (especially since we had the wizard to help us). But it is also very effective. It’s visible and understandable, which is an important and often overlooked aspect of security tooling. SOA security suffers from an almost byzantine complexity. It is much too easy to build a security model that obscures weakness behind its detail. One of the design goals we had at Layer 7 for SecureSpan was to make it easy to do the simple things that challenge us 80% of the time. However, we also wanted to provide the richness to solve the difficult problems that make up the other 20%. These are problems of adaptation: the obscure impedance mismatches between client and server security models, or fast run-time adaptation of message content to accommodate version mismatches.

In this example, it took only seven simple assertions to build a basic security policy for publishing services to the outside world. Fortunately, there are over 100 other assertions—covering everything from message-based security to transports like FTP to orchestration—that are there when you need to solve the tougher problems.

SQL Attack and the Largest Data Breach in US History

CNET’s Elinor Mills wrote an article today about the indictment of three men in the largest US data breach on record. Her article details how three system crackers, two Russians and a man from Florida, allegedly stole data relating to 130 million credit and debit cards and conspired to sell these to others. The story has also been picked up by BBC News.

The hack involved SQL injection, a technique that was pioneered back in the PowerBuilder client/server days. Many people believe that the attack reached its zenith back then, and poses little real threat today. Clearly, this is not the case.

Indeed, in the services world, SQL injection remains a powerful and often used exploit. Here at Layer 7 we developed technology to defeat this many years ago. We use the acceleration technology in SecureSpan Gateways to scan for SQL attack signatures in messages, blocking transactions that test positive for SQL attacks.
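To give a flavour of what signature scanning involves, here is a deliberately tiny Python sketch; a real scanner (in a gateway or anywhere else) needs a much larger, maintained signature set plus message normalization and decoding before matching:

```python
import re

# A deliberately small sample of SQL injection signatures; production scanners
# use far larger, regularly updated sets and normalize the message first.
SQL_ATTACK_SIGNATURES = [
    re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # ' OR 1=1
    re.compile(r";\s*(drop|delete|insert|update)\s", re.IGNORECASE),   # piggybacked statement
    re.compile(r"union\s+select", re.IGNORECASE),
    re.compile(r"--|/\*"),                                             # comment tricks
]

def looks_like_sql_attack(message: str) -> bool:
    return any(sig.search(message) for sig in SQL_ATTACK_SIGNATURES)

print(looks_like_sql_attack("<sku>ACME-0042</sku>"))     # False
print(looks_like_sql_attack("<sku>x' OR 1=1 --</sku>"))  # True
```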

Good security should be simple to apply. If it’s easy to implement, people will use it. Here’s what a policy with SQL injection protection looks like in the SecureSpan Gateway:


It doesn’t get much simpler than this–and that’s the point. Good security must be simple to comprehend, comprehensive, and broadly applicable.

Now, if you click on the SQL attack protection assertion, you can configure it for particular attacks. This is important, because databases respond differently to certain signatures:


Can a single programmer write similar protections into his or her code? Absolutely. But do they? Well, Elinor has drawn our attention to the potential cost of not doing so. This kind of security is best applied consistently across all applications. It’s just not realistic to assume developers will always do this correctly (or at all). Governance of services needs to be done by a dedicated security officer, one who understands the problems, and is disconnected enough from the application development process to be impartial. You separate development and QA for a good reason; sometimes you need to separate development and run-time security enforcement for similar reasons.
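And for developers who do take it on themselves, the standard defence is to never assemble SQL by splicing user input into the query text, but to bind it as a parameter instead. A minimal sketch with Python’s built-in sqlite3 module (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, pan TEXT)")
user_input = "x' OR '1'='1"   # hostile input

# Vulnerable: the input is spliced into the SQL text, so the quoted payload
# rewrites the query's logic.
vulnerable_sql = f"SELECT pan FROM cards WHERE holder = '{user_input}'"

# Safe: the value is bound as a parameter and can never alter the query's
# structure, no matter what characters it contains.
rows = conn.execute(
    "SELECT pan FROM cards WHERE holder = ?", (user_input,)
).fetchall()
```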

If more organizations realized there were strong technical solutions like SecureSpan that augment their overall security and governance programs, then maybe we would hear less about massive breaches in privacy and trust like the one above.

The last word from the ever-brilliant xkcd: