Virtualization, in the twilight of 2010, has at last become a mainstream technology. It enjoys widespread adoption across every sector—except those, that is, in which audit plays a primary role. Fortunately, 2011 may see some of these last bastions of technological conservatism finally accept what is arguably the most important development in computing in the 21st century.
The problem with virtualization, of course, is that it discards the simple and comfortable division between physical systems, replacing this with an abstraction that few of us fully understand. We can read all we want about the theory of virtualization and the design of a modern hypervisor, but in the end, accepting that there exists provable isolation between running images requires a leap of faith that has never sat well with the auditing community.
To be fair to the auditors, when a person's career depends on making a judgment about the relative security risk associated with a system, there is a natural and understandable tendency toward caution. Change may be fast in matters of technology, but human process is tangled up with complexities that simply take time to work through.
The payment card industry may be leading the way toward legitimizing virtualization in architectures subject to serious compliance concerns. PCI 2.0, the set of requirements for enhancing security around payment card processing, comes into effect on January 1, 2011, and provides specific guidance to the auditing community about how to approach virtualization rather than simply rejecting it outright.
Charles Babcock, in InformationWeek, published a good overview of the issue. He writes:
Interpretation of the (PCI) version currently in force, 1.2.1, has prompted implementers to keep distinct functions on physically separate systems, each with its own random access memory, CPUs, and storage, thus imposing a tangible separation of parts. PCI didn’t require this, because the first regulation was written before the notion of virtualization became prevalent. But the auditors of PCI, who pass your system as PCI-compliant, chose to interpret the regulation as meaning physically separate.
The word that should pique interest here is interpretation. As Babcock notes, existing (pre-2.0) PCI didn't require physical separation between systems, but because this had been the prevailing means of implementing provable isolation between applications, it became the sine qua non of every audit.
The issue of physical isolation versus virtual isolation isn't restricted to financial services. The military advocates similar design patterns, with tangible isolation between entire networks that carry classified and unclassified data.

I attended a very interesting panel at VMworld back in September that discussed virtualization and compliance at length. One anecdote that really struck me came from Michael Berman, CTO of Catbird. Michael is a veteran of many ethical hacking engagements. In his experience, nearly every official inventory of applications his clients possessed was incomplete, and it is the overlooked applications that most often provide the weakness to exploit. The virtualized environment, in contrast, offers an inventory of running images under control of the hypervisor that is necessarily complete. There may still be weak (or unaccounted for) applications residing on the images, but a complete inventory is nonetheless a major step forward, because it gives security and operations an accurate big picture. Ironically, virtualization may be a means to more secure architectures.
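To make that point concrete, here is a minimal sketch of what such a hypervisor-derived inventory might look like. It uses the libvirt Python bindings as one possible management API; the qemu:///system connection URI and the particular fields collected are my own assumptions for illustration, not anything Berman described, and a VMware shop would reach for vSphere's APIs instead.

# Sketch: enumerate every guest the hypervisor knows about via libvirt.
# Assumptions: libvirt-python is installed and qemu:///system points at
# the hypervisor in question; a real audit tool would also record who
# ran the query and when.
import libvirt

def guest_inventory(uri="qemu:///system"):
    """Return name, UUID, and power state for every guest on this host."""
    conn = libvirt.open(uri)
    try:
        # listAllDomains() covers running guests as well as defined-but-stopped
        # ones, so nothing the hypervisor manages can fall off the list.
        return [
            {
                "name": dom.name(),
                "uuid": dom.UUIDString(),
                "state": "running" if dom.isActive() else "stopped",
            }
            for dom in conn.listAllDomains()
        ]
    finally:
        conn.close()

if __name__ == "__main__":
    for guest in guest_inventory():
        print("%-30s %s  %s" % (guest["name"], guest["uuid"], guest["state"]))

The point is not the particular API but the source of the data: the list comes from the layer that actually schedules the guests, so it cannot quietly omit a forgotten server the way a manually maintained spreadsheet can.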
I’m encouraged by the steps taken with PCI 2.0. Fighting audits is nearly always a losing proposition, or at best a Pyrrhic victory. Changing the rules of engagement is the way to win this war.