Wall Street Expanding Virtual Horizons

Virtualization, in one form or another, has been around almost as long as computers. When the term arose in the 1960s, it referred to the partitioning of mainframe computers. But virtualization soon became a way to let people stop worrying about the inner workings of their hardware.

In the earliest days, to put something in a computer’s memory you needed to know the physical location of every byte of storage. Then virtualization tools came along. Instead of deciding to store data at location #84AF, a user could enter a command–LET NAME$ = “John Smith”–and allow the compiler to decide where NAME$ would be stored. Shortly after, disk operating systems were invented: Users could assign file names to programs and data; the operating system figured out where to put them.
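
The same idea can be sketched in a few lines of modern code: a lookup table maps the name the programmer cares about to a storage location the runtime chose, so the programmer never touches an address directly. (The starting address and the naive allocation scheme below are purely illustrative.)

    # A toy "symbol table": the runtime, not the programmer, decides where data lives.
    next_free_address = 0x84AF          # illustrative starting address
    symbol_table = {}                   # variable name -> chosen address
    memory = {}                         # chosen address -> stored value

    def let(name, value):
        """Store a value under a name; the caller never sees the address."""
        global next_free_address
        if name not in symbol_table:
            symbol_table[name] = next_free_address
            next_free_address += len(value)   # naive allocation, no reuse
        memory[symbol_table[name]] = value

    let("NAME$", "John Smith")
    print(hex(symbol_table["NAME$"]))   # 0x84af -- the runtime's choice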

Without an OS, you would have to track files’ locations manually, not to mention leave plenty of room between them–add data to a file and it takes up more space, and without sufficient headroom, other files would have to be rearranged. Today, information technology departments face a similar challenge, only on a much larger scale.

IT teams have to decide which servers will run which applications, and which devices will store which data. And they have to keep track of what goes where and be ready to shift things around if a server goes down or a hard drive breaks, or a database or application gets too large.

Ideally, virtualization erases the distinctions among all the pieces of hardware a company owns and keeps track of what goes where. Human operators simply make sure the total amount of available capacity is adequate.

Storage virtualization effectively turns multiple storage units into a single device, with the software placing files and ensuring effective use of space. A hardware virtualization platform does the same for servers, deciding which application goes where, usually by creating virtual machines for each individual piece of software to run on, then moving them between servers as needed.
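
The placement decision itself can be sketched as a simple bin-packing exercise. The first-fit routine below is only an illustration of the general idea, not any vendor’s actual scheduler; the server names and memory figures are invented.

    # Hypothetical first-fit placement: put each virtual machine on the
    # first server that still has room for it.
    servers = {"host-01": 64, "host-02": 64, "host-03": 32}   # free memory in GB (invented)
    placements = {}                                           # VM name -> server

    def place(vm_name, mem_gb):
        for server, free in servers.items():
            if free >= mem_gb:
                servers[server] = free - mem_gb
                placements[vm_name] = server
                return server
        raise RuntimeError(f"no server has {mem_gb} GB free for {vm_name}")

    for vm, mem in [("order-routing", 16), ("risk-calc", 48), ("reporting", 24)]:
        print(vm, "->", place(vm, mem))

Re-running a placement pass when a server fails or fills up is, in spirit, the “moving them between servers as needed” that the platforms automate.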

Even networks can be virtualized, building virtual private channels on top of the existing infrastructure–to isolate different kinds of traffic, for example.
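
The isolation works much like tagging: each frame carries a channel identifier, and an endpoint sees only the traffic tagged for its channel, even though everything shares one physical link. The toy model below, with invented channel names, shows the principle rather than a real network stack.

    # Toy model of virtual channels sharing one physical link.
    from collections import defaultdict

    physical_link = []            # every frame travels over the same wire
    inboxes = defaultdict(list)   # channel -> frames visible on that channel

    def send(channel, payload):
        physical_link.append((channel, payload))   # tag the frame with its channel

    def deliver():
        for channel, payload in physical_link:
            inboxes[channel].append(payload)        # endpoints see only their own channel

    send("market-data", "quote tick")
    send("back-office", "settlement record")
    deliver()
    print(inboxes["market-data"])   # ['quote tick'] -- back-office traffic never appears here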

Various types of virtualization technology are in use on Wall Street, says Daniel Kusnetzky, president of Osprey, Fla.-based research firm Kusnetzky Group. “And examples of this type of technology have been in use for well over 30 years.”

Today’s virtualization technology is increasingly sophisticated. “Virtual processing software makes it possible to achieve one of several goals for all processing on a system, including higher levels of performance, higher levels of scalability, higher levels of reliability, consolidation, improved levels of agility of applications and even isolating applications from one another, giving the organization better control,” says Kusnetzky. “There are also ways to improve network and storage performance that are equally important.”

And these options, relatively new for Wall Street firms, are becoming more and more important.

According to research firm Gartner, virtualization will be the most important tool in technology infrastructure and operations through 2010 and will dramatically change the way IT departments work. At year-end 2006 there were over half a million virtual machines running in corporate back offices; by 2009 there will be more than 4 million, estimates Gartner.

At a conference in Sydney last year, Gartner VP Thomas Bittman predicted that virtual machine hypervisor technology will be nearly free by the end of 2008, embedded into hardware by manufacturers, and into operating systems by software vendors. “It is now less about the technology and more about process change and cultural change within organizations,” said Bittman.

New Hypervisors

Until recently, financial firms largely had one main source for virtualization technology. “One year ago, VMware had no serious competitors,” asserts Bittman in a report issued March 13. But now companies such as Citrix Systems, Microsoft Corp., Oracle Corp., Sun Microsystems and Virtual Iron have new offerings, points out Bittman.

Palo Alto, Calif.-based VMware, which claims more than 100,000 corporate customers, including all the major financial services firms, provides a traditional “bare metal” hypervisor that sits between the physical processors and operating systems, says Parag Patel, VMware’s VP of alliances.

Microsoft recently released the beta version of Hyper-V, software that also uses a bare-metal approach. The product is likely to work smoothly with virtual machines running Windows and is built into the soon-to-be-released Windows Server 2008, according to the company.

Bare-metal hypervisors are the most common, say experts. “The benefit is that it allows you to have different operating systems” on the same server, says Yiping Ding, VP of research and development for systems modeling at Bethesda, Md.-based network management technology vendor Opnet Technologies, which counts the Philadelphia Stock Exchange, Charles Schwab & Co., State Street Corp. and T. Rowe Price Associates among its clients.

A bare-metal hypervisor is more flexible than a system that runs on top of the OS, adds Ding. “If you put all the eggs in one basket–one server, one OS–if one application screws up, you bring down the whole system,” he says.

“The fact that these market leaders are using this hypervisor technology means that it’s pretty much the market standard,” says Richard Whitehead, director of product marketing at Novell, citing VMware, Hyper-V and Xen. Novell supports the open-source Xen virtualization platform, as do other third-party vendors like Citrix, which in October bought XenSource, the company that founded the Xen project and offers enterprise-level virtualization tools.

“Dell and others are working on embedding the hypervisor into the chip,” Whitehead says. “Hypervisors are here to stay. In fact, they’re commoditizing in many respects.”

Last month, Renton, Wash.-based Parallels released a beta version of its hardware virtualization product and launched a new data center management tool, Parallels Infrastructure Manager. In contrast to bare-metal hypervisors, the Parallels Virtuozzo product sits on top of the operating system. This OS-based approach, often called containers, is the favorite of hosting providers.

The downside is that two different operating systems can’t run on the same physical machine. On the other hand, a firm doesn’t need several full copies of the operating system if it’s using the same one for all its virtual machines. Installing a full copy of Windows on each virtual machine consumes significant amounts of memory and other resources; sharing parts of the OS reduces the overhead that comes with virtualization.
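
The memory arithmetic behind that trade-off is straightforward. The figures in the sketch below are assumptions, not measurements, but they show why sharing one OS image matters on a densely packed host.

    # Illustrative comparison of memory overhead (the numbers are assumptions, not benchmarks).
    vms_per_host = 20
    os_footprint_gb = 2.0      # assumed memory used by one full OS copy
    app_footprint_gb = 1.5     # assumed memory used by one application instance

    full_vms = vms_per_host * (os_footprint_gb + app_footprint_gb)   # every VM carries its own OS
    containers = os_footprint_gb + vms_per_host * app_footprint_gb   # one shared OS image

    print(f"full virtual machines: {full_vms:.0f} GB")    # 70 GB
    print(f"shared-OS containers:  {containers:.0f} GB")  # 32 GB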

Typically, a securities firm consolidating multiple systems into a single data center would be working with more than one operating system and need a bare-metal hypervisor, says Corey Thomas, VP of marketing at Parallels, formerly known as SWsoft.

“But as [securities] companies start to do virtualization on a larger scale, they start to have the same problems that services providers have had,” notes Thomas.

Parallels offers both OS-based and bare-metal hypervisors, says Benjamin Rudolph, the company’s director of corporate communications. Its deployments grew sevenfold last year.

As more virtual machines are loaded onto a single server, overhead costs add up quickly.

A major Spanish bank recently implemented 1,000 virtual machines to run wealth management and trading applications. The machines sit atop 100 physical servers, says Martin Migoya, CEO of Buenos Aires-based Globant, the vendor that managed the initiative. The bank spent $4 million on the project–which took about a month–a savings of approximately $2 million over a non-virtualized approach, according to Migoya.

The costs weren’t cut in half, he explains, because of the additional overhead, including OS licenses, the cost of virtualizing, and the associated services expenses. In addition, the physical servers used for the project are larger than what the company would have purchased had it not gone the virtualization route.
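
Working backward from Migoya’s figures gives a rough sense of the economics; the sketch below simply restates the totals quoted above rather than adding new data from the project.

    # Back-of-the-envelope numbers derived from the figures quoted above.
    project_cost = 4_000_000       # spent on the virtualized build
    savings = 2_000_000            # versus the non-virtualized alternative
    vms = 1_000
    physical_servers = 100

    print("non-virtualized estimate:", project_cost + savings)                # $6,000,000
    print("cost per virtual machine:", project_cost // vms)                   # $4,000
    print("consolidation ratio:", vms // physical_servers, "VMs per server")  # 10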

Virtualization–like any technology–can be abused. The ease with which virtual machines can be built could lead to an explosion if employees are allowed to create them unchecked. “Virtualization without good management is more dangerous than not using virtualization in the first place,” says Gartner’s Bittman. “Automation is the critical next step to help organizations stop virtualization sprawl.”

Sprawl happens when IT and business managers aren’t aware of how many virtual machines are running, what’s on which machine and, most importantly, what security these machines have, says Tim Pacileo, principal consultant of Compass, a technology consultancy whose clients include Royal Bank of Scotland, Citigroup, Credit Suisse, UBS and other top-tier financial firms.

“This is a major problem in the securities industry due to the amount of information that has to be stored and managed due to regulatory and compliance requirements,” notes Pacileo.

Surrey, U.K.-based Compass has recently completed several virtualization projects, he says. “Even a very large, complex organization, such as a securities firm, can deploy ten new servers in an hour,” says Pacileo. “And when you have a much faster deployment model, the security team needs to be able to stay in front to make sure the deployments are secure.”

To help keep up with this growth, virtualization management tools will need to continue to evolve, and an increasing array of firms such as Compass will be helping companies take advantage of them. Framingham, Mass.-based research firm International Data Corp. (IDC) estimates that the virtualization services market will grow from $5.5 billion in 2006 to $11.7 billion in 2011.

“Currently, the majority of the services opportunity lies in supporting customers’ initial implementations of virtualization,” says IDC analyst Matt Healey. “However, over the next several years, IT consulting and systems integration will begin to become the dominant opportunity as the technology becomes much more mainstream.”

Applications, Networks

Though servers and storage media see the most demand, applications and networks can also be virtualized. “If you take an application and put it in a virtual environment, you can start up multiple instances if you need more copies to handle the load,” says Chip Schooler, director of technology advancement at Radware.

The trick is being able to find these applications once they’re running. Radware helps firms manage the programs so that users are automatically sent to wherever the application is currently in operation. It also balances the loads if multiple instances of the application are active at the same time.
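
The general pattern Schooler describes, a registry of live instances with a load balancer in front of it, can be sketched in a few lines. This is a generic illustration rather than Radware’s product; the service name and addresses are invented.

    # Generic sketch: track where instances of an application are running
    # and rotate requests among them (round robin).
    class Registry:
        def __init__(self):
            self.instances = {}   # application -> list of live instance addresses
            self.counters = {}    # application -> next index for round robin

        def register(self, app, address):
            self.instances.setdefault(app, []).append(address)
            self.counters.setdefault(app, 0)

        def route(self, app):
            """Return the address of the next live instance, round robin."""
            addrs = self.instances[app]
            i = self.counters[app] % len(addrs)
            self.counters[app] = i + 1
            return addrs[i]

    reg = Registry()
    reg.register("pricing-engine", "10.0.0.11:8080")
    reg.register("pricing-engine", "10.0.0.12:8080")
    print([reg.route("pricing-engine") for _ in range(3)])
    # ['10.0.0.11:8080', '10.0.0.12:8080', '10.0.0.11:8080']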

Mahwah, N.J.-based Radware has more than 5,000 customers, with financial services firms making up a large part of its business, according to Schooler.

And Wall Street has just started to embrace the virtualization of networks. In November, JP Morgan Chase & Co. installed technology from Billerica, Mass.-based Voltaire, which provides an InfiniBand-based grid backbone.

As a result of its virtualized network, JP Morgan’s data centers will evolve “from application-based silos to unified fabrics that allow for greater agility and utilization while improving the bottom line,” said Cory Shull, VP of investment architecture, in a statement.

The Voltaire switches and routers were installed in a risk analysis grid in JP Morgan’s North Harbor, U.K. data center. The compute backbone is available for JP Morgan’s internal clients, says Patrick Guay, Voltaire’s SVP of marketing. More than 35 different applications are running on it, he adds.

One of the benefits of virtualizing the network is increased security, according to Guay. “When I take a single 20 gigabit InfiniBand connection and break it up into five 4 gigabit connections, each one of those five lanes of traffic is completely separate from an OS perspective,” he says. “Even though there is only one physical wire going into the server, the data is protected.”

The isolation takes place at a lower level than that of the operating system–or of a hypervisor. As a result, the network is separated from the security flaws that are typical of operating systems, he says, and it doesn’t add costs in terms of processing resources. “We’re able to segment traffic without additional overhead,” he notes.

Article originally appeared in Securities Industry News, which has since closed down.