Peer-to-Peer Appeal for Distributed Computing Efforts

While peer-to-peer systems like Napster are the tech topic of the moment, a close cousin, peer-to-peer distributed computing, has actually shown itself to be more useful. P2P computing has recently been used for everything from looking for signs of intelligent life in outer space to helping find a cure for cancer, by distributing the workload among thousands of desktop machines belonging to volunteers. It’s also becoming popular on Wall Street, as vendors hurry to make more and more applications work in this distributed fashion.

Peer-to-peer’s close relative, client/server distributed computing, has been around for years. However, it had a number of problems that peer-to-peer networks address, such as the communications bottlenecks that occur when control is centralized at one machine.

The idea of distributed computing first came out of Yale and MIT: hooking lots of small computers into a network so that they would act like one supercomputer.

Wall Street firms have hundreds of desktop computers, many of them sitting idle at any given moment, so it seemed like a natural idea to try out. But the firms that did try to apply peer-to-peer distributed computing ran into management and scalability problems.

Credit Suisse First Boston started using peer-to-peer computing more than 10 years ago. “We call it scavenger technology,” said CSFB CTO G.T. Sweeney, referring to the way that applications have to scrounge spare processing minutes from machines that have other priorities.

First Union Corp. tried it out six years ago, and linked together a group of desktop computers that would act, in effect, like one giant parallel processing machine.

“Pretty much everyone does this,” said Michael Packer, managing director of institutional e-commerce at Merrill Lynch. “It’s usually used for certain types of options and mortgage products, or in areas where you have to simulate lots and lots of different possible courses of events.”
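The kind of simulation Packer describes is a natural fit for scavenged desktop cycles because each simulated scenario is independent of the others. The sketch below is a minimal, hypothetical illustration, not any firm’s or vendor’s actual code, of how a Monte Carlo option-pricing job breaks into self-contained batches that idle machines could work on separately.

```python
# Minimal sketch of why Monte Carlo pricing suits scavenged desktop cycles:
# each batch of simulated price paths is independent, so batches can be
# farmed out to idle machines and averaged at the end. All names and
# parameters here are illustrative assumptions.
import math
import random

def price_call_batch(spot, strike, rate, vol, years, n_paths, seed):
    """Price one independent batch of a European call via Monte Carlo."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        terminal = spot * math.exp((rate - 0.5 * vol**2) * years
                                   + vol * math.sqrt(years) * z)
        payoff_sum += max(terminal - strike, 0.0)
    return math.exp(-rate * years) * payoff_sum / n_paths

# Each tuple below is a self-contained work unit a coordinator could hand
# to any idle desktop; results only need to be averaged when they return.
batches = [(100.0, 105.0, 0.05, 0.2, 1.0, 50_000, seed) for seed in range(20)]
estimates = [price_call_batch(*b) for b in batches]
print("Estimated option value:", sum(estimates) / len(estimates))
```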

But all the firms ran into difficulties administering these complicated networks and had trouble expanding them to include more machines. In addition, whenever a firm wanted to move an application from a mainframe to a desktop-based network, the application had to be virtually rewritten to work on multiple machines simultaneously.

“It was possible to do it,” said Rob Batchelder, an analyst at Gartner Group. “But it wasn’t possible to do it well.”

He compared creating a distributed computing network to planning a large party: the challenges involved in having 10 people over are substantially different from those of having 100 people show up.

Firms that wanted to extend their distributed computing networks to include more machines, both to allow for redundant processing and to tackle bigger computing tasks, found themselves facing the “big-party problem,” Batchelder said.

For example, with a small network of computers, it doesn’t cause any problems if all instructions come from one central machine. But if the network grows to include hundreds of computers, then that central machine becomes a bottleneck.
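A rough back-of-envelope model makes that bottleneck concrete. The numbers below are assumptions chosen only for illustration: once a single central machine must touch every work unit, adding desktops stops helping as soon as the coordinator’s dispatch rate becomes the limit.

```python
# Back-of-envelope sketch of the coordinator bottleneck described above.
# The figures are assumed for illustration, not measured from any system.
DISPATCH_OVERHEAD_S = 0.005   # coordinator time spent per work unit (assumed)
TASK_RUNTIME_S = 2.0          # desktop time spent per work unit (assumed)

def cluster_throughput(n_workers):
    """Work units per second, limited by workers or by the coordinator."""
    worker_limit = n_workers / TASK_RUNTIME_S
    coordinator_limit = 1.0 / DISPATCH_OVERHEAD_S
    return min(worker_limit, coordinator_limit)

for n in (10, 100, 500, 1000):
    print(f"{n:>5} workers -> {cluster_throughput(n):7.1f} units/sec")
# Past 400 workers the coordinator (200 units/sec) becomes the ceiling,
# which is why purely centralized designs stop scaling.
```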

It was enough to make some companies consider scrapping these systems, and in the past few months, about a dozen firms have started to do just that. Not because the idea wasn’t sound, but because it was too difficult and time-consuming to take care of all the required maintenance.

“We wanted to get out of the plumbing business,” said Joe Belciglio, trading technology chief at First Union Corp. “I wanted the technology group to concentrate on specific business problems instead of infrastructure problems.”

First Union was able to do this by hiring an outside firm, DataSynapse, to take care of installing and maintaining the peer-to-peer computing systems. The end result was cheaper and quicker than if First Union were still doing it on its own, Belciglio said.

In addition, because the DataSynapse model is built around the idea of networked computers sharing data and information among themselves (the peer-to-peer part), the system is more scalable than some homegrown alternatives. It’s also much cheaper than using a mainframe for the same computing tasks: about 100 times cheaper, according to analysts.

It takes about a week of work to move a typical application that’s amenable to parallel processing from a mainframe to a peer-to-peer network of desktops, according to DataSynapse CEO Peter Lee. That means that securities firms can now move more applications into this environment than they could when each new installation had to be hand-coded.

“That’s what our intention is,” said Belciglio. “We started on the fixed-income derivatives desk and are expanding it to other areas: market risk calculations, credit exposure calculations and mortgage calculations.”

Later on, he added, there are even more tasks that could be handled quickly and unobtrusively by idle computers, such as data mining. That will make peer-to-peer much more attractive to companies like CSFB as well.

“Developers can focus on just defining the business problem, not on making all the technology work,” said Sweeney, who added that peer-to-peer is about to become much more popular as a result.

The leading vendor of peer-to-peer computing systems, DataSynapse, brings with it deep securities industry expertise. Others, such as Parabon, United Devices and Entropia, come out of the life sciences.

These vendors are going to make it possible for Wall Street firms to roll out distributed applications quickly and easily.

“We’ve encountered pretty wide-ranging applications demand,” said DataSynapse’s Lee. “It’s not just risk management that’s important. We see the need anywhere in the derivatives area, anywhere in the risk area. We’ve expanded the product features to do distributed data mining, order management, straight-through processing, even yield management on a credit card portfolio for retail institutions.”

Are firms that don’t embrace distributed computing going to find themselves falling behind in the number-crunching arms race? Not directly, according to Merrill’s Packer.

“I don’t think distributed computing in and of itself will provide a competitive advantage,” he said. “But it will certainly allow creative and innovative people to test their ideas faster and help both in product development and in assuring firms that the risks they are taking are acceptable. As day-to-day volatility in the markets has in many cases increased relative to what we saw several years ago, this kind of computational approach allows you to be more thorough and explore a broader set of risk alternatives.”

Peer-to-peer computing isn’t likely to set off a full-scale arms race, in part because not every problem is amenable to this approach.

“Typical supercomputers solve data-intensive, tightly-coupled problems, with progressive resolution of the answer,” said Batchelder. “These kinds of problems do not play well on distributed computing because you have to distribute the data and allow the computers to talk to one another and share intermediate results. That slows down the computations.”
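A simple model, again with assumed numbers, shows why the coupling matters: when every iteration has to wait for intermediate results to be gathered and redistributed across the network, communication time swamps the computation, while independent work units pay that cost only once.

```python
# Rough model of the coupling problem Batchelder describes, with assumed
# numbers: tightly coupled jobs must exchange intermediate results every
# iteration, so network overhead between desktops dominates; independent
# ("embarrassingly parallel") jobs pay that cost only once at the end.
COMPUTE_PER_STEP_S = 0.05     # per-worker compute per iteration (assumed)
EXCHANGE_PER_STEP_S = 0.50    # time to gather/redistribute intermediates (assumed)
ITERATIONS = 1000

# Tightly coupled: every iteration ends with a synchronization round.
tightly_coupled = ITERATIONS * (COMPUTE_PER_STEP_S + EXCHANGE_PER_STEP_S)

# Loosely coupled: all iterations run independently, with one final gather.
loosely_coupled = ITERATIONS * COMPUTE_PER_STEP_S + EXCHANGE_PER_STEP_S

print(f"Tightly coupled : {tightly_coupled:8.1f} s (communication-bound)")
print(f"Loosely coupled : {loosely_coupled:8.1f} s (compute-bound)")
```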

And the alternatives to peer-to-peer computing, including mainframes and dedicated server farms, are becoming steadily cheaper. “Hardware pricing is continuing on a downward trend,” Sweeney said.

In addition, too few vendors are providing the service. The other potential vendors, which specialize in the life sciences, haven’t been as successful as DataSynapse in selling to Wall Street. “To be successful in this field, you need domain or subject matter experience,” Batchelder said.

However, First Union found that rolling out a peer-to-peer distributed computing system did free up personnel and machines, and that’s why many firms are taking a close look at peer-to-peer.