Big Blue has signed a multi-year partnership with Xilinx aimed at paving the way for higher-performance and more energy-efficient data center apps. In a nutshell, the deal -- struck through the OpenPower Foundation -- sets the stage for Xilinx FPGA-enabled workload acceleration on IBM Power-based systems.
The companies are working together to develop open acceleration infrastructures, software and middleware to tackle emerging apps in genomics, high performance computing, big data analytics, machine learning, and network functions virtualization.
The combined technologies will offer a “new level of accelerated computing,” said Ken King, general manager of OpenPower at IBM (pictured on left, with Hemant Dhulla, vice president of Data Center Business at Xilinx; Image credit: Xilinx). The secret sauce is the “tight integration” between IBM Power processors and Xilinx FPGAs (field-programmable gate arrays), which draws benefits from an evolving OpenPower ecosystem, King said.
Keeping Up in the Data Center
IBM is addressing a real issue in the enterprise: Heterogeneous workloads are becoming the norm. Data centers are essentially morphing into app accelerators just to keep pace with demands for higher throughput, lower latency and lower power consumption.
Xilinx appears to bring plenty to IBM’s Power table. Xilinx’s field-programmable gate arrays offer a power efficiency that will help IBM compete against the likes of Intel, the companies said in a statement. The FPGA essentially makes accelerators more practical to deploy across the data center. IBM is promising a lower total cost of ownership for enterprises dealing with next-generation data center workloads.
Forging ahead, IBM Systems Group developers will work to roll out solutions for Power-based servers, storage and middleware systems with Xilinx FPGA accelerators. OpenStack, Docker and Spark were among the data center technologies mentioned in the announcement.
IBM Power Systems servers will eventually include qualified Xilinx accelerator boards while Xilinx works on Power-based versions of its SDAccel, a software-defined development environment, as well as OpenPower development community libraries.
Finally, the companies will keep working to leverage IBM’s Coherent Accelerator Processor Interface (CAPI) to drive “accelerated computing value” for their clients. CAPI is a feature built into IBM’s Power architecture that, thanks to the OpenPower ecosystem’s full-stack approach, makes it possible for Power8-based systems to keep advancing even as Moore’s Law falters. Moore’s Law is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years.
Significant Commercial Benefits
We caught up with Charles King, principal analyst at Pund-IT, to get his thoughts on the partnership and what it means in the competitive landscape. He told us IBM has put a great deal of effort into highlighting the value that its Power Systems and Power processor technologies offer in comparison to Intel CPUs.
“The companies' announcement, centering on new products and technologies developed in concert with accelerator technologies from Nvidia with GPUs and Xilinx with FPGAs, should spark interest in Power-based solutions for a wide range of high-performance computing, cognitive, machine learning and big data applications,” Charles King said.
The fact that more than 90 of the OpenPower Foundation's 160-plus members are involved in its Accelerators Working Group suggests that this effort could deliver significant commercial benefits to the Power architecture, he said. "Overall, this looks like good news for IBM and for customers searching for ways to maximize accelerated compute performance," Charles King added.