While Facebook has made significant progress in its artificial intelligence (AI) research using off-the-shelf hardware, it realized it could advance even faster with its own purpose-built server design. Having built a system that it said is twice as fast as its previous design, Facebook is now offering up its blueprints as an open source resource for other researchers.
Code-named "Big Sur," Facebook's AI system uses eight high-performance GPUs from Nvidia to handle the complex tasks involved in training neural networks -- that is, machine learning. The hardware design enables Big Sur to "train twice as fast and explore networks twice as large," according to Facebook.
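Facebook has not published the training code that runs on Big Sur, but the speedup from multiple GPUs typically comes from synchronous data parallelism: each GPU computes gradients on its own shard of a batch, and the shard gradients are averaged before every weight update. The sketch below is purely illustrative -- plain Python standing in for eight GPUs, with a toy linear model in place of a neural network.

```python
# Conceptual sketch of synchronous data-parallel training: each "worker"
# (standing in for one of Big Sur's eight GPUs) computes a gradient on a
# shard of the batch, and the shard gradients are averaged before the
# weight update. Illustrative only -- not Facebook's actual code.

NUM_WORKERS = 8  # Big Sur carries eight GPUs per server

def shard(batch, num_workers):
    """Split a batch into roughly equal shards, one per worker."""
    k, m = divmod(len(batch), num_workers)
    shards, start = [], 0
    for i in range(num_workers):
        end = start + k + (1 if i < m else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def local_gradient(w, samples):
    """Gradient of mean squared error for the toy model y = w * x."""
    if not samples:
        return 0.0
    return sum(2 * (w * x - y) * x for x, y in samples) / len(samples)

def train(batch, steps=200, lr=0.01):
    w = 0.0
    for _ in range(steps):
        shards = shard(batch, NUM_WORKERS)
        grads = [local_gradient(w, s) for s in shards if s]
        w -= lr * sum(grads) / len(grads)  # averaged, synchronous update
    return w

# Synthetic data drawn from y = 3x; training should recover w close to 3.
data = [(x, 3.0 * x) for x in range(1, 17)]
print(round(train(data), 2))
```

Because each shard's gradient is computed independently, the per-step work divides across the workers; the averaging step is the synchronization point, which in a real system is the GPU-to-GPU communication that hardware like Big Sur is built to keep fast.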
Facebook said it will open source Big Sur by submitting its design materials to the Open Compute Project, an initiative launched by the company in 2011 to promote the development of efficient hardware designs for scalable computing. The goal is to "make it a lot easier for AI researchers to share techniques and technologies," the company said.
'Almost Entirely Toolless'
"As with all hardware systems that are released into the open, it's our hope that others will be able to work with us to improve it," Kevin Lee, manager of Global Spam Operations, and site director Serkan Piantino wrote today in a post on Facebook's engineering blog. "We believe that this open collaboration helps foster innovation for future designs, putting us all one step closer to building complex AI systems that bring this kind of innovation to our users and, ultimately, help us build a more open and connected world."
Big Sur features Facebook's "newest Open Rack-compatible hardware designed for AI computing at a large scale," Lee and Piantino said. The platform is also designed to be more versatile, more efficient and easier to maintain than past designs, they added.
In fact, Big Sur is "almost entirely toolless," according to Lee and Piantino. That means its design -- which clearly identifies all touch points for technicians in the same shade of green -- makes parts easy to identify and replace as needed. "Even the motherboard can be removed within a minute, whereas on the original AI hardware platform it would take over an hour," they said.
'Deep Learning Has Started a New Era'
Facebook is just one of many companies racing to build ever-smarter artificial intelligence for a wide variety of purposes. Other big-name tech firms among the competition include Google, IBM, Apple and Microsoft.
In September, for example, IBM announced that it would build a San Francisco-based hub for its Watson cognitive computing system to better connect with startups, developers and venture capital firms pursuing AI innovations. The company last year also invested $1 billion "to accelerate the commercialization of IBM Watson, bringing cognitive computing to more clients and partners in more industries."
"Deep learning has started a new era in computing," said Ian Buck, vice president of accelerated computing at Nvidia, in a statement. "Enabled by big data and powerful GPUs, deep learning algorithms can solve problems never possible before. Huge industries from Web services and retail to healthcare and cars will be revolutionized."
According to Nvidia, Facebook is the first company so far to use its recently launched Tesla M40 GPU accelerators to train deep neural networks. Other companies are also using Nvidia GPUs in their AI systems, including IBM, Microsoft and Google.