Inside Facebook's Artificial Intelligence Data Center: An Engine Driving the Progress of Deep Learning

If you log in to Facebook from the western United States, your data is likely handled by a computer cooled by the juniper- and sage-scented desert air of central Oregon.

In Prineville, a small town of about 9,000 people, Facebook stores massive amounts of data on hundreds of millions of users. Rows of computers fill four huge buildings totaling 800,000 square feet, arranged so neatly that the dry, cool wind from the northwest can wash over every machine. Whenever a user logs in, clicks "like," or posts an "LOL," these servers, glowing blue and green, give off a low hum.

Facebook recently added some new machines to its Prineville server army: high-powered servers designed to accelerate the training of artificial intelligence technologies such as software translation, smarter virtual assistants, and text recognition.

Facebook's new Big Sur servers are designed around a power-hungry processor, the GPU, which was originally developed for graphics processing. These processors underpin the recent technological leap in artificial intelligence known as deep learning. Because GPUs allow long-standing ideas about how software can learn to be applied to far larger and more complex sets of data, software has become surprisingly good at "understanding," especially at interpreting images and text.

Kevin Lee, a Facebook engineer who works on servers, said the machines help Facebook researchers train software faster and on more data. "These servers are dedicated hardware for artificial intelligence and machine learning research. A GPU can take a photo, divide it into countless small pixels, and then process them all at the same time."
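To illustrate the per-pixel parallelism Lee describes, here is a minimal CUDA sketch; it is not Facebook's code, and the kernel, image size, and grayscale conversion are illustrative assumptions. One GPU thread handles one pixel, so millions of pixels are processed concurrently:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// One thread per pixel: each thread reads three RGB bytes and writes one
// grayscale byte. Every pixel is independent, so thousands of threads run
// at once on the GPU. Illustrative sketch only, not Facebook's code.
__global__ void rgbToGray(const unsigned char *rgb, unsigned char *gray,
                          int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int i = y * width + x;
    // Standard luminance weights for RGB-to-grayscale conversion.
    gray[i] = (unsigned char)(0.299f * rgb[3 * i] +
                              0.587f * rgb[3 * i + 1] +
                              0.114f * rgb[3 * i + 2]);
}

int main() {
    const int W = 1920, H = 1080;
    unsigned char *rgb = nullptr, *gray = nullptr;
    // Unified memory keeps the demo short; image contents are left
    // unspecified since only the launch pattern matters here.
    cudaMallocManaged(&rgb, 3 * W * H);
    cudaMallocManaged(&gray, W * H);

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    rgbToGray<<<grid, block>>>(rgb, gray, W, H);
    cudaDeviceSynchronize();

    printf("converted %d pixels in parallel\n", W * H);
    cudaFree(rgb);
    cudaFree(gray);
    return 0;
}
```

The same one-thread-per-element pattern is what makes GPUs effective for the large matrix operations at the heart of deep learning.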

Each Big Sur server is equipped with eight GPUs. Facebook uses GPUs made by Nvidia, a semiconductor supplier specializing in graphics processors. Lee would not say exactly how many of the servers have been deployed, but according to him, thousands of GPUs are now at work. The Big Sur servers are installed in the company's data centers in Prineville and in Ashburn, Virginia.
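As a hedged sketch of how a workload might be spread across the eight GPUs in such a server (the partitioning scheme here is an assumption for illustration, not Facebook's design), each device can be handed its own slice of the data, and all slices run at once:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Trivial kernel standing in for one slice of a larger workload.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount > 8) deviceCount = 8;  // cap at Big Sur's GPU count

    const int perDevice = 1 << 20;  // elements assigned to each GPU
    float *buffers[8] = {};

    // Kernel launches are asynchronous, so after this loop every GPU
    // is computing its slice simultaneously.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&buffers[d], perDevice * sizeof(float));
        scale<<<(perDevice + 255) / 256, 256>>>(buffers[d], perDevice, 0.5f);
    }
    // Wait for every device to finish, then release its buffer.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(buffers[d]);
    }
    printf("ran %d parallel slices across %d GPUs\n", deviceCount, deviceCount);
    return 0;
}
```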

Because GPUs consume far more power than the other servers in the data center, Facebook has to space the Big Sur machines out to avoid creating hot spots that would strain the cooling system and consume even more energy. Only eight Big Sur servers fit in each seven-foot-tall rack; the same racks can hold up to 30 of Facebook's regular servers, which handle everyday tasks such as processing user data.
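To put rough numbers on that density problem (the wattage here is an assumption based on typical figures for data-center GPUs of that era, not a figure from Facebook): at around 250 W per GPU, a full rack of eight Big Sur servers would draw

8 servers × 8 GPUs × 250 W ≈ 16 kW

for the GPUs alone, before counting CPUs, memory, and fans, while a conventional server might draw only a few hundred watts in total. Concentrating that much heat in a single rack is what forces the wider spacing.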

Facebook is not the only company running big data centers and using GPUs for machine learning research. Other giants, including Microsoft, Google, and Baidu, also use GPUs for deep learning.

The social network is unusual, though, in that it has donated the Big Sur design, its other server designs, and even the blueprints for its Prineville data centers to a non-profit effort, the Open Compute Project. Facebook launched the project in 2011 to encourage computer companies to collaborate on cost-effective data center hardware designs. The project has since helped several Asian hardware companies take market share from traditional suppliers such as Dell and Hewlett-Packard.

Yann LeCun, director of Facebook's AI research lab, said when the Big Sur server was announced earlier this year that he believed that once the technology was widely available, more organizations would be able to build powerful machine-learning infrastructure, accelerating progress across the field.

The machine-learning servers of the future may not be built around GPUs, however. Many companies are now designing new chips that, unlike the GPU, are purpose-built for deep-learning algorithms.

In May of this year, Google announced that it had begun using a chip of its own design, the TPU, to drive deep-learning software in products such as speech recognition. This generation of chips appears better suited to running algorithms after they have been trained than to the initial training step that servers like Big Sur are built to accelerate, though Google is already working on a second generation. Nvidia and several newer companies, including Nervana, are also developing chips for deep learning.

Eugenio Culurciello, an associate professor at Purdue University, said the effectiveness of deep learning means such chips are bound to be widely used. "The market has a huge demand for this kind of chip, and that demand will only grow."

When asked whether Facebook is developing a custom chip of its own, Lee said the company is "doing research."
