Google has released its second-generation Tensor Processing Unit, a cloud computing hardware and software system that underpins some of the company's most ambitious and far-reaching technologies.
The first TPU, designed for machine learning, is used by the AlphaGo artificial intelligence system as the foundation of its predictive and decision-making skills. TPUs also supply the computational power behind every query entered into Google's search engine, and the same technology has been applied to the machine learning models that improve Google Translate, Google Photos and other software.
Normally, this work is done using commercially available GPUs, often from Nvidia; Facebook, for example, uses Nvidia GPUs in its Big Basin AI training servers. Over the last few years, however, Google has opted to build its own hardware.
Google says the second version of its TPU system is fully operational and is being deployed across Google Compute Engine. Google will use the system itself, and it also plans to open the new TPUs to other companies as a cloud resource in the future.
Google developed a way to link 64 TPUs together into "TPU Pods," effectively turning a Google server rack into a supercomputer with 11.5 petaflops of computational power. Even on their own, the second-gen TPUs deliver a staggering 180 teraflops.
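The pod figure follows directly from the per-chip number; a quick sanity check shows that an 11.5-petaflop pod implies roughly 64 devices at 180 teraflops each:

```python
# Back out the pod size from the reported performance figures.
pod_petaflops = 11.5
teraflops_per_tpu = 180

tpus_per_pod = pod_petaflops * 1000 / teraflops_per_tpu
print(round(tpus_per_pod))  # → 64
```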
Besides the gain in speed, the second-gen TPUs allow Google's servers to perform inference and training simultaneously. The first generation could only do inference; training, by contrast, is how an AI model is developed in the first place, and it demands far greater resources.
Because the latest TPU can handle both inference and training at the same time, researchers can deploy more versatile AI experiments.
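The distinction between the two workloads can be illustrated with a minimal sketch (plain NumPy, not TPU code): training iteratively adjusts a model's parameters, which is the expensive part, while inference is just a single forward pass with the learned parameters.

```python
import numpy as np

# Toy data: learn y = 3x from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(0, 0.01, size=100)

# Training: repeatedly update the weight w to minimize squared error.
w = 0.0
learning_rate = 0.1
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # gradient of MSE w.r.t. w
    w -= learning_rate * grad

# Inference: one cheap forward pass with the learned weight.
def predict(w, x_new):
    return w * x_new

print(round(w, 2))  # learned weight, close to 3.0
```

The loop is what consumes the "exceptional resources" the article mentions; hardware that only supports inference could run `predict` but never produce `w` in the first place.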