Google's chip strategy: the next generation of TPU artificial intelligence chips is on the way

Foreword:

With the exit of manufacturers such as Nvidia, Intel, and Texas Instruments, the smartphone chip market has settled into a fixed pattern: Qualcomm, MediaTek, and Samsung on one side, and the in-house Apple A-series and Huawei Kirin chips on the other.

Even as the technical barriers in chip design keep rising, some ambitious manufacturers still want to climb this technological high ground.

In 2017, Google hired a number of big names from the chip industry, including former Apple SoC architect Manu Gulati, Apple chip experts John Bruno and Wonjae Choi, and Mainak Biswas, Vinod Chamarty, and Shamik Ganguly from Qualcomm.

Google has been working hard to build its own chips for its Pixel smartphones.

As new chip architectures emerge, Google also hopes to strengthen its self-developed silicon in this area.

Google has recruited at least 16 technology veterans in Bangalore, with four recruiters dedicated to poaching talent from traditional chip companies such as Intel, Qualcomm, Broadcom, and Nvidia.

In March of this year, Google also announced that the company had hired Uri Frank, a long-time Intel executive, as a vice president to run its custom chip division.

Acquisitions: speeding up the self-developed chip roadmap

In 2018, Google completed its $1.1 billion acquisition of the HTC team that had worked on the Pixel smartphone. Part of HTC's mobile device division joined Google's hardware group, and Google also received a non-exclusive license to some of HTC's intellectual property.

Absorbing HTC's Pixel team further strengthened Google's capacity for independent chip research and development.

This year, Google acquired Provino Technologies, a start-up developing network-on-chip (NoC) systems for machine learning. The technology can feed into TPU development and thereby advance Google's cloud AI chips.

Compared with other interconnect designs, a NoC improves the scalability of a system-on-chip and the power efficiency of complex SoC designs.

Judging from the products Google has released so far, however, its breakthrough in self-developed phone silicon has come not from the main application processor but from coprocessors.

Going to the cloud: the TPU extends Google's cloud plans

The TPU is a dedicated neural network chip that Google launched in 2015, built to accelerate its own TensorFlow machine learning framework. Unlike a GPU, the Google TPU is an ASIC, a chip custom-designed for a specific workload.
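
To make the TensorFlow connection concrete, here is a minimal sketch (not Google's internal code) of how a developer would target a Cloud TPU from TensorFlow using tf.distribute.TPUStrategy; the empty resolver address assumes a Cloud TPU VM where the runtime is discovered automatically, and the small Keras model is purely illustrative.

```python
# Minimal sketch: targeting a Cloud TPU from TensorFlow 2.x.
# Assumes a Cloud TPU VM, where tpu="" lets the resolver find the local runtime.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built under the strategy scope is replicated across TPU cores,
# so its matrix-heavy operations run on the TPU rather than the host CPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```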

Since 2015, Google has gradually built out a cloud-to-edge portfolio around the TPU.

In addition to the TPU and TPU Pod for cloud services, Google has also launched the Edge TPU, which provides AI computing power at the edge for scenarios such as predictive maintenance, fault detection, machine vision, robotics, and voice recognition.

Today, Google's TPU is in its fourth generation, and the average performance of the fourth-generation TPU is 2.7 times that of the third generation.

With the TPU's strong performance, Google has become a representative player in dedicated AI chips, and the new architecture it introduced has brought fresh inspiration to the current wave of artificial intelligence hardware.

Google also plans to apply its TPUs in the EDA field, using cloud resources for chip verification, which can greatly shorten chip development time.

Google has also gradually brought the TPU to the edge, launching the Edge TPU in 2018. It complements Cloud TPU and Google Cloud services, providing end-to-end, cloud-to-edge "hardware + software" infrastructure that helps customers deploy AI-based solutions.
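
As a rough illustration of what edge deployment looks like in practice, the sketch below runs a quantized TensorFlow Lite model on an Edge TPU accelerator via the tflite_runtime interpreter and the libedgetpu delegate. The model file name is hypothetical; any real model must first be quantized and compiled for the Edge TPU.

```python
# Minimal sketch of on-device inference with an Edge TPU accelerator.
# Assumes tflite_runtime and the Edge TPU runtime (libedgetpu) are installed.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_quant_edgetpu.tflite",             # hypothetical file name
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input of the expected shape and dtype, then run inference.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction.shape)
```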

In the history of AI chip development, whether judged by its on-chip memory or its programmability, the Google TPU is a rare technological innovation; it broke the GPU's monopoly and opened a new competitive landscape for cloud AI chips.

Open source: the first open-source PDK lowers barriers to entry

Last year, Google announced the first open-source PDK, the SkyWater PDK. Selected companies do not have to bear expensive manufacturing costs; Google covers the chip fabrication for them for free.

It is the first open-source process design kit of its kind; using this PDK, chips can be fabricated at the SkyWater fab on the 130 nm node.

If the open-source PDK model proves successful, it will lower the barrier for companies to enter the semiconductor industry.

The next generation of TPU artificial intelligence chips is on the way

On the AI hardware front, Google recently announced its next-generation custom tensor processing unit (TPU) AI chip, TPU v4, deployed in TPU v4 Pods.

TPU v4 Pods offer twice the computing speed of the previous generation, making them the fastest systems Google has deployed to date; Google also announced that its quantum computing effort will push toward the scale of one million qubits.

The newly launched TPU v4 optimizes the interconnect architecture within the system to further raise interconnect speed. The interconnect bandwidth of a TPU v4 cluster is reportedly ten times that of most other networking technologies, and a pod can deliver exaflop-level computing power.
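
The point of that fast interconnect is to let many TPU cores work on one problem at once. The sketch below (assuming a Cloud TPU VM with JAX's TPU backend installed) shows the basic data-parallel pattern: enumerate the local TPU cores and spread a matrix multiplication across them with jax.pmap. The shapes are arbitrary placeholders.

```python
# Minimal data-parallel sketch on TPU cores, assuming a Cloud TPU VM with JAX.
import jax
import jax.numpy as jnp

devices = jax.devices()          # e.g. 8 TPU cores on a single host
print("devices:", devices)

n = len(devices)
# One batch of activations and one weight matrix per core (leading axis = cores).
x = jnp.ones((n, 128, 256))
w = jnp.ones((n, 256, 512))

# pmap compiles the function once and runs one replica per TPU core in parallel.
parallel_matmul = jax.pmap(lambda a, b: a @ b)
y = parallel_matmul(x, w)
print(y.shape)                   # (n, 128, 512)
```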

In the second half of this year, Google plans to provide the chip to developers as part of its cloud platform.

AI chips that cannot be ignored

The TPU is not an AI chip for mobile phones, and in deep learning tasks it offers less task flexibility than a CPU, GPU, or FPGA. Even so, Google's ambition to push into the AI field is clear.

AI applications on personal mobile devices (speech recognition, image processing, and so on) have such broad prospects and market potential that Google will naturally not turn a blind eye, and upgrading Android around AI has long been a focal point of the platform.

On Pixel phones already on the market, Google ships the Pixel Visual Core, a dedicated AI chip for image processing that composes HDR+ images five times faster than the application processor while consuming only a tenth of its power.

Visual Core also handles complex imaging and machine learning tasks related to cameras, including scene-based automatic image adjustments and other uses.

Now a new Google-designed chip is in development, and it will make its debut in the Pixel 6 smartphone and another device arriving later this year.

The 5-nanometer chip, code-named Whitechapel, will power the next generation of Pixel phones; internally it is referred to as GS101, the "Google Silicon" chip.

The chip is reported to combine a three-cluster design with TPU technology, bringing stronger machine learning capability to the smartphone so that modern applications can deliver a better AI experience.

The Whitechapel chip will include a customized neural processing unit and image signal processor; artificial intelligence and machine learning may be used not only to improve the camera but also to raise the overall performance standard of the Pixel 6 and Pixel 6 Pro.

Conclusion:

From the cloud to the edge and on to mobile smart devices, Google's AI chip footprint keeps widening. Judging from these moves, Google's plans in the chip field look more ambitious than ever.