According to the company, the supercomputers Google uses to train its AI models are faster and more energy-efficient than rival systems.
Google, a subsidiary of Alphabet Inc., has unveiled new details about the supercomputers it uses to train its AI models, stating that these systems are faster and more energy-efficient than comparable systems offered by Nvidia Corp.
Google has created a specialized chip called the Tensor Processing Unit, or TPU, which handles more than 90% of the company’s AI training work. Training is the process of feeding data through models so they learn to perform tasks such as producing images or responding to queries with human-like text.
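To make “feeding data through a model” concrete, here is a minimal, hypothetical sketch of one training step written in JAX, the framework commonly used on TPUs. The model, data, and learning rate are toy placeholders of our own, not anything from Google’s actual training pipelines:

```python
# A toy illustration of a single training step: run data through a
# model, measure the error, and adjust the parameters. This is NOT
# Google's code; every name and number here is a made-up placeholder.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Forward pass: feed the input data through a tiny linear model.
    return x @ params["w"] + params["b"]

def loss_fn(params, x, y):
    # Mean squared error between predictions and targets.
    return jnp.mean((predict(params, x) - y) ** 2)

@jax.jit  # On a TPU host, jit-compiled steps like this run on the TPU.
def train_step(params, x, y, lr=0.01):
    grads = jax.grad(loss_fn)(params, x, y)
    # Gradient descent: nudge each parameter against its gradient.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (4, 1)), "b": jnp.zeros((1,))}
x = jax.random.normal(key, (32, 4))   # a toy batch of input data
y = x @ jnp.ones((4, 1))              # toy targets for the batch
for _ in range(100):
    params = train_step(params, x, y)
```

Training a real large language model repeats a step like this billions of times over vast datasets, which is what makes the underlying hardware so important.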
Introduction to the Google Update:
Google’s TPU is now in its fourth generation. On Tuesday, the company published a scientific paper detailing how it strung together more than 4,000 of these chips into a supercomputer, using custom-developed optical switches to connect the individual machines.
Improving these connections has become a key point of competition among companies building AI supercomputers, because the “large language models” that power technologies such as Google’s Bard or OpenAI’s ChatGPT have grown so enormously in size that they can no longer be stored on a single chip.
Because of their size, the models must instead be split across thousands of chips, which then work together for weeks or more to train the model. Google’s PaLM model, its largest publicly disclosed language model to date, was trained by partitioning it across two of the 4,000-chip supercomputers over more than 50 days.
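The idea of partitioning a model across many chips can be sketched with JAX’s sharding API. The mesh, array shape, and axis name below are illustrative assumptions, not PaLM’s actual configuration; on a machine without TPUs, the devices will simply be whatever accelerators (or CPU) JAX finds:

```python
# A toy sketch of model parallelism: split one large weight matrix
# across every available chip so no single device must hold it all.
# All shapes and names are illustrative assumptions, not PaLM's setup.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

devices = np.array(jax.devices())            # e.g. the TPU chips on this host
mesh = Mesh(devices, axis_names=("model",))  # a 1-D mesh of devices

# A (toy) weight matrix that, at real scale, would be far too big for
# one chip. We shard its first axis across the "model" mesh axis, so
# each device holds only a slice. (Assumes the first dimension divides
# evenly by the device count, as it does on typical TPU topologies.)
weights = jnp.zeros((4096, 4096))
sharding = NamedSharding(mesh, PartitionSpec("model", None))
sharded_weights = jax.device_put(weights, sharding)

# Computations on sharded_weights now execute cooperatively across the
# chips; jax.debug.visualize_array_sharding(sharded_weights) can show
# which device owns which slice.
total = jnp.sum(sharded_weights)
```

In a real training run, the optimizer state and activations are sharded in a similar way, and the chips must constantly exchange results, which is why the speed of the interconnect between them matters so much.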
Google claims that its supercomputers are designed so the connections between chips can be flexibly reconfigured while the system is running, making it easier to route around problems and to tune the machine for better performance.
Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson stated, “Circuit switching makes it simple to reroute around failed components. This adaptability enables us even to modify the supercomputer interconnect topology to improve the performance of a machine learning model.”
Although Google is only now divulging specifics about its supercomputer, the system has been online inside the company since 2020, at a data center in Mayes County, Oklahoma. Google reported that the startup Midjourney used the system to train its model, which generates new images from a few words of text.