Google announces the next-generation AI architecture “Pathways”


Google Research today announced a next-generation AI architecture called “Pathways”. This “new way of thinking about AI” aims to address “current weaknesses in existing systems”.

Google says Pathways can “train one model to do thousands or millions of things,” in contrast to today’s highly individualized approach, which takes a lot of time and “a lot more data” because every model starts from scratch:

Rather than extending existing models to learn new tasks, we train each new model from scratch to do one thing and only one thing (or sometimes we specialize a general model to a specific task). The result is that we end up developing thousands of models for thousands of individual tasks.

Pathways, by contrast, can “leverage and combine existing skills to learn new tasks faster and more effectively.” This mirrors how mammalian brains, including our own, generalize across tasks, and it results in a single AI model that can handle many different things.
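Google has not published how Pathways reuses skills internally, but the closest everyday analogue is transfer learning, where an already-trained network is reused for a new task instead of starting over. The sketch below is purely illustrative, with every name and shape invented for the example: a frozen, previously trained “trunk” supplies features, and only a small new “head” is trained for the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "existing skill": a frozen feature extractor ("trunk")
# previously trained on other tasks (weights are random stand-ins here).
W_trunk = rng.normal(size=(16, 8))

def trunk(x):
    # Reusable features; in real transfer learning these weights stay frozen.
    return np.tanh(x @ W_trunk)

# New task: train only a small linear head on top of the trunk,
# instead of training a whole new model from scratch.
W_head = np.zeros((8, 1))

def train_head(X, y, lr=0.1, steps=200):
    global W_head
    for _ in range(steps):
        feats = trunk(X)                      # trunk is reused, not retrained
        grad = feats.T @ (feats @ W_head - y) / len(X)
        W_head -= lr * grad                   # only the head learns

X = rng.normal(size=(64, 16))
y = rng.normal(size=(64, 1))
train_head(X, y)
```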

Much as Google is doing with MUM and Lens next year, Pathways “could enable multimodal models that encompass seeing, hearing and understanding language simultaneously,” again like a human using multiple senses to perceive the world. Today’s AI models, by contrast, typically analyze one kind of input at a time: text, images, or speech.

So whether the model is dealing with the word “leopard”, the sound of someone saying “leopard” or a video of a running leopard, the same response is activated internally: the concept of a leopard. The result is a more insightful model that is less prone to errors and bias.
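Google describes this shared “leopard” activation only at a conceptual level. A minimal sketch of the idea, assuming one encoder per modality that projects into a single shared embedding space (all weights below are untrained placeholders, and every name is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 32  # size of the shared concept space

# One placeholder encoder per modality, each projecting its own input
# format into the same shared embedding space.
W_text = rng.normal(size=(100, DIM))    # e.g. bag-of-words features
W_audio = rng.normal(size=(64, DIM))    # e.g. spectrogram features
W_image = rng.normal(size=(256, DIM))   # e.g. pixel/patch features

def embed(x, W):
    v = x @ W
    return v / np.linalg.norm(v)        # unit-length concept vector

# After real training (not shown), the word "leopard", the sound of
# "leopard", and leopard footage would all land near one point here.
text_vec = embed(rng.normal(size=100), W_text)
audio_vec = embed(rng.normal(size=64), W_audio)
image_vec = embed(rng.normal(size=256), W_image)

similarity = float(text_vec @ audio_vec)  # cosine similarity in [-1, 1]
```

After real training, the three modality-specific vectors for the same concept would sit close together, which is what would let downstream layers treat “leopard” as one thing regardless of how it arrived.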

More abstract forms of data can also be used for analysis:

And of course, an AI model doesn’t have to be limited to these familiar senses; Pathways could process more abstract forms of data, helping to find useful patterns that have eluded human scientists in complex systems such as climate dynamics.

In addition to generalization, Google says Pathways allows for a degree of specialization, producing models that are “sparse and efficient” because they do not need to activate an entire neural network to accomplish a simple task:

We can build a single model that is activated in a “sparse” fashion, which means that only small pathways across the network are activated as needed. In fact, the model dynamically learns which parts of the network are good for which tasks – it learns to route tasks through the most relevant parts of the model. A big advantage of this type of architecture is that it not only has a greater ability to learn a variety of tasks, but it is also faster and much more energy efficient, because we do not activate the entire network for each task.
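Google does not spell out the routing mechanism, but the description matches the general shape of mixture-of-experts routing, in which a small gating function decides which sub-networks (“experts”) run for a given input. A minimal sketch under that assumption, with all names and sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" is a small sub-network; only a few run per input.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
W_gate = rng.normal(size=(DIM, NUM_EXPERTS))  # the gate learns routing

def sparse_forward(x):
    scores = x @ W_gate                    # how relevant is each expert?
    top = np.argsort(scores)[-TOP_K:]      # activate only the top-k pathways
    weights = np.exp(scores[top])
    weights /= weights.sum()               # normalize over chosen experts
    # Only TOP_K of NUM_EXPERTS experts do any work for this input,
    # which is where the speed and energy savings come from.
    return sum(w * np.tanh(x @ experts[i]) for i, w in zip(top, weights))

out = sparse_forward(rng.normal(size=DIM))
```

The efficiency claim falls out of the arithmetic: with 2 of 8 experts active, roughly a quarter of the expert computation runs per input, while the gate itself stays tiny.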


