Research
Exploring AGI, the challenges of scaling and the future of multimodal generative AI
Next week the artificial intelligence (AI) community will come together for the 2024 International Conference on Machine Learning (ICML). Running from July 21-27 in Vienna, Austria, the conference is an international platform for showcasing the latest advances, exchanging ideas and shaping the future of AI research.
This year, teams from across Google DeepMind will present more than 80 research papers. At our booth, we'll also showcase our multimodal on-device model, Gemini Nano, our new family of AI models for education called LearnLM, and we'll demo TacticAI, an AI assistant that can help with football tactics.
Here we introduce some of our oral, spotlight and poster presentations:
Defining the path to AGI
What is artificial general intelligence (AGI)? The term describes an AI system that is at least as capable as a human at most tasks. As AI models continue to advance, defining what AGI could look like in practice will become increasingly important.
We'll present a framework for classifying the capabilities and behaviors of AGI models. Depending on their performance, generality and autonomy, our paper categorizes systems ranging from non-AI calculators to emerging AI models and other novel technologies.
We'll also show that open-endedness is critical to building generalized AI that goes beyond human capabilities. While many recent AI advances have been driven by existing Internet-scale data, open-ended systems can generate new discoveries that extend human knowledge.
Scaling AI systems efficiently and responsibly
Building larger, more capable AI models requires more efficient training methods, closer alignment with human preferences and better privacy safeguards.
We'll show how using classification instead of regression makes it easier to scale deep reinforcement learning systems and achieve state-of-the-art performance across different domains. Additionally, we propose a novel approach that predicts the distribution of consequences of a reinforcement learning agent's actions, helping to rapidly evaluate new scenarios.
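To make the classification idea concrete, here is a minimal, illustrative sketch (not taken from the paper) of value learning as classification in PyTorch: instead of regressing a scalar return with a squared-error loss, the value head outputs a categorical distribution over fixed return bins and is trained with cross-entropy against a "two-hot" encoding of the target. The bin range, layer sizes and helper names are assumptions made for illustration.

```python
# Illustrative sketch: value learning as classification over return bins.
# NUM_BINS, V_MIN, V_MAX and the 256-dim feature size are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BINS, V_MIN, V_MAX = 51, -10.0, 10.0
bin_centers = torch.linspace(V_MIN, V_MAX, NUM_BINS)

def two_hot(target: torch.Tensor) -> torch.Tensor:
    """Spread each scalar target return over its two nearest bins."""
    target = target.clamp(V_MIN, V_MAX)
    idx = (target - V_MIN) / (V_MAX - V_MIN) * (NUM_BINS - 1)
    lo = idx.floor().long()
    hi = (lo + 1).clamp(max=NUM_BINS - 1)
    w_hi = idx - lo.float()
    probs = torch.zeros(target.shape[0], NUM_BINS)
    probs.scatter_(1, lo.unsqueeze(1), (1 - w_hi).unsqueeze(1))
    probs.scatter_add_(1, hi.unsqueeze(1), w_hi.unsqueeze(1))
    return probs

value_head = nn.Linear(256, NUM_BINS)  # logits over return bins

def classification_value_loss(features, target_returns):
    # Cross-entropy against soft "two-hot" targets replaces MSE regression.
    return F.cross_entropy(value_head(features), two_hot(target_returns))

def predicted_value(features):
    # Recover a scalar value as the expectation over bin centers.
    return value_head(features).softmax(dim=-1) @ bin_centers
```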
Our researchers present an alignment-maintaining approach that reduces the need for human oversight, and a new approach to fine-tuning large language models (LLMs), based on game theory, that better aligns an LLM's output with human preferences.
We critique the approach of training models on public data and only fine-tuning with "differentially private" training, and argue that this method may not offer the privacy or utility that is often claimed.
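For context, the sketch below shows the recipe the paper critiques: a model pretrained on public data is then fine-tuned on private data with DP-SGD, here via the open-source Opacus library. The model, data and hyperparameters are placeholder assumptions, not the paper's setup.

```python
# Illustrative sketch of "public pretraining + differentially private fine-tuning".
# The Linear model stands in for a pretrained network; data is synthetic.
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = torch.nn.Linear(16, 2)  # stand-in for a model pretrained on public data
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
private_data = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
loader = DataLoader(private_data, batch_size=32)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # Gaussian noise added to clipped per-example gradients
    max_grad_norm=1.0,     # per-example gradient clipping bound
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# The accounted privacy budget covers only the fine-tuning data; the paper's
# argument is that this accounting can overstate what the recipe delivers.
print(f"epsilon spent: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```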
New approaches in generative AI and multimodality
Generative AI technologies and multimodal capabilities are expanding the creative possibilities of digital media.
We'll present VideoPoet, which uses an LLM to generate state-of-the-art video and audio from multimodal inputs including images, text, audio and other video.
We'll also share Genie (generative interactive environments), which can generate a range of playable environments for training AI agents, based on text prompts, images, photos or sketches.
Finally, we introduce MagicLens, a novel image retrieval system that uses text instructions to retrieve images with richer relations beyond visual similarity.
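As a loose illustration of this kind of text-guided retrieval (not the MagicLens implementation), the sketch below scores candidate images against a joint embedding of a query image and a text instruction; the encoders named in the usage comments are hypothetical placeholders.

```python
# Illustrative sketch: nearest-neighbour retrieval by cosine similarity in a
# shared embedding space for (image, instruction) queries and candidate images.
import numpy as np

def cosine_top_k(query_vec: np.ndarray, index: np.ndarray, k: int = 5):
    """Return the indices and scores of the k most similar rows in `index`."""
    q = query_vec / np.linalg.norm(query_vec)
    normed = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = normed @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

# Assumed usage with hypothetical encoders:
# query_vec = encode_query(image=query_image, instruction="same building at night")
# index = np.stack([encode_image(img) for img in corpus])
# top_ids, top_scores = cosine_top_k(query_vec, index)
```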
Supporting the AI community
We're proud to sponsor ICML and foster a diverse community in AI and machine learning by supporting initiatives led by Disability in AI, Queer in AI, LatinX in AI and Women in Machine Learning.
If you're at the conference, visit the Google DeepMind and Google Research booths to meet our teams, see live demos and find out more about our research.