Alphabet Inc.’s Intrinsic unit, which develops technology that makes industrial robots easier to program, today debuted a set of artificial intelligence models built by its engineers.
Executives detailed the models at the Automate 2024 robotics event, which is being held this week in Chicago. Some of the neural networks were developed in collaboration with Nvidia Corp. and Google DeepMind, the search giant’s AI research group, while Intrinsic built the others on its own.
Teaching industrial robots to perform tasks such as packing goods into boxes has historically required a significant amount of custom code. In some cases, the programming involved is so complex that it hinders manufacturers’ efforts to automate their factories. Alphabet established Intrinsic in 2021 with the goal of creating software that simplifies robot programming and makes the technology more accessible.
Before a robotic arm can pick up an object, it must first detect the object’s presence and then perform “3D pose estimation,” the task of determining the object’s position and orientation. With that information, the arm can work out the best angle from which to grasp the object, reducing the risk of drops, collisions with nearby items, and related problems.
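To make the idea concrete, here is a minimal sketch of what a pose-estimation result looks like and how a grasp planner might use it. None of these names come from Intrinsic’s software; the pose is represented, as is common in robotics, by a 3-D position plus a 3x3 rotation matrix, and the planner reads the object’s local “up” axis in world coordinates to pick an approach direction.

```python
# Illustrative only: a 6-DoF pose pairs a position with an orientation.
# The grasp planner converts the object's local z-axis ("up") into the
# world frame to decide from which direction to approach the object.

def rotate(rotation, vec):
    """Apply a 3x3 rotation matrix (list of rows) to a 3-vector."""
    return [sum(row[i] * vec[i] for i in range(3)) for row in rotation]

def approach_direction(pose):
    """World-frame direction of the object's local z-axis."""
    return rotate(pose["rotation"], [0.0, 0.0, 1.0])

# An object detected at (0.4, 0.1, 0.02) meters with identity orientation:
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pose = {"position": [0.4, 0.1, 0.02], "rotation": identity}
print(approach_direction(pose))  # → [0.0, 0.0, 1.0], i.e. grasp from above
```

If the object were tipped on its side, the rotation matrix would change and the computed approach direction would change with it, which is why estimating orientation, not just position, matters for grasping.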
The first AI model Intrinsic detailed today can identify an object and estimate its pose in a matter of seconds. Alphabet’s engineers pre-trained the model on more than 130,000 types of objects, the company said. The AI can also adapt to changes in its working environment, such as when lighting conditions shift or the camera a robotic arm uses to track objects is replaced.
“The model is fast, generalized, and accurate,” Intrinsic Chief Executive Wendy Tan White explained in a blog post. “We are working to make this and similar features easier to develop, deploy, and use by adding them to the Intrinsic platform as new capabilities.”
Today at Automate 2024, the Alphabet division also detailed two AI projects carried out in collaboration with Google DeepMind. Both aimed to optimize the movements of industrial robots.
According to Intrinsic, the first project produced an AI tool that simplifies “motion planning,” the task of determining the best series of movements a robot should make to complete a task. The tool is geared toward scenarios in which several autonomous machines must operate in tandem while avoiding collisions with one another.
The software takes a robot’s measurements, motion patterns, and assigned tasks as input, then automatically generates motion plans, reducing the need for manual coding. In a simulation of four robots working together on a virtual welding project, the AI tool achieved a 25% improvement over traditional motion planning methods.
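The article does not describe Intrinsic’s algorithm, but the core problem, several robots sharing a workspace without colliding, can be illustrated with a textbook technique called prioritized planning. In this sketch, robots on a small grid are planned one at a time, and each later robot treats the earlier robots’ (cell, time) reservations as moving obstacles. All names and the grid abstraction are my own, not Intrinsic’s.

```python
from collections import deque

# Prioritized multi-robot planning sketch: breadth-first search over
# (cell, time) states. A robot may move to a neighboring cell or wait
# in place; it may not enter a cell another robot has reserved for that
# time step, nor swap cells head-on with another robot.

def plan_path(start, goal, reserved, width=5, height=5, horizon=50):
    """Shortest path from start to goal honoring timed reservations."""
    queue = deque([[start]])
    seen = {(start, 0)}
    while queue:
        path = queue.popleft()
        (x, y), t = path[-1], len(path) - 1
        if (x, y) == goal:
            return path
        if t >= horizon:
            continue
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]:
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if (*nxt, t + 1) in reserved:  # cell occupied at arrival time
                continue
            if (*nxt, t) in reserved and (x, y, t + 1) in reserved:
                continue                   # head-on swap with another robot
            if (nxt, t + 1) not in seen:
                seen.add((nxt, t + 1))
                queue.append(path + [nxt])
    return None  # no collision-free path within the horizon

def plan_all(tasks):
    """Plan robots in priority order; each reserves its path in time."""
    reserved, plans = set(), []
    for start, goal in tasks:
        path = plan_path(start, goal, reserved)
        plans.append(path)
        for t, cell in enumerate(path):
            reserved.add((*cell, t))
    return plans

# Two robots swapping ends of the same row: the second must detour.
plans = plan_all([((0, 0), (4, 0)), ((4, 0), (0, 0))])
```

Real industrial planners work in continuous joint space with kinematic and dynamic constraints rather than on a grid, which is part of why learned tools that generate plans automatically are attractive.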
Intrinsic’s other joint project with Google DeepMind focused on optimizing scenarios in which two robotic arms collaborate on the same task. The latter group’s researchers used Intrinsic’s technical resources to create AI software optimized for such use cases. “One of Google DeepMind’s methods of training a model—based on human input using remote devices—benefits from Intrinsic’s management of high-frequency real-time controls infrastructure, sensor data, and real-world data enablement,” Tan White wrote.
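Training “based on human input using remote devices” describes learning from teleoperated demonstrations. DeepMind’s actual method is not specified here; as a toy stand-in, the sketch below shows the simplest form of the idea, behavior cloning with a nearest-neighbor policy: demonstrations pair an observed state with the action the human operator took, and the policy replays the action whose recorded state is closest to the current one. The states, actions, and function name are invented for illustration.

```python
# Toy behavior-cloning sketch (not DeepMind's system): pick the action
# from the demonstration whose recorded state is nearest to the query.

def nearest_action(demos, state):
    """Return the demonstrated action for the most similar state."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(demos, key=lambda d: sq_dist(d[0], state))
    return action

# Demonstrations: (observed state, operator's action) pairs.
demos = [((0.0, 0.0), "reach"), ((0.5, 0.1), "grasp"), ((0.9, 0.4), "lift")]
print(nearest_action(demos, (0.45, 0.12)))  # → grasp
```

Production systems replace the lookup with a trained neural network and feed it high-frequency sensor streams, which is where the real-time controls infrastructure Tan White mentions comes in.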
At the event, Intrinsic also disclosed a partnership with Nvidia centered on improving robot grasping accuracy. Historically, the software code that dictates how a robotic arm picks up an object had to be customized for every type of object the arm encountered, which required a substantial amount of manual work.
Using Isaac Sim, Nvidia’s robot simulation platform, Intrinsic built an AI system that automates the process. It can generate the code a robot needs to pick up an object without human input. The AI can also adapt that code to account for the fact that different robotic arms often pick up objects with different kinds of grippers.
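The per-gripper adaptation step can be made concrete with a small sketch. This is not Intrinsic’s or Nvidia’s API; the gripper table, field names, and function below are hypothetical, showing only why the same object needs a different generated pickup plan depending on the end effector.

```python
# Hypothetical sketch: specializing a generated grasp plan per gripper.
# A parallel-jaw gripper has a maximum opening width and grips from the
# side; a suction cup has no width limit and approaches from the top.

GRIPPER_SPECS = {
    "parallel_jaw": {"max_width_m": 0.085, "approach": "side"},
    "suction_cup": {"max_width_m": None, "approach": "top"},
}

def generate_grasp_plan(object_width_m, gripper):
    """Emit a pickup plan adapted to the chosen gripper's constraints."""
    spec = GRIPPER_SPECS[gripper]
    limit = spec["max_width_m"]
    if limit is not None and object_width_m > limit:
        raise ValueError(f"object too wide for {gripper}")
    return {"gripper": gripper, "approach": spec["approach"],
            "grip_width_m": object_width_m}

# The same 5 cm object yields different plans for different grippers:
print(generate_grasp_plan(0.05, "parallel_jaw")["approach"])  # → side
print(generate_grasp_plan(0.05, "suction_cup")["approach"])   # → top
```

Automating this kind of specialization in simulation, rather than hand-writing it for every object-and-gripper combination, is the labor savings the partnership targets.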