DM (DeepMind) is a Google-owned company that researches and builds artificial intelligence, with a particular focus on deep reinforcement learning. While its AlphaGo system captured global attention in 2016 when it defeated world champion Lee Sedol at the board game Go, the company’s AI ambitions extend well beyond board games.
Its artificial intelligence techniques, which rely on deep reinforcement learning, have been applied in both academic and industrial settings. As the company’s AI acquires new, more sophisticated capabilities, the range of problems it can tackle keeps growing year after year.
The DM AI Learned to Walk on Its Own
A video of a DeepMind neural network teaching itself to walk is one of the most iconic clips to emerge from the company’s artificial intelligence (AI) work.
The AI was given various characteristics of torque-controlled virtual bodies, such as the number of joints, the degrees of freedom of the limbs, and the obstacles that needed to be avoided. These procedurally generated worlds included obstacles such as walls and gaps, and without ever being shown how to get past them, the AI had to work out on its own how to move through its environment. That information alone was enough for the AI to teach itself to walk in humanoid, bipedal, and four-legged bodies.
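To make the core trick concrete, here is a minimal toy sketch, not DeepMind’s actual setup: the agent is rewarded almost exclusively for forward progress (with a small effort penalty) and has to discover how to move on its own via policy-gradient reinforcement learning. The one-dimensional “walker” physics, the linear Gaussian policy, and every constant below are invented purely for illustration.

```python
# Minimal sketch (not DeepMind's setup): REINFORCE on a toy one-dimensional
# "walker" whose only reward is forward progress minus a small effort penalty.
import numpy as np

rng = np.random.default_rng(0)

def step(state, torque, dt=0.05):
    """Toy dynamics: commanded torque (capped by an actuator limit)
    accelerates the body, friction slows it down."""
    pos, vel = state
    applied = np.clip(torque, -5.0, 5.0)          # actuator limit
    vel = vel + dt * (applied - 0.5 * vel)        # friction term
    pos = pos + dt * vel
    reward = vel - 0.01 * applied ** 2            # forward progress minus effort
    return np.array([pos, vel]), reward

# State-dependent Gaussian policy: torque ~ N(w . [pos, vel] + b, sigma^2)
w, b, sigma, lr = np.zeros(2), 0.0, 0.5, 1e-3

for episode in range(300):
    state = np.array([0.0, 0.0])
    grads_w, grads_b, rewards = [], [], []
    for t in range(100):
        mean = w @ state + b
        torque = rng.normal(mean, sigma)
        dlog = (torque - mean) / sigma ** 2       # d/d(mean) of the Gaussian log-prob
        grads_w.append(dlog * state)
        grads_b.append(dlog)
        state, r = step(state, torque)
        rewards.append(r)
    # REINFORCE: weight each log-prob gradient by the (normalised) return after it
    returns = np.cumsum(rewards[::-1])[::-1]
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    w += lr * sum(g * R for g, R in zip(grads_w, returns))
    b += lr * sum(g * R for g, R in zip(grads_b, returns))

print("learned policy: w =", np.round(w, 3), " b =", round(b, 3))
```

After training, the learned mean torque is clearly positive: the agent has discovered that pushing forward is what the reward pays for, which is the same principle the much richer DeepMind bodies exploited.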
It not only learned to walk and run but also to navigate its virtual surroundings, including hazards like gaps and cliffs, with ease. Watching it use its limbs in novel ways was also a lot of fun.
This DM AI Can Make Its Own Pictures
Creating new, realistic images from scratch is another fascinating skill in DM’s AI toolkit. Researchers trained the AI on ImageNet, a database of real-world photographs. Fed this data, one neural network learned to produce images while another learned to tell computer-generated pictures from real ones.
This procedure is built on generative adversarial networks (GANs), a well-established class of artificial intelligence algorithms. What sets DM’s image generation apart are the many enhancements and refinements the company has layered on top. Images generated by DM’s AI scored significantly better than those produced by other methods across a wide range of quality metrics.
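The adversarial setup itself is easy to sketch. Below is a minimal GAN, nothing like DM’s models in scale, in which a generator learns to mimic a simple one-dimensional “real” distribution while a discriminator learns to tell generated samples from real ones. The network sizes, the stand-in data distribution, and the training constants are all assumptions made for illustration (PyTorch is assumed to be installed).

```python
# Minimal GAN sketch: generator vs. discriminator on a toy 1-D distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=128):
    # Stand-in for "real images": samples from N(4, 1.5)
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator: real samples labelled 1, generated samples labelled 0
    real = real_batch()
    fake = G(torch.randn(128, 8)).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to fool the discriminator into labelling fakes as real
    fake = G(torch.randn(128, 8))
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())
```

The same tug-of-war, scaled up to deep convolutional networks and millions of ImageNet photos, is what lets DM’s system produce convincing pictures.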
Robots Trained by DM Can Outsmart Humans in Strategic Thinking
The realization that DM’s AI has already learned to out-think human opponents tactically is the kind of thing that brings SkyNet to mind. DM’s AI has recently made headlines for its dominance over human players in board games, but it has also learned how to collaborate effectively with others.
Using artificial intelligence developed by DM, computers can now win Capture the Flag matches against human opponents in Quake III Arena. DM reported that, thanks to advances in reinforcement learning, its agents reached human-level performance in this 3D first-person multiplayer game, and that they cooperate effectively both with other artificial agents and with human players.
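One ingredient DM described for this work was training a whole population of agents against one another rather than a single agent. The toy sketch below illustrates that population-based self-play idea in its simplest form: agents play random head-to-head matches, and weak agents periodically copy and perturb the parameters of strong ones. The “skill” function, the match-outcome rule, and the population sizes are all made up for illustration and bear no relation to the real Quake III agents.

```python
# Minimal sketch of population-based self-play (not DeepMind's agents).
import numpy as np

rng = np.random.default_rng(0)
POP = 16
agents = rng.normal(0.0, 1.0, size=(POP, 4))       # each row: one agent's parameters

def skill(theta):
    # Hypothetical scalar skill: closer to an unknown optimum = stronger
    return -np.sum((theta - np.array([1.0, -2.0, 0.5, 3.0])) ** 2)

def play_match(a, b):
    # Win probability follows a logistic function of the skill gap
    p_a_wins = 1.0 / (1.0 + np.exp(skill(b) - skill(a)))
    return rng.random() < p_a_wins

for generation in range(50):
    wins = np.zeros(POP)
    for _ in range(200):                             # random self-play matchmaking
        i, j = rng.choice(POP, size=2, replace=False)
        if play_match(agents[i], agents[j]):
            wins[i] += 1
        else:
            wins[j] += 1
    order = np.argsort(wins)                         # worst performers first
    for loser, winner in zip(order[:POP // 4], order[-POP // 4:]):
        # exploit: copy a strong agent; explore: perturb the copy
        agents[loser] = agents[winner] + rng.normal(0.0, 0.1, size=4)

print("best agent parameters:", np.round(agents[order[-1]], 2))
```

The real agents were full reinforcement learners playing the actual game, but the exploit-and-explore loop over a population is the same basic mechanism.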
DM AI Learned to Find Its Way Around Without a Guidebook
Finding its way around a city without a map is one of DeepMind AI’s most impressive feats. Instead of a map, the AI learns from experience, something people do all the time, so it might seem like a relatively easy task. The intricate brain systems that allow humans to do it, however, are a source of wonder and awe.
The DM AI had to travel through major cities to reach its destination without a map. It explored a simulated world from a first-person perspective built out of Google Street View imagery.
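A rough idea of what such an agent’s interface might look like is sketched below: a small convolutional network encodes the current first-person frame, the result is combined with a vector describing the goal, and the model outputs a choice among a handful of movement actions. This is only a hand-drawn approximation, not DeepMind’s actual architecture; the image size, goal encoding, and action set are assumptions (PyTorch assumed installed).

```python
# Minimal sketch of a goal-conditioned navigation policy (not DeepMind's model).
import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    def __init__(self, goal_dim=16, n_actions=5):
        super().__init__()
        # Small convolutional encoder for the first-person view
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 84, 84)).shape[1]
        # Combine image features with the goal vector, then pick an action
        self.head = nn.Sequential(
            nn.Linear(feat_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, image, goal):
        return self.head(torch.cat([self.encoder(image), goal], dim=1))

policy = NavPolicy()
frame = torch.rand(1, 3, 84, 84)        # stand-in for a first-person street view frame
goal = torch.rand(1, 16)                # stand-in for a goal description
action_logits = policy(frame, goal)
action = action_logits.argmax(dim=1)    # e.g. move forward / turn left / turn right ...
print("chosen action index:", action.item())
```

In the real system the action choice is trained with reinforcement learning so that, frame by frame, the agent works its way toward the requested destination.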