Robots that sense pain and an AI that predicts footballers’ movements – TechCrunch

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron (formerly Deep Science), aims to collect some of the most relevant recent findings and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

This week in AI, a team of engineers from the University of Glasgow has developed “artificial skin” that can learn to feel and react to simulated pain. Elsewhere, researchers at DeepMind have developed a machine learning system that predicts where soccer players will run on the pitch, while groups from the Chinese University of Hong Kong (CUHK) and Tsinghua University have created systems capable of generating realistic photos – and even videos – of human models.

According to a press release, the Glasgow team’s artificial skin leverages a new type of processing system based on “synaptic transistors” designed to mimic neural pathways in the brain. The transistors, made from zinc oxide nanowires printed on the surface of a flexible plastic, are connected to a skin sensor that registers changes in electrical resistance.

Image Credits: University of Glasgow

Although artificial skin has been attempted before, the team says its design differs in that it uses a circuit built into the system to act as an “artificial synapse,” reducing the input to a series of voltage spikes. This sped up processing and allowed the team to ‘teach’ the skin how to respond to simulated pain by setting an input voltage threshold whose spike frequency varied depending on the level of pressure applied to the skin.
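As a rough illustration of the thresholding behavior described above – pressure mapped to a spike frequency, with a reaction once a learned threshold is crossed – here is a toy sketch in Python. The gain and threshold values are invented for illustration and bear no relation to the actual Glasgow circuit:

```python
# Toy model: pressure on the skin produces voltage spikes whose frequency
# grows with pressure, and a "pain" reaction fires once the spike frequency
# crosses a set threshold. All numbers are illustrative.

def spike_frequency(pressure: float, gain: float = 10.0) -> float:
    """Map applied pressure (arbitrary units) to a spike frequency (Hz)."""
    return gain * pressure

def reacts_to_pain(pressure: float, threshold_hz: float = 50.0) -> bool:
    """The skin 'feels pain' when spike frequency exceeds the threshold."""
    return spike_frequency(pressure) > threshold_hz

print(reacts_to_pain(2.0))   # gentle touch: 20 Hz -> False
print(reacts_to_pain(8.0))   # hard press:   80 Hz -> True
```

The point of the real hardware design, per the release, is that this thresholding happens in the circuit itself rather than in downstream software, which is what speeds up the response.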

The team envisions the skin being used in robotics, where it could, for example, prevent a robotic arm from coming into contact with dangerously high temperatures.

Tangentially related to robotics, DeepMind claims to have developed an AI model, Graph Imputer, that can predict where football players will move using video recordings of only a subset of the players. More impressively, the system can make predictions about players outside the camera’s view, allowing it to track the positions of most, if not all, players on the field fairly accurately.

Image Credits: DeepMind

Graph Imputer is not perfect. But DeepMind researchers say it could be used for applications such as modeling pitch control, or the likelihood that a player could control the ball were it in a given location. (Several top Premier League teams use pitch control models during matches, as well as in pre-match and post-match analysis.) Beyond football and other sports analytics, DeepMind expects the techniques behind Graph Imputer to be applicable to areas such as modeling pedestrians on roads and modeling crowds in stadiums.

While artificial skin and motion prediction systems are impressive, photo and video generation is arguably the area advancing at the most rapid pace. There are, of course, highly publicized works like OpenAI’s DALL-E 2 and Google’s Imagen. But consider Text2Human, developed by CUHK’s Multimedia Lab, which can translate a caption like “the lady is wearing a short-sleeved T-shirt with a pure color pattern, and a short denim skirt” into an image of a person who doesn’t actually exist.

In partnership with the Beijing Academy of Artificial Intelligence, Tsinghua University has created an even more ambitious model called CogVideo that can generate video clips from text (e.g., “a man skiing,” “a lion drinking water”). The clips are full of artifacts and other visual quirks, but given that they depict completely fictional scenes, it’s hard to criticize them too harshly.

Machine learning is often used in drug discovery, where the near-infinite variety of molecules that appear in literature and theory must be sorted and characterized in order to find potentially beneficial effects. But the volume of data is so large, and the cost of false positives potentially so high (it’s time-consuming and expensive to chase down leads), that even 99% accuracy isn’t good enough. This is especially true of unlabeled molecular data, by far the bulk of what exists (compared with molecules that have been manually characterized over the years).
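The scale problem above can be made concrete with back-of-the-envelope arithmetic; the library size and error rate below are illustrative, not drawn from any cited study:

```python
# Why 99% accuracy isn't enough at drug-discovery scale: even a 1% false
# positive rate over a billion candidate molecules yields ten million
# spurious leads, each costly to follow up. Figures are illustrative.

candidates = 1_000_000_000   # hypothetical size of a screening library
false_positive_rate = 0.01   # what a "99% accurate" filter lets through

false_leads = int(candidates * false_positive_rate)
print(false_leads)  # 10,000,000 molecules flagged in error
```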

Schematic of an AI model's sorting method for molecules.

Image Credits: CMU

CMU researchers worked to create a model that can sort through billions of uncharacterized molecules by training it to make sense of them without any additional information. It does this by making slight changes to the structure of a (virtual) molecule, such as hiding an atom or removing a bond, and observing how the resulting molecule changes. This lets it learn intrinsic properties of how these molecules form and behave – and has led it to outperform other AI models at identifying toxic chemicals in a test database.
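The perturbation step described above can be sketched as follows; the graph encoding and the two perturbation choices here are simplified stand-ins for illustration, not the CMU team’s actual implementation:

```python
import random

# Minimal sketch of the self-supervised setup described above: perturb a
# molecular graph (mask an atom or drop a bond) to create original/perturbed
# pairs a model could then learn from.

def perturb(atoms: list, bonds: list, rng: random.Random):
    """Return a perturbed copy of the molecule: mask one atom or drop one bond."""
    atoms, bonds = atoms[:], bonds[:]   # copy; leave the original intact
    if rng.random() < 0.5 and atoms:
        atoms[rng.randrange(len(atoms))] = "[MASK]"   # hide an atom's identity
    elif bonds:
        bonds.pop(rng.randrange(len(bonds)))          # remove a bond
    return atoms, bonds

# Ethanol-like toy molecule: atom labels plus (index, index) bonds.
atoms = ["C", "C", "O"]
bonds = [(0, 1), (1, 2)]

rng = random.Random(0)
new_atoms, new_bonds = perturb(atoms, bonds, rng)
print(new_atoms, new_bonds)   # one atom masked or one bond removed
```

A model trained to predict what changed between the original and perturbed versions is forced to learn structural regularities of molecules without any human labels, which is the core idea the paragraph describes.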

Molecular signatures are also key to diagnosing disease – two patients may have similar symptoms, but careful analysis of their lab results can show they have very different conditions. Of course, this is standard medical practice, but as data from multiple tests and scans accumulates, it becomes difficult to track all the correlations. The Technical University of Munich is working on a kind of clinical meta-algorithm that integrates multiple data sources (including other algorithms) to differentiate between liver diseases with similar presentations. While such models will not replace physicians, they will continue to help manage growing volumes of data that even specialists may not have the time or expertise to interpret.
