
Drones navigate unseen environments with liquid neural networks


Makram Chahine, a PhD student in electrical engineering and computer science and an MIT CSAIL affiliate, leads a drone used to test liquid neural networks. Photo: Mike Grimmett/MIT CSAIL

By Rachel Gordon | MIT CSAIL

In the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures, but rather a product of deliberate innovation: drones. But these aren’t your typical flying bots, humming around like mechanical bees. Rather, they’re avian-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease.

Inspired by the adaptable nature of organic brains, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight-navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments. Liquid neural networks, which can continuously adapt to new data inputs, showed prowess in making reliable decisions in unknown domains like forests, urban landscapes, and environments with added noise, rotation, and occlusion. These adaptable models, which outperformed many state-of-the-art counterparts on navigation tasks, could enable potential real-world drone applications like search and rescue, delivery, and wildlife monitoring.

The researchers’ latest study, published in Science Robotics, details how this new breed of agents can adapt to significant distribution shifts, a long-standing challenge in the field. The team’s new class of machine-learning algorithms captures the causal structure of tasks from high-dimensional, unstructured data, such as pixel inputs from a drone-mounted camera. The networks can then extract the crucial aspects of a task (i.e., understand the task at hand) and ignore irrelevant features, allowing acquired navigation skills to transfer seamlessly to new environments.
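For readers curious about the mechanism behind the name, below is a minimal sketch of a liquid time-constant (LTC) neuron update in plain NumPy. The fused semi-implicit Euler step follows the general recipe from the LTC literature by Hasani and colleagues; the layer sizes, random weights, and step size here are illustrative stand-ins, not values from the paper.

```python
# Minimal sketch of a liquid time-constant (LTC) cell.
# Dynamics: dx/dt = -x/tau + f(x, u) * (A - x), where the gate f
# depends on the input -- this input-dependence is the "liquid" part.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 8, 16          # illustrative sizes

W_in = rng.normal(scale=0.5, size=(n_neurons, n_inputs))   # input weights
W_rec = rng.normal(scale=0.5, size=(n_neurons, n_neurons))  # recurrent weights
bias = np.zeros(n_neurons)
tau = np.ones(n_neurons)             # base time constants
A = np.ones(n_neurons)               # bias (reversal) potentials

def ltc_step(x, u, dt=0.05):
    """One fused semi-implicit Euler step of the LTC ODE."""
    f = np.abs(np.tanh(W_rec @ x + W_in @ u + bias))  # non-negative gate
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Roll the cell over a toy input stream (a stand-in for camera features).
x = np.zeros(n_neurons)
for _ in range(100):
    x = ltc_step(x, rng.normal(size=n_inputs))
print(x.round(3))
```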


“We are thrilled by the immense potential of our learning-based control approach for robots, as it lays the groundwork for solving problems that arise when training in one environment and deploying in a completely different environment without additional training,” said Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings, with varied tasks such as seeking and following. This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

A daunting challenge was at the forefront: Do machine-learning systems understand the task they are given from data when flying drones to an unlabeled object? And can they transfer the skills and tasks they learn to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape? What’s more, unlike the remarkable abilities of our biological brains, deep learning systems struggle to capture causality, frequently overfitting their training data and failing to adapt to new environments or changing conditions. This is especially troubling for resource-limited embedded systems, like aerial drones, that need to traverse varied environments and respond to obstacles instantaneously.

Liquid networks, in contrast, offer promising preliminary indications of their capacity to address this crucial weakness in deep learning systems. The team’s system was first trained on data collected by a human pilot, to see how the networks transferred learned navigation skills to new environments under drastic changes in scenery and conditions. Unlike traditional neural networks, which only learn during the training phase, a liquid neural network’s parameters can change over time, making the networks not only interpretable, but also more resilient to unexpected or noisy data.
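To make the "parameters change over time" point concrete, here is a toy calculation, under the LTC formulation sketched above, of how the cell's effective time constant shifts with the strength of its input gate even after the learned weights are frozen. The numbers are illustrative only.

```python
# From dx/dt = -x/tau + gate * (A - x), the state decays at rate
# (1/tau + gate), so the effective time constant is
# tau_sys = tau / (1 + tau * gate): it changes with the live input.
tau = 1.0

def effective_tau(gate):
    return tau / (1.0 + tau * gate)

for gate in (0.05, 0.5, 0.95):   # weak vs. strong input drive
    print(f"gate={gate:.2f} -> tau_sys={effective_tau(gate):.3f}")
# Strong, task-relevant input shortens tau_sys (fast reaction), while
# weak or noisy input leaves it long, so the state filters noise out.
```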

In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets and executed multi-step loops between objects in never-before-seen environments, outperforming other state-of-the-art counterparts.
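As a rough picture of what "closed-loop" means in these experiments, the sketch below runs a perceive-decide-act loop at a fixed control rate: each camera frame is encoded, the recurrent policy emits a velocity command, and the command feeds back into the next observation. The encoder, the toy recurrence, and the drone command call are hypothetical stand-ins, not the authors' actual stack.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(frame):
    """Hypothetical perception stub: camera frame -> feature vector."""
    return frame.mean(axis=(0, 1))   # one mean per color channel

def policy_step(x, features):
    """Stand-in for a trained recurrent policy head: updates a hidden
    state and emits a bounded (vx, vy, vz, yaw_rate) command."""
    x = np.tanh(0.9 * x + features.sum())
    return x, np.clip(x[:4], -1.0, 1.0)

x = np.zeros(8)                       # recurrent hidden state
for tick in range(200):               # ~20 s of flight at a 10 Hz tick
    frame = rng.random((64, 64, 3))   # stand-in for a camera frame
    x, cmd = policy_step(x, encode(frame))
    # drone.send_velocity(*cmd)       # hardware interface, omitted here
```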

The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants.

“The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” said MIT CSAIL research affiliate Ramin Hasani. “There is still so much room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which has to be tested before we can safely deploy them in our society.”

“Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems must conquer to make further inroads in society-critical applications,” said Alessio Lomuscio, professor of AI safety in the Department of Computing at Imperial College London. “In this context, the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable. If these results are confirmed in other experiments, the paradigm developed here will contribute to making AI and robotic systems more reliable, robust, and efficient.”

Clearly, the sky is no longer the limit, but rather a vast playground for the boundless possibilities of these airborne marvels.

Hasani and PhD student Makram Chahine; Patrick Kao ’22, MEng ’22; and PhD student Aaron Ray SM ’21 co-authored the paper with Ryan Shubert ’20, MEng ’22; MIT postdocs Mathias Lechner and Alexander Amini; and Rus.

This research was supported, in part, by Schmidt Futures, the U.S. Air Force Research Laboratory, the U.S. Air Force Artificial Intelligence Accelerator, and the Boeing Co.

