Reinforcement learning allows underwater robots to locate and track objects underwater
- November 15, 2024
This is the main conclusion of a study led by the ICM-CSIC that demonstrates, for the first time, how an underwater robot is able to learn the optimal trajectory to monitor the seabed and track species.
The study's tests were carried out with the AUV Sparus II in the port of Sant Feliu de Guíxols, in the Baix Empordà, and in Monterey Bay (California) / VICOROB.
A team led by the Institut de Ciències del Mar (ICM-CSIC) in Barcelona, in collaboration with the Monterey Bay Aquarium Research Institute (MBARI) in California, the Universitat Politècnica de Catalunya (UPC) and the Universitat de Girona (UdG), has shown for the first time that reinforcement learning, a technique in which a neural network learns the best action to perform at each moment based on a series of rewards, allows autonomous vehicles and underwater robots to locate and carefully track marine objects and animals. The details are reported in a paper published in Science Robotics, the leading scientific journal in the field of robotics.
Currently, underwater robotics is emerging as a key tool for improving our knowledge of the oceans, which remain difficult to explore, with vehicles capable of descending to depths of up to 4,000 meters. In addition, the in-situ data they provide complement other sources, such as satellite observations. This technology makes it possible to study small-scale phenomena, such as CO2 capture by marine organisms, a process that helps regulate the climate.
Specifically, this new work reveals that reinforcement learning, widely used in control and robotics as well as in natural-language-processing tools such as ChatGPT, allows underwater robots to learn which actions to perform at any given time in order to achieve a specific goal. The resulting action policies match, and in certain circumstances even outperform, traditional methods based on analytical development.
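The idea of learning an action policy from rewards can be illustrated with a toy example. The sketch below is illustrative only and is not the authors' code (the study trains neural networks on a real vehicle's control problem, whereas this toy uses a simple lookup table): tabular Q-learning on a one-dimensional tracking task, where an agent learns, purely from rewards, which moves keep it close to a drifting target.

```python
import random

N = 20                       # number of discrete positions on the line
ACTIONS = [-1, 0, 1]         # move left, stay, move right
# One row per (robot position, target position) state, one column per action.
Q = [[0.0] * len(ACTIONS) for _ in range(N * N)]

def state(robot, target):
    return robot * N + target

alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

for episode in range(2000):
    robot, target = random.randrange(N), random.randrange(N)
    for step in range(100):
        s = state(robot, target)
        # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        robot = min(N - 1, max(0, robot + ACTIONS[a]))
        target = min(N - 1, max(0, target + random.choice([-1, 0, 1])))
        reward = -abs(robot - target)          # reward: closer to the target is better
        s2 = state(robot, target)
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])

# After training, the greedy policy systematically moves the agent toward the target.
```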
"This type of learning allows us to train a neural network to optimize a specific task, which would be very difficult to achieve otherwise. For example, we have been able to demonstrate that it is possible to optimize the trajectory of a vehicle to locate and track objects moving underwater", explains Ivan Masmitjà, the lead author of the study, who has worked between ICM-CSIC and MBARI.
This "will allow us to deepen the study of ecological phenomena such as migration or movement at small and large scales of a multitude of marine species using autonomous robots. In addition, these advances will make it possible to monitor other oceanographic instruments in real-time through a network of robots, where some can be on the surface monitoring and transmitting by satellite the actions performed by other robotic platforms on the seabed", points out the ICM-CSIC researcher Joan Navarro, who also participated in the study.
To carry out this work, the researchers used acoustic range-measurement techniques, which allow the position of an object to be estimated from distance measurements taken at different points. As a consequence, the accuracy of the localization depends strongly on where those range measurements are taken. This is where artificial intelligence, and specifically reinforcement learning, becomes important: it identifies the best measurement points and, therefore, the optimal trajectory for the robot to follow.
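As a rough illustration of how ranges taken at different points constrain a target's position, the snippet below sketches a hypothetical least-squares estimator; it is not the paper's tracking algorithm. It also hints at why geometry matters: if the measurement points are nearly collinear, the linear system becomes ill-conditioned and the estimate degrades, which is exactly why the robot's trajectory is important.

```python
import numpy as np

def localize(points, ranges):
    """Estimate a 2-D target position from ranges measured at known points.

    Linearizes |p_i - x|^2 = r_i^2 by subtracting the first equation from the
    others, then solves the resulting linear system in the least-squares sense.
    """
    p0, r0 = points[0], ranges[0]
    A, b = [], []
    for p, r in zip(points[1:], ranges[1:]):
        A.append(2 * (p - p0))
        b.append(r0**2 - r**2 + p.dot(p) - p0.dot(p0))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Four well-spread measurement points give a good estimate; nearly collinear
# points would make the system ill-conditioned and the estimate unreliable.
target = np.array([12.0, -7.0])
points = [np.array(p, dtype=float) for p in [(0, 0), (30, 0), (0, 30), (25, 25)]]
ranges = [np.linalg.norm(p - target) + np.random.normal(0, 0.1) for p in points]
print(localize(points, ranges))   # approximately [12, -7]
```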
The neural networks were trained, in part, on the computer cluster at the Barcelona Supercomputing Center (BSC-CNS), home to the most powerful supercomputer in Spain and one of the most powerful in Europe. "This made it possible to adjust the parameters of different algorithms much faster than with conventional computers", indicates Prof. Mario Martin, from the Computer Science Department of the UPC and co-author of the study.
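The following is a minimal sketch of the kind of parallel parameter sweep such a cluster enables; it is hypothetical code, not the BSC setup or the study's training scripts. Each worker evaluates one hyperparameter combination independently, so wall-clock time drops roughly with the number of cores available.

```python
from itertools import product
from multiprocessing import Pool

def evaluate(params):
    """Stand-in for 'train an agent with these hyperparameters and score it'."""
    lr, gamma = params
    score = -abs(lr - 3e-4) - abs(gamma - 0.99)   # dummy objective for the sketch
    return lr, gamma, score

if __name__ == "__main__":
    grid = list(product([1e-4, 3e-4, 1e-3], [0.90, 0.95, 0.99]))
    with Pool() as pool:                          # one worker per available core
        results = pool.map(evaluate, grid)
    print(max(results, key=lambda r: r[2]))       # best (lr, gamma) found
```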
Once trained, the algorithms were tested on different autonomous vehicles, including the AUV Sparus II developed by VICOROB, in a series of experimental missions carried out in the port of Sant Feliu de Guíxols, in the Baix Empordà, and in Monterey Bay (California), in collaboration with Kakani Katija, principal investigator of the Bioinspiration Lab at MBARI.
"Our simulation environment incorporates the control architecture of real vehicles, which allowed us to implement the algorithms efficiently before going to sea", explains Narcís Palomeras, from the UdG.
For future research, the team will study the possibility of applying the same algorithms to more complex missions, for example, using multiple vehicles to locate objects, detect fronts and thermoclines, or cooperatively track algae upwelling through multi-platform reinforcement learning techniques.
This research was made possible by the prestigious European Marie Curie Individual Fellowship awarded to the researcher Ivan Masmitjà in 2020 and by the BITER project, funded by the Spanish Ministry of Science and Innovation and currently underway.
List of References
- I. Masmitja, M. Martin, T. O’Reilly, B. Kieft, N. Palomeras, J. Navarro, K. Katija. Dynamic robotic tracking of underwater targets using reinforcement learning. Science Robotics, 2023; 8(80). DOI: 10.1126/scirobotics.ade7811