Autonomous navigation requires an elaborate and accurate algorithmic stack to guide robots through cluttered, unstructured, and dynamic environments. Global and local path planning, localization and mapping, and perception are only some of the layers under intense study by the scientific community on the path to fully functional autonomous navigation. In recent years, data-driven Machine Learning models have demonstrated strong performance in robotic control and perception tasks.
More specifically, Deep Reinforcement Learning (DRL) has proven to be a competitive paradigm for developing short-range guidance systems. Sensorimotor DRL agents rethink the traditional navigation stack by mapping perceived sensor data directly to robot actions, offering a lightweight, power-efficient solution for point-to-point local planning.
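The sensorimotor mapping described above can be sketched as a small policy network that turns a raw depth frame directly into velocity commands. The sketch below is a minimal illustration with NumPy; the image resolution, layer sizes, and action parameterization (linear and angular velocity bounded in [-1, 1]) are assumptions for the example, not the architecture actually used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

class SensorimotorPolicy:
    """Toy MLP policy: flattened depth image -> (v, omega) commands."""

    def __init__(self, obs_dim, hidden_dim=64, act_dim=2):
        # Randomly initialized weights stand in for trained DRL parameters.
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.w2 = rng.normal(0.0, 0.1, (hidden_dim, act_dim))
        self.b2 = np.zeros(act_dim)

    def act(self, depth_image):
        x = depth_image.ravel()                 # flatten H x W depth map
        h = np.tanh(x @ self.w1 + self.b1)      # hidden representation
        return np.tanh(h @ self.w2 + self.b2)   # bounded (v, omega) in [-1, 1]

policy = SensorimotorPolicy(obs_dim=16 * 16)
depth = rng.uniform(0.0, 5.0, (16, 16))         # synthetic 16x16 depth frame
action = policy.act(depth)
print(action.shape)  # (2,)
```

In a real agent the MLP would be replaced by a convolutional encoder and the weights would be learned with a DRL algorithm in simulation, but the interface stays the same: one forward pass per perceived frame, no intermediate map or planner.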
Developing and comparing navigation policies learned from different sensor data is one of the central goals of this PhD. Raw color or depth images, segmented images, and LiDAR point clouds are used in both indoor and outdoor environments for service robotics tasks such as obstacle avoidance, person following, and vineyard navigation.
One of the main strengths of this approach is the possibility of training a DRL agent in a simulated environment that encapsulates robot dynamics and task constraints, and then deploying the learned navigation policy in a real testing scenario.
However, Deep Neural Networks trained on synthetic images often generalize poorly and consequently show degraded performance on real-world data, owing to unseen visual features related to texture, lighting conditions, and real sensor noise. Therefore, this PhD’s second objective is to analyze the Domain Generalization properties of models trained with different types of data, architectures, and training paradigms, together with the investigation of algorithms that aim to reduce the domain gap between training and test settings.
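One common family of techniques for narrowing the sim-to-real gap is appearance randomization: perturbing the synthetic images during training so the policy cannot overfit to the simulator's textures and lighting. A minimal sketch, assuming images normalized to [0, 1] and illustrative perturbation ranges chosen for the example only:

```python
import numpy as np

rng = np.random.default_rng(1)

def randomize_appearance(image):
    """Apply random brightness, contrast, and Gaussian noise to a
    synthetic image in [0, 1], mimicking real-world visual variation."""
    brightness = rng.uniform(-0.2, 0.2)          # global intensity shift
    contrast = rng.uniform(0.8, 1.2)             # contrast scaling about 0.5
    noise = rng.normal(0.0, 0.02, image.shape)   # per-pixel sensor-like noise
    out = (image - 0.5) * contrast + 0.5 + brightness + noise
    return np.clip(out, 0.0, 1.0)                # keep valid pixel range

synthetic = rng.uniform(0.0, 1.0, (32, 32, 3))   # stand-in simulator frame
augmented = randomize_appearance(synthetic)
print(augmented.shape)  # (32, 32, 3)
```

Applying such a transform to every training frame exposes the agent to a distribution of appearances rather than a single rendering style, which is one way the domain gap investigated in this thesis can be reduced.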
- Investigate Machine Learning paradigms and models suitable for service robotics tasks such as perception and human-aware autonomous navigation in challenging cluttered environments.
- Develop and compare navigation policies obtained with different robotics platforms and sensor data, typically visual data such as color and depth images or LiDAR point clouds.
- Develop and optimize the virtual framework necessary to train autonomous agents in simulation.
- Study novel approaches for generalizing Neural Network performance to unseen domains.