Visual Navigation: from frame-based cameras to neuromorphic sensing
Service robotics is a growing research area whose main goal is to develop autonomous robots capable of helping people with complex, dangerous, and fatiguing tasks, improving their everyday quality of life. A robotic platform can be considered autonomous if it is able to navigate effectively in known or unknown environments, build a map of the scene, and localize itself within it. However, autonomous navigation remains very challenging, and open problems still need to be addressed, such as highly dynamic and cluttered environments and variable lighting conditions.
In recent years, there has been increasing interest in neuromorphic approaches within the robotics field, as they offer potential solutions to some of the open problems mentioned above. In particular, since sight is the primary sense animals and humans use to perceive the environment in which they navigate, visual cues have gained considerable importance. Two possible strategies for acquiring neuromorphically inspired visual information are:
– using standard frame-based cameras and then processing the data to extract bio-inspired visual cues.
– using novel vision sensors, called event-based cameras, in which each pixel operates on a logarithmic scale and independently emits an event every time it detects a brightness change in the scene.
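The event-generation principle of the second strategy can be illustrated with a toy model (a simplified sketch, not any specific sensor's behavior or API): a pixel fires an event whenever its log-intensity change since the last event exceeds a contrast threshold, with the event's polarity encoding the sign of the change.

```python
import numpy as np

def generate_events(prev_frame, curr_frame, threshold=0.2):
    """Toy event-camera model: compare log-intensities of two frames and
    emit (x, y, polarity) tuples wherever |Δ log I| >= threshold.
    Real sensors do this asynchronously per pixel; this frame-to-frame
    version is only for illustration."""
    eps = 1e-6  # avoid log(0) on dark pixels
    log_prev = np.log(prev_frame.astype(np.float64) + eps)
    log_curr = np.log(curr_frame.astype(np.float64) + eps)
    delta = log_curr - log_prev
    ys, xs = np.nonzero(np.abs(delta) >= threshold)
    polarity = np.sign(delta[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(xs.tolist(), ys.tolist(), polarity.tolist()))

# A pixel whose brightness doubles (|Δ log I| = log 2 ≈ 0.69) fires
# a positive event; unchanged pixels stay silent.
prev = np.full((2, 2), 50.0)
curr = prev.copy()
curr[0, 1] = 100.0
events = generate_events(prev, curr, threshold=0.2)
```

The logarithmic comparison is what gives event cameras their high dynamic range: the threshold is on relative, not absolute, brightness change.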
In my research, I explore both approaches. Initially, I used frame-based cameras to develop a control pipeline based on a bio-inspired visual cue, time-to-transit (tau), enabling a robotic platform to navigate in unknown environments. Now, I am focusing on event-based cameras, investigating how these sensors' multiple benefits can not only improve navigation strategies but also enhance performance in other tasks fundamental to service robotics, such as person detection.
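The time-to-transit cue admits a compact illustration. For a tracked image feature, tau can be estimated as its horizontal image coordinate (relative to the optical axis) divided by that coordinate's rate of change; balancing the average tau of features on the two image halves then yields a steering signal. The sketch below is a minimal, hypothetical version of such a controller (the gain, sign convention, and feature format are assumptions, not the pipeline described above):

```python
def tau(x, x_dot, eps=1e-9):
    """Time-to-transit of one feature: image x-coordinate over its
    rate of change. A small eps guards against division by zero."""
    return x / (x_dot if abs(x_dot) > eps else eps)

def steering_command(left_feats, right_feats, gain=1.0):
    """Hypothetical tau-balancing controller: average tau over features
    on the left and right image halves (each feature is an (x, x_dot)
    pair) and steer proportionally to the imbalance, so the robot drifts
    toward the side whose features take longer to transit."""
    tau_left = sum(tau(x, xd) for x, xd in left_feats) / len(left_feats)
    tau_right = sum(tau(x, xd) for x, xd in right_feats) / len(right_feats)
    return gain * (tau_right - tau_left)

# Symmetric scene: equal transit times on both sides, so no turn.
cmd = steering_command([(-4.0, -2.0)], [(4.0, 2.0)])
```

The appeal of tau as a cue is that it needs no metric depth: it is computed entirely from image-plane measurements.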
Department: PhD in Electrical, Electronics and Communication Engineering
Supervisor: Prof. Gianluca Setti | Marcello Chiaberge