Visual Navigation: from frame-based cameras to neuromorphic sensing
Service robotics is a growing research area whose main goal is to develop autonomous robots capable of helping people with complex, dangerous, and fatiguing tasks, improving the quality of their everyday life. A robotic platform can be considered autonomous if it is able to navigate effectively in known or unknown environments, build a map of the scene, and localize itself within it. However, autonomous navigation is still very challenging, with open problems such as highly dynamic and cluttered environments and variable lighting conditions.
In recent years, there has been increasing interest in neuromorphic approaches within the robotics field, as they offer potential solutions to some of the open problems mentioned above. In particular, since sight is widely used by animals and humans to perceive the environment in which they navigate, visual cues have gained considerable importance. Two possible strategies for acquiring neuromorphic-inspired visual information are:
– using standard frame-based cameras and then processing the frames to extract bio-inspired visual cues;
– using novel vision sensors, called event-based cameras, in which each pixel works on a logarithmic scale and independently emits an event every time it detects a brightness change in the scene (see the sketch below).
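To make the sensing principle concrete, here is a minimal Python/NumPy sketch of an idealized single pixel (the function and its parameters are illustrative, not tied to any specific sensor or SDK): an event is emitted whenever the log-intensity has changed by more than a contrast threshold since the last emitted event.

    import numpy as np

    def simulate_pixel_events(intensities, timestamps, threshold=0.2):
        # Idealized event generation for one pixel.
        # intensities: brightness samples over time (linear scale)
        # timestamps:  corresponding sample times in seconds
        # threshold:   contrast threshold on the log-intensity change
        # Returns a list of (time, polarity) events, polarity = +1 or -1.
        log_i = np.log(np.asarray(intensities, dtype=float) + 1e-6)
        events = []
        reference = log_i[0]  # log-intensity memorized at the last event
        for t, li in zip(timestamps[1:], log_i[1:]):
            while abs(li - reference) >= threshold:
                polarity = 1 if li > reference else -1
                events.append((t, polarity))
                reference += polarity * threshold
        return events

    # A pixel observing a brightness ramp fires a burst of ON events.
    ts = np.linspace(0.0, 1.0, 50)
    print(simulate_pixel_events(100.0 * (1.0 + ts), ts))

Because each pixel compares against its own last reference level and fires asynchronously, sensors of this kind offer very low latency, high dynamic range, and no motion blur, which is exactly what makes them attractive for navigation.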
In my research, I explore both approaches. Initially, I used frame-based cameras to develop a control pipeline based on a bio-inspired visual cue, time-to-transit (tau), enabling a robotic platform to navigate in unknown environments. I am now focusing on event-based cameras, investigating how these sensors' multiple benefits can improve navigation strategies and also enhance performance in other tasks that are fundamental in service robotics, such as person detection.
A first research line is the improvement of the control pipeline based on the bio-inspired visual cue time-to-transit (tau) for navigation in unknown environments. So far, the entire algorithm has been tested on a real platform, proving the effectiveness of the control strategy; the next step is to introduce Deep Learning to enhance the robustness of the navigation strategy.
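As a rough illustration of the underlying idea (a simplified sketch, not the actual pipeline: the control law and all names here are assumptions), tau can be estimated for each feature tracked by an optical-flow front end as the ratio between its horizontal image coordinate and its image-plane velocity, and a simple balancing law can steer the robot so that the average tau on the two halves of the image match:

    import numpy as np

    def time_to_transit(x, x_dot, eps=1e-6):
        # Tau for one tracked feature: horizontal image coordinate x
        # (relative to the principal point) over its image-plane velocity.
        if abs(x_dot) < eps:
            x_dot = eps if x_dot >= 0 else -eps
        return x / x_dot

    def tau_balancing_steering(features, gain=1.0):
        # features: iterable of (x, x_dot) pairs from a feature tracker.
        # Returns an angular-velocity command that balances the average
        # tau seen on the left and right image halves (the sign depends
        # on the camera and robot frame conventions).
        taus = np.array([time_to_transit(x, xd) for x, xd in features])
        xs = np.array([x for x, _ in features])
        left, right = taus[xs < 0], taus[xs >= 0]
        if left.size == 0 or right.size == 0:
            return 0.0  # not enough visual cues: keep going straight
        return gain * (np.mean(right) - np.mean(left))

Intuitively, the side of the scene the robot is approaching faster yields smaller transit times, and the controller turns away from it until the two sides balance.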
A second research line is the development and implementation of a detection pipeline for fast-moving objects (people) that takes as input streams of events acquired by an event-based camera. On the implementation side, the target could be a hardware-constrained platform, which may also require pruning the object-detection neural network to make the entire pipeline more efficient.
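For the pruning step, a minimal sketch using PyTorch's built-in pruning utilities (assuming the detector is a PyTorch model; the 30% amount is an arbitrary example) could look like this:

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    def prune_conv_layers(model: nn.Module, amount: float = 0.3) -> nn.Module:
        # Zero out the `amount` fraction of smallest-magnitude weights
        # in every convolutional layer (L1 unstructured pruning).
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                prune.l1_unstructured(module, name="weight", amount=amount)
                prune.remove(module, "weight")  # make the mask permanent
        return model

Note that unstructured pruning only zeroes individual weights; on hardware-constrained platforms, structured pruning of whole channels or filters is usually needed to obtain actual memory and speed gains.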
A third research line is the development of a navigation algorithm that leverages event-based person detection. The algorithm should operate under different lighting conditions, with the goal of following a person and/or avoiding moving obstacles in a highly dynamic environment.
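As a first sketch of the person-following behaviour (the detection interface, the gains, and the sign conventions are hypothetical), a proportional controller can map a detected bounding box to velocity commands:

    def follow_person_command(bbox, image_size, target_area=0.15,
                              k_ang=1.5, k_lin=0.8):
        # bbox: (x_min, y_min, x_max, y_max) of the detection, in pixels.
        # image_size: (width, height) of the sensor frame.
        # Returns (v, w): forward speed and yaw rate; with ROS REP 103
        # signs, a person offset to the right yields w < 0 (turn right).
        width, height = image_size
        x_min, y_min, x_max, y_max = bbox
        # Steer to keep the person horizontally centered in the image.
        offset = ((x_min + x_max) / 2.0 - width / 2.0) / (width / 2.0)
        w = -k_ang * offset
        # Approach until the box covers the target fraction of the frame.
        area = (x_max - x_min) * (y_max - y_min) / float(width * height)
        v = k_lin * max(0.0, target_area - area)
        return v, w

Moving-obstacle avoidance in a dynamic scene could then modulate these commands, for example using tau-like cues extracted from the rest of the event stream.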
Department: PhD in Electrical, Electronics and Communication Engineering
Supervisors: Prof. Gianluca Setti | Marcello Chiaberge