Indoor Localization of Wheeled Robots using Multi-sensor Data Fusion with Event-based Measurements


Time: December 1, 2016, 2:30 pm
Location: Room Ofek, Polo scientifico e tecnologico “Fabio Ferrari”, Building Povo 1 - Povo (Trento)

PhD Candidate: Dr. Payam Nazemzadeh

Abstract of Dissertation

In an era in which robots have started to live and work everywhere, in close contact with humans, they must know their own location accurately at all times in order to move and operate safely. Large and crowded indoor environments, in particular, are challenging scenarios for accurate and robust robot localization. The theory and the results presented in this dissertation address the crucial issue of indoor localization of wheeled robots by proposing novel solutions along three complementary lines: improving robot self-localization through data fusion, adopting collaborative localization (e.g., using the position information of other robots), and optimizing the placement of landmarks in the environment once the detection range of the chosen sensors is known.

As far as the first subject is concerned, a robot should be able to localize itself in a given reference frame. This problem is studied in detail in order to achieve an effective and affordable self-localization technique that does not rely on specific environmental features. The proposed solution relies on the integration of relative and absolute position measurements. The former are based on odometry and on an inertial measurement unit. The absolute position and heading data, instead, are measured sporadically, whenever one of the landmarks spread throughout the environment is detected. Due to the event-based nature of such measurements, the robot can work autonomously most of the time, even though accuracy degrades between landmark detections. In order to keep positioning uncertainty bounded, it is therefore important that absolute and relative position data are fused properly. For this reason, four different fusion techniques are analyzed and compared in the dissertation.
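To fix ideas, the following purely illustrative sketch (in Python, with an assumed unicycle motion model and an assumed direct pose measurement; it is not one of the four techniques compared in the dissertation) shows how such event-based fusion can be organized: a Kalman-style prediction step runs at every odometry/IMU sample, while the correction step fires only when a landmark provides an absolute pose measurement.

    import numpy as np

    def predict(x, P, v, omega, dt, Q):
        # Dead-reckoning prediction from odometry/IMU data (runs every cycle).
        theta = x[2]
        x = x + np.array([v * dt * np.cos(theta),
                          v * dt * np.sin(theta),
                          omega * dt])
        # Jacobian of the (assumed) unicycle motion model w.r.t. the state
        F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                      [0.0, 1.0,  v * dt * np.cos(theta)],
                      [0.0, 0.0,  1.0]])
        P = F @ P @ F.T + Q
        return x, P

    def correct(x, P, z, R):
        # Event-based correction, called only when a detected landmark
        # yields an absolute pose measurement z = [x, y, heading].
        H = np.eye(3)                    # direct pose observation (assumed)
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y
        P = (np.eye(3) - K @ H) @ P
        return x, P

    # Example: the robot dead-reckons for many cycles, then meets a landmark.
    x, P = np.zeros(3), np.eye(3) * 1e-3
    Q, Rm = np.eye(3) * 1e-4, np.eye(3) * 1e-2
    for _ in range(100):                                       # odometry-only phase
        x, P = predict(x, P, v=0.2, omega=0.01, dt=0.05, Q=Q)
    x, P = correct(x, P, z=np.array([1.0, 0.05, 0.05]), R=Rm)  # landmark event

Between two landmark detections only the prediction step is applied, so the covariance P, and hence the positioning uncertainty, grows until the next event-based correction; this is why the fusion scheme and the landmark density must be designed jointly.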

Once the local kinematic state of each robot is estimated, a group of robots moving in the same environment and able to detect and communicate with one another can also collaborate, sharing their position information to refine the self-localization results. The dissertation shows that this approach can provide some benefits, although performance strongly depends on the metrological features of the adopted sensors as well as on the communication range. Finally, the problem of optimal landmark placement is addressed by proposing a novel and easy-to-use geometrical criterion that maximizes the distance between landmarks deployed over a triangular lattice grid, while ensuring that the absolute position measurement sensors can always detect at least one landmark.
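As a purely illustrative example of the geometry behind such a criterion (assuming an idealized circular detection range R, which need not coincide with the sensor model adopted in the dissertation), the snippet below computes the largest spacing d of a triangular lattice that still guarantees at least one landmark in range: the worst-case point is the circumcenter of each equilateral cell, at distance d/sqrt(3) from its vertices, so d can be at most sqrt(3)*R.

    import numpy as np

    R = 5.0                  # assumed landmark detection range [m] (example value)
    d = np.sqrt(3) * R       # largest admissible side length of the triangular lattice

    # Worst-case point: the circumcenter of an equilateral cell with side d,
    # equidistant from its three vertices at distance d / sqrt(3) = R.
    vertices = np.array([[0.0, 0.0],
                         [d, 0.0],
                         [d / 2.0, d * np.sqrt(3) / 2.0]])
    circumcenter = vertices.mean(axis=0)
    worst = np.linalg.norm(vertices - circumcenter, axis=1).max()
    print(f"spacing d = {d:.2f} m, worst-case distance = {worst:.2f} m (range R = {R} m)")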