Robotics projects


Autonomous navigation with LIDAR (2010)

This project provided mapping and trajectory-planning features for the R-trooper, an autonomous vehicle designed by Thalès in collaboration with the LAAS. Using a lidar sensor, software was designed to build a real-time map of the environment and generate trajectories, making assumptions about unseen areas and taking into account the mechanical constraints of the robot.
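As a rough illustration of the mapping step, an occupancy-grid update from lidar returns could look like the following minimal sketch (all names and parameters are illustrative, not taken from the R-trooper software):

```python
import math

def update_occupancy_grid(grid, pose, scan, resolution=0.5):
    """Mark grid cells hit by lidar returns as occupied.

    grid: dict mapping (ix, iy) cell indices to occupancy values,
    pose: (x, y, heading) of the robot in metres/radians,
    scan: list of (angle, range) lidar returns relative to the robot.
    Illustrative sketch only, not the R-trooper implementation.
    """
    x, y, heading = pose
    for angle, dist in scan:
        # Project each return into world coordinates.
        wx = x + dist * math.cos(heading + angle)
        wy = y + dist * math.sin(heading + angle)
        grid[(int(wx // resolution), int(wy // resolution))] = 1.0
    return grid
```

A real system would also trace the free cells along each beam and fuse repeated observations probabilistically; this sketch only records the endpoints.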


Self localization and mapping (2011)

Integration of a then state-of-the-art simultaneous localization and mapping algorithm into Awabot's Emox robotic suite. Emox is an educational package built around the Emox/Sparx robotic platform. It includes modules for creating augmented reality, programming and navigation applications.

IV-devs' role was to create a simultaneous localization and mapping module based on the PTAMM algorithm. This enabled autonomous navigation and greatly extended the robot's capabilities.


Crowd counting using HD video streams (2011)

The goal of this open source project was to estimate the number of people in a crowd. At the time, no fully automated software existed for this task.

HeadCounter uses a Haar classifier to find faces in a crowd and tracks them individually to estimate the flow rate and the total size of the crowd.
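The tracking-and-counting step described above can be sketched as a greedy nearest-neighbour tracker over per-frame face detections. This is a hypothetical simplification; none of the names below come from the HeadCounter sources:

```python
def count_heads(frames, max_dist=50.0):
    """Count distinct tracked heads across a sequence of frames.

    frames: list of per-frame detection lists, each detection an (x, y)
    centroid as a face detector (e.g. a Haar cascade) would emit.
    A detection is matched to the nearest surviving track within
    max_dist pixels; an unmatched detection starts a new track and
    increments the running total.
    """
    tracks = []  # last known centroid of each active track
    total = 0
    for detections in frames:
        updated = []
        for (x, y) in detections:
            # Greedy nearest-neighbour match against remaining tracks.
            best, best_d = None, max_dist
            for i, (tx, ty) in enumerate(tracks):
                d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = i, d
            if best is None:
                total += 1          # a new head enters the scene
            else:
                tracks.pop(best)    # consume the matched track
            updated.append((x, y))
        tracks = updated
    return total
```

A production tracker would also predict motion between frames and tolerate missed detections; the greedy matching here is the bare minimum needed to avoid double-counting.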

HeadCounter is available on GitHub.


Vision and navigation in elevators for a Fetch robot (2017)

Fetch robots are originally designed for warehouses. If they ever need to change floors, they typically have to be linked to a centralized elevator controller. This project demonstrated that, using vision and the robot's arm, regular elevators could be operated directly. It used the Fetch's internal 3D camera for navigation and a deep learning model (YOLOv2) to recognize elevator buttons.
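To press a button, a detected pixel must be turned into a 3D target for the arm. With a calibrated depth camera this is the standard pinhole back-projection; the sketch below is generic, not taken from the Fetch code:

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a detected button's pixel (u, v) and its depth
    reading into a 3D point in the camera frame using the pinhole
    model. fx, fy, cx, cy are the camera intrinsics obtained from
    calibration; all values here are illustrative.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

The resulting camera-frame point would then be transformed into the arm's base frame before being handed to the motion planner.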

Vision and movement logic for a Cobotta arm robot (2019)

The Cobotta is a light collaborative robot arm with high movement precision. It was used in a demonstration showing how easily new deep learning models could be plugged into a production line, using Deeplearning4j and a customized YOLOv2 model.

Fae-bot: an agriculture automation experiment (ongoing)

The goal of this project is to explore the usability of suspended robots in agriculture automation. Suspended robots offer a compromise between wheeled robots and multicopters while being potentially more affordable than either option. This project led to a collaboration with Machine Learning Tokyo, who subsequently published a paper about a deep learning control model for suspended robots (paper).
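For a cable-suspended robot, the basic inverse kinematics reduces to computing the length each cable needs for the platform to reach a target point. A minimal sketch, illustrative only and not the Fae-bot controller:

```python
import math

def cable_lengths(anchors, end_effector):
    """Inverse kinematics of a cable-suspended robot: the length each
    cable must have for the end effector to sit at a target 3D point.

    anchors: list of fixed (x, y, z) suspension points (e.g. poles at
    the field corners), end_effector: target (x, y, z). Each cable
    length is simply the Euclidean distance to its anchor.
    """
    return [math.dist(anchor, end_effector) for anchor in anchors]
```

Driving the winches to these lengths positions the platform, with the caveat that a real controller must also keep every cable in tension.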