
Autonomous landing based on a trinocular ground system.

 

In this research, we propose a vision-based landing and take-off platform that uses a ground camera system. This strategy separates the main objective of the UAV's mission from the routine tasks of take-off and landing, so that no additional onboard sensors are required exclusively for these tasks.
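As a rough illustration of how a calibrated ground camera system can recover the UAV's 3D position, the sketch below triangulates a landmark detection seen by several cameras with a standard linear (DLT) least-squares formulation. The function name and inputs are assumptions for illustration, not part of the original system.

# Minimal sketch: triangulating the UAV's position from a calibrated
# ground camera system (e.g. three cameras) via linear least squares (DLT).
# Projection matrices and pixel detections here are illustrative placeholders.
import numpy as np

def triangulate(projections, pixels):
    """projections: list of 3x4 camera matrices P_i = K_i [R_i | t_i].
    pixels: list of (u, v) detections of the UAV landmark in each camera.
    Returns the 3D point, in the ground/world frame, that minimizes the
    algebraic reprojection error across all views."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each view contributes two linear constraints on the homogeneous point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]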

Additionally, this research has been motivated by the autonomous operation of UAVs on ships at sea, which requires the UAV to land on moving ships using only passive sensors.

The first results have been achieved using color landmarks mounted on the UAV to facilitate detection of the helicopter, allowing the work to focus on the pose estimation problem.
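As an illustration of the color-landmark detection step, the sketch below thresholds one ground camera image in HSV space with OpenCV and returns the landmark centroid. The threshold values and function name are assumptions chosen for the example; the detector used in the original system may differ.

# Minimal sketch of detecting a colored landmark on the helicopter in a
# single ground camera image. HSV thresholds are illustrative; in practice
# they depend on the landmark color and lighting conditions.
import cv2
import numpy as np

def detect_landmark(bgr_image, hsv_low=(5, 120, 120), hsv_high=(20, 255, 255)):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    # Remove small speckles before looking for the landmark blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    # Centroid in pixel coordinates; one such detection per camera feeds
    # the multi-camera pose estimation.
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])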

 

Vision-based Landing

The vision-based estimation (red line) is used to generate control commands (yellow line) that control the altitude of the UAV. The green line represents the altitude estimated by the onboard sensors (GPS/IMU). As can be seen, the GPS-based estimate does not reflect that the helicopter is on the ground, whereas the visual estimate correctly indicates that it has landed.
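As an illustration of how a vision-based height estimate could drive the landing controller, the sketch below computes a vertical velocity command from the estimated altitude with a simple PD law. The gains and interface are hypothetical and not taken from the original controller.

# Minimal sketch: turning the vision-based altitude estimate into a vertical
# velocity command for landing. Gains and descent profile are illustrative.
def altitude_command(z_vision, z_ref, z_vision_prev, dt, kp=0.8, kd=0.3):
    """Returns a vertical velocity command from the vision-based height."""
    error = z_ref - z_vision
    # With a constant reference, the error derivative is minus the climb rate.
    d_error = -(z_vision - z_vision_prev) / dt
    return kp * error + kd * d_error

# Usage idea: lower z_ref along a descent profile toward zero and declare
# touchdown once the visual height estimate stays near zero.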

The image sequence shows that the visual estimation notably improves the height estimation.