
UECIMUAVS "USA and Europe Cooperation in Mini UAVs"

Company/Entity: 
This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 269276

An IRSES project within the FP7 Marie Curie Programme

 Partners:

1.- Universidad Politécnica de Madrid (UPM), coordinator, Spain

2.- Linköping University (LiU), Sweden

3.- Arizona State University (ASU), USA

 

Project summary:

Unmanned Aerial Systems (UAS) have been an active area of research in recent years all around the world. Top research centres are working on new applications, improving existing ones and solving specific problems in different areas. Each centre has focused on a particular part of the field and is specializing in it, which results in a large, scattered network of knowledge.

The project presented here is an attempt to connect some relevant nodes of this network by putting three leading research centres in contact: Arizona State University (Robotics Laboratory), Linköping University (Artificial Intelligence & Integrated Computer Systems Division) and the Universidad Politécnica de Madrid (Computer Vision Group). The common objective is to add vision sensors to UAVs, making them more powerful and more effective in different tasks. Each partner has experience in a different field that can be connected and shared within the consortium, because these fields are complementary rather than overlapping. Thus, the opportunities and strengths of each partner will be increased.

 

 Global objectives:

  1. Putting researchers with different backgrounds in the UAV field in direct contact, mixing them to produce better research, so that each university can tackle more complex projects and advance its own research lines.
  2. Gaining experience in fields where a partner's knowledge is weaker but which are necessary to move forward in UAV systems.
  3. Pushing forward the global state of the art in UAS and providing new applications.
  4. Creating multidisciplinary (and multicultural) groups that involve several disciplines, from aerospace (guidance and navigation) and mechanical engineering (dynamics and kinematics) to electrical engineering (control theory) and computer science (estimation, computer vision), with the aim of obtaining better results.

Particular objectives:

  1. Continuing to be forefront organizations in UAS.
  2. Quickly gaining valuable experience in areas where the organization lacks knowledge.
  3. Having the best researchers possible.

Technical objectives:

  1. Develop UAS frameworks for integration and use in civilian airspace under a regulated context.
  2. Perform research aimed at integrating computer vision approaches and multisensor fusion technologies into UAS.
  3. Carry out trade studies to generate business models and UAS applications for an effective transfer of UAS technology to society.

 

Starting date: 
2012-01-05
Finishing date: 
2016-01-04
Research Lines: 

Autonomous landing based on a trinocular ground system.

In this research, we propose a vision-based landing and take-off platform that uses a ground camera system. This strategy separates the main objective of the UAV's mission from its routine phases, take-off and landing, so that no additional onboard sensors are required for the exclusive purpose of achieving these common tasks.
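As a rough illustration of the ground-system idea, the sketch below triangulates the UAV's 3D position from its pixel location in several calibrated ground cameras (a trinocular rig gives three views). The function and variable names are hypothetical, and a DLT triangulation stands in for whatever estimator the actual system uses.

```python
import numpy as np

def triangulate_uav(projections, pixels):
    """Linear (DLT) triangulation of the UAV position seen by N calibrated
    ground cameras. projections: 3x4 matrices P_i = K_i [R_i | t_i];
    pixels: (u, v) detections of the UAV in each camera image."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])  # each view adds two linear constraints
        rows.append(v * P[2] - P[1])  # on the homogeneous 3D point
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]               # dehomogenize to the world frame
```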

3D Object Following based on Visual Information

A system and a control strategy for the visual following of a 3D moving object by an Unmanned Aerial Vehicle (UAV). The presented strategy is based only on the visual information given by an adaptive tracking method based on color information, which, together with the dynamics of a camera fixed to a rotary-wing UAV, is used to develop an Image-Based Visual Servoing (IBVS) system. This system is focused on continuously following a 3D moving target object, keeping it at a fixed distance and centered on the image plane.
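A minimal sketch of the classical IBVS idea: the image-plane error of the tracked point, at an assumed depth, is mapped through the pseudo-inverse of its interaction matrix to a camera velocity command. One point alone under-constrains the 6-DOF velocity, so the pseudo-inverse returns the minimum-norm command; the real system adds further features (e.g. target size) to fix the following distance. Names and the gain are illustrative.

```python
import numpy as np

def ibvs_velocity(x, y, Z, gain=0.5):
    """IBVS velocity command from one normalized image point (x, y) that
    should be driven to the image center, assuming its depth is Z meters."""
    # Interaction (image Jacobian) matrix of a point feature
    L = np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
    error = np.array([x, y])                   # desired feature: center (0, 0)
    return -gain * np.linalg.pinv(L) @ error   # (vx, vy, vz, wx, wy, wz)
```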

A Ground-Truth Video Dataset for the Evaluation of Vision-based Sense-and-Avoid systems


The development and evaluation of vision-based Sense-and-Avoid systems demand flight-scenario images, which are expensive and risky to obtain. Nowadays, Augmented Reality techniques allow compositing real flight-scenario images with 3D aircraft models to produce realistic images that are useful for system development and benchmarking at a much lower cost and risk.
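The compositing step itself can be as simple as alpha-blending a pre-rendered intruder aircraft (with its alpha mask) onto a real flight frame. The snippet below is an illustrative stand-in, not the dataset's actual rendering pipeline.

```python
import numpy as np

def composite_intruder(frame, render, alpha, top_left):
    """Alpha-blend a rendered aircraft patch onto a real flight image.
    render: HxWx3 uint8 rendering; alpha: HxW uint8 mask (255 = aircraft)."""
    x, y = top_left
    h, w = render.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    a = (alpha.astype(np.float32) / 255.0)[..., None]
    frame[y:y + h, x:x + w] = (a * render + (1.0 - a) * roi).astype(np.uint8)
    return frame
```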


A System for the Design and Development of Vision-based Multi-robot Quadrotor Swarms

We present a cost-effective framework for the prototyping of vision-based quadrotor multi-robot systems, whose core characteristics are modularity, compatibility with different platforms and being flight-proven. The framework is fully operative and was demonstrated through participation in the 2013 International Micro Air Vehicle Indoor Flight Competition (Toulouse, France), where it was awarded First Prize in the Indoor Autonomy Challenge.

A Vision-based Quadrotor Swarm for the participation in the 2013 International Micro Air Vehicle Competition (Toulouse, France)

We present a completely autonomous solution for participating in the 2013 International Micro Air Vehicle Indoor Flight Competition (IMAV2013). Our proposal is a modular multi-robot swarm architecture, based on the Robot Operating System (ROS) software framework, in which the only information shared among swarm agents is each robot's position. In order to present a completely vision-based solution, the localization problem is simplified by the use of visual markers.
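Since the only shared state is each agent's position, the inter-agent interface can be sketched as one ROS topic per robot. The node below is a hypothetical minimal example using standard rospy and geometry_msgs APIs; the topic name is made up.

```python
import rospy
from geometry_msgs.msg import PoseStamped

def broadcast_pose():
    """Each swarm agent publishes only its own estimated pose."""
    rospy.init_node('swarm_agent')
    pub = rospy.Publisher('/swarm/drone_1/pose', PoseStamped, queue_size=1)
    rate = rospy.Rate(30)                     # 30 Hz position broadcast
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'world'
        # msg.pose would be filled in from the marker-based localization
        pub.publish(msg)
        rate.sleep()
```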

Asynchronous Extended Kalman Filter

This Asynchronous Extended Kalman Filter is an extension of the EKF that can easily combine measurements received at different rates.

The Extended Kalman Filter (EKF) allows us to estimate the state of a system given the system inputs and measurements of its outputs, and it accepts a non-linear model. However, if the outputs of the system do not arrive at the same rate, a separate measurement model would normally be needed for each combination of available outputs.

This problem can be easily solved by modifying the equations of the Output Prediction, Output Matching and State Correction steps, introducing a binary vector that flags which output measurements are enabled.
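A minimal numpy sketch of that idea, in our own notation: the binary vector selects which rows of the (linearized) output matrix and which entries of the measurement take part in the correction, so outputs arriving at different rates reuse a single model.

```python
import numpy as np

def aekf_correct(x, P, z, h, H, R, enabled):
    """EKF correction using only the output channels whose measurements
    arrived in this cycle, selected by the binary vector 'enabled'.
    z is the full-size output vector with the arrived entries filled in."""
    idx = np.flatnonzero(enabled)
    if idx.size == 0:
        return x, P                          # nothing arrived: keep prediction
    Hs = H[idx, :]                           # rows of the enabled outputs
    Rs = R[np.ix_(idx, idx)]                 # matching measurement covariance
    innovation = z[idx] - h(x)[idx]          # output matching on enabled rows
    S = Hs @ P @ Hs.T + Rs
    K = P @ Hs.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ innovation               # state correction
    P_new = (np.eye(x.size) - K @ Hs) @ P
    return x_new, P_new
```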

Autonomous Landing of an Unmanned Aerial Vehicle using Image-Based Fuzzy Control

Abstract: This paper presents a vision-based autonomous landing control approach for unmanned aerial vehicles (UAVs). The 3D position of an unmanned helicopter is estimated from homographies of a known landmark. The translation and altitude of the helicopter with respect to the helipad are the only information used to control the longitudinal, lateral and descent speeds of the vehicle. The control system consists of three fuzzy controllers that manage the speed along each axis of the aircraft's coordinate system. The 3D position estimation was proven
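As a hint of how a homography yields the 3D position, the snippet below estimates a planar homography from helipad-template correspondences and decomposes it with standard OpenCV calls; the real pipeline's disambiguation and filtering are omitted, and variable names are ours.

```python
import cv2
import numpy as np

def helipad_pose(template_pts, image_pts, K):
    """Estimate the homography between the known (metric) helipad template
    and the current image, then decompose it into candidate (R, t, n)."""
    H, _ = cv2.findHomography(template_pts, image_pts, cv2.RANSAC, 3.0)
    # Up to four solutions are returned; visibility constraints and the
    # known helipad normal are used to select the physically valid one.
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals
```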
Figure: autonomous landing, GPS-based trajectory reconstruction.

 

Autonomous landing using onboard visual systems:

Autonomous landing using Fuzzy Logic Controller and homography information
The visual homography estimate of a known helipad is used by the MOFS fuzzy controllers to generate velocity commands and achieve a successful autonomous landing of the helicopter.
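For intuition, a toy Mamdani-style fuzzy controller mapping one error signal to a velocity command is sketched below; the membership functions, rule outputs and gains are illustrative placeholders, not the tuned MOFS controllers.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_velocity(error_m):
    """Lateral position error (m) -> lateral velocity command (m/s).
    The sign convention is arbitrary: positive commands reduce positive error."""
    mu = {'neg': tri(error_m, -2.0, -1.0, 0.0),   # fuzzification
          'zero': tri(error_m, -1.0, 0.0, 1.0),
          'pos': tri(error_m, 0.0, 1.0, 2.0)}
    out = {'neg': 0.5, 'zero': 0.0, 'pos': -0.5}  # singleton rule consequents
    den = sum(mu.values()) + 1e-9
    return sum(mu[k] * out[k] for k in mu) / den  # weighted-average defuzzify
```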

Catadioptric System and Omnidirectional vision

A catadioptric system is a method used in robotics to extend the field of view of conventional cameras. It is achieved using conic mirrors with known properties (hyperbolic, parabolic and spherical shapes), exploiting their geometric properties to obtain clear images that can be modeled through the unitary sphere model.
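Under the unified (unitary sphere) model, a pixel is lifted to a ray on the unit sphere from the intrinsics K and the mirror parameter ξ; a small sketch of that back-projection follows, with our own variable names.

```python
import numpy as np

def backproject_to_sphere(u, v, K, xi):
    """Lift pixel (u, v) onto the unit sphere under the unified catadioptric
    camera model with mirror parameter xi (xi = 0 reduces to a pinhole)."""
    m = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized image point
    x, y = m[0] / m[2], m[1] / m[2]
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])  # unit-norm viewing ray
```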

 

Computer Vision control for multi-rotor UAVs

The communication protocol establishes a link between a ground-station laptop, the onboard Intel Atom computer and the Pelican flight controller. The onboard computer can receive commands through a WLAN, whether they come from a ground-based source or from an onboard control algorithm. These commands are interpreted by the onboard computer and forwarded directly to the flight controller as navigation commands. The system allows vision-based control algorithms to issue navigation commands to the Pelican according to the images received from an onboard camera.

We designed and implemented a custom communication protocol for the AscTec Pelican quadrotor.
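The relay idea can be sketched as follows. This is a hypothetical simplification (socket transport, port number and packet layout are all invented), not the actual protocol.

```python
import socket
import struct

def command_relay(flight_link, host='0.0.0.0', port=5000):
    """Accept velocity commands over WLAN and forward them to the flight
    controller. flight_link is any object with a write(bytes) method,
    e.g. a pyserial port to the autopilot board."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        packet = conn.recv(16)                 # vx, vy, vz, yaw_rate floats
        if len(packet) < 16:                   # simplified framing
            break
        vx, vy, vz, yaw_rate = struct.unpack('<4f', packet)
        flight_link.write(struct.pack('<4f', vx, vy, vz, yaw_rate))
```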

Hierarchical Multi-Parametric and Multi-Resolution Strategy (HMPMR) for Tracking

A strategy for improving object tracking using direct methods while achieving real-time frame rates is proposed. The algorithm is based on a hierarchical strategy in terms of image resolution and of the number of parameters estimated at each resolution, which we call HMPMR.


A strategy for improving object tracking using direct methods is proposed. It is focused on being robust under partial occlusions, large frame-to-frame motions, vibrations and 3D changes, while accomplishing the tracking task at real-time frame rates.
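A coarse-to-fine sketch of the HMPMR idea using stock OpenCV building blocks: a low-parameter model (pure translation) at the coarsest resolution, then a higher-parameter model (affine, via the ECC direct method) at full resolution. The scales, motion models and calls are stand-ins for the paper's actual estimators.

```python
import cv2
import numpy as np

def hmpmr_track(prev_gray, curr_gray, bbox, coarse_scale=0.25):
    """Track bbox=(x, y, w, h) from prev_gray to curr_gray, coarse to fine."""
    x, y, w, h = bbox
    template = prev_gray[y:y + h, x:x + w]
    # Coarse level: translation only (few parameters), cheap template matching
    small_t = cv2.resize(template, None, fx=coarse_scale, fy=coarse_scale)
    small_c = cv2.resize(curr_gray, None, fx=coarse_scale, fy=coarse_scale)
    scores = cv2.matchTemplate(small_c, small_t, cv2.TM_CCOEFF_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(scores)
    tx, ty = mx / coarse_scale, my / coarse_scale
    # Fine level: full-resolution affine refinement with the ECC direct method
    warp = np.array([[1, 0, tx], [0, 1, ty]], dtype=np.float32)
    _, warp = cv2.findTransformECC(template, curr_gray, warp, cv2.MOTION_AFFINE)
    return warp            # 2x3 affine warp locating the target in curr_gray
```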

IMAV 2013 Competition


We are developing algorithms and preparing our quadrotors to participate in the IMAV 2013 indoor competition: http://www.imav2013.org/.

Object Detection and Tracking for Unmanned Aerial Vehicle


Objects: Face, Hand, Car, Plane, Bird, People, Logo, Number and Letter, Barcode ...

Detection: Template-matching Algorithm; Speeded Up Robust Features (SURF) Algorithm; Scale Invariant Feature Transform (SIFT) Algorithm; Haar-like Features Algorithm; Color-based Histogram Algorithm; Optical Character Recognition (OCR) Algorithm; ...

Tracking: Kalman Filter, Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF); (Unscented) Particle Filter; Grey System Model GM(n,m); ...

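As one concrete instance from the detection list above, a feature-based detector can be sketched with OpenCV's SIFT and a ratio-test matcher (SURF is analogous but patent-encumbered in stock builds); the thresholds here are conventional values, not the project's.

```python
import cv2

def detect_object(object_img, scene_img, min_matches=10):
    """Return good SIFT matches if the object appears in the scene."""
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(object_img, None)
    _, d2 = sift.detectAndCompute(scene_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    return good if len(good) >= min_matches else []
```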

 

Detection and Tracking in Night Mode


Omnidirectional Vision for Attitude estimation

Using omnidirectional images, the attitude information (roll, pitch, yaw) of the UAV is obtained. For this, a catadioptric system is employed to capture an omnidirectional image. Spherical back-projected and panoramic images are then obtained and used to estimate the UAV attitude.
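One way roll and pitch can be read off such imagery is by fitting a plane to horizon points back-projected onto the unit sphere: the plane normal approximates the gravity direction. The sketch below is illustrative only (yaw needs additional cues such as landmarks), and the angle conventions and names are ours.

```python
import numpy as np

def roll_pitch_from_horizon(horizon_rays):
    """horizon_rays: Nx3 unit vectors of detected horizon points on the
    viewing sphere. Fit a plane through them; its normal ~ gravity."""
    rays = np.asarray(horizon_rays, dtype=float)
    _, _, vt = np.linalg.svd(rays - rays.mean(axis=0))
    n = vt[-1]                         # least-squares plane normal
    if n[2] < 0:
        n = -n                         # orient the normal consistently upward
    roll = np.arctan2(n[0], n[2])
    pitch = np.arctan2(n[1], n[2])
    return roll, pitch
```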

Onboard visual system

Onboard pose estimation is a complementary way to obtain the relative position of the UAV with respect to known landmarks. Using the onboard camera and a tracking system based on visual features, it is possible to obtain the orientation (rotation) and position of a landmark with respect to the UAV and camera coordinate systems. This information is useful for accurate positioning and landing tasks, and it also complements the estimates produced by the UAV's inertial system.
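When the landmark's metric geometry is known, this relative pose is the classical Perspective-n-Point problem; a minimal OpenCV sketch (not necessarily the exact estimator used onboard) follows.

```python
import cv2

def landmark_pose(object_pts, image_pts, K, dist_coeffs):
    """Pose of a known landmark in the camera frame via PnP.
    object_pts: Nx3 landmark points (meters); image_pts: Nx2 pixels."""
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)         # rotation landmark -> camera
    return ok, R, tvec                 # tvec: landmark origin in camera frame
```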

Optical flow (sparse L-K) and features based matching methods

Optical flow and feature-based matching methods are used in different tasks such as visual tracking and servoing, homography computation, video stabilization and mosaic building, among others.

 

We use sparse pyramidal Lucas-Kanade optical flow, as well as a matching method using SURF keypoints and SURF descriptors.
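A compact OpenCV sketch of the sparse pyramidal Lucas-Kanade step (the corner-detector settings are typical defaults, not our tuned values):

```python
import cv2

def sparse_lk_flow(prev_gray, curr_gray):
    """Track Shi-Tomasi corners from prev_gray to curr_gray with pyramidal LK."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return p0[ok], p1[ok]              # matched point pairs (flow vectors)
```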

Robust Dynamic RGB-D Localization and Mapping for UAVs

This research aims to provide a fast and robust visual algorithm for UAVs flying at high speed in GPS-denied environments. The whole system consists of an AscTec Pelican or DJI F550 (hexacopter) UAV platform, a Pixhawk, an Intel NUC (or Odroid) and an RGB-D sensor (Asus Xtion Pro Live). The real-time 6D pose is estimated by a visual SLAM-based algorithm. The test environments include: (1) corridors; (2) a square with obstacles; (3) the lab (long-term); (4) the school entrance (long-term); (5) parking places, etc. Comparisons with VICON ground truth and real flights show that the visual SLAM algorithm is accurate and robust.



Robust Real-time Discriminative method-based Visual Tracking for UAVs

Discriminative Visual Tracking (DVT) treats tracking as a binary classification task that separates the target from its surrounding background. It trains a classifier online using positive and negative samples extracted from the current frame. When the next frame arrives, samples around the old target location are extracted from it, and the previously trained classifier is applied to these samples. The location of the sample with the maximum classifier score, i.e. the most confident sample, is the new target location in this frame.
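The skeleton of that loop, with a simple online logistic-regression classifier standing in for whatever learner a concrete tracker uses (feature extraction omitted):

```python
import numpy as np

class OnlineDVT:
    """Online discriminative tracker skeleton: update on labelled patches,
    then relocate on candidate patches around the previous position."""
    def __init__(self, feat_dim, lr=0.1):
        self.w = np.zeros(feat_dim)
        self.lr = lr

    def update(self, feats, labels):
        """feats: NxD patch features; labels: 1 = target, 0 = background."""
        for f, y in zip(feats, labels):
            p = 1.0 / (1.0 + np.exp(-self.w @ f))
            self.w += self.lr * (y - p) * f     # one SGD step per sample

    def relocate(self, cand_feats, cand_boxes):
        """Return the candidate box with the maximum classifier score."""
        scores = cand_feats @ self.w
        return cand_boxes[int(np.argmax(scores))]
```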


Robust Stereo Visual Odometry and SLAM for Unmanned Aerial Vehicles


Stereo Visual Odometry and SLAM

 

A new lightweight, small-scale, low-cost ARM-based stereo vision pre-processing system has been designed.
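The kind of computation such a board would offload is the rectified-pair disparity and depth map; a standard OpenCV sketch (parameters are typical, not the board's) follows.

```python
import cv2

def disparity_and_depth(left_gray, right_gray, focal_px, baseline_m):
    """Semi-global block matching on a rectified pair, then Z = f * B / d."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = sgbm.compute(left_gray, right_gray).astype('float32') / 16.0
    valid = disp > 0                      # mask out unmatched pixels
    depth = (focal_px * baseline_m) / (disp + 1e-6)
    depth[~valid] = 0.0
    return disp, depth
```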



See & Avoid for Light UAV

This work presents a collision-avoidance approach based on omnidirectional cameras that does not require estimating the range between the two platforms to resolve a collision encounter. It guarantees a minimum separation between the two vehicles involved by maximising the view angle given by the omnidirectional sensor. Only visual information is used to achieve avoidance, under a bearing-only visual servoing approach.

See & Avoid for UAV based on visual information

A robust real-time method for UAS see-and-avoid based only on the visual information provided by an omnidirectional camera sensor. The system was tested in Pelican UAV flights with collision scenarios, showing that the proposed method is an efficient technique for real-time evasion.
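A toy version of the bearing-only idea: with no range estimate, the heading command depends only on the intruder's bearing, turning away hardest when the intruder is dead ahead and easing off as its view angle grows. The gains and control shape are invented for illustration.

```python
import numpy as np

def evasive_yaw_rate(bearing_rad, gain=1.0, max_rate=0.5):
    """bearing_rad: intruder bearing in (-pi, pi], 0 = dead ahead.
    Returns a yaw-rate command (rad/s) that pushes the bearing outward."""
    if bearing_rad == 0.0:
        bearing_rad = 1e-3                         # break the head-on tie
    urgency = (np.pi - abs(bearing_rad)) / np.pi   # 1 ahead, 0 behind
    cmd = -gain * np.sign(bearing_rad) * urgency   # turn away from intruder
    return float(np.clip(cmd, -max_rate, max_rate))
```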

See and Avoid using a Fuzzy Controller

A real quadcopter is controlled using fuzzy logic to avoid obstacles. The system uses a low-cost camera to obtain visual information about the environment. The controller sends heading commands to the quadcopter in order to modify its trajectory.

See and Avoid with a Fuzzy controller optimised using Cross-Entropy method

See and Avoid: using the Cross-Entropy method to optimize the fuzzy controller of the quadcopter heading.
A visual servoing system with a fuzzy logic controller has been implemented for the obstacle-avoidance task. The three gains of this controller were optimized using the Cross-Entropy method in a ROS-Gazebo simulation.
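The Cross-Entropy loop over the three controller gains can be sketched as follows; run_episode stands for one simulated ROS-Gazebo avoidance run returning a scalar cost, and the population sizes are illustrative.

```python
import numpy as np

def cross_entropy_tune(run_episode, mu0, sigma0, iters=30, pop=50, elite=10):
    """Tune a gain vector (here, 3 fuzzy-controller gains) with the
    Cross-Entropy method: sample, evaluate, keep the elite, refit, repeat."""
    mu = np.asarray(mu0, dtype=float)
    sigma = np.asarray(sigma0, dtype=float)
    for _ in range(iters):
        gains = np.random.randn(pop, mu.size) * sigma + mu
        costs = np.array([run_episode(g) for g in gains])
        best = gains[np.argsort(costs)[:elite]]   # lowest-cost samples
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu                                     # tuned gains
```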

See and Follow: Object Following using Soft Computing Control Techniques

A control strategy is developed based on the visual information given by an adaptive tracking method based on color information. A visual fuzzy servoing system has been developed to control a quadcopter (MUAV), which also considers its own dynamics. This system is focused on continuously following an aerial moving target object, keeping it at a fixed safe distance and centered on the image plane. The control behavior is validated in real outdoor flights, showing the robustness of the proposed system against wind perturbations that affect not just the quadcopter but also the object being followed.

Ship deck simulation for autonomous landing of VTOL RPAS

The autonomous landing of VTOL RPAS on ships is a challenging problem. To test a controller, the first step is to develop a ship deck simulator with six degrees of freedom.

To design and test a controller for the autonomous landing of VTOL UAVs on ships, it is necessary to compute how the ship moves at sea, using all six degrees of freedom: surge, sway, heave, roll, pitch and yaw.
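A toy version of such a simulator models each degree of freedom as a sum of sinusoids, a common first approximation of wave-induced deck motion; the amplitudes, periods and phases below are invented placeholders.

```python
import numpy as np

# Illustrative amplitudes (m or rad) and periods (s) for each deck DOF
AMPLITUDE = {'surge': 0.3, 'sway': 0.4, 'heave': 0.8,
             'roll': np.radians(5), 'pitch': np.radians(3),
             'yaw': np.radians(2)}
PERIOD = {'surge': 9.0, 'sway': 10.0, 'heave': 7.0,
          'roll': 8.0, 'pitch': 6.5, 'yaw': 12.0}

def deck_pose(t):
    """Six-DOF deck pose at time t (s), relative to the ship's mean frame."""
    return {dof: AMPLITUDE[dof] * np.sin(2.0 * np.pi * t / PERIOD[dof] + i)
            for i, dof in enumerate(AMPLITUDE)}   # i: fixed phase offsets
```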

State estimation for the path-Following control problem of visual unmanned ground systems

A state estimator for the path-following control problem of unmanned car-like visual systems was developed. This state estimator is composed of five modules, each with a different mission.

The state estimator for visually guided car-like mobile robots analyses the measurements provided by the unmanned ground platform and computes the best estimate of the vehicle state, which is sent to the controller that closes the loop. This is one of the main subsystems of the path-following control system for visually guided unmanned ground systems.

Vision based GPS-denied Object Tracking and Following for Unmanned Aerial Vehicles

This paper presents a robotic system that performs vision-based object tracking and following using a GPS-denied Unmanned Aerial Vehicle. The robot tracks and follows a user-specified object, keeping a constant distance to the target. During object following, our solution requires only forward-facing camera images and IMU data. When target tracking is lost, the system starts to hover and waits for an autonomous re-detection of the target, additionally utilizing odometry measurements for self-stabilization.

Vision-based pose estimation using 3D markers


Unmanned Aerial Vehicles (UAVs) already have many applications and, as a good platform for service robotics, their number is rapidly increasing. Sometimes, UAVs must fly in GPS-denied environments, where there is no easy way for them to determine their own pose, relative to their surroundings. This research line tries to address this problem using visual information from an onboard camera.


Visual Autonomous Landing of a VTOL UAV on a ship deck platform

The autonomous landing of a VTOL UAV on a ship deck platform is currently a challenging problem. We are trying to solve it with a vision-based VTOL RPAS.

The autonomous landing of a Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicle (UAV) on a ship deck is currently a challenge that we aim to meet.

We have a Rotomotion SR200 helicopter, equipped with an autopilot, an inertial measurement unit (IMU), a GPS sensor and a small computer that simplifies the control task and is ideal for developing autonomous capabilities for UAVs.

Visual Control of UAVs using soft computing

...God said "let there be light", and one trillion evolving brain-eye systems began their journey toward intelligence... Flying is a wonderful resource, and a difficult one too. Trying to control a scale helicopter on a windy day is as difficult as trying to balance a broomstick on one of your fingers on that same windy day. And yet there exist wonderful, neurally controlled small beings that have mastered the delights of flying since early creation. We are talking, of course, about insects, the first to conquer our gas-filled atmosphere. In one sense, insects are bio-machines controlled by finite neural machines that have self-learned to solve complex vision-to-control problems. We are thus motivated to follow this strategy and define self-learned neural controllers capable of flying real-world machines using real-world images.
