Autonomous Landing of a Multirotor Micro Air Vehicle on a High Velocity Ground Vehicle ⋆

Alexandre Borowczyk ∗, Duc-Tien Nguyen ∗, André Phu-Van Nguyen ∗, Dang Quang Nguyen ∗, David Saussié ∗, Jérôme Le Ny ∗

∗ Mobile Robotics and Autonomous Systems Laboratory, Polytechnique Montreal and GERAD, Montreal, Canada (e-mail: {alexandre.borowczyk, duc-tien.nguyen, andre-phu-van.nguyen, dang-quang.nguyen, d.saussie, jerome.le-ny}@polymtl.ca).

⋆ This work was partially supported by CFI JELF Award 32848 and a hardware donation from DJI.

Abstract: While autonomous multirotor micro aerial vehicles (MAVs) are uniquely well suited for certain types of missions benefiting from stationary flight capabilities, their more widespread usage still faces many hurdles, due in particular to their limited range and the difficulty of fully automating their deployment and retrieval. In this paper we address these issues by solving the problem of the automated landing of a quadcopter on a ground vehicle moving at relatively high speed. We present our system architecture, including the structure of our Kalman filter for the estimation of the relative position and velocity between the quadcopter and the landing pad, as well as our controller design for the full rendezvous and landing maneuvers. The system is experimentally validated by successfully landing a commercial quadcopter, in multiple trials, on the roof of a car moving at speeds of up to 50 km/h.

Keywords: Kalman filters, Autonomous vehicles, Mobile robots, Guidance systems, Computer vision, Aerospace control

1. INTRODUCTION

The ability of multirotor micro aerial vehicles (MAVs) to perform stationary hover flight makes them particularly interesting for a wide variety of applications, e.g., site surveillance, parcel delivery, or search and rescue operations. At the same time however, they are challenging to use on their own because of their relatively short battery life and short range. Deploying and recovering MAVs from mobile Ground Vehicles (GVs) could alleviate this issue and allow more efficient deployment and recovery in the field. For example, delivery trucks, public buses or marine carriers could be used to transport MAVs between locations of interest and allow them to recharge periodically (Garone et al., 2014; Mathew et al., 2015). For search and rescue operations, the synergy between ground and air vehicles could help save precious mission time and would pave the way for the efficient deployment of large fleets of autonomous MAVs.

The idea of better integrating GVs and MAVs has indeed already attracted the attention of multiple car and MAV manufacturers (Kolodny, 2016; Lardinois, 2016). Research groups have previously considered the problem of landing a MAV on a mobile platform, but most of the existing work is concerned with landing on a marine platform or precision landing on a static or slowly moving ground target. Lange et al. (2009) provide an early example, where a custom visual marker made of concentric rings was created to allow for relative pose estimation, and control was performed using optical flow and velocity commands. More recently, Yang et al. (2015) used the ArUco library from Garrido-Jurado et al. (2014) as a visual fiducial and IMU measurements fused in a Square Root Unscented Kalman Filter for relative pose estimation. The system however still relies on optical flow for accurate velocity estimation.
This becomes problematic as soon as the MAV aligns itself with a moving ground platform, at which point the optical flow camera suddenly measures the velocity of the platform relative to the MAV instead of the velocity of the MAV in the ground frame.

Muskardin et al. (2016) developed a system to land a fixed-wing MAV on top of a moving GV. However, their approach requires that the GV cooperates with the MAV during the landing maneuver, and it makes use of expensive RTK-GPS units. Kim et al. (2014) show that it is possible to land on a moving target using simple color blob detection and a non-linear Kalman filter, but test their solution only for speeds of less than 1 m/s. Most notably, Ling (2014) shows that it is possible to use low-cost sensors combined with an AprilTag fiducial marker (Olson, 2011) to land on a small ground robot. He further demonstrates different methods to help accelerate the AprilTag detection. He notes in particular that as a quadcopter pitches forward to follow the ground platform, the bottom facing camera frequently loses track of the visual target, which stresses the importance of a model-based estimator, such as a Kalman filter, to compensate.

The references above address the terminal landing phase of the MAV on a moving platform, but a complete system must also include a strategy to guide the MAV towards the GV during its approach phase. Proportional Navigation (PN) (Kabamba and Girard, 2014) is most commonly known as a guidance law for ballistic missiles, but it can also be used for UAV guidance. Holt and Beard (2010) develop a form of PN specifically for road following by a fixed-wing vehicle and show that it is suitable for use with visual feedback coming from a gimbaled camera. Gautam et al. (2015) compare pure pursuit, line-of-sight and PN guidance laws to show that, out of the three, PN is the most efficient in terms of the total required acceleration and the time required to reach the target. On the other hand, within close range of the target PN becomes inefficient. To alleviate this problem, Tan and Kumar (2014) propose a switching strategy to move from PN to a PD controller. Finally, to maximize the likelihood of visual target acquisition for a smooth transition from PN to PD, it is possible to follow the strategy of Lin and Yang (2014) to point a gimbaled camera towards a target.

Contributions and organization of the paper. We describe a complete system allowing a multirotor MAV to land autonomously on a moving ground platform at relatively high speed, using only commercially available and relatively low-cost sensors. The system architecture is described in Section 2. Our algorithms combine a Kalman filter for relative position and velocity estimation, described in Section 3, with a PN-based guidance law for the approach phase and a PID controller for the terminal phase. Both controllers are implemented using only acceleration and attitude controls, as discussed in Section 4. Our design was tested both in simulations and through extensive experiments with a commercially available MAV, as Section 5 illustrates. To the best of our knowledge, we experimentally demonstrate automatic landing of a multirotor MAV on a moving GV traveling at the highest speed reported to date, with successful tests carried out at speeds of up to 50 km/h.

2. SYSTEM ARCHITECTURE

In this section we describe the basic elements of our system architecture, both for the GV and the MAV.
Specific details for the hardware used in our experiments are given in Section 5.

The GV is equipped with a landing pad, on which we place a 30 × 30 cm visual fiducial called AprilTag, designed by Olson (2011), see Fig. 4. This allows us to visually measure the 6 Degrees of Freedom (DOF) pose of the landing pad using onboard cameras. In addition, we use position and acceleration measurements for the GV. In practice, low quality sensors are enough for this purpose, and in our experiments we simply place a mobile phone on the landing pad, which can transmit its GPS data to the MAV at 1 Hz and its Inertial Measurement Unit (IMU) data at 25 Hz at most. We can also integrate the rough heading and velocity estimates typically returned by basic GPS units, based simply on successive position measurements.

The MAV is equipped with an Inertial Navigation System (INS), an orientable 3-axis gimbaled camera (with separate IMU) for target tracking purposes, as well as a camera with a wide angle lens pointing down, which allows us to keep track of the AprilTag even at close range during the very last instants of the landing maneuver. The approach phase can also benefit from having an additional velocity sensor on board. Many commercial MAVs are equipped with velocity sensors relying on optical flow methods, which visually estimate velocity by computing the movement of features in successive images, see, e.g., (Zhou et al., 2015).

Four main coordinate frames are defined and illustrated in Fig. 1. The global North-East-Down (NED) frame, denoted {N}, is located at the first point detected by the MAV. The MAV body frame {B} is chosen according to the cross "×" configuration, i.e., its forward x-axis points between two of the arms and its y-axis points to the right. The gimbaled camera frame {G} is attached to the lens center of the moving camera. Its forward z-axis is perpendicular to the image plane and its x-axis points to the right of the gimbal frame. Finally, the bottom facing rigid camera frame {C} is obtained from the MAV body frame by a 90° rotation around the z_B axis, and its origin is located at the optical center of the camera.

Fig. 1. Frames of reference used: the global NED frame {N}, the MAV body frame {B}, the gimbaled camera frame {G} and the bottom facing rigid camera frame {C}.

3. KALMAN FILTER

Estimation of the relative position, velocity and acceleration between the MAV and the landing pad, as required by our guidance and control system, is performed by a Kalman filter running at 100 Hz. The architecture of this filter is shown in Fig. 2 and described in the following paragraphs.

Fig. 2. Kalman filter architecture. The prediction step runs at 100 Hz; measurement updates arrive asynchronously from the MAV INS, the gimbaled camera (30 Hz), the bottom camera (20 Hz), the mobile phone IMU (25 Hz) and the mobile phone GPS (1 Hz), each transformed into the global NED frame.

3.1 Process model

In order to land on a moving target, the system estimates the three dimensional position p(t), linear velocity v(t) and acceleration a(t) of the MAV and of the GV. Thus, the state variables can be written as

x = [x_m^\top \; x_a^\top]^\top \in \mathbb{R}^{18},    (1)

where x_m = [p_m^\top \; v_m^\top \; a_m^\top]^\top and x_a = [p_a^\top \; v_a^\top \; a_a^\top]^\top are respectively the state vectors for the MAV and the AprilTag, expressed in the NED frame. The superscript \top denotes the matrix transpose operation.
We use a simple second-order kinematic model for the MAV and the GV dynamics,

a(t) = \ddot{p}(t) = w(t),    (2)

where w(t) is a white noise process with power spectral density (PSD) q_w. The corresponding discrete time model, using zero-order hold (ZOH) sampling, is given by

x_{k+1} = F x_k + w_k,    (3)

F = \begin{bmatrix} F_m & 0 \\ 0 & F_a \end{bmatrix}, \quad
F_m = F_a = \begin{bmatrix} 1 & T_s & T_s^2/2 \\ 0 & 1 & T_s \\ 0 & 0 & 1 \end{bmatrix} \otimes I_3,

where T_s is the sampling period, \otimes denotes the Kronecker product and I_3 is the 3 × 3 identity matrix. The process noise w_k is assumed to be a Gaussian white noise with a covariance matrix Q given by

Q = \begin{bmatrix} q_{w_m} & 0 \\ 0 & q_{w_a} \end{bmatrix} \otimes Q_0, \quad
Q_0 = \begin{bmatrix} T_s^5/20 & T_s^4/8 & T_s^3/6 \\ T_s^4/8 & T_s^3/3 & T_s^2/2 \\ T_s^3/6 & T_s^2/2 & T_s \end{bmatrix} \otimes I_3,

where q_{w_m} and q_{w_a} are the PSDs of the MAV and GV accelerations, respectively. These parameters are generally chosen a priori as part of an empirical tuning process.

3.2 Measurement Model

In general, the measurement vector at time k is given by

z_k = H x_k + v_k,    (4)

where H is a known matrix and v_k is a Gaussian white noise with covariance matrix V_k, uncorrelated with the process noise w_k. The following subsections describe the rows of the matrix H for the various kinds of sensor measurements, which we simply call H for simplicity of notation.

MAV position, velocity and acceleration from INS. The INS of our MAV combines IMU, GPS and visual measurements to provide us with position, velocity and gravity compensated acceleration data directly expressed in the global NED frame:

z_k = [p_m^\top \; v_m^\top \; a_m^\top]^\top, \quad H = [I_9 \;\; 0_{9 \times 9}].    (5)

As mentioned previously, the velocity measurement relying on optical flow methods is not correct when the MAV flies above a moving platform. Therefore, we increase the standard deviation of the velocity noise from 0.1 m/s in the approach phase to 10 m/s in the landing phase.

Target's GPS measurements. The GPS unit of the mobile phone on the GV provides position, speed and heading measurements. This information is sent to the MAV on-board computer (OBC) via a wireless link, which gives access to the landing pad's position in the global NED frame {N}:

x_a^N \approx (la - la_0) R_E, \quad y_a^N \approx (lo - lo_0) \cos(la) R_E, \quad z_a^N = al_0 - al,

where R_E = 6378137 m is the Earth radius, and la, lo, al are the latitude, longitude (both in radians) and altitude of the landing pad, respectively. The subscript 0 corresponds to the starting point. The above equations are valid when the current position is not too far from the starting point and under a spherical Earth assumption, but more precise transformations could be used (Groves, 2013). The GPS heading \psi_a and speed U_a are also used to calculate the current velocity in the global NED frame:

\dot{x}_a^N = U_a \cos(\psi_a), \quad \dot{y}_a^N = U_a \sin(\psi_a).

The measurement model (4) is then expressed as follows:

z_k = [x_a^N \;\; y_a^N \;\; z_a^N \;\; \dot{x}_a^N \;\; \dot{y}_a^N]^\top, \quad H = [0_{5 \times 9} \;\; I_5 \;\; 0_{5 \times 4}].

However, because the GPS heading measurements have poor accuracy at low speed, we discard them if U_a < 2.5 m/s, in which case

z_k = [x_a^N \;\; y_a^N \;\; z_a^N]^\top, \quad H = [0_{3 \times 9} \;\; I_3 \;\; 0_{3 \times 6}].

The measurement noise covariance matrix V_k is provided by the GPS device itself. Our GPS receiver is a low-cost device with an output rate of about 1 Hz. This source of data is only used to approach the target and is insufficient for landing on the moving GV. For the landing phase, it is necessary to use the AprilTag detection with the gimbaled and bottom facing cameras.
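To make the flat-Earth conversion above concrete, the following Python sketch maps a phone GPS fix into the local NED frame and builds the corresponding velocity measurement. It is a minimal illustration only: the function names, the reference-point handling and the 2.5 m/s speed gate placement are our own choices, not the authors' code.

```python
import math

R_E = 6378137.0  # Earth radius in meters (spherical Earth assumption)

def gps_to_ned(lat, lon, alt, lat0, lon0, alt0):
    """Approximate conversion of a GPS fix (lat/lon in radians, alt in meters)
    to the local NED frame anchored at the reference point (lat0, lon0, alt0)."""
    x_n = (lat - lat0) * R_E                    # North
    y_n = (lon - lon0) * math.cos(lat) * R_E    # East
    z_n = alt0 - alt                            # Down
    return x_n, y_n, z_n

def gps_velocity_ned(speed, heading, min_speed=2.5):
    """Convert GPS ground speed (m/s) and heading (rad) to a NED velocity.
    Below min_speed the heading is unreliable, so no velocity is returned."""
    if speed < min_speed:
        return None
    return speed * math.cos(heading), speed * math.sin(heading)
```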
Gimbaled camera measurements. This camera provides measurements of the relative position between the MAV and the landing pad with centimeter accuracy, at ranges of up to 5 m. This information is converted into the global NED frame {N} by

p_m^N - p_a^N = R_G^N \, p_{m/a}^G,

where R_G^N is the rotation matrix from {G} to {N} returned by the gimbal IMU. Therefore, the observation model (4) corresponds to

z_k = [x_m^N - x_a^N \;\; y_m^N - y_a^N \;\; z_m^N - z_a^N]^\top, \quad H = [I_3 \;\; 0_{3 \times 6} \;\; -I_3 \;\; 0_{3 \times 6}].

Here the standard deviation of the measurement noise is empirically set to 0.2 m. To reduce the chances of target loss, the gimbaled camera centers the image onto the AprilTag as soon as visual detection is achieved. When the AprilTag cannot be detected, we follow the control scheme proposed by Lin and Yang (2014) to point the camera towards the landing pad using the estimated line-of-sight (LOS) information obtained from the Kalman filter.

Bottom camera measurements. The bottom facing camera is used to assist the last moments of the landing, when the MAV is close to the landing pad, yet too far to cut off the motors. At that moment, the gimbaled camera cannot perceive the whole AprilTag, but a wide angle camera like the mvBlueFOX can still provide measurements. This camera measures the target position in the frame {C}, as illustrated in Fig. 1. Hence, the observation model is the same as for the gimbaled camera, except for the transformation to the global NED frame,

p_m^N - p_a^N = R_C^N \, p_{m/a}^C,

where R_C^N denotes the rotation matrix from {C} to {N}.

Landing pad acceleration from the mobile phone's IMU. Finally, since most mobile phones also contain an IMU, we leverage this sensor to estimate the GV's acceleration:

z_k = [\ddot{x}_a^N \;\; \ddot{y}_a^N \;\; \ddot{z}_a^N]^\top, \quad H = [0_{3 \times 15} \;\; I_3], \quad V_k = \mathrm{diag}(0.6^2, 0.6^2, 0.6^2) \; (\mathrm{m/s^2})^2.

The Kalman filter algorithm follows the standard two steps, with the prediction step running at 100 Hz and the measurement update step executed as soon as new measurements become available. The output of this filter is the input to the guidance and control system described in the next section, which is used by the MAV to approach and land safely on the moving platform.
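The following Python sketch summarizes the filter mechanics described in this section: the 100 Hz prediction step with the ZOH model of Section 3.1, and one example of an asynchronous measurement update (here the gimbaled camera). The variable names, tuning values and overall structure are illustrative assumptions, not the flight code.

```python
import numpy as np

def zoh_model(Ts, q_wm, q_wa):
    """Transition matrix F and process noise Q for the 18-dimensional state
    [p_m, v_m, a_m, p_a, v_a, a_a], following eq. (3) and the Q expression."""
    F1 = np.array([[1, Ts, Ts**2 / 2],
                   [0, 1,  Ts],
                   [0, 0,  1]], dtype=float)
    Q0 = np.array([[Ts**5 / 20, Ts**4 / 8, Ts**3 / 6],
                   [Ts**4 / 8,  Ts**3 / 3, Ts**2 / 2],
                   [Ts**3 / 6,  Ts**2 / 2, Ts]], dtype=float)
    Fb = np.kron(F1, np.eye(3))            # 9x9 block for one vehicle
    Qb = np.kron(Q0, np.eye(3))
    F = np.block([[Fb, np.zeros((9, 9))], [np.zeros((9, 9)), Fb]])
    Q = np.block([[q_wm * Qb, np.zeros((9, 9))], [np.zeros((9, 9)), q_wa * Qb]])
    return F, Q

def predict(x, P, F, Q):
    """Kalman prediction step, run at 100 Hz."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, V):
    """Generic measurement update for z = H x + v (eq. (4))."""
    S = H @ P @ H.T + V
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(18) - K @ H) @ P

def gimbal_camera_update(x, P, p_ma_G, R_N_G, sigma=0.2):
    """Relative position measurement p_m - p_a, rotated from {G} into {N}."""
    z = R_N_G @ p_ma_G
    H = np.zeros((3, 18))
    H[:, 0:3] = np.eye(3)       # p_m block
    H[:, 9:12] = -np.eye(3)     # p_a block
    return update(x, P, z, H, (sigma ** 2) * np.eye(3))

# Example: one prediction at 100 Hz followed by a camera update.
F, Q = zoh_model(Ts=0.01, q_wm=1.0, q_wa=1.0)
x, P = np.zeros(18), np.eye(18)
x, P = predict(x, P, F, Q)
x, P = gimbal_camera_update(x, P, np.array([0.5, 0.1, 2.0]), np.eye(3))
```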
4. GUIDANCE AND CONTROL SYSTEM

For GV tracking by the MAV, we use a guidance strategy switching between a Proportional Navigation (PN) law (Kabamba and Girard, 2014) for the approach phase and a PID controller for the landing phase, which is similar in spirit to the approach in (Tan and Kumar, 2014). The approach phase is characterized by a large distance between the MAV and the GV and the absence of visual localization data. Hence, in this phase, the MAV has to rely on the data transmitted by the GV's GPS and IMU. The goal of the controller for this phase is to follow an efficient "pursuit" trajectory, which is achieved here by a PN controller augmented with a closing velocity controller. In contrast, the landing phase is characterized by a relatively close proximity between the MAV and the GV and the availability of visual feedback to determine the target's position. This phase requires a higher level of accuracy and a faster response time from the controller, and a PID controller can be more easily tuned to meet these requirements than a PN controller. In addition, the system should transition from one controller to the other seamlessly, avoiding discontinuities in the commands sent to the MAV.

Proportional Navigation guidance. The well-known PN guidance law exploits the fact that two vehicles are on a collision course if their LOS remains at a constant angle in order to steer the MAV toward the GV. It works by keeping the rotation of the velocity vector proportional to the rotation of the LOS vector. Our PN controller provides an acceleration command that is normal to the instantaneous LOS,

a_\perp = -\lambda |\dot{u}| \frac{u}{|u|} \times \Omega, \quad \text{with} \quad \Omega = \frac{u \times \dot{u}}{u \cdot u},    (6)

where \lambda is a gain parameter, u = p_a^N - p_m^N and \dot{u} = v_a^N - v_m^N are obtained from the Kalman filter and represent the LOS vector and its derivative expressed in the NED frame, and \Omega is the rotation vector of the LOS. We then supplement the PN guidance law (6) with an approach velocity controller determining the acceleration a_\parallel along the LOS direction, which in particular allows us to specify a high enough velocity to properly overtake the target. This acceleration component is computed using the following PD structure:

a_\parallel = K_{p\parallel} u + K_{d\parallel} \dot{u},

where K_{p\parallel} and K_{d\parallel} are constant gains. The total acceleration command is obtained by combining both components, a = a_\perp + a_\parallel. As only the horizontal control is of interest, the acceleration along the z-axis is disregarded.

The desired acceleration then needs to be converted to attitude control inputs that are more compatible with the MAV input format. In frame {N}, the quadrotor translational dynamics read as follows:

m a_m = \begin{bmatrix} 0 \\ 0 \\ mg \end{bmatrix} + R_B^N \begin{bmatrix} 0 \\ 0 \\ -T \end{bmatrix} + F_D,

where R_B^N denotes the rotation matrix from {B} to {N},

R_B^N = \begin{bmatrix}
c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\
c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi \\
-s_\theta & s_\phi c_\theta & c_\phi c_\theta
\end{bmatrix},

T is the total thrust created by the rotors, F_D the drag force, m the MAV mass and g the standard gravity. The force equations simplify as

m \begin{bmatrix} \ddot{x}_m \\ \ddot{y}_m \\ \ddot{z}_m \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ mg \end{bmatrix}
- \begin{bmatrix} c_\phi s_\theta c_\psi + s_\phi s_\psi \\ c_\phi s_\theta s_\psi - s_\phi c_\psi \\ c_\phi c_\theta \end{bmatrix} T
+ \begin{bmatrix} -k_d \dot{x}_m |\dot{x}_m| \\ -k_d \dot{y}_m |\dot{y}_m| \\ -k_d \dot{z}_m |\dot{z}_m| \end{bmatrix},

where the drag is roughly modeled as a force proportional to the signed quadratic velocity in each direction, and k_d is a constant which we estimated by recording the terminal velocity for a range of attitude controls at level flight and performing a least squares regression on the data. For constant flight altitude, T = mg/(c_\phi c_\theta), and assuming \psi = 0, this yields

m \begin{bmatrix} \ddot{x}_m \\ \ddot{y}_m \end{bmatrix} =
mg \begin{bmatrix} -\tan\theta \\ \tan\phi / \cos\theta \end{bmatrix}
- \begin{bmatrix} k_d \dot{x}_m |\dot{x}_m| \\ k_d \dot{y}_m |\dot{y}_m| \end{bmatrix}.

The following relations can then be obtained:

\theta = -\arctan\left( (m \ddot{x}_m + k_d \dot{x}_m |\dot{x}_m|) / (mg) \right),
\quad
\phi = \arctan\left( \cos\theta \, (m \ddot{y}_m + k_d \dot{y}_m |\dot{y}_m|) / (mg) \right),

where \theta and \phi are the desired pitch and roll angles for specific acceleration demands.

PID controller. The landing phase is handled by a PID controller, with the desired acceleration computed as

a = K_p u + K_i \int u \, dt + K_d \dot{u},

where K_p, K_i and K_d are constant gains. The tuning of the PID controller was selected to provide aggressive dynamic path following, promoting quick disturbance rejection. The controller was first tuned in simulation and the settings were then manually adjusted in flight.

Controller switching. The controller switching scheme chosen is a simple fixed distance switching condition with a slight hysteresis. The switching distance selected was 6 m, to allow any perturbation due to the switching to dissipate before reaching the landing platform.

Vertical control. The entire approach phase is done at a constant altitude, which is handled by the internal vertical position controller of the MAV. The descent is initiated once the quadrotor has stabilized over the landing platform. A constant vertical velocity command is then issued to the MAV and maintained until it reaches a height of 0.2 m above the landing platform, at which point the motors are disarmed.
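As a minimal illustration of the guidance equations above, the sketch below computes the horizontal acceleration command (the PN term of eq. (6) plus the along-LOS PD term) and converts it into pitch and roll setpoints by inverting the simplified translational model with quadratic drag and \psi = 0. The gain values are placeholders, not the flight tuning.

```python
import numpy as np

def guidance_acceleration(u, u_dot, lam=3.0, kp_par=0.5, kd_par=1.0):
    """Approach-phase acceleration command.
    u = p_a - p_m and u_dot = v_a - v_m come from the Kalman filter.
    lam, kp_par, kd_par are illustrative gains."""
    omega = np.cross(u, u_dot) / np.dot(u, u)              # LOS rotation vector
    a_perp = -lam * np.linalg.norm(u_dot) * np.cross(u / np.linalg.norm(u), omega)
    a_par = kp_par * u + kd_par * u_dot                    # closing-velocity PD term
    return a_perp + a_par                                   # only x, y components are used

def accel_to_attitude(ax, ay, vx, vy, m, kd, g=9.81):
    """Convert a horizontal acceleration demand into pitch/roll setpoints,
    using the relations for theta and phi derived above."""
    theta = -np.arctan((m * ax + kd * vx * abs(vx)) / (m * g))
    phi = np.arctan(np.cos(theta) * (m * ay + kd * vy * abs(vy)) / (m * g))
    return theta, phi

# Example call with illustrative relative state, mass and drag coefficient.
a_cmd = guidance_acceleration(np.array([20.0, 5.0, 0.0]), np.array([-3.0, 0.5, 0.0]))
theta_cmd, phi_cmd = accel_to_attitude(a_cmd[0], a_cmd[1], vx=4.0, vy=0.2, m=3.0, kd=0.05)
```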
5. EXPERIMENTAL VALIDATION

5.1 System Description

We implemented our system on a commercial off-the-shelf DJI Matrice 100 (M100) quadcopter, shown in Fig. 3. All computations are performed on the standard OBC for this platform (DJI Manifold), which contains an Nvidia Tegra K1 SoC. The 3-axis gimbaled camera is a Zenmuse X3, from which we receive 720p YUV color images at 30 Hz. To reduce computations, we drop the U and V channels and downsample the images to obtain 640 × 360 monochrome images. We modified the M100 to rigidly attach a downward facing Matrix Vision mvBlueFOX camera, equipped with an ultra-wide angle Sunex DSL224D lens with a diagonal field of view of 176 degrees. The M100 is also equipped with the DJI Guidance module, an array of up to five stereo cameras, which is meant to help developers create mapping and obstacle avoidance solutions for robotics applications. This module seamlessly integrates with the INS to provide us with position, velocity and acceleration measurements for the M100, using a fusion of on-board sensors described in Zhou et al. (2015). This information is used in our Kalman filter in equation (5).

Our algorithms were implemented in C++ using ROS (Quigley et al., 2009). Using an open source implementation of the AprilTag library based on OpenCV, with the acceleration provided by OpenCV4Tegra, we can run the tag detection at a full 30 fps on the X3 images and at 20 fps on the BlueFOX images.

As mentioned in Section 4, we implemented our control system using pure attitude control in the xy axes and velocity control along the z axis. The reason for this is that the internal velocity estimator of the M100 fuses optical flow measurements from the Guidance system. These measurements become extremely inaccurate once the quadcopter flies over the landing platform. Although optical flow could be used to measure the relative velocity of the car, it would be difficult to pinpoint the moment where the flow measurements transition from being with respect to the ground to being with respect to the moving car.

Fig. 3. The M100 quadcopter. Note that all side facing cameras of the Guidance module were removed and that the down facing BlueFOX camera sits behind the bottom Guidance sensor.
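As noted above, the X3 stream is reduced to 640 × 360 monochrome images before tag detection. The short sketch below shows one possible form of this preprocessing step with OpenCV; it assumes a planar YUV layout, which may differ from the actual pixel format delivered by the X3, and is not the authors' implementation.

```python
import cv2
import numpy as np

def preprocess_yuv_frame(yuv_bytes, width=1280, height=720):
    """Keep only the luminance (Y) plane of a 720p YUV frame and downsample
    it to 640x360 for faster AprilTag detection (planar layout assumed)."""
    y_plane = np.frombuffer(yuv_bytes, dtype=np.uint8,
                            count=width * height).reshape(height, width)
    return cv2.resize(y_plane, (640, 360), interpolation=cv2.INTER_AREA)
```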
5.2 Experimental Results

Fig. 4. Experimental setup showing the required equipment on the car. In practice the mobile phone could also be held inside the car as long as GPS reception is still good.

A first experiment was done using only data from the mobile phone (no visual feedback) to prove the validity of our Proportional Navigation controller for the long range approach. Figure 5 shows the output of our Kalman filter estimating the MAV's and the target's positions. With the target and the MAV starting about 30 meters from each other, we see the MAV fly a smooth rendezvous trajectory eventually ending on top of the target.

The close range system was experimentally validated with the GV moving at speeds as low as a human jogging, and on a private race track at 30, 40 and 50 km/h, with successful landings in each case. Videos of our experiments can be found at https://youtu.be/ILQqD2xQ4tg.

Figure 6 shows the landing sequence where the quadrotor takes off, tracks and lands on the landing pad. The curves gain in altitude as the trajectory progresses because of the elevation profile of the race track. The effect is seen more clearly in Fig. 7, where we can also see the filtered AprilTag altitude rise, thanks to the visual data and the M100's internal altitude estimator, even before the phone's GPS data indicates a change in altitude. Furthermore, we can see in Fig. 7 how the M100 closely matches the velocity of the AprilTag to perform the landing maneuver. The two peaks at the 24 and 27 second marks are strongly correlated with the visual loss of the tag by the BlueFOX camera, which can be observed in Fig. 8. The descent starts at the 24 second mark, slightly before the car reaches its designated velocity of 14 m/s (50.4 km/h). We can also see in Fig. 7 how the M100's velocity estimation from the INS becomes incorrect when it is on top of the car, which explains why we decided to dynamically increase the standard deviation of this measurement as described in Section 3.2. Figure 9 shows the quadcopter's attitude during the flight. Notice how the roll remains stable close to 0° while the pitch stays between −10° and −25° for consistent forward flight. Finally, the yaw changes drastically after 10 seconds, at the moment the car starts moving.

Fig. 5. Proportional navigation trajectory in ENU coordinates: the PN controller efficiently catches up with the target at long range even when the only source of information is the mobile phone's GPS and IMU.

Fig. 6. 3D landing trajectory at 50 km/h in ENU coordinates using the PID controller, with the points where the platform starts moving, the descent begins and the motors are cut off marked along the path.

Fig. 7. Motions of the M100 and the AprilTag: speed and altitude over time, comparing the M100 INS estimate, the Kalman filter estimates and the AprilTag GPS data.

Fig. 8. Estimated LOS components and distance to the tag (coordinates in the NED frame), as seen by the Kalman filter, the X3 camera and the BlueFOX camera.

Fig. 9. M100 attitude (roll, pitch and yaw) during the flight.

6. CONCLUSION

The problem of the automatic landing of a MAV on a moving vehicle was solved, with experimental tests going up to 50 km/h. A Proportional Navigation controller was used for the long range approach, which subsequently transitioned into a PID controller at close range. A Kalman filter was used to estimate the position of the MAV relative to the landing pad by fusing together measurements from the MAV's onboard INS, a visual fiducial marker and a mobile phone. Furthermore, it was shown that this system can be implemented using only commercial off-the-shelf components.

Future work may include using a better multiscale visual fiducial on the landing pad to allow visual target tracking at longer and closer ranges using a single camera, or simplifying the system by removing the requirement for a mobile phone. Performance improvements could also be achieved by adding a model of the ground vehicle's turbulence or by adding wind estimation to the control system.

REFERENCES

Garone, E., Determe, J.F., and Naldi, R. (2014). Generalized traveling salesman problem for carrier-vehicle systems. AIAA Journal of Guidance, Control, and Dynamics, 37(3), 766–774.

Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F., and Marín-Jiménez, M. (2014). Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6), 2280–2292.
Gautam, A., Sujit, P.B., and Saripalli, S. (2015). Application of guidance laws to quadrotor landing. In Unmanned Aircraft Systems (ICUAS), 2015 International Conference on, 372–379.

Groves, P.D. (2013). Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. Artech House, 2nd edition.

Holt, R. and Beard, R. (2010). Vision-based road-following using proportional navigation. Journal of Intelligent and Robotic Systems, 57(1-4), 193–216.

Kabamba, P.T. and Girard, A.R. (2014). Fundamentals of Aerospace Navigation and Guidance. Cambridge University Press.

Kim, J., Jung, Y., Lee, D., and Shim, D.H. (2014). Outdoor autonomous landing on a moving platform for quadrotors using an omnidirectional camera. In Unmanned Aircraft Systems (ICUAS), 2014 International Conference on, 1243–1252.

Kolodny, L. (2016). Mercedes-Benz and Matternet unveil vans that launch delivery drones. http://tcrn.ch/2c48his. (Accessed on 09/09/2016).

Lange, S., Sunderhauf, N., and Protzel, P. (2009). A vision based onboard approach for landing and position control of an autonomous multirotor UAV in GPS-denied environments. In Advanced Robotics, 2009. ICAR 2009. International Conference on, 1–6.

Lardinois, F. (2016). Ford and DJI launch $100,000 developer challenge to improve drone-to-vehicle communications. tcrn.ch/1O7uOFF. (Accessed on 09/28/2016).

Lin, C.E. and Yang, S.K. (2014). Camera gimbal tracking from UAV flight control. In Automatic Control Conference (CACS), 2014 CACS International, 319–322.

Ling, K. (2014). Precision Landing of a Quadrotor UAV on a Moving Target Using Low-cost Sensors. Master's thesis, University of Waterloo.

Mathew, N., Smith, S.L., and Waslander, S.L. (2015). Planning paths for package delivery in heterogeneous multirobot teams. IEEE Transactions on Automation Science and Engineering, 12(4), 1298–1308.

Muskardin, T., Balmer, G., Wlach, S., Kondak, K., Laiacker, M., and Ollero, A. (2016). Landing of a fixed-wing UAV on a mobile ground vehicle. In 2016 IEEE International Conference on Robotics and Automation (ICRA), 1237–1242.

Olson, E. (2011). AprilTag: A robust and flexible visual fiducial system. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, 3400–3407.

Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., and Ng, A.Y. (2009). ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software.

Tan, R. and Kumar, M. (2014). Tracking of ground mobile targets by quadrotor unmanned aerial vehicles. Unmanned Systems, 2(02), 157–173.

Yang, S., Ying, J., Lu, Y., and Li, Z. (2015). Precise quadrotor autonomous landing with SRUKF vision perception. In 2015 IEEE International Conference on Robotics and Automation (ICRA), 2196–2201.

Zhou, G., Fang, L., Tang, K., Zhang, H., Wang, K., and Yang, K. (2015). Guidance: A visual sensing platform for robotic applications. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 9–14.