Image Processing in Optical Guidance for Autonomous Landing of Lunar Probe

Ding Meng 1, Cao Yun-feng 2, Wu Qing-xian 1, and Zhang Zhen 1
1 School of Automatic Engineering, NanJing University of Aeronautics and Astronautics, E-mail: nuaa_dm@hotmail.com
2 Academy of Frontier Science, NanJing University of Aeronautics and Astronautics, E-mail: cyfac@nuaa.edu.cn

Abstract: Because of the communication delay between the earth and the moon, the GNC technology of a lunar probe is becoming more important than ever. Current navigation technology cannot provide precise motion estimation for the probe landing control system; computer vision offers a new approach to this problem. In this paper, the authors introduce an image processing algorithm for computer-vision navigation during autonomous landing of a lunar probe. The purpose of the algorithm is to detect and track feature points, which are the inputs of the navigation algorithm. Firstly, fixation areas are detected as sub-images and matched. Secondly, feature points are extracted from the sub-images and tracked. Computer simulation demonstrates that the algorithm requires little computation and meets the requirements of the navigation algorithm.

Key words: Fixation Area, Feature Point, Template Match, FPs Tracking

I. INTRODUCTION

In paper [1], the status of China's deep space exploration plan is introduced, including the CE-1 lunar orbiter and China's subsequent Lunar Exploration Program. An important goal of the second stage of China's lunar exploration is to land a probe accurately on the moon's surface, so the guidance-navigation-control (GNC) technology of the moon probe is becoming more important than ever. Because of the communication delay induced by the large distance between the earth and the moon, human operators can hardly guide a probe to land safely on the moon; the probe will have to rely on on-board sensors and algorithms. However, current navigation technology cannot provide precise motion estimation for the probe landing control system [2]. Therefore, new motion estimation technology must be developed, and computer vision may offer an approach to this problem.

Manuscript received September 1, 2007. Ding Meng received B.E. and M.E. degrees from the School of Automatic Engineering of NanJing University of Aeronautics and Astronautics in 2003 and 2006. He is currently working toward a Ph.D. at the same university. His interests are computer vision and pattern classification. Phone: 86-025-84896392. Cao Yun-feng received a Ph.D. from the School of Automatic Engineering of NanJing University of Aeronautics and Astronautics in 2005. He is now a professor in the Academy of Frontier Science of NanJing University of Aeronautics and Astronautics. His interests are flight control systems and intelligent control. Phone: 86-025-84890902.

Up to now, computer vision has been used successfully for autonomous navigation of robots and Micro Air Vehicles [3], autonomous landing of unmanned helicopters [4], and intelligent traffic [5]. Some applications are being researched in the field of space exploration. The Near Earth Asteroid Rendezvous (NEAR) mission, which rendezvoused with asteroid 433 Eros in February 2000, used optical navigation extensively for orbit determination and small-body 3-D modeling [6]. The Deep Space 1 mission, part of the New Millennium Program, flew an autonomous optical navigation technology demonstration; the DS-1 AutoOpNav system used onboard centroiding of reference asteroids for autonomous navigation during small-body fly-bys [7].
However, for accurate probe landing, a new, more robust and more precise vision algorithm should be developed. Image processing is the first step and an essential part of computer vision navigation. Generally, image processing for vision navigation is focused on feature detection and tracking. In 1997, Kawaguchi, from Japan, created a method in which several fixed reference windows and a 2-D fast Fourier transform (FFT) algorithm were used for correlation tracking [8]. In many cases, flat areas must be chosen for landing safety; Safe Landing Area Determination (SLAD), proposed by A. Johnson from the Jet Propulsion Laboratory (JPL), achieves this task successfully [9]. However, there are few or no distinctive features on such landing sites, so although tracking by feature points is not ideal there, the feature points have to be filtered carefully in order to avoid this problem. Point tracking is the usual approach in vision navigation. In 1999, Andrew E. Johnson, from JPL, presented a method based on automatic feature point (FP) tracking between a pair of consecutive images [2]. As a part of MUSES-C, the Japanese scientist Toshihiko Misu offered an algorithm based on fixation areas, which fixates on areas with locally varying intensity, employs the shading pattern as a fixation area, and then tracks the center of the fixation area [10]. Real-time image processing is an essential requirement in computer vision navigation. Toshihiko's method requires less computation than Johnson's; nevertheless, the number of fixation-area centers can hardly satisfy the requirements of the subsequent navigation algorithm. This paper introduces an image processing algorithm using feature point tracking for probe autonomous landing, built on the methods of Johnson and Toshihiko. The procedure is as follows (Fig. 1):

Step 1. Extract fixation areas as templates and sub-images.
Step 2. Match templates and detect FPs in the sub-images.
Step 3. Match FPs.

In Section 2, the fixation area detection algorithm is proposed. Feature point detection and tracking in the sub-images are explained in Section 3 and Section 4. Computer simulations are presented in Section 5 to demonstrate the image processing algorithm. In Section 6 we state conclusions.

Fig. 1 Procedure of the algorithm

II. FIXATION AREA DETECTION

In artificial environments, edges and vertices are usually used as features for image tracking, but in natural environments such shapes cannot be found easily. Paper [11] proposed the use of ground control points based on the Gabor function for image registration of natural terrain. In paper [10], a shading pattern is employed as a fixation area; to reduce computation, averaging, Laplacian filtering and variance are used. In this paper, an improved version of Toshihiko Misu's method is first used to pre-process the original image and obtain sub-images. The rationale for this step is that block matching is more robust than point matching: we can match blocks first in the entire image and then match points within the sub-images. The template for block matching is extracted as follows [10] (a minimal sketch of the matching step appears after this list):

Step 1. Enhance a specific spatial wavelength of the original image with a 2-D band-pass filter.
Step 2. Calculate local variances of the filtered image to evaluate "contrast".
Step 3. Extract high local-variance areas as templates.
Step 4. Least-square block matching: the matching is performed so as to minimize the square error of the intensity between the template and the region to be matched.
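To make Step 4 concrete, the following is a minimal sketch in Python/NumPy (an illustration, not the on-board implementation) of least-square block matching: the template slides over a small search window around its previous position, and the offset minimizing the sum of squared intensity differences (SSD) is returned. The function name match_template and the search_radius parameter are ours, introduced only for illustration.

```python
import numpy as np

def match_template(image, template, top_left, search_radius=8):
    """Least-square block matching: slide `template` around its previous
    position `top_left` in `image` and return the offset (dy, dx) that
    minimizes the sum of squared intensity differences (SSD)."""
    th, tw = template.shape
    y0, x0 = top_left
    template = template.astype(np.float64)
    best_ssd, best_offset = np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + th > image.shape[0] or x + tw > image.shape[1]:
                continue  # candidate block falls outside the image
            block = image[y:y + th, x:x + tw].astype(np.float64)
            ssd = np.sum((block - template) ** 2)
            if ssd < best_ssd:
                best_ssd, best_offset = ssd, (dy, dx)
    return best_offset, best_ssd
```

In the overall procedure this search would be run once per template and per new frame; a template that drifts out of the image is replaced, as described in Section 2.4 below.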
2.1 Band-Pass Filter

The concept of filtering comes from the Fourier transform for signal processing in the frequency domain. Image processing is concerned with filtering operations performed on the pixels of a digital image; such a filter is called a spatial filter. In general, spatial filters can be divided into smoothing filters and sharpening filters. First, the original image is processed by a smoothing spatial filter to reduce noise. A smoothing spatial filter is a type of low-pass filter (LPF) and includes linear filters, such as averaging filters, and nonlinear filters, such as median filters. To reduce the consumption of computational power and storage, we use averaging with sub-sampling as the LPF, which differs from an ordinary averaging filter: an averaging filter does not change the size of the original image and therefore costs more time. The averaging equation is

$$E_s(u,v) = \frac{1}{S_x S_y}\sum_{i=0}^{S_x-1}\sum_{j=0}^{S_y-1} E(S_x u + i,\; S_y v + j) \qquad (2.1)$$

where $E_s(u,v)$ is the intensity of the averaged and sub-sampled image at $(u,v)$. The size of this image is $1/(S_x S_y)$ of the size of the original image, and $(S_x, S_y)$ is the sub-sampling interval.

Secondly, the 8-neighbor Laplacian is used as the high-pass filter. The Laplacian mask is:

$$\begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$

2.2 Variance Map

To capture locally varying intensity, a variance map is calculated: the statistical variance within a window whose size equals that of the template is computed as

$$V(u,v) = \frac{S_x S_y}{W_x W_y}\sum_{i=0}^{w_x-1}\sum_{j=0}^{w_y-1}\{E_L(u+i,\,v+j)\}^2 \;-\; \left\{\frac{S_x S_y}{W_x W_y}\sum_{i=0}^{w_x-1}\sum_{j=0}^{w_y-1} E_L(u+i,\,v+j)\right\}^2$$

where $V(u,v)$ is the local variance of the Laplacian-filtered image, $W_x \times W_y$ is the size of the template, $E_L(u,v)$ is the intensity of the Laplacian-filtered image at $(u,v)$, and $w_x = W_x/S_x$, $w_y = W_y/S_y$.

2.3 Template Extraction

Templates for tracking are extracted according to the variance map: high local-variance points are selected as templates. Two points deserve attention:
a. The points chosen as templates should not be near the edge of the variance map, so that they can still be tracked in the next frame.
b. Some additional templates that are not near the already-extracted ones should also be extracted, because a cluster of templates, or a single template, would be weak against observation noise, which includes tracking error and error from the range measurement.

2.4 Template Matching

To track the fixation areas, we employ least-square block matching: the matching minimizes the square error of the intensity between the template and the region to be matched. The number of templates is kept constant; if any template moves out of the image and can no longer be tracked, a new template is extracted to replace it.
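The pre-processing chain of Sections 2.1-2.3 can be prototyped as follows. This is a sketch under stated assumptions: a grayscale image stored as a NumPy array, edge padding for the Laplacian, and a Manhattan-distance separation rule for rule (b). The function names and the margin/min_sep parameters are our own illustrative choices, not details taken from [10]; the numeric defaults ($S_x = S_y = 5$, a 4x4 variance window) follow the simulation settings reported in Section V.

```python
import numpy as np

def average_subsample(img, sx=5, sy=5):
    """Eq. (2.1): average over non-overlapping Sx-by-Sy blocks, which both
    smooths (LPF) and shrinks the image by 1/(Sx*Sy)."""
    h, w = img.shape[0] // sy * sy, img.shape[1] // sx * sx
    blocks = img[:h, :w].reshape(h // sy, sy, w // sx, sx).astype(np.float64)
    return blocks.mean(axis=(1, 3))

def laplacian8(img):
    """8-neighbor Laplacian used as the high-pass filter (edge padding assumed)."""
    k = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float64)
    padded = np.pad(img.astype(np.float64), 1, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def variance_map(el, wx=4, wy=4):
    """Local variance of the Laplacian-filtered image over a wx-by-wy window:
    V = mean(E_L^2) - (mean(E_L))^2.  The mean over the sub-sampled window
    equals the SxSy/(WxWy) normalisation in the text, since w = W/S."""
    rows, cols = el.shape
    vmap = np.zeros((rows - wy + 1, cols - wx + 1))
    for u in range(vmap.shape[0]):
        for v in range(vmap.shape[1]):
            win = el[u:u + wy, v:v + wx]
            vmap[u, v] = np.mean(win ** 2) - np.mean(win) ** 2
    return vmap

def select_templates(vmap, n=5, margin=10, min_sep=20):
    """Rules (a) and (b) of Section 2.3: take the n highest-variance points,
    skipping points near the map edge and points too close (Manhattan
    distance, in variance-map pixels) to already-selected templates."""
    order = np.dstack(np.unravel_index(np.argsort(vmap, axis=None)[::-1], vmap.shape))[0]
    chosen = []
    for y, x in order:
        if y < margin or x < margin or y >= vmap.shape[0] - margin or x >= vmap.shape[1] - margin:
            continue
        if all(abs(y - cy) + abs(x - cx) >= min_sep for cy, cx in chosen):
            chosen.append((int(y), int(x)))
        if len(chosen) == n:
            break
    return chosen
```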
III. FP DETECTION FROM FIXATION AREAS

To the best of our knowledge, several real-time FP detection (FPD) algorithms have been described in the literature; for example, FPD based on the Harris corner finding algorithm has been employed by the ASSET-2 motion segmentation and shape tracking system [12]. After detecting the fixation area, paper [10] takes the center of the fixation area as the feature point, which we do not consider a good choice. In this paper, an FPD algorithm is instead used to detect feature points inside the fixation area, and those points are used for matching.

We select and improve the FPD algorithm of Benedetti and Perona [14], which is based on the work of Tomasi and Kanade [13]. There are two reasons for selecting this algorithm: a) it reduces the complexity of Tomasi's method by eliminating the need for transcendental arithmetic operations and square-root calculations; b) it lends itself to tracking across several frames.

It is difficult to detect and track an individual point because of the presence of noise and distortion; for this reason, patches of pixels are considered instead. Let $I(x,y,t)$ denote the continuous function defining the brightness values of a sequence of images captured by a camera, and let $I_x(x,y,t)$ and $I_y(x,y,t)$ denote its partial derivatives with respect to $x$ and $y$. The method proposed in paper [14] for detecting FPs can be expressed simply as follows.

Step 1. Compute the image gradient at each pixel location:

$$I_x(x,y,t) = \frac{I(x+1,y) - I(x-1,y)}{2} \qquad (3.1)$$

$$I_y(x,y,t) = \frac{I(x,y+1) - I(x,y-1)}{2} \qquad (3.2)$$

Step 2. Define the matrix $G$:

$$G = \begin{bmatrix} \sum_{W_1}(I_x(x,y,t))^2 & \sum_{W_1} I_x(x,y,t)\,I_y(x,y,t) \\ \sum_{W_1} I_x(x,y,t)\,I_y(x,y,t) & \sum_{W_1}(I_y(x,y,t))^2 \end{bmatrix} = \begin{bmatrix} a & b \\ b & c \end{bmatrix} \qquad (3.3)$$

where $W_1$ denotes the feature patch of pixels around the pixel $(x,y)$.

Step 3. Compute $P_{\lambda_t}(i,j)$:

$$P_{\lambda_t}(i,j) = (a - \lambda_t)(c - \lambda_t) - b^2 \qquad (3.4)$$

Step 4. Retain the pixel if $P_{\lambda_t}(i,j) > 0$ and $a(i,j) > \lambda_t$, where $\lambda_t$ is a threshold value.

Step 5. Perform a final non-maximum suppression step on $\lambda_1(i,j)$, yielding the actual feature locations.

A sketch of this detection test is given at the end of this section. Since the fixation area is an area with locally varying intensity, good features for tracking are expected to be abundant within it. At present, there are two strategies for feature point search: exhaustive search and random search. The first is the traditional feature point detection approach, which exhaustively evaluates every pixel of the image. Suppose that N feature points are needed for motion estimation, which is the purpose of the search. The random search strategy selects a pixel at random from the image; if the selected pixel has an interest value greater than a predetermined threshold, it is accepted as a feature point, and the procedure is repeated until N feature points have been detected. Compared with exhaustive search, random search is faster. In this paper we use exhaustive search to obtain enough FPs, because the sub-images are small.
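The following is a compact sketch of the detection test of (3.1)-(3.4), in Python/NumPy, for illustration only. Gradients are computed by central differences, the entries $a$, $b$, $c$ of $G$ are accumulated over the patch $W_1$ by a box sum, and pixels passing the test of Step 4 are kept. One simplification should be noted: the non-maximum suppression of Step 5 is applied here to $P_{\lambda_t}$ rather than to the smaller eigenvalue $\lambda_1$, since $P_{\lambda_t}$ is already available. The parameter names and defaults (a 3x3 patch for $W_1$, $\lambda_t = 1500$, following Section V) are our reading of the text, not a verified reference implementation.

```python
import numpy as np

def detect_features(img, patch_radius=1, lam_t=1500.0):
    """Eigenvalue-free corner test of (3.1)-(3.4): keep pixels where
    P = (a - lam_t)(c - lam_t) - b^2 > 0 and a > lam_t (i.e. both eigenvalues
    of G exceed lam_t), then apply 3x3 non-maximum suppression on P."""
    img = img.astype(np.float64)
    # (3.1)-(3.2): central-difference image gradients
    ix = np.zeros_like(img)
    iy = np.zeros_like(img)
    ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box_sum(arr, r):
        """Sum of arr over the (2r+1)x(2r+1) patch centered at each pixel."""
        p = np.pad(arr, r, mode='constant')
        s = np.zeros_like(arr)
        for di in range(2 * r + 1):
            for dj in range(2 * r + 1):
                s += p[di:di + arr.shape[0], dj:dj + arr.shape[1]]
        return s

    # (3.3): entries a, b, c of G accumulated over the patch W1
    a = box_sum(ix * ix, patch_radius)
    b = box_sum(ix * iy, patch_radius)
    c = box_sum(iy * iy, patch_radius)
    # (3.4) and Step 4: acceptance test without computing eigenvalues
    p = (a - lam_t) * (c - lam_t) - b * b
    accept = (p > 0) & (a > lam_t)
    # Step 5 (simplified): keep only local maxima among accepted pixels
    feats = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            if accept[i, j] and p[i, j] == p[i - 1:i + 2, j - 1:j + 2].max():
                feats.append((i, j))
    return feats
```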
IV. FEATURE POINT TRACKING

The next step is to locate, in the second frame, the features detected in the first frame; this procedure is called feature tracking. Feature tracking algorithms fall into two groups: correlation-based methods and optical-flow-based methods. Correlation-based methods (for example, RANSAC-based matching) require more computation and are appropriate when the motion of features in the image is large. In this paper, we use Tomasi-Kanade feature tracking, which is an optical-flow-based method [15]. When the camera moves, the patterns of image intensities change in a complex way; however, images taken at nearby time instants are usually strongly related to each other, because they show the same scene from only slightly different viewpoints. Hence

$$I(x, y, t) = I(x + d_x,\; y + d_y,\; t + \tau) \qquad (4.1)$$

Using a Taylor-series expansion, we obtain

$$I(x + d_x,\; y + d_y,\; t + \tau) \approx I(x, y, t) + I_x d_x + I_y d_y + I_t \tau \qquad (4.2)$$

From (4.1) and (4.2):

$$I_x d_x + I_y d_y + I_t = 0 \qquad (4.3)$$

Equation (4.3) is called the optical flow constraint equation. If every pixel (N pixels in total) in the feature window has the same displacement, the constraints can be stacked:

$$\begin{bmatrix} I_x^1 & I_y^1 \\ \vdots & \vdots \\ I_x^N & I_y^N \end{bmatrix} \begin{bmatrix} d_x \\ d_y \end{bmatrix} = -\begin{bmatrix} I_t^1 \\ \vdots \\ I_t^N \end{bmatrix} \qquad (4.4)$$

The displacement of the window is then obtained by least squares:

$$\begin{bmatrix} \sum_{k=1}^{N}(I_x^k)^2 & \sum_{k=1}^{N} I_x^k I_y^k \\ \sum_{k=1}^{N} I_x^k I_y^k & \sum_{k=1}^{N}(I_y^k)^2 \end{bmatrix} \begin{bmatrix} d_x \\ d_y \end{bmatrix} = -\begin{bmatrix} \sum_{k=1}^{N} I_x^k I_t^k \\ \sum_{k=1}^{N} I_y^k I_t^k \end{bmatrix} \qquad (4.5)$$

i.e.,

$$G d = e \qquad (4.6)$$

and the displacement is

$$d = G^{-1} e \qquad (4.7)$$
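The least-squares solve of (4.5)-(4.7) reduces to a 2x2 linear system per feature window. The sketch below (Python/NumPy, illustration only) shows this core step under the assumption that the spatial gradients and the frame-to-frame temporal difference over one window are already available as arrays; the function name is ours. The full tracker of [15] iterates this solve and re-samples the window between iterations; the one-shot solve here is only the inner step.

```python
import numpy as np

def window_displacement(ix, iy, it):
    """Eqs. (4.4)-(4.7): given the spatial gradients ix, iy and the temporal
    difference it over one feature window (same-shaped arrays), solve the
    2x2 normal equations G d = e for the window displacement d = (dx, dy)."""
    ix, iy, it = (a.astype(np.float64).ravel() for a in (ix, iy, it))
    g = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    e = -np.array([np.sum(ix * it), np.sum(iy * it)])
    # d = G^{-1} e; G should be well conditioned because the window was
    # accepted by the feature detector of Section III
    return np.linalg.solve(g, e)
```

Note that $G$ here is the same matrix used by the detector of Section III, so windows accepted as features are exactly those for which this system is well conditioned.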
V. COMPUTER SIMULATIONS AND RESULTS

This section presents the results of computer simulations of the algorithm expounded above. Fig. 2(a)(b)(c) shows the original images of three consecutive frames.

Fig. 2 Original images of consecutive frames

Fig. 3(a) is the LPF image of Fig. 2(a), with $S_x = S_y = 5$. Fig. 3(b) is the Laplacian-filtered image. Fig. 3(c) is the variance map (4x4) of Fig. 2(a). Fig. 3(d) shows the extracted areas on the original image.

Fig. 3 Processed images

Fig. 4(a)-(e) shows five templates (20x20) extracted from Fig. 2(a).

Fig. 4 Templates (sub-images)

Fig. 5 illustrates the result of template matching: the white squares mark the positions of the sub-images from the three frames in the first frame, and the arrows indicate the direction of template motion.

Fig. 5 Template matching

Fig. 6 illustrates the result of FP detection, with $W_1 = 3$ and threshold 1500. Fig. 7 illustrates the result of FP tracking within the sub-image, and Fig. 8 shows the result of FP tracking in the whole image.

Fig. 6 FP detection
Fig. 7 FP tracking within a sub-image
Fig. 8 FP tracking in the whole image

VI. CONCLUSIONS

Computer simulation demonstrates that enough FPs can be obtained correctly and in real time. Because the FPs are extracted from the sub-images, the FP tracking algorithm requires less computation than other methods; at the same time, the search region for FP tracking becomes smaller, so the reliability of FP matching becomes higher. The algorithm should be improved in the future: the distance between different feature points should be enlarged, so that the result of motion estimation becomes more accurate.

REFERENCES

[1] Ye Peijian, Peng Jing. Deep Space Exploration and Prospect in China. Engineering Science, vol. 8, no. 10, Oct. 2006.
[2] Larry H. Matthies and Andrew E. Johnson. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration. 5th International Symposium on Artificial Intelligence and Automation in Space, Noordwijk, June 1999, pp. 627-634.
[3] Scott M. Ettinger, Michael C. Nechyba, and Peter Ifju. Vision-Guided Flight Stability and Control for Micro Air Vehicles. IEEE Int. Conf. on Intelligent Robots and Systems, 2002, pp. 2134-2140.
[4] Srikanth Saripalli, James F. Montgomery, and Gaurav S. Sukhatme. Vision-Based Autonomous Landing of an Unmanned Aerial Vehicle. Proceedings of the IEEE International Conference on Robotics and Automation, 2002, pp. 2799-2804.
[5] Cucchiara R., Grana C., Piccardi M., et al. Statistic and Knowledge-Based Moving Object Detection in Traffic Scenes. Proceedings of IEEE Intelligent Transportation Systems, 2000.
[6] J. K. Miller et al. Navigation Analysis for Eros Rendezvous and Orbital Phases. Journal of the Astronautical Sciences, vol. 43, no. 4, pp. 453-476, 1995.
[7] R. Szeliski and S. B. Kang. Recovering 3-D Shape and Motion from Image Streams Using Non-Linear Least Squares. Journal of Visual Communication and Image Representation, vol. 5, no. 1, pp. 10-28, March 1994.
[8] Kawaguchi, J., Hashimoto, T., Kubota, T., Sawai, S., and Fujii, G. Autonomous Optical Guidance and Navigation Strategy Around a Small Body. AIAA Journal of Guidance, Control, and Dynamics, vol. 20, no. 5 (Sept.-Oct. 1997), pp. 1010-1017.
[9] A. Johnson, A. Klumpp, J. Collier, and Aron Wolf. LIDAR-Based Hazard Avoidance for Safe Landing on Mars. AAS/AIAA Space Flight Mechanics Meeting, Santa Barbara, CA, February 2001.
[10] Toshihiko Misu, Tatsuaki Hashimoto, Keiken Ninomiya. Optical Guidance for Autonomous Landing of Spacecraft. IEEE Transactions on Aerospace and Electronic Systems, vol. 35, no. 2, April 1999.
[11] Zheng, Q., and Chellappa, R. A Computational Approach to Image Registration. IEEE Transactions on Image Processing, vol. 2, no. 3 (July 1993), pp. 311-326.
[12] S. Smith and J. Brady. ASSET-2: Real-Time Motion Segmentation and Shape Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 8, pp. 814-820, 1995.
[13] Jianbo Shi and Carlo Tomasi. Good Features to Track. IEEE Conf. on Computer Vision and Pattern Recognition, 1994, pp. 593-600.
[14] A. Benedetti and P. Perona. Real-Time 2-D Feature Detection on a Reconfigurable Computer. IEEE Conf. on Computer Vision and Pattern Recognition, 1998, pp. 586-593.
[15] Tiziano Tommasini, Andrea Fusiello, et al. Making Good Features Track Better. IEEE Conf. on Computer Vision and Pattern Recognition, Santa Barbara, USA, 1998, pp. 178-182.