arXiv:1401.6904v3 [cs.RO] 27 Apr 2015

Adaptive Visual Tracking for Robotic Systems Without Image-Space Velocity Measurement

Hanlei Wang

Abstract

In this paper, we investigate the visual tracking problem for robotic systems without image-space velocity measurement, simultaneously taking into account the uncertainties of the camera model and of the manipulator kinematics and dynamics. We propose a new image-space observer that exploits the image-space velocity information contained in the unknown kinematics, upon which we design an adaptive controller that does not use the image-space velocity signal, where the adaptations of the depth-rate-independent kinematic parameters and of the depth parameters are driven by both the image-space tracking errors and the observation errors. The major advantage of the proposed observer-based adaptive controller lies in its simplicity and in the separation of the handling of the multiple uncertainties in visually servoed robotic systems, which avoids the overparametrization problem of the existing work. Using Lyapunov analysis, we demonstrate that the image-space tracking errors converge to zero asymptotically. The performance of the proposed adaptive control scheme is illustrated by a numerical simulation.

Index Terms

Visual tracking, adaptive control, uncertain depth, manipulator.

I. INTRODUCTION

It is generally believed that the incorporation of versatile sensory information (e.g., the information provided by joint position/velocity sensors, tip force/torque sensors, and vision systems) into the control system is an important aspect of intelligent robots. Mimicking the action of human beings, more and more manipulators are equipped with cameras to monitor their status and further to perform visual servoing so that the system can achieve certain robustness

The author is with the Science and Technology on Space Intelligent Control Laboratory, Beijing Institute of Control Engineering, Beijing 100190, China (e-mail: hlwang.bice@gmail.com).
March 12, 2018 DRAFT

against model uncertainties (see, e.g., [1], [2]). Many results in the past years have been devoted to the visual servoing problem [1], [3], [2], [4], [5], [6], [7], [8], [9], [10]. Visual servoing control schemes can in general be grouped into two classes (see, e.g., [2]). The first class (e.g., [1], [5], [9]) is known as position-based visual servoing, which simply takes the camera as a specific task-space sensor, i.e., the end-effector position/velocity information is obtained from the camera. One possible disadvantage of this scheme, as is frequently stated in the literature (e.g., [2], [7]), is the requirement of precise and extensive calibration. The second class (e.g., [4], [6], [7]) is known as image-based visual servoing, which directly utilizes the information of the concerned object in the image space and does not require the calibration of the camera. The advantage of image-based visual servoing is now well known, i.e., the possible errors in establishing and calibrating the camera model are avoided. As a standard control methodology, adaptive control has been shown to be adept at treating model uncertainties and promising for achieving high performance [11]. Since the late 1980s, numerous adaptive controllers for robot manipulators taking into account the nonlinear robot dynamics have been proposed (e.g., [12], [13], [14]), and these controllers are all based on the linearity-in-parameters property of the manipulator dynamic model. The recent studies in [15], [16], [17], [18] show how the linearity-in-parameters feature of the manipulator kinematics can be exploited for performing adaptive tracking/regulation control in the presence of kinematic uncertainties.
An interesting property of a visually servoed robotic system (with a fixed camera) is that if the depth of the feature point with respect to the camera frame is unknown but constant, the overall kinematics of the system describing the mapping from the joint space to the image space is linearly parameterized [15]. This desirable feature of the overall kinematics, unfortunately, no longer holds when the unknown depth is time varying, since the depth acts as the denominator in the overall kinematics [7], [19], [20], [21]. By exploiting the respective linearity-in-parameters properties of the depth and of the depth-independent interaction matrix, adaptive strategies are developed in [7], [19], [20], [21], [22], [23] to handle the uncertain camera parameters. In particular, the adaptive visual tracking problem is resolved in [19], and adaptive solutions to the visual regulation problem are given in [21], [23], by designing appropriate control and adaptation laws to accommodate the uncertainties in the manipulator dynamics and kinematics and in the camera model. However, one possible limitation of the above results dealing with the tracking problem is the requirement of image-space velocity measurement in the control input. One may notice that in the adaptive regulation algorithms given in [21], [23], the control inputs do not need the image-space velocity measurement, yet the parameter adaptation laws do use the image-space velocity signal, and in addition their extension to the more challenging tracking problem remains unclear. Also note that if the approach in [17] is applied to the visual tracking problem with constant depth, the image-space velocity can indeed be avoided in the kinematic parameter adaptation, yet the control will still require the availability of the image-space velocity. The image-space velocity is commonly obtained by the standard numerical differentiation of the image-space position information.
It is well recognized that this velocity signal tends to be very noisy, due in part to the relatively long processing time or delays of the image information, and thus it is undesirable to use image-space velocities in the control. One possible solution is given in [24], extending the result in [15] to the case of time-varying uncertain depth. The limitation of [24] lies in three aspects: 1) if we further accommodate the uncertain dynamics based on [24], overparametrization and even nonlinear parametrization problems will occur (due to the presence of the uncertain depth in the denominator of an unknown term to be compensated for; refer to [24, equation (22)]), and additionally the separation of the kinematic and dynamic uncertainties is impossible; 2) the determination of the controller parameters relies on some a priori knowledge of the system model; 3) it requires high control activity to accommodate the variation of the depth, due to the velocity-dependent feedback gain (which means that undesirable high-gain feedback is demanded when the manipulator moves at a high velocity). Hence, the best result that can be achieved by the scheme in [24] is still conservative. Other adaptive control schemes appear in [25], [26], [27], where cascade-framework-based control schemes are proposed in [26], [27], and an observer-based controller is proposed in [25], which achieves the image-space trajectory tracking of electrically driven robots with the desired armature current not involving the image-space velocity. The results in [25], [26], [27], in contrast to [24], take into consideration the uncertain robot kinematics and dynamics.
Nevertheless, the results in [25], [27] can only deal with the case that the depth is constant, and the controller given in [26] needs to obtain the end-effector position with respect to the manipulator base frame so as to perform the kinematic parameter estimation (refer to [26, equation (21)]), which means that it is not a completely image-based visual servoing scheme but a combination of image-based and position-based schemes, thus demanding elaborate calibration and tending to be vulnerable to modeling errors. Moreover, the SDU factorization adopted in [26] (some detailed analysis appears in [28]) results in complexity in both the controller design and the stability analysis. Another limitation of [26] may be the requirement of the persistent excitation (PE) of the kinematic regressor (see the proof of Theorem 3 in [28]). In our opinion, the separation of the handling of the multiple uncertainties of the system is highly preferable, and its benefits include the avoidance of overparametrization, the simplification of the control scheme, and consequently better performance of the closed-loop system. Following this idea, in this paper, we propose an observer-based adaptive control scheme for visual tracking with time-varying depth (unlike the control schemes in [25], [27] that can only handle the constant-depth case) and with uncertain manipulator kinematics and dynamics. The proposed adaptive controller avoids the measurement of the image-space velocity and realizes the separation of the handling of three categories of parameter uncertainties. Using a depth-dependent quasi-Lyapunov function, we show the convergence of the image-space tracking errors.
In contrast to the velocity-dependent-gain feedback and the overparametrization problem in [24], our control scheme employs constant-gain feedback, takes into account the uncertain manipulator dynamics and kinematics in addition to the uncertain camera model, and achieves the separation of the handling of the depth, depth-rate-independent kinematic, and dynamic parameter uncertainties (avoiding the overparametrization or even the nonlinear parametrization). Moreover, the elaborate calibration and the vulnerability to model uncertainties of [26] (due to the kinematic parameter estimation) are overcome by the proposed completely image-based servoing controller, and additionally, the PE condition associated with the kinematic regressor in [26] is not demanded in the proposed control scheme.

II. KINEMATICS AND DYNAMICS

In this paper, we consider a visually servoed robotic system consisting of an $n$-DOF (degree-of-freedom) manipulator and a fixed pinhole uncalibrated camera (see, e.g., [29]), where the manipulator end-effector motion is mapped to the image space by the camera, and it is assumed that the number of feature points is $m$. The fact that the camera is not calibrated means that the extrinsic and intrinsic parameters of the camera are uncertain. Let $x_i \in \mathbb{R}^2$ (with the unit being pixel) represent the position of the projection of the $i$-th feature point on the image plane, and let $r_i \in \mathbb{R}^3$ denote the position of the $i$-th feature point with respect to the base frame of the manipulator, $i = 1, \ldots, m$.
Via the image Jacobian matrix [2] or the interaction matrix [1], the relationship between the image-space velocity $\dot{x}_i$ and the feature-point velocity $\dot{r}_i$ can be written as [7]
$$\dot{x}_i = \frac{1}{z_i(q)}\left(\bar{D} - x_i d_3^T\right)\dot{r}_i \quad (1)$$
where $z_i(q) \in \mathbb{R}$ denotes the depth of the $i$-th feature point with respect to the camera frame, $\bar{D} \in \mathbb{R}^{2\times 3}$ and $d_3 \in \mathbb{R}^3$ are taken from $D = [\bar{D}^T, d_3]^T$, which is the left $3\times 3$ portion of the perspective projection matrix, $N_i(x_i) = \bar{D} - x_i d_3^T \in \mathbb{R}^{2\times 3}$ is called the depth-independent interaction matrix in [7], $i = 1, \ldots, m$, and $q \in \mathbb{R}^n$ denotes the joint position of the manipulator. In addition, it should be noted that $z_i(q) = d_3^T r_i + d_0$ with $d_0$ being a constant and $\dot{z}_i(q) = d_3^T \dot{r}_i$ (see also [7]), and it is assumed that $z_i(q)$ is uniformly positive, $i = 1, \ldots, m$. Equation (1) can be rewritten in the following compact form
$$\dot{x} = Z^{-1}(q) N(x)\dot{r} \quad (2)$$
where $x = [x_1^T, \ldots, x_m^T]^T$, $r = [r_1^T, \ldots, r_m^T]^T$, $Z(q) = \mathrm{diag}[z_1(q) I_2, \ldots, z_m(q) I_2]$ with $I_2$ being the $2\times 2$ identity matrix, and $N(x) = \mathrm{diag}[N_1(x_1), \ldots, N_m(x_m)]$. Let $v_0 \in \mathbb{R}^3$ denote the translational velocity of a reference point on the end-effector with respect to the manipulator base frame and $\omega_0 \in \mathbb{R}^3$ the angular velocity of the end-effector with respect to the manipulator base frame, which relate to the joint velocity $\dot{q}$ as [30], [31]
$$\begin{bmatrix} v_0 \\ \omega_0 \end{bmatrix} = J_r(q)\dot{q} \quad (3)$$
where $J_r(q) \in \mathbb{R}^{6\times n}$ denotes the manipulator Jacobian matrix. The relationship between the velocity of the $m$ feature points $\dot{r}$ and the manipulator joint velocity $\dot{q}$ can be written as [20] (see also [2], [30], [31])
$$\dot{r} = \underbrace{\begin{bmatrix} I_3 & -S(c_1) \\ \vdots & \vdots \\ I_3 & -S(c_m) \end{bmatrix}}_{J_f} J_r(q)\dot{q} \quad (4)$$
where $I_3$ is the $3\times 3$ identity matrix and $c_i \in \mathbb{R}^3$ is the position vector of the $i$-th feature point with respect to the reference point on the manipulator end-effector, $i = 1, \ldots, m$.
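As a concrete illustration of the projection model behind (1), the following sketch (an illustration under assumed values, not taken from the paper; the projection matrix `P` and the test point are hypothetical numbers) projects a 3-D feature point through a pinhole model and verifies numerically that the analytic image-space velocity $(1/z_i)(\bar{D} - x_i d_3^T)\dot{r}_i$ matches a finite difference of the projection:

```python
import numpy as np

# Hypothetical 3x4 perspective projection matrix (made-up numbers);
# D = [D_bar^T, d3]^T is its left 3x3 block, d0 the last entry of the third row.
P = np.array([[900.0, 0.0, 45.0, 10.0],
              [0.0, 900.0, 65.0, 20.0],
              [0.0, 0.0, 1.0, 5.0]])
D_bar, d3, d0 = P[:2, :3], P[2, :3], P[2, 3]

def project(r):
    """Pixel position and depth of a 3-D point r, with z = d3^T r + d0."""
    z = d3 @ r + d0
    return (D_bar @ r + P[:2, 3]) / z, z

def image_velocity(r, r_dot):
    """Eq. (1): x_dot = (1/z) (D_bar - x d3^T) r_dot."""
    x, z = project(r)
    N = D_bar - np.outer(x, d3)   # depth-independent interaction matrix N_i(x_i)
    return N @ r_dot / z

# The analytic image velocity matches a finite difference of the projection:
r, r_dot, h = np.array([0.3, -0.2, 1.0]), np.array([0.05, 0.02, -0.01]), 1e-6
fd = (project(r + h * r_dot)[0] - project(r)[0]) / h
assert np.allclose(image_velocity(r, r_dot), fd, atol=1e-4)
```

The finite-difference check mirrors the derivation of (1): differentiating $x_i = (\bar{D} r_i + \text{const})/z_i$ with $\dot{z}_i = d_3^T \dot{r}_i$ yields exactly the depth-independent interaction matrix form.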
The skew-symmetric form $S(b)$ is defined as
$$S(b) = \begin{bmatrix} 0 & -b_3 & b_2 \\ b_3 & 0 & -b_1 \\ -b_2 & b_1 & 0 \end{bmatrix}$$
for a 3-dimensional vector $b = [b_1, b_2, b_3]^T$. The combination of (2) and (4) gives rise to the overall kinematic equation [19], [20], [21], i.e.,
$$\dot{x} = Z^{-1}(q)\underbrace{N(x) J_f J_r(q)}_{J(q,x)}\dot{q} \quad (5)$$
where $J(q,x)$ is a Jacobian matrix that does not depend on the depth (also referred to as the depth-independent image Jacobian matrix in [20]). The exploitation of the structure of (1) allows $J(q,x)$ to be decomposed as
$$J(q,x) = \underbrace{(I_m \otimes \bar{D}) J_f J_r(q)}_{J_z^{\perp}(q)} - X\underbrace{(I_m \otimes d_3^T) J_f J_r(q)}_{J_z(q)} \quad (6)$$
where $I_m$ is the $m\times m$ identity matrix, $X = \mathrm{diag}[x_i, i = 1, \ldots, m]$, $\otimes$ denotes the Kronecker product [32], $J_z^{\perp}(q)$ is a Jacobian matrix that maps the joint velocity $\dot{q}$ to a plane parallel to the image plane, and $J_z(q)$ is a Jacobian matrix that describes the relationship between the changing rate of the depth vector $z(q) = [z_1(q), \ldots, z_m(q)]^T$ and $\dot{q}$ (see, e.g., [7]), i.e.,
$$\dot{z}(q) = J_z(q)\dot{q}. \quad (7)$$
It is worth remarking that the second term on the right side of (6) exists due to the variation of the depth vector $z(q)$, while the first one is independent of the variation of $z(q)$. Therefore, $J_z^{\perp}(q)$ is called the depth-rate-independent Jacobian matrix. We now make the following assumption.

Assumption 1: The number of the manipulator DOFs and that of the feature points satisfy the constraint that $n \ge 2m$ and $m \le 3$, and the three feature points are non-collinear in the case $m = 3$. Furthermore, for $\forall u = [u_1^T, \ldots, u_m^T]^T$ with $u_i \in \mathbb{R}^2$, $i = 1, \ldots, m$, the rank of $N(u) J_f$ is $2m$.

Remark 1:¹ From [7, Proposition 1], we obtain that $\mathrm{rank}[N_i(u_i)] = 2$, $\forall i$.
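A quick numerical sanity check of the skew-symmetric form $S(b)$ and of the stacked matrix $J_f$ in (4) (a minimal sketch with made-up velocities and offsets, not the paper's code):

```python
import numpy as np

def skew(b):
    """S(b) as defined in the paper: S(b) @ v equals the cross product b x v."""
    return np.array([[0.0, -b[2], b[1]],
                     [b[2], 0.0, -b[0]],
                     [-b[1], b[0], 0.0]])

def J_f(cs):
    """Stack the [I_3  -S(c_i)] blocks of (4) for the m feature offsets c_i."""
    return np.vstack([np.hstack([np.eye(3), -skew(c)]) for c in cs])

# r_i_dot = v0 + omega0 x c_i should coincide with J_f [v0; omega0]:
v0, w0 = np.array([0.1, -0.2, 0.3]), np.array([0.0, 0.5, -0.1])
cs = [np.array([0.2, 0.0, 0.1]), np.array([-0.1, 0.3, 0.0])]
r_dot = J_f(cs) @ np.concatenate([v0, w0])
for i, c in enumerate(cs):
    assert np.allclose(r_dot[3*i:3*i+3], v0 + np.cross(w0, c))
assert np.allclose(skew(v0).T, -skew(v0))   # skew symmetry
```

The check exploits $-S(c_i)\omega_0 = -c_i \times \omega_0 = \omega_0 \times c_i$, i.e., the standard rigid-body velocity of a point fixed on the end-effector.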
Next, we discuss the rank of $N(u) J_f$ for $m = 1$, $m = 2$, and $m = 3$, respectively. 1) In the case $m = 1$, it is straightforward to obtain that $J_f$ has full row rank and thus $\mathrm{rank}[N(u) J_f] = 2$ (see also [7], [20]). 2) In the case $m = 2$, the rank of $N(u) J_f$ is equal to that of the matrix
$$J_f^T N^T(u) = \begin{bmatrix} I_3 & I_3 \\ S(c_1) & S(c_2) \end{bmatrix}\begin{bmatrix} N_1^T(u_1) & 0_{3\times 2} \\ 0_{3\times 2} & N_2^T(u_2) \end{bmatrix}.$$
Now consider the following linear equation with $\mu_1, \mu_2 \in \mathbb{R}^3$ being the unknowns
$$\begin{bmatrix} I_3 & I_3 \\ S(c_1) & S(c_2) \end{bmatrix}\begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix} = 0. \quad (8)$$
As is well known, the rank of the skew-symmetric matrix $S(b)$ is 2 for $\forall b \neq 0$, and therefore the rank of $S(c_2 - c_1)$ is 2, which leads us to obtain from the standard matrix theory that the rank of the coefficient matrix of (8) (and hence that of $J_f$) is 5. According to the standard theory of linear equations, the solutions of equation (8) constitute a one-dimensional space with elements of the form $[\mu_1^T, \mu_2^T]^T = k[c_1^T - c_2^T, c_2^T - c_1^T]^T$, where $k$ is an arbitrary constant. Let us now consider the following linear equation with $\lambda_i$, $i = 1, \ldots, 4$ being the unknowns
$$\begin{bmatrix} N_1^T(u_1) & 0_{3\times 2} \\ 0_{3\times 2} & N_2^T(u_2) \end{bmatrix}\begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_4 \end{bmatrix} = k\begin{bmatrix} c_1 - c_2 \\ c_2 - c_1 \end{bmatrix}. \quad (9)$$
If $c_1 - c_2$ is not in the intersection of the range spaces of $N_1^T(u_1)$ and $N_2^T(u_2)$, equation (9) has a solution only in the case that $k = 0$, and this solution is $\lambda_i = 0$, $i = 1, \ldots, 4$. Hence, the rank of $N(u) J_f$ is 4. 3) In the case $m = 3$, from the standard matrix theory, the rank of $J_f^T$ is equal to that of the following matrix (which is obtained by elementary row operations on $J_f^T$)
$$\begin{bmatrix} I_3 & I_3 & I_3 \\ 0_{3\times 3} & S(c_2 - c_1) & S(c_3 - c_1) \end{bmatrix}.$$
To determine the rank of this matrix, we have to identify that of $B = [S(c_2 - c_1) \ \ S(c_3 - c_1)]$.
Suppose that there is a nonzero vector $\mu \in \mathbb{R}^3$ such that $B^T\mu = 0$, which then means that $\mu$ is parallel to $c_2 - c_1$ and to $c_3 - c_1$ simultaneously. Obviously, this will not happen since the three feature points are non-collinear. Therefore, the rank of $B$ is 3 and consequently the rank of $J_f^T$ is 6. Then, we obtain from the standard theory of linear equations that the null space of $J_f^T N^T(u)$... more precisely, the null space of the coefficient matrix above is a set containing three independent basis vectors, whose elements can be expressed as $k_1[c_1^T - c_2^T, c_2^T - c_1^T, 0_3^T]^T + k_2[0_3^T, c_2^T - c_3^T, c_3^T - c_2^T]^T + k_3[c_1^T - c_3^T, 0_3^T, c_3^T - c_1^T]^T$ with $k_1$, $k_2$, and $k_3$ being arbitrary constants. Now consider the following linear equation with $\lambda_i$, $i = 1, \ldots, 6$ being the unknowns
$$\begin{bmatrix} N_1^T(u_1) & 0_{3\times 2} & 0_{3\times 2} \\ 0_{3\times 2} & N_2^T(u_2) & 0_{3\times 2} \\ 0_{3\times 2} & 0_{3\times 2} & N_3^T(u_3) \end{bmatrix}\begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_6 \end{bmatrix} = k_1\begin{bmatrix} c_1 - c_2 \\ c_2 - c_1 \\ 0_3 \end{bmatrix} + k_2\begin{bmatrix} 0_3 \\ c_2 - c_3 \\ c_3 - c_2 \end{bmatrix} + k_3\begin{bmatrix} c_1 - c_3 \\ 0_3 \\ c_3 - c_1 \end{bmatrix}. \quad (10)$$
If none of the nonzero elements of $\mathrm{span}\{c_1 - c_2, c_1 - c_3\}$ are in the range space of $N_1^T(u_1)$, none of the nonzero elements of $\mathrm{span}\{c_2 - c_3, c_2 - c_1\}$ are in the range space of $N_2^T(u_2)$, and none of the nonzero elements of $\mathrm{span}\{c_3 - c_1, c_3 - c_2\}$ are in the range space of $N_3^T(u_3)$, equation (10) has only the solution $\lambda_i = 0$, $i = 1, \ldots, 6$. Hence, the rank of $J_f^T N^T(u)$ in this case is 6.

¹The discussions on the cases of $m = 2$ and $m = 3$ are largely due to the constructive comments from one anonymous reviewer.

Remark 2: The rank of $N(u) J_f$ has been discussed in [20, p. 616]. Yet, the analysis there is neither complete nor rigorous for the cases $m = 2$ and $m = 3$. Here, it is demonstrated that $N(u) J_f$ has full row rank if the relative position vectors between the feature points in the manipulator base frame satisfy certain conditions.
The proof of the fact that $\mathrm{rank}(J_f) = 5$ for the case $m = 2$ and that $\mathrm{rank}(J_f) = 6$ for the case $m = 3$ has already been given in [20, p. 616], yet a different approach is used here to prove this fact. For more complete and detailed discussions as well as vivid explanations of the singularity issues associated with the case of three feature points (i.e., $m = 3$), please refer to [33]. We further make the following assumption to facilitate the controller design and stability analysis in the sequel.

Assumption 2: For $\forall u = [u_1^T, \ldots, u_m^T]^T$ with $u_i \in \mathbb{R}^2$, $i = 1, \ldots, m$, the matrix $J(q, u) = N(u) J_f J_r(q)$ has full row rank in the case that Assumption 1 holds.

Assumption 2 holds if the manipulator is away from singular configurations and the manipulator end-effector and the camera are in a nonsingular relative configuration. In fact, from Assumption 1, we know that $\mathrm{rank}[N(u) J_f] = 2m$. Since the manipulator is assumed to be away from singular configurations, we obtain $\mathrm{rank}[J_r(q)] = \min\{n, 6\} \ge 2m$. From [34, p. 210], the rank of $J(q, u)$ can be determined as $\mathrm{rank}[J(q, u)] = \mathrm{rank}[J_r(q)] - \dim[N^*(N(u) J_f) \cap R^*(J_r(q))]$, where $N^*(N(u) J_f)$ denotes the null space of $N(u) J_f$ and $R^*(J_r(q))$ the range space of $J_r(q)$. The vectors in the range space of $J_r(q)$ that denote the velocities of the feature points moving towards the pinhole of the camera obviously lie in the null space of $N(u) J_f$ since, physically, the image-space velocities corresponding to these vectors are zero. The assumption that the end-effector and the camera are in a nonsingular relative configuration ensures that the rank of $J(q, u)$ is the largest, i.e., only $\min\{n, 6\} - 2m$ basis vectors of $R^*(J_r(q))$ lie in the null space of $N(u) J_f$. Then, we obtain $\mathrm{rank}[J(q, u)] = \min\{n, 6\} - (\min\{n, 6\} - 2m) = 2m$.
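The full-row-rank claim of Assumption 1 can be probed numerically. The sketch below (illustrative only, with randomly drawn camera data; generic random vectors satisfy the range-space conditions of Remark 1 and the non-collinearity condition with probability one) checks that $\mathrm{rank}[N(u)J_f] = 2m$ for $m = 1, 2, 3$:

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(b):
    return np.array([[0.0, -b[2], b[1]],
                     [b[2], 0.0, -b[0]],
                     [-b[1], b[0], 0.0]])

def rank_NJf(cs, us, D_bar, d3):
    """Numerical rank of N(u) J_f for feature offsets c_i and image points u_i."""
    m = len(cs)
    Jf = np.vstack([np.hstack([np.eye(3), -skew(c)]) for c in cs])    # (3m) x 6
    N = np.zeros((2 * m, 3 * m))
    for i, u in enumerate(us):
        N[2*i:2*i+2, 3*i:3*i+3] = D_bar - np.outer(u, d3)             # N_i(u_i)
    return np.linalg.matrix_rank(N @ Jf)

# Generic data: the rank should be 2m for each admissible m.
D_bar, d3 = rng.standard_normal((2, 3)), rng.standard_normal(3)
for m in (1, 2, 3):
    cs = [rng.standard_normal(3) for _ in range(m)]   # non-collinear almost surely
    us = [rng.standard_normal(2) for _ in range(m)]
    assert rank_NJf(cs, us, D_bar, d3) == 2 * m
```

Such a spot check is no substitute for the analysis above, but it is a convenient way to detect degenerate feature-point configurations in practice.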
In the special case that $n \ge 6$, from [34, p. 220], we have $\mathrm{rank}[J(q, u)] = \mathrm{rank}[N(u) J_f] = 2m$, which implies that the nonsingular relative configuration is always ensured for $n \ge 6$. The overall kinematics (5) has the following property.

Property 1: The two quantities $Z(q)\psi$ and $\dot{Z}(q)\phi$ can be linearly parameterized [7], [19], i.e.,
$$Z(q)\psi = Y_z(q, \psi) a_z \quad (11)$$
$$\dot{Z}(q)\phi = \bar{Y}_z(q, \dot{q}, \phi) a_z \quad (12)$$
where $\psi = [\psi_1^T, \ldots, \psi_m^T]^T$ and $\phi = [\phi_1^T, \ldots, \phi_m^T]^T$ with $\psi_i \in \mathbb{R}^2$ and $\phi_i \in \mathbb{R}^2$, $i = 1, \ldots, m$, which also directly yields
$$\Phi J_z(q)\dot{q} = \dot{Z}(q)\phi = \bar{Y}_z(q, \dot{q}, \phi) a_z \quad (13)$$
where $\Phi = \mathrm{diag}[\phi_i, i = 1, \ldots, m]$, $a_z \in \mathbb{R}^{p_1}$ is the unknown depth parameter vector, and $Y_z(q, \psi) \in \mathbb{R}^{(2m)\times p_1}$ and $\bar{Y}_z(q, \dot{q}, \phi) \in \mathbb{R}^{(2m)\times p_1}$ are two regressor matrices. In addition, $J(q,x)\dot{q}$ can also be linearly parameterized [19], which gives
$$J_z^{\perp}(q)\dot{q} = Y_z^{\perp}(q, \dot{q}) a_z^{\perp} \quad (14)$$
where $a_z^{\perp} \in \mathbb{R}^{p_2}$ is the unknown depth-rate-independent kinematic parameter vector and $Y_z^{\perp}(q, \dot{q}) \in \mathbb{R}^{(2m)\times p_2}$ is the depth-rate-independent kinematic regressor matrix. Therefore, $J(q,x)\dot{q}$ can be parameterized as [by (13) and (14)]
$$J(q,x)\dot{q} = Y_z^{\perp}(q, \dot{q}) a_z^{\perp} - \bar{Y}_z(q, \dot{q}, x) a_z. \quad (15)$$
The equations of motion of the manipulator can be written as [11], [31]
$$M(q)\ddot{q} + C(q, \dot{q})\dot{q} + g(q) = \tau \quad (16)$$
where $M(q) \in \mathbb{R}^{n\times n}$ is the inertia matrix, $C(q, \dot{q}) \in \mathbb{R}^{n\times n}$ is the Coriolis and centrifugal matrix, $g(q) \in \mathbb{R}^n$ is the gravitational torque, and $\tau \in \mathbb{R}^n$ is the exerted joint torque. Three fundamental properties associated with the dynamics (16) that shall be useful for the subsequent controller design and stability analysis are listed as follows (see, e.g., [11], [31]).

Property 2: The inertia matrix $M(q)$ is symmetric and uniformly positive definite.
Property 3: The Coriolis and centrifugal matrix $C(q, \dot{q})$ can be suitably selected such that $\dot{M}(q) - 2C(q, \dot{q})$ is skew-symmetric.

Property 4: The dynamics (16) depend linearly on an unknown constant dynamic parameter vector $a_d \in \mathbb{R}^{p_3}$, and thus
$$M(q)\dot{\xi} + C(q, \dot{q})\xi + g(q) = Y_d(q, \dot{q}, \xi, \dot{\xi}) a_d \quad (17)$$
where $Y_d(q, \dot{q}, \xi, \dot{\xi}) \in \mathbb{R}^{n\times p_3}$ is the dynamic regressor matrix, $\xi \in \mathbb{R}^n$ is a differentiable vector, and $\dot{\xi}$ is the derivative of $\xi$ with respect to time.

III. OBSERVER-BASED ADAPTIVE TRACKING CONTROL

In this section, we investigate the adaptive visual tracking for robotic systems with time-varying depth and with uncertain kinematics and dynamics. We first develop an image-space observer, and then, based on this observer, we propose an adaptive tracking controller that does not involve image-space velocity measurement to realize asymptotic trajectory tracking in the image space, i.e., $x - x_d \to 0$ and $\dot{x} - \dot{x}_d \to 0$ as $t \to \infty$, where $x_d$ denotes the desired trajectory in the image space and we assume that $x_d$, $\dot{x}_d$, and $\ddot{x}_d$ are all bounded. The image-space observer is designed as
$$\dot{x}_o = \hat{Z}^{-1}(q)\hat{J}(q,x)\dot{q} - \frac{1}{2}\hat{Z}^{-1}(q)\dot{\hat{Z}}(q)(x_o - x_d) - \alpha(x_o - x) \quad (18)$$
where $x_o$ denotes the observed quantity of the image-space position, $\alpha$ is a positive design constant, $\hat{Z}(q)$ and $\dot{\hat{Z}}(q)$ are the estimates of $Z(q)$ and $\dot{Z}(q)$, respectively, which are obtained by replacing $a_z$ in $Z(q)$ and $\dot{Z}(q)$ with its estimate $\hat{a}_z$, and $\hat{J}(q,x)$ is the estimate of $J(q,x)$, which is obtained by replacing $a_z^{\perp}$ and $a_z$ in $J(q,x)$ with their estimates $\hat{a}_z^{\perp}$ and $\hat{a}_z$, respectively. The second term on the right side of (18) is employed to accommodate the variation of the depth.
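Discretized with a simple Euler step, the observer (18) can be sketched as follows (a minimal illustration, not the paper's implementation; all hatted quantities are assumed to be supplied by the parameter estimators, and the product $\hat{J}(q,x)\dot{q}$ is passed in directly as `Jhat_qdot`):

```python
import numpy as np

def observer_step(x_o, x, x_d, Z_hat, Z_hat_dot, Jhat_qdot, alpha, dt):
    """One Euler step of the image-space observer (18).

    Z_hat, Z_hat_dot: estimated depth matrix and its time derivative;
    Jhat_qdot: the product Jhat(q, x) @ q_dot.
    """
    Z_inv = np.linalg.inv(Z_hat)
    x_o_dot = (Z_inv @ Jhat_qdot
               - 0.5 * Z_inv @ Z_hat_dot @ (x_o - x_d)
               - alpha * (x_o - x))
    return x_o + dt * x_o_dot

# With a frozen estimate (Z_hat_dot = 0) and Jhat_qdot = 0, (18) reduces to
# x_o_dot = -alpha (x_o - x), so each step shrinks the observation error:
x_o, x, x_d = np.array([1.0, 1.0]), np.zeros(2), np.zeros(2)
x_o_next = observer_step(x_o, x, x_d, np.eye(2), np.zeros((2, 2)),
                         np.zeros(2), alpha=10.0, dt=0.01)
assert np.linalg.norm(x_o_next - x) < np.linalg.norm(x_o - x)
```

The reduced case in the comment isolates the role of the feedback term $-\alpha(x_o - x)$: it alone drives the observation error toward zero, while the first two terms inject the (estimated) kinematic velocity information.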
The closed-loop observer dynamics can be written as
$$\Delta\dot{x}_o = \hat{Z}^{-1}(q)\hat{J}(q,x)\dot{q} - Z^{-1}(q)J(q,x)\dot{q} - \frac{1}{2}\hat{Z}^{-1}(q)\dot{\hat{Z}}(q)(x_o - x_d) - \alpha\Delta x_o \quad (19)$$
where $\Delta x_o = x_o - x$ is the image-space observation error. Equation (19) can be further formulated as
$$Z(q)\Delta\dot{x}_o = \left[Z(q) - \hat{Z}(q)\right]\hat{Z}^{-1}(q)\hat{J}(q,x)\dot{q} + \hat{J}(q,x)\dot{q} - J(q,x)\dot{q} - \frac{1}{2}Z(q)\hat{Z}^{-1}(q)\dot{\hat{Z}}(q)(x_o - x_d) - \alpha Z(q)\Delta x_o. \quad (20)$$
Let us rewrite (20) as (by Property 1)
$$Z(q)\Delta\dot{x}_o + \frac{1}{2}\dot{Z}(q)(x_o - x_d) = -Y_z\!\left(q, \hat{Z}^{-1}(q)\hat{J}(q,x)\dot{q}\right)\Delta a_z + Y_z^{\perp}(q,\dot{q})\Delta a_z^{\perp} - \bar{Y}_z(q,\dot{q},x)\Delta a_z + \underbrace{\frac{1}{2}\dot{Z}(q)(x_o - x_d) - \frac{1}{2}Z(q)\hat{Z}^{-1}(q)\dot{\hat{Z}}(q)(x_o - x_d)}_{\Pi} - \alpha Z(q)\Delta x_o \quad (21)$$
where $\Delta a_z = \hat{a}_z - a_z$ and $\Delta a_z^{\perp} = \hat{a}_z^{\perp} - a_z^{\perp}$ are the depth and depth-rate-independent kinematic parameter estimation errors, respectively, and the term $\Pi$ can be interestingly written as (again by Property 1)
$$\Pi = \frac{1}{2}\left[\dot{Z}(q) - \dot{\hat{Z}}(q)\right](x_o - x_d) + \frac{1}{2}\left[\hat{Z}(q) - Z(q)\right]\hat{Z}^{-1}(q)\dot{\hat{Z}}(q)(x_o - x_d) = -\frac{1}{2}\bar{Y}_z(q,\dot{q},x_o - x_d)\Delta a_z + \frac{1}{2}Y_z\!\left(q, \hat{Z}^{-1}(q)\dot{\hat{Z}}(q)(x_o - x_d)\right)\Delta a_z. \quad (22)$$
In this way, equation (21) can be rewritten as
$$Z(q)\Delta\dot{x}_o + \frac{1}{2}\dot{Z}(q)(x_o - x_d) = -\alpha Z(q)\Delta x_o + Y_z^{\perp}(q,\dot{q})\Delta a_z^{\perp} - Y_z^{*}\Delta a_z \quad (23)$$
where the combined depth regressor $Y_z^{*}$ is defined by
$$Y_z^{*} = Y_z\!\left(q, \hat{Z}^{-1}(q)\hat{J}(q,x)\dot{q}\right) + \bar{Y}_z\!\left(q, \dot{q}, x + \frac{x_o - x_d}{2}\right) - \frac{1}{2}Y_z\!\left(q, \hat{Z}^{-1}(q)\dot{\hat{Z}}(q)(x_o - x_d)\right). \quad (24)$$
Next, we develop an adaptive controller based on the observed quantities generated by the observer (18); the kinematic equation (7) and the decomposition of $J(q,x)$ given by equation (6) will be exploited in the adaptive controller design.
Let us define a joint reference velocity as
$$\dot{q}_r = \big[\underbrace{\hat{J}(q, (x_o + x_d)/2)}_{\hat{J}^{*}}\big]^{+}\left[\hat{Z}(q)\dot{x}_r\right] \quad (25)$$
where $\hat{J}^{*+} = \hat{J}^{*T}(\hat{J}^{*}\hat{J}^{*T})^{-1}$ is the standard generalized inverse of the modified estimated Jacobian matrix $\hat{J}^{*}$ [which is obtained by replacing $a_z^{\perp}$ and $a_z$ in $J(q, (x_o + x_d)/2)$ with $\hat{a}_z^{\perp}$ and $\hat{a}_z$, respectively], and $\dot{x}_r = \dot{x}_d - \gamma(x_o - x_d)$ with $\gamma$ being a positive design constant. Differentiating (25) with respect to time gives the joint reference acceleration
$$\ddot{q}_r = \hat{J}^{*+}\left[\hat{Z}(q)\ddot{x}_r + \dot{\hat{Z}}(q)\dot{x}_r - \dot{\hat{J}}^{*}\dot{q}_r\right] + \left(I_n - \hat{J}^{*+}\hat{J}^{*}\right)\dot{\hat{J}}^{*T}\hat{J}^{*+T}\dot{q}_r \quad (26)$$
where the standard result concerning the time derivative of $\hat{J}^{*+}$ is used and $I_n$ is the $n\times n$ identity matrix. As can be clearly seen from (26), the variable $\ddot{q}_r$ does not involve the measurement of the image-space velocity $\dot{x}$.

Remark 3: The use of the modified estimated Jacobian matrix $\hat{J}^{*}$ instead of the estimated Jacobian matrix $\hat{J}(q,x)$ is to accommodate the effect of the time-varying depth and to avoid the image-space velocity measurement in deriving the joint reference acceleration.

Then, define a joint-space sliding vector
$$s = \dot{q} - \dot{q}_r. \quad (27)$$
Premultiplying both sides of (27) by $\hat{J}^{*}$ and exploiting Property 1 gives
$$\hat{J}^{*}s = \hat{J}(q,x)\dot{q} + \frac{1}{2}\dot{\hat{Z}}(q)(-x_o - x_d + 2x) - \hat{Z}(q)\dot{x}_r = Z(q)\left[\dot{x} - \dot{x}_d + \gamma(x_o - x_d)\right] + \frac{1}{2}\dot{Z}(q)(\Delta x - \Delta x_o) + Y_z^{\perp}(q,\dot{q})\Delta a_z^{\perp} - \big[\underbrace{\tfrac{1}{2}\bar{Y}_z(q,\dot{q},x_o + x_d) + Y_z(q,\dot{x}_r)}_{Y_z^{**}}\big]\Delta a_z \quad (28)$$
where $\Delta x = x - x_d$ is the image-space position tracking error. Now we propose the control law
$$\tau = -\hat{J}^{*T}K\hat{J}^{*}s + Y_d(q,\dot{q},\dot{q}_r,\ddot{q}_r)\hat{a}_d \quad (29)$$
where $K$ is a symmetric positive definite matrix and $\hat{a}_d$ is the estimate of $a_d$.
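The reference-velocity construction (25) can be sketched directly (an illustrative sketch with made-up matrices; `J_star` stands for $\hat{J}^{*}$ evaluated at $(x_o + x_d)/2$, supplied by the caller). The key property exercised below is that the right generalized inverse yields $\hat{J}^{*}\dot{q}_r = \hat{Z}(q)\dot{x}_r$ exactly whenever $\hat{J}^{*}$ has full row rank:

```python
import numpy as np

def joint_reference_velocity(J_star, Z_hat, x_d_dot, x_o, x_d, gamma):
    """Joint reference velocity (25): q_r_dot = (J*)^+ [Z_hat x_r_dot]."""
    x_r_dot = x_d_dot - gamma * (x_o - x_d)
    J_plus = J_star.T @ np.linalg.inv(J_star @ J_star.T)  # right generalized inverse
    return J_plus @ (Z_hat @ x_r_dot)

# Consistency check with a generic full-row-rank J_star (2m x n, m = 1, n = 3):
rng = np.random.default_rng(1)
J_star = rng.standard_normal((2, 3))
Z_hat = np.diag([5.0, 5.0])
x_d_dot, x_o, x_d = np.array([1.0, 0.0]), np.array([0.2, 0.1]), np.zeros(2)
q_r_dot = joint_reference_velocity(J_star, Z_hat, x_d_dot, x_o, x_d, gamma=10.0)
assert np.allclose(J_star @ q_r_dot, Z_hat @ (x_d_dot - 10.0 * (x_o - x_d)))
```

Note that $\dot{x}_r$ is built from the observed error $x_o - x_d$ rather than the measured tracking error, which is precisely what keeps the image-space velocity out of the control loop.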
The adaptation laws for the estimated parameters $\hat{a}_d$, $\hat{a}_z^{\perp}$, and $\hat{a}_z$ are given as
$$\dot{\hat{a}}_d = -\Gamma_d Y_d^T(q,\dot{q},\dot{q}_r,\ddot{q}_r)s \quad (30)$$
$$\dot{\hat{a}}_z^{\perp} = \Gamma_z^{\perp} Y_z^{\perp T}(q,\dot{q})(\Delta x - \Delta x_o) \quad (31)$$
$$\dot{\hat{a}}_z = -\Gamma_z\left(Y_z^{**T}\Delta x - Y_z^{*T}\Delta x_o\right) \quad (32)$$
where $\Gamma_d$, $\Gamma_z^{\perp}$, and $\Gamma_z$ are all symmetric positive definite matrices. Substituting the control law (29) into the manipulator dynamics (16) yields
$$M(q)\dot{s} + C(q,\dot{q})s = -\hat{J}^{*T}K\hat{J}^{*}s + Y_d(q,\dot{q},\dot{q}_r,\ddot{q}_r)\Delta a_d \quad (33)$$
where $\Delta a_d = \hat{a}_d - a_d$ is the dynamic parameter estimation error. The closed-loop behavior of the system can then be described by
$$\begin{cases} Z(q)\Delta\dot{x}_o + \frac{1}{2}\dot{Z}(q)(\Delta x_o + \Delta x) = -\alpha Z(q)\Delta x_o + Y_z^{\perp}(q,\dot{q})\Delta a_z^{\perp} - Y_z^{*}\Delta a_z, \\ Z(q)\Delta\dot{x} + \frac{1}{2}\dot{Z}(q)(\Delta x - \Delta x_o) = -\gamma Z(q)(x_o - x_d) - Y_z^{\perp}(q,\dot{q})\Delta a_z^{\perp} + Y_z^{**}\Delta a_z + \hat{J}^{*}s, \\ M(q)\dot{s} + C(q,\dot{q})s = -\hat{J}^{*T}K\hat{J}^{*}s + Y_d(q,\dot{q},\dot{q}_r,\ddot{q}_r)\Delta a_d \end{cases} \quad (34)$$
together with the parameter adaptation laws (30), (31), and (32). We are presently ready to formulate the following theorem.

Theorem 1: The observer (18), the control law (29), and the adaptation laws (30), (31), (32) for the visually servoed robotic system (5), (16) guarantee the convergence of the image-space tracking errors if $\alpha > \gamma/3$, i.e., $\Delta x \to 0$ and $\Delta\dot{x} \to 0$ as $t \to \infty$.

Proof: Following [13], [35], we consider the Lyapunov-like function candidate $V_1 = \frac{1}{2}s^T M(q)s + \frac{1}{2}\Delta a_d^T\Gamma_d^{-1}\Delta a_d$, whose time derivative along the trajectories of the third subsystem of (34) and (30) can be written as $\dot{V}_1 = -s^T\hat{J}^{*T}K\hat{J}^{*}s \le 0$ (exploiting Property 3), which implies that $s \in L_\infty$, $\hat{J}^{*}s \in L_2$, and $\hat{a}_d \in L_\infty$. The facts that $\hat{J}^{*}s \in L_2$ and that $Z(q)$ is uniformly positive definite yield the result that $\int_0^t s^T\hat{J}^{*T}Z^{-1}(q)\hat{J}^{*}s\,dr \le l_M$, $\forall t \ge 0$, for some positive constant $l_M$.
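A discrete-time sketch of the adaptation laws (30)-(32) (an Euler discretization for illustration only; the regressor matrices are passed in precomputed, and `Ystar`/`Ystar2` stand for the combined depth regressors $Y_z^{*}$ and $Y_z^{**}$):

```python
import numpy as np

def adaptation_step(a_d, a_perp, a_z, Yd, Yperp, Ystar, Ystar2,
                    s, dx, dx_o, Gd, Gperp, Gz, dt):
    """One Euler step of the adaptation laws (30)-(32)."""
    a_d_next = a_d - dt * Gd @ Yd.T @ s                          # (30)
    a_perp_next = a_perp + dt * Gperp @ Yperp.T @ (dx - dx_o)    # (31)
    a_z_next = a_z - dt * Gz @ (Ystar2.T @ dx - Ystar.T @ dx_o)  # (32)
    return a_d_next, a_perp_next, a_z_next

# Shape check with m = 1 (2-D image errors), n = 3 joints, p1 = 3, p2 = 2, p3 = 8:
rng = np.random.default_rng(2)
a_d, a_perp, a_z = np.zeros(8), np.zeros(2), np.zeros(3)
out = adaptation_step(a_d, a_perp, a_z,
                      rng.standard_normal((3, 8)),   # Yd:    n x p3
                      rng.standard_normal((2, 2)),   # Yperp: 2m x p2
                      rng.standard_normal((2, 3)),   # Ystar: 2m x p1
                      rng.standard_normal((2, 3)),   # Ystar2: 2m x p1
                      s=rng.standard_normal(3), dx=rng.standard_normal(2),
                      dx_o=rng.standard_normal(2),
                      Gd=300.0 * np.eye(8), Gperp=600.0 * np.eye(2),
                      Gz=0.2 * np.eye(3), dt=0.005)
assert [v.shape for v in out] == [(8,), (2,), (3,)]
```

Notice that (31) and (32) are driven only by the tracking error $\Delta x$ and the observation error $\Delta x_o$, never by $\dot{x}$, which is the image-space-velocity-free feature emphasized above.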
Let us consider the following depth-dependent nonnegative function
$$V_2 = \frac{1}{2}\Delta x_o^T Z(q)\Delta x_o + \frac{1}{2}\Delta x^T Z(q)\Delta x + \frac{1}{2}\Delta a_z^{\perp T}\Gamma_z^{\perp -1}\Delta a_z^{\perp} + \frac{1}{2}\Delta a_z^T\Gamma_z^{-1}\Delta a_z + \frac{1}{\gamma}\underbrace{\left[l_M - \int_0^t s^T\hat{J}^{*T}Z^{-1}(q)\hat{J}^{*}s\,dr\right]}_{\Pi^{*}} \quad (35)$$
where the employment of the term $\Pi^{*}$ follows the typical practice (see, e.g., [36, p. 118]). The time derivative of $V_2$ along the trajectories of the upper two subsystems of (34) can be written as
$$\dot{V}_2 = -\alpha\Delta x_o^T Z(q)\Delta x_o - \gamma\Delta x^T Z(q)(x_o - x_d) - (\Delta x - \Delta x_o)^T Y_z^{\perp}(q,\dot{q})\Delta a_z^{\perp} + \left(\Delta x^T Y_z^{**} - \Delta x_o^T Y_z^{*}\right)\Delta a_z + \Delta a_z^{\perp T}\Gamma_z^{\perp -1}\dot{\hat{a}}_z^{\perp} + \Delta a_z^T\Gamma_z^{-1}\dot{\hat{a}}_z + \Delta x^T\hat{J}^{*}s - \frac{1}{\gamma}s^T\hat{J}^{*T}Z^{-1}(q)\hat{J}^{*}s. \quad (36)$$
Substituting the adaptation laws (31) and (32) into (36) gives
$$\dot{V}_2 = -\alpha\Delta x_o^T Z(q)\Delta x_o - \gamma\Delta x^T Z(q)\Delta x - \gamma\Delta x^T Z(q)\Delta x_o + \Delta x^T\hat{J}^{*}s - \frac{1}{\gamma}s^T\hat{J}^{*T}Z^{-1}(q)\hat{J}^{*}s. \quad (37)$$
Using the following result obtained from the standard theory of inequalities
$$\Delta x^T\hat{J}^{*}s \le \frac{\gamma}{4}\Delta x^T Z(q)\Delta x + \frac{1}{\gamma}s^T\hat{J}^{*T}Z^{-1}(q)\hat{J}^{*}s,$$
we obtain from (37) that
$$\dot{V}_2 \le -\alpha\Delta x_o^T Z(q)\Delta x_o - \gamma\Delta x^T Z(q)\Delta x_o - \frac{3\gamma}{4}\Delta x^T Z(q)\Delta x = -\begin{bmatrix} \Delta x_o \\ \Delta x \end{bmatrix}^T\underbrace{\begin{bmatrix} \alpha Z(q) & (\gamma/2)Z(q) \\ (\gamma/2)Z(q) & (3\gamma/4)Z(q) \end{bmatrix}}_{H}\begin{bmatrix} \Delta x_o \\ \Delta x \end{bmatrix} \le 0 \quad (38)$$
since the matrix $H$ is uniformly positive definite under the condition $\alpha > \gamma/3$, according to the standard matrix theory. The inequality (38) as well as the definition of $V_2$ given by (35) yields the result that $\Delta x_o \in L_2 \cap L_\infty$, $\Delta x \in L_2 \cap L_\infty$, $\hat{a}_z^{\perp} \in L_\infty$, and $\hat{a}_z \in L_\infty$. If $\mathrm{rank}(\hat{J}^{*}) = 2m$, we obtain from the standard matrix theory that $\hat{J}^{*+}$ is bounded. Then, we obtain that $\dot{q}_r \in L_\infty$ from equation (25) since $\hat{Z}(q)$ is bounded and $\dot{x}_r \in L_\infty$. From the result that $s \in L_\infty$, we have that $\dot{q} \in L_\infty$. From (18), we have that $\dot{x}_o \in L_\infty$, which further gives rise to the result that $\ddot{x}_r \in L_\infty$.
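The gain condition $\alpha > \gamma/3$ of Theorem 1 can be checked numerically against the matrix $H$ in (38). The sketch below (an illustration assuming a constant scalar depth, $Z(q) = zI$, for a single feature point) confirms that $H$ loses positive definiteness exactly when the condition is violated:

```python
import numpy as np

def H_is_positive_definite(alpha, gamma, z=1.0, m=1):
    """Positive definiteness of H in (38) for the special case Z(q) = z I."""
    Z = z * np.eye(2 * m)
    H = np.block([[alpha * Z, (gamma / 2.0) * Z],
                  [(gamma / 2.0) * Z, (3.0 * gamma / 4.0) * Z]])
    return bool(np.all(np.linalg.eigvalsh(H) > 0.0))

# The 2x2 Schur-type test gives positive definiteness iff
# alpha * (3 gamma / 4) > (gamma / 2)^2, i.e., alpha > gamma / 3:
assert H_is_positive_definite(10.0, 10.0)        # alpha = 10 > 10/3, as in Sec. IV
assert not H_is_positive_definite(3.0, 10.0)     # alpha = 3 < 10/3
```

For a scalar depth $z$, the eigenvalues of $H$ are $z$ times those of the $2\times 2$ matrix $[\alpha, \gamma/2; \gamma/2, 3\gamma/4]$, whose determinant $3\alpha\gamma/4 - \gamma^2/4$ is positive exactly when $\alpha > \gamma/3$.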
From the adaptation laws (31) and (32), we have that $\dot{\hat{a}}_z^{\perp} \in L_\infty$ and $\dot{\hat{a}}_z \in L_\infty$, which means that $\dot{\hat{Z}}(q)$ and $\dot{\hat{J}}^{*}$ are bounded. Therefore, we obtain that $\ddot{q}_r \in L_\infty$ from (26). From (33), we obtain that $\dot{s} \in L_\infty$ since $M(q)$ is uniformly positive definite (by Property 2), which, together with the result that $\ddot{q}_r \in L_\infty$, yields the conclusion that $\ddot{q} \in L_\infty$. Then, from the kinematics (5) and its differentiation with respect to time, we obtain that $\dot{x} \in L_\infty$ and $\ddot{x} \in L_\infty$. We also obtain that $\ddot{x}_o \in L_\infty$ from the differentiation of equation (18). Then, we have that $\Delta\dot{x}_o \in L_\infty$, $\Delta\dot{x} \in L_\infty$, $\Delta\ddot{x}_o \in L_\infty$, and $\Delta\ddot{x} \in L_\infty$. Hence, $\Delta x_o$, $\Delta x$, $\Delta\dot{x}_o$, and $\Delta\dot{x}$ are all uniformly continuous. From the properties of square-integrable and uniformly continuous functions [36, p. 117], we obtain that $\Delta x_o \to 0$ and $\Delta x \to 0$ as $t \to \infty$. Then, from Barbalat's Lemma [11], we have that $\Delta\dot{x}_o \to 0$ and $\Delta\dot{x} \to 0$ as $t \to \infty$. □

Remark 4: The avoidance of the image-space velocity measurement is achieved at the kinematic level, which results in the separation of the handling of the kinematic and dynamic uncertainties. In addition, the cascaded feature of the closed-loop system facilitates the stability analysis.

Remark 5: 1) Compared with the results in [15], [19], [24], [26], the novel points of our result mainly lie in the proposed observer (18), the definition of the reference velocity (25), the image-space-velocity-free adaptation law (32), and the proposed depth-dependent quasi-Lyapunov function (35) as well as the associated stability analysis. The adaptation law (31) for updating $\hat{a}_z^{\perp}$ coincides with the one in [25]², [27], yet the results in [25], [27] are confined to the simpler case of constant depth.
The control law (29) and the dynamic parameter adaptation law (30) are basically the same as those in [15] (i.e., an extension of [13] to handle both the uncertain kinematics and dynamics), yet they employ a new estimated Jacobian matrix $\hat{J}^{\ast}$ and new reference velocity and acceleration. 2) The simplicity of the proposed control scheme is reflected in the facts that overparametrization in accommodating the uncertain dynamics is avoided, that constant-gain feedback is adopted (unlike the result in, e.g., [24]), and that the explicit measurement of the feature-point position with respect to the manipulator base frame is not required (in contrast with [26]).

Remark 6: The standard projection approach [37] can be applied to the adaptation laws (31) and (32) so that $\hat{J}^{\ast}$ has full row rank [this follows from the fact that $J(q, (x_o + x_d)/2)$ has full row rank according to Assumption 2] and $\hat{Z}(q)$ remains uniformly positive definite during the adaptation process (see also [19], [21]).

² The task-space observer and the desired armature current given in [25] (which deals with the adaptive control of electrically driven robots) lead us to believe that one can obtain the solution for rigid robots (a reduced case of electrically driven robots) from [25] and will find that the adaptation law (31) is in essence the same as that solution.

IV. SIMULATION RESULTS

In this section, we present simulation results to show the performance of the proposed observer-based adaptive controller. We consider a visually servoed robotic system that includes a typical three-DOF manipulator and a fixed camera, as shown in Fig. 1, and the number of feature points under consideration is set to one. The focal length of the camera is set as $f = 0.15$ m, and the two scaling factors of the camera are set to the same value $\beta = 900.0$.
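For concreteness, camera parameters of this kind enter a standard pinhole projection: a feature point at depth $Z$ with camera-frame coordinates $(X, Y)$ maps to image coordinates offset by $\beta f X/Z$ and $\beta f Y/Z$ from the principal point. The sketch below is an illustration under stated assumptions, not the paper's camera model of equation (5); in particular, the principal point $(45, 65)$ and the axis conventions are hypothetical choices made here:

```python
def project(p_cam, f=0.15, beta=900.0, principal=(45.0, 65.0)):
    """Standard pinhole projection (illustrative sketch).

    p_cam: feature-point coordinates (X, Y, Z) in the camera frame, Z > 0
    being the depth. f is the focal length (m) and beta the pixel scaling
    factor; the principal point is an assumed value, not from the paper.
    Returns pixel coordinates (u, v).
    """
    X, Y, Z = p_cam
    u = principal[0] + beta * f * X / Z
    v = principal[1] + beta * f * Y / Z
    return (u, v)

# A point 5 m deep on the optical axis maps to the principal point:
print(project((0.0, 0.0, 5.0)))    # (45.0, 65.0)
# An off-axis point; the image offset scales inversely with the depth Z:
print(project((0.1, -0.2, 5.0)))   # approximately (47.7, 59.6)
```

The inverse dependence on the unknown depth $Z$ is what makes the interaction matrix depth-dependent and motivates the separate depth-parameter adaptation in the proposed scheme.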
The three axes of the camera frame (denoted by $X_C$, $Y_C$, and $Z_C$, respectively) are assumed to be aligned with the axes $Y_0$, $Z_0$, and $X_0$ of the manipulator base frame, respectively, yet there is an offset $d_C = 5.0$ m along the axis $Z_C$ between the origins of the two frames. The lengths of the three links of the manipulator are $l_1 = 2.0$ m, $l_2 = 2.0$ m, and $l_3 = 2.0$ m. The mass and inertia properties of the manipulator are not listed due to space limitations. The sampling period is chosen to be 5 ms. The controller parameters are set as $K = 0.001 I_2$, $\alpha = 10.0$, $\gamma = 10.0$, $\Gamma_d = 300.0 I_8$, $\Gamma_z^{\perp} = 600.0 I_2$, and $\Gamma_z = 0.2 I_3$. The initial estimates of the kinematic parameters (including the camera parameters) are chosen as $\hat{l}_2(0) = \hat{l}_3(0) = 3.0$ m, $\hat{d}_C(0) = 3.0$ m, $\hat{f}(0) = 0.1$ m, and $\hat{\beta}(0) = 700.0$. The initial estimate of the dynamic parameter vector is chosen as $\hat{a}_d(0) = [0_6^T, 15, 0]^T$. The desired trajectory in the image space is given as $x_d = [45 + 20\cos(\pi t/3),\ 65 + 20\sin(\pi t/3)]^T$.

The simulation results are shown in Fig. 2 and Fig. 3. From Fig. 2, we see that the image-space position tracking errors indeed converge to zero asymptotically. Fig. 3 shows the responses of the actual and estimated depths during the motion of the manipulator; the estimated depth tends to approach the actual depth. Although the depth estimation error does not converge, asymptotic image-space trajectory tracking is still achieved.

Fig. 1. Three-DOF manipulator with a fixed camera.

Fig. 2. Image-space position tracking errors.

Fig. 3. Actual and estimated depths.

V. CONCLUSION

In this paper, we have examined the visual tracking problem for robotic systems with uncertain camera model and uncertain manipulator kinematics and dynamics, where the image-space velocity is assumed to be unavailable. To achieve visual tracking without image-space velocity measurement, we propose a novel image-space observer and an adaptive controller based on the observed quantities, which yield a cascaded closed-loop robotic system. Using a depth-dependent quasi-Lyapunov function together with the standard Lyapunov-like function for analyzing the Slotine and Li adaptive controller, we demonstrate that the image-space tracking errors converge to zero. We also show the asymptotic convergence of the image-space observation errors. A simulation is conducted to illustrate the performance of the proposed observer-based adaptive controller.

ACKNOWLEDGMENT

The author would like to thank the anonymous reviewers and the Associate Editor of Automatica for their helpful comments on the paper.

REFERENCES

[1] B. Espiau, F. Chaumette, and P. Rives, "A new approach to visual servoing in robotics," IEEE Transactions on Robotics and Automation, vol. 8, no. 3, pp. 313–326, Jun. 1992.
[2] S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, Oct. 1996.
[3] K. Hashimoto, T. Ebine, and H. Kimura, "Visual servoing with hand-eye manipulator—optimal control approach," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 766–774, Oct. 1996.
[4] R. Kelly, R. Carelli, O. Nasisi, B. Kuchen, and F. Reyes, "Stable visual servoing of camera-in-hand robotic systems," IEEE/ASME Transactions on Mechatronics, vol. 5, no. 1, pp. 39–48, Mar. 2000.
[5] E. Malis and F. Chaumette, "Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods," IEEE Transactions on Robotics and Automation, vol. 18, no. 2, pp. 176–186, Apr. 2002.
[6] A. Astolfi, L. Hsu, M. S. Netto, and R. Ortega, "Two solutions to the adaptive visual servoing problem," IEEE Transactions on Robotics and Automation, vol. 18, no. 3, pp. 387–392, Jun. 2002.
[7] Y.-H. Liu, H. Wang, C. Wang, and K. K. Lam, "Uncalibrated visual servoing of robots using a depth-independent interaction matrix," IEEE Transactions on Robotics, vol. 22, no. 4, pp. 804–817, Aug. 2006.
[8] G. Hu, W. MacKunis, N. Gans, W. E. Dixon, J. Chen, A. Behal, and D. Dawson, "Homography-based visual servo control with imperfect camera calibration," IEEE Transactions on Automatic Control, vol. 54, no. 6, pp. 1318–1324, Jun. 2009.
[9] G. Hu, N. Gans, and W. Dixon, "Quaternion-based visual servo control in the presence of camera calibration error," International Journal of Robust and Nonlinear Control, vol. 20, no. 5, pp. 489–503, Mar. 2010.
[10] S. S. Mehta, V. Jayaraman, T. F. Burks, and W. E. Dixon, "Teach by zooming: A unified approach to visual servo control," Mechatronics, vol. 22, no. 4, pp. 436–443, Jun. 2012.
[11] J.-J. E. Slotine and W. Li, Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[12] J. J. Craig, P. Hsu, and S. S. Sastry, "Adaptive control of mechanical manipulators," The International Journal of Robotics Research, vol. 6, no. 2, pp. 16–28, Jun. 1987.
[13] J.-J. E. Slotine and W. Li, "On the adaptive control of robot manipulators," The International Journal of Robotics Research, vol. 6, no. 3, pp. 49–59, Sep. 1987.
[14] ——, "Composite adaptive control of robot manipulators," Automatica, vol. 25, no. 4, pp. 509–519, Jul. 1989.
[15] C. C. Cheah, C. Liu, and J.-J. E. Slotine, "Adaptive tracking control for robots with unknown kinematic and dynamic properties," The International Journal of Robotics Research, vol. 25, no. 3, pp. 283–296, Mar. 2006.
[16] ——, "Adaptive Jacobian tracking control of robots with uncertainties in kinematic, dynamic and actuator models," IEEE Transactions on Automatic Control, vol. 51, no. 6, pp. 1024–1029, Jun. 2006.
[17] D. Braganza, W. E. Dixon, D. M. Dawson, and B. Xian, "Tracking control for robot manipulators with kinematic and dynamic uncertainty," in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005, Seville, Spain, 2005, pp. 5293–5297.
[18] W. E. Dixon, "Adaptive regulation of amplitude limited robot manipulators with uncertain kinematics and dynamics," IEEE Transactions on Automatic Control, vol. 52, no. 3, pp. 488–493, Mar. 2007.
[19] C. C. Cheah, C. Liu, and J.-J. E. Slotine, "Adaptive vision based tracking control of robots with uncertainty in depth information," in Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 2007, pp. 2817–2822.
[20] H. Wang, Y.-H. Liu, and D. Zhou, "Dynamic visual tracking for manipulators using an uncalibrated fixed camera," IEEE Transactions on Robotics, vol. 23, no. 3, pp. 610–617, Jun. 2007.
[21] C. C. Cheah, C. Liu, and J.-J. E. Slotine, "Adaptive Jacobian vision based control for robots with uncertain depth information," Automatica, vol. 46, no. 7, pp. 1228–1233, Jul. 2010.
[22] X. Li and C. C. Cheah, "Adaptive regional feedback control of robotic manipulator with uncertain kinematics and depth information," in Proceedings of the American Control Conference, Montréal, Canada, 2012, pp. 5472–5477.
[23] X. Liang, H. Wang, W. Chen, and Y.-H. Liu, "Uncalibrated image-based visual servoing of rigid-link electrically driven robotic manipulators," Asian Journal of Control, vol. 16, no. 3, pp. 714–728, May 2014.
[24] H. Wang, Y.-H. Liu, and W. Chen, "Uncalibrated visual tracking control without visual velocity," IEEE Transactions on Control Systems Technology, vol. 18, no. 6, pp. 1359–1370, Nov. 2010.
[25] C. Liu, C. C. Cheah, and J.-J. E. Slotine, "Adaptive Jacobian tracking control of rigid-link electrically driven robots based on visual task-space information," Automatica, vol. 42, no. 9, pp. 1491–1501, Sep. 2006.
[26] A. C. Leite, A. R. L. Zachi, F. Lizarralde, and L. Hsu, "Adaptive 3D visual servoing without image velocity measurement for uncertain manipulators," in 18th IFAC World Congress, Milano, Italy, 2011, pp. 14584–14589.
[27] H. Wang, "Cascaded framework for adaptive tracking of robotic systems with uncertain kinematics and dynamics," in Proceedings of the International Conference on Electric Information and Control Engineering, Lushan, China, 2012, pp. 356–359.
[28] F. Lizarralde, A. C. Leite, L. Hsu, and R. R. Costa, "Adaptive visual servoing scheme free of image velocity measurement for uncertain robot manipulators," Automatica, vol. 49, no. 5, pp. 1304–1309, May 2013.
[29] D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2012.
[30] J. J. Craig, Introduction to Robotics: Mechanics and Control, 3rd ed. Upper Saddle River, NJ: Prentice-Hall, 2005.
[31] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control. New York: John Wiley & Sons, Inc., 2006.
[32] J. W. Brewer, "Kronecker products and matrix calculus in system theory," IEEE Transactions on Circuits and Systems, vol. CAS-25, no. 9, pp. 772–781, Sep. 1978.
[33] H. Michel and P. Rives, "Singularities in the determination of the situation of a robot effector from the perspective view of 3 points," INRIA, Research Report 1850, 1993.
[34] C. D. Meyer, Matrix Analysis and Applied Linear Algebra. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2000.
[35] R. Ortega and M. W. Spong, "Adaptive motion control of rigid robots: A tutorial," Automatica, vol. 25, no. 6, pp. 877–888, Nov. 1989.
[36] R. Lozano, B. Brogliato, O. Egeland, and B. Maschke, Dissipative Systems Analysis and Control: Theory and Applications. London: Springer-Verlag, 2000.
[37] P. A. Ioannou and J. Sun, Robust Adaptive Control. Englewood Cliffs, NJ: Prentice-Hall, 1996.