
STROJNÍCKY ČASOPIS, 54, 2003, č. 2

A CMAC neural network approach to redundant manipulator kinematics control


YANGMIN LI*, SIO HONG LEONG

This paper studies the inverse kinematics of redundant manipulators using a neural network method. The conventional analytical approach to this problem is the Jacobian pseudoinverse algorithm. It is effective and can resolve the redundancy to satisfy additional constraints; however, its computational cost makes it unsuitable for real-time control. Recently, neural networks have been widely used in robotic control because they are fast, fault-tolerant, and able to learn. In this paper, a CMAC (Cerebellar Model Articulation Controller) neural network for solving the inverse kinematics problem in real time is presented. Simulations show that the CMAC neural network performs well in the inverse kinematics control of redundant manipulators.

K e y w o r d s : redundant manipulator, CMAC neural network, kinematics

1. Introduction

Robotic manipulators are widely used across industries for tasks such as welding, painting, assembly, nuclear material handling, and space and sea exploration. Controlling a manipulator involves trajectory planning, inverse kinematics, and inverse dynamics; this paper focuses on the inverse kinematics of redundant manipulators. Kinematically redundant manipulators attract research interest because their redundancy can be exploited not only to avoid singularities and obstacles, but also to enhance the working ability in a limited workspace and to provide fault tolerance. The inverse kinematics problem is to determine the joint variables corresponding to a given end-effector position and orientation. Given a desired workspace trajectory, finding the corresponding joint-space trajectory is a complex problem, since redundant manipulators have more degrees of freedom (DOF) than necessary and admit multiple or infinitely many solutions. Moreover, the inverse kinematics equations are usually nonlinear, and closed-form solutions are difficult to find.
Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Av. Padre Tomás Pereira S. J., Taipa, Macao S.A.R., P. R. China
* corresponding author, e-mail: YMLi@umac.mo


Conventionally, the Jacobian pseudoinverse algorithm [1] has been applied to the inverse kinematics problem because it can satisfy additional constraints by mapping the velocities corresponding to those constraints into the null space of the Jacobian J while tracking the desired workspace trajectory. However, the algorithm involves the inversion of the J matrix, which presents a serious difficulty not only at a singularity but also in its neighborhood, even though the damped least squares (DLS) inverse can make the inversion better conditioned numerically. In recent years, neural networks (NN) [2] have been successfully applied in robotic control; their generalization capacity and structure make them robust and fault-tolerant, able to solve problems that were difficult to handle before. Because a neural network is composed of many neurons, its output does not vary greatly even when some neurons are damaged. Another advantage of NN is their ability to solve highly nonlinear problems. These properties make NN promising for robotic control. Although there are many kinds of NN [3], the CMAC (Cerebellar Model Articulation Controller) neural networks [4, 5] have particular advantages in speed of convergence and a simple delta-learning method, which make them especially suitable both for real-time robot control and for nonlinear function approximation in control systems. The CMAC neural network was proposed to emulate the function of the cerebellum. It is an associative neural network in which the inputs select a small subset of the network, and that subset determines the outputs corresponding to the inputs. The associative mapping property of CMAC assures local generalization: similar inputs produce similar outputs, while distant inputs produce nearly independent outputs.
The CMAC is similar to a perceptron: although the relationship is linear at the level of each neuron, the overall mapping is nonlinear. Regarding research on CMAC neural networks and their applications, Miller III et al. [6] developed a robot tracking system consisting of a manipulator with an attached video camera that tracks an object on a conveyor and places the object's image at a specified position and orientation on the screen, without any robot kinematics information, height measurement, or camera-screen calibration; the CMAC was used to learn all these parameters. Macnab et al. [7] modified the CMAC to use radial basis functions for precision control of flexible-joint robots in order to deal with elasticity. For solving inverse kinematics problems, Oyama et al. [8] proposed a modular neural net system that learns inverse kinematics while handling the multi-valued and discontinuous nature of the inverse kinematics mapping. Gardner et al. [9] investigated a neural network for trajectory tracking of a two-link manipulator, in which the NN was used to compute the components of the


inverse of the Jacobian matrix. Ferreira et al. [10] applied an Attentional Mode Neural Network (AMNN) to guide a robot arm in 3-D space to a goal point in real time. The PSOM network presented by Walter [11] is able to learn the inverse kinematics from only 27 data points. This paper is organized as follows: Section 2 reviews the conventional inverse kinematics of manipulators. Section 3 presents the CMAC neural network method for solving the complex inverse kinematics problem. Section 4 gives a simulation of a 5-link planar manipulator tracking a circular trajectory. Finally, conclusions are presented in Section 5.

2. Conventional manipulator inverse kinematics

For a redundant manipulator, the end-effector position is a function of the joint variables, expressed by the kinematic equation

x = f(q), (1)

where x is the m × 1 end-effector position vector and q is the n × 1 joint angle vector, with n > m for a redundant manipulator. Given the desired trajectory x(t) in the workspace, we need to find the joint angle trajectory q(t) corresponding to x(t). The differential kinematic equation is

ẋ = J(q)q̇, (2)

where ẋ is the m × 1 end-effector velocity vector, q̇ is the n × 1 joint velocity vector, and J(q) is the m × n Jacobian matrix. Thus, our main concern is to solve the inverse kinematics equation

q̇ = J⁻¹(q)ẋ. (3)

Since the manipulator is redundant (n > m), the Jacobian matrix is not square. Usually Eq. (3) is solved using the pseudoinverse of the Jacobian, which locally minimizes the norm of the joint velocities [1]. Equation (3) becomes

q̇ = J⁺(q)ẋ, (4)

where the matrix J⁺ is defined as

J⁺ = Jᵀ(JJᵀ)⁻¹. (5)

Furthermore, Eq. (4) can be written as

q̇ = J⁺(q)ẋ + (I − J⁺J)q̇a, (6)

where q̇a is a vector of arbitrary joint velocities projected into the null space of J. The redundancy can be resolved by specifying q̇a so as to satisfy an additional constraint. Obviously, the above pseudoinverse formulation involves the inverse of J and requires many calculations, so it is not suitable for real-time control. The CMAC neural network is proposed to approach this issue, since it offers fast learning, rapid computation, a simple structure, and fault tolerance.

3. A CMAC neural network

3.1 P r i n c i p l e s

The structure of the CMAC neural network is shown in Fig. 1. The input vectors in the input space S come from sensors in the real world. The input space consists of all possible input vectors, and CMAC maps each input vector onto C points in the conceptual memory A. As shown in Fig. 1, two close inputs overlap in A: the closer the inputs, the larger the overlap, while two distant inputs have no overlap. Since the practical input space is extremely large, A is mapped onto a much smaller physical memory A′ through hash coding to reduce the memory requirement. Thus any input presented to CMAC activates C physical memory locations, and the output Y is the summation of the contents of these C locations in A′.
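The pseudoinverse solution of Eqs. (4)-(6) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; the function name and the use of a dense matrix inverse are our own choices.

```python
import numpy as np

def pinv_ik_step(J, x_dot, qa_dot=None):
    """One velocity-level IK step for a redundant arm, after Eqs. (4)-(6).

    J      : (m, n) Jacobian, n > m
    x_dot  : (m,)   desired end-effector velocity
    qa_dot : (n,)   optional joint velocities projected into the null space of J
    """
    J_pinv = J.T @ np.linalg.inv(J @ J.T)       # J+ = J^T (J J^T)^-1, Eq. (5)
    q_dot = J_pinv @ x_dot                      # minimum-norm solution, Eq. (4)
    if qa_dot is not None:
        n = J.shape[1]
        q_dot = q_dot + (np.eye(n) - J_pinv @ J) @ qa_dot   # null-space term, Eq. (6)
    return q_dot
```

Because the null-space term lies in the kernel of J, both variants reproduce the same end-effector velocity J q̇ = ẋ while the second one additionally satisfies the extra constraint encoded in q̇a.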

Fig. 1. Block diagram of CMAC.


It can be seen that the associative mapping within the CMAC network assures that nearby points in the input space generalize while distant points do not. Moreover, since the mapping from A′ to Y is linear but the mapping from S to A is nonlinear, the CMAC network as a whole performs a fixed nonlinear mapping from the input vector to a many-dimensional output vector.

3.2 L e a r n i n g  m e t h o d

Network training is typically based on observed training data pairs Y(s) and Yd, where Yd is the desired network output corresponding to the input s. Using the least mean square (LMS) training rule, the weights are updated by

w(t + 1) = w(t) + α[Yd − Y(s)]/C, (7)

where α is the learning step length. Therefore, if we define an acceptable error ε, no weight changes are needed when |Yd − Y(s)| ≤ ε. Training can be performed after a whole set of training samples has been tested, or after each individual sample.

3.3 S o l v i n g  i n v e r s e  k i n e m a t i c s  p r o b l e m s  o f  k i n e m a t i c a l l y  r e d u n d a n t  m a n i p u l a t o r s

The control block diagram of the CMAC neural network for the inverse kinematics control of the manipulator is shown in Fig. 2. From Fig. 2, the following equation can be derived:

Ẋ^(t+1) = Ẋ_d^(t+1) + Ke, (8)

which is equivalent to

ė + Ke = 0. (9)
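The tiling, hashing, and LMS update described above can be sketched as follows. This is an illustrative one-dimensional toy, not the UNH CMAC code of [4]; all sizes, the hash scheme, and the class name are our own assumptions.

```python
import numpy as np

class SimpleCMAC:
    """Minimal 1-D CMAC sketch: C overlapping tilings hashed into a small
    physical weight table A', trained with the LMS rule of Eq. (7)."""

    def __init__(self, c=8, memory=512, resolution=0.05, alpha=0.5, seed=0):
        self.c = c                    # generalization parameter (cells per input)
        self.alpha = alpha            # learning step length of Eq. (7)
        self.resolution = resolution  # input quantization step
        self.memory = memory          # size of physical memory A'
        self.w = np.zeros(memory)
        rng = np.random.default_rng(seed)
        self.salt = rng.integers(1, 2**31, size=c)  # per-tiling hash salt

    def _cells(self, s):
        # Tiling j covers cells of width c*resolution shifted by j quanta, so
        # nearby inputs share most of their C cells (local generalization).
        q = int(np.floor(s / self.resolution))
        return [(((q + j) // self.c) * 2654435761 + int(self.salt[j])) % self.memory
                for j in range(self.c)]

    def predict(self, s):
        return sum(self.w[k] for k in self._cells(s))  # Y = sum of C cell contents

    def train(self, s, yd):
        err = yd - self.predict(s)
        for k in self._cells(s):
            self.w[k] += self.alpha * err / self.c     # LMS update, Eq. (7)
```

Training such a network on samples of a smooth function quickly drives the output error down at the trained points, while inputs far apart map to essentially independent cells.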

Fig. 2. Block diagram of CMAC NN for solving inverse kinematics problems.


If K is a positive definite matrix, the system is asymptotically stable [1]. The shaded box is our main concern, since it solves the inverse kinematics problem. The inputs are the current manipulator configuration q^t at time t and the desired end-effector position increments ΔX_d^(t+1) at time t + 1. The outputs are the corresponding joint angle increments, which are fed together with the current joint angles to the robot manipulator in order to move it to the desired configuration. Here, NN denotes the CMAC neural network and Opt denotes the optimization process used to find the desired joint angle increments corresponding to the desired end-effector position increments while satisfying specified additional constraints. In fact, the inverse kinematics problem can be solved by an optimization method alone. However, an optimization process usually takes a long time and is therefore not suitable for real-time control. Here we use the CMAC neural network to generate an initial solution q^(t+1) for the optimization process. The difference between the initial solution q^(t+1) and the output q_d^(t+1) of the optimization process is fed back to correct the weights of the CMAC neural network; that is, the CMAC neural network learns to produce an output as close as possible to the optimization output. Thus the time spent in the optimization process is reduced as the CMAC neural network learns, and eventually its output can replace that of the optimization process. For the optimization process, the gradient method is adopted to minimize the objective function

U = (α/2) eᵀe + (β/2) Δqᵀ Δq, (10)

where e is the position error vector, Δq is the joint angle difference vector, and α and β are weighting constants. The second term on the right-hand side of Eq. (10) minimizes the joint angle difference, smoothing the motion and reducing the energy consumption. We have

∂U/∂Δq = −α eᵀJ + β Δqᵀ, (11)

where J is the Jacobian matrix, and thus

q_(k+1) = q_k − η ∂U/∂q, (12)

where k is the number of iterations and η is the learning step length.
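The gradient descent of Eqs. (10)-(12) can be sketched as below. The Greek parameter names follow our reconstruction of the printed symbols, and the function name, the generic `fk`/`jac` callbacks, and the default values are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def optimize_dq(q, q_prev, x_des, fk, jac, alpha=300.0, beta=1.0, eta=0.003, iters=50):
    """Gradient descent on U = (alpha/2) e^T e + (beta/2) dq^T dq, Eqs. (10)-(12).

    q      : current joint-angle estimate (initial solution, e.g. from the CMAC)
    q_prev : joint angles of the previous time step
    x_des  : desired end-effector position
    fk(q)  : forward kinematics; jac(q): its Jacobian
    """
    for _ in range(iters):
        e = x_des - fk(q)                        # position error vector
        dq = q - q_prev                          # joint-angle difference
        J = jac(q)
        grad = -alpha * (J.T @ e) + beta * dq    # gradient of U, Eq. (11) transposed
        q = q - eta * grad                       # descent update, Eq. (12)
    return q
```

The β-term pulls the solution toward the previous configuration, so the converged position error is small but not exactly zero; shrinking β (or growing α) trades smoothness for tracking accuracy.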


4. Examples

In order to evaluate the performance of CMAC, we developed simulation software under Microsoft Windows 98 using Borland C++ Builder. The CMAC code includes multiple designs for the receptive field lattice and the receptive field sensitivity functions. In this simulation, the sampling time is Δt = 0.02 s, the constant K = 1/Δt = 50, and the parameters for the optimization are α = 300.0, β = 1.0, and η = 0.003.

4.1 M a n i p u l a t o r  k i n e m a t i c s

A 5-link planar manipulator with revolute joints is treated (5-DOF, m = 2, n = 5, redundancy = n − m = 3), as shown in Fig. 3. The total length of the manipulator is 1.5 m, with the first link 0.5 m, the second and third links 0.25 m each, and the fourth and fifth links 0.275 m each. The height of the base support is 0.35 m. With reference to Fig. 4, the Denavit-Hartenberg notation of the manipulator is given in Table 1. No joint angle limits are imposed during the kinematics control.

Fig. 3. Initial conguration of the manipulator.

Fig. 4. 5-DOF Planar Manipulator diagram.

The kinematic equations of the 5-DOF planar manipulator are

X = l1 S1 + l2 S12 + l3 S123 + l4 S1234 + l5 S12345,
Z = l1 C1 + l2 C12 + l3 C123 + l4 C1234 + l5 C12345, (13)

T a b l e 1. Denavit-Hartenberg parameters

Link |  ai  |  αi  |  di  |  qi
  0  |  0   | π/2  |  0   |  0
  1  |  l1  |  0   |  0   |  q1 − π/2
  2  |  l2  |  0   |  0   |  q2
  3  |  l3  |  0   |  0   |  q3
  4  |  l4  |  0   |  0   |  q4
  5  |  l5  |  0   |  0   |  q5

where l1, l2, l3, l4, and l5 are the link lengths; q1, q2, q3, q4, and q5 are the joint angles; and

S1 = sin(q1), S12 = sin(q1 + q2), S123 = sin(q1 + q2 + q3), S1234 = sin(q1 + q2 + q3 + q4), S12345 = sin(q1 + q2 + q3 + q4 + q5),
C1 = cos(q1), C12 = cos(q1 + q2), C123 = cos(q1 + q2 + q3), C1234 = cos(q1 + q2 + q3 + q4), C12345 = cos(q1 + q2 + q3 + q4 + q5).

The relationship between the end-effector velocity and the joint angular velocity is

ẋ = J(q)q̇, (14)

where J(q) is the 2 × 5 matrix of partial derivatives [∂X/∂qi; ∂Z/∂qi], i = 1, ..., 5, with

∂X/∂q1 = l1 C1 + l2 C12 + l3 C123 + l4 C1234 + l5 C12345,
∂X/∂q2 = l2 C12 + l3 C123 + l4 C1234 + l5 C12345,
∂X/∂q3 = l3 C123 + l4 C1234 + l5 C12345,
∂X/∂q4 = l4 C1234 + l5 C12345,
∂X/∂q5 = l5 C12345,
∂Z/∂q1 = −(l1 S1 + l2 S12 + l3 S123 + l4 S1234 + l5 S12345),
∂Z/∂q2 = −(l2 S12 + l3 S123 + l4 S1234 + l5 S12345),
∂Z/∂q3 = −(l3 S123 + l4 S1234 + l5 S12345),
∂Z/∂q4 = −(l4 S1234 + l5 S12345),
∂Z/∂q5 = −l5 S12345.
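The forward kinematics of Eq. (13) and the Jacobian of Eq. (14) are compact when written with cumulative joint angles. The sketch below uses the link lengths quoted in Section 4.1; the function names are our own.

```python
import numpy as np

L = np.array([0.5, 0.25, 0.25, 0.275, 0.275])  # link lengths from Section 4.1

def fk(q):
    """End-effector position (X, Z) of the 5-link planar arm, Eq. (13)."""
    c = np.cumsum(q)                 # q1, q1+q2, ..., q1+...+q5
    return np.array([np.sum(L * np.sin(c)), np.sum(L * np.cos(c))])

def jacobian(q):
    """2x5 Jacobian of Eq. (14): column i sums terms from link i onward."""
    c = np.cumsum(q)
    dX = np.array([np.sum(L[i:] * np.cos(c[i:])) for i in range(5)])
    dZ = np.array([-np.sum(L[i:] * np.sin(c[i:])) for i in range(5)])
    return np.vstack([dX, dZ])
```

A finite-difference check of `jacobian` against `fk` is a quick way to confirm the sign pattern of the ∂Z/∂qi terms.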


The desired end-effector trajectory in the simulation is a circle with a radius of 0.25 m centered at (0.5, 0.6) m. The desired angular velocity is ω = 0.5π rad/s. The trajectory equations are

X_d(t) = 0.5 − 0.25 cos(ωt),
Z_d(t) = 0.25 + 0.25 sin(ωt), (15)

where Z_d is measured from the manipulator base; together with the 0.35 m base support this places the circle center at a height of 0.6 m.

The cycle time is 4 s, and the initial configuration of the manipulator, shown in Fig. 3, has q1(0) = 1.730385, q2(0) = 1.219399, q3(0) = 0.0, q4(0) = 2.29649, and q5(0) = 0.0. In the following simulations, the norm error is defined as

E_norm(t) = sqrt{[X_d(t) − X(t)]² + [Z_d(t) − Z(t)]²} (16)

and the average error is defined as

E_av = (1/N) Σ_(n=1)^N E_norm(nΔt), N = 2π/(ωΔt), (17)

i.e., the mean of the norm error over one full cycle of the trajectory.
4.2 C M A C  n e u r a l  n e t w o r k  s t r u c t u r e

The structure of the CMAC neural network is shown in Fig. 5; the input vector dimension is 7 and the output vector dimension is 5. During the simulation, the number of weight vectors in the CMAC memory is 10000 and the generalization parameter is C = 64.

Fig. 5. The input and output of the CMAC neural network.


4.3 S i m u l a t i o n  r e s u l t s

First, we observe the performance of the optimization process without the CMAC neural network. The results for optimization iteration numbers of 1, 50, and 100 are shown in Fig. 6, Fig. 7, and Fig. 8, respectively. The norm error becomes stable after 50 iterations, with a maximum norm error of about 0.18 mm, as shown in Fig. 7. At the beginning of the simulation the average error is very large, with a maximum of approximately 12.5 mm; after more than 20 iterations it decreases to a stable value of approximately 0.3 mm, as shown in Fig. 8. The program can output the optimal results for any number of iterations; because of page limitations, only part of the results is shown here, to point out the algorithm's shortcomings without the CMAC neural network. In the following figures, θi denotes joint i (i = 1, 2, ..., 5). Now the CMAC neural network is employed and trained step by step with decreasing optimization iterations and learning rates, as shown in Figs. 9–13. After 1300 trainings, the maximum norm error is about 0.3 mm (0.02 % of the workspace dimension). The result is comparable with that of applying the optimization process alone. Notice that only one optimization iteration is then needed, so the CMAC neural network really can reduce the optimization time. In order to

Fig. 6. Performance of optimization process without CMAC: iteration numbers = 1.

Fig. 7. Performance of optimization process without CMAC: iteration numbers = 50.

Fig. 8. Performance of optimization process without CMAC: iteration numbers = 100.

Fig. 9. CMAC performance during training with iteration number = 12, CMAC learning rate = 1, and cycles = 50.

Fig. 10. CMAC performance during training with iteration number = 6, CMAC learning rate = 1, and cycles = 50.

Fig. 11. CMAC performance during training with iteration number = 3, CMAC learning rate = 1, and cycles = 50.

Fig. 12. CMAC performance during training with iteration number = 2, CMAC learning rate = 1, and cycles = 150.

Fig. 13. CMAC performance during training with iteration number = 1, CMAC learning rate = 0.25, and cycles = 5450.

Fig. 14. Robust performance after the link lengths were changed, iteration number = 1.

Fig. 15. Robust performance after the link lengths were changed, iteration number = 700.

check whether the system is stable, we tested 5000 more trainings. From Fig. 13 it is seen that after 6000 trainings the norm error is similar to the result after 1000 trainings, and the average error is mostly below 0.1 mm. This implies that the system is sufficiently stable. Furthermore, the robustness of the system is evaluated by changing the link-length parameters of the manipulator and observing the CMAC neural network performance. We use the result after 1000 trainings mentioned above to test the system. The link lengths of the manipulator are changed to a new set of values: l1 = 0.5 m, l2 = 0.1 m, l3 = 0.4 m, l4 = 0.35 m, l5 = 0.15 m. The results are shown in Figs. 14 and 15, from which we can see that the average error caused by the link-length variations decreases to a stable value after about 100 trainings. Also, the maximum norm error is about 0.25 mm after 700 trainings, which is close to the 0.3 mm result above. Therefore, the performance of the CMAC neural network in inverse kinematics control under link-length variation is quite robust.

5. Conclusions

In this paper the CMAC neural network has been applied to the inverse kinematics problem of redundant manipulators. An inverse kinematics control algorithm involving a CMAC neural network has been proposed. Simulations of a five-link manipulator have been performed to evaluate the CMAC performance as well as its robustness. The five-link manipulator can track a circle within the error tolerance after training by the CMAC neural network. The results show that the CMAC neural network is well suited to real-time control applications due to its fast learning and simple computation. It should be pointed out that this work is restricted to planar manipulators with an open-chain structure of rigid bodies connected by revolute or prismatic joints. Further work will extend to spatial closed- or mixed-chain manipulator structures.
Acknowledgements

This work was supported in part by grants RG008/00-01W/LYM/FST and RG025/00-01S/LYM/FST from the Research Committee of the University of Macau. The authors thank the reviewers and the executive editor for their comments and suggestions.

REFERENCES

[1] SCIAVICCO, L.–SICILIANO, B.: Modeling and Control of Robot Manipulators. New York, McGraw-Hill Book Company 1996.
[2] TSOUKALAS, L. H.–UHRIG, R. E.: Fuzzy and Neural Approaches in Engineering. New York, John Wiley & Sons Inc. 1997.
[3] FREEMAN, J. A.–SKAPURA, D. M.: Neural Networks: Algorithms, Applications, and Programming Techniques. Boston, Addison Wesley 1991.
[4] MILLER, W. T.–GLANZ, F. H.: Implementation of the Cerebellar Model Arithmetic Computer CMAC. UNH CMAC Version 2.1. The University of New Hampshire 1996.
[5] LI, Y.–LEONG, S. H.: In: Proceedings of the 5th World Multiconference on Systemics, Cybernetics and Informatics. Eds.: Callaos, N. et al. Vol. 18. Orlando, International Institute of Informatics and Systemics Press 2001, p. 274.
[6] MILLER III, W. T.–GLANZ, F. H.–KRAFT III, L. G.: IEEE Special Issue on Neural Networks, 78, 1990, p. 1561.
[7] MACNAB, C. J. B.–D'ELEUTERIO, G. M. T.: In: Proceedings of the IEEE International Conference on Robotics & Automation. Vol. 1. Piscataway, IEEE Press 1998, p. 511.
[8] OYAMA, E.–TACHI, S.: In: Proceedings of the IEEE International Conference on Robotics & Automation. Vol. 4. Piscataway, IEEE Press 2000, p. 3239.
[9] GARDNER, J. F.–BRANDT, A.–LUECKE, G.: In: Proceedings of the Fifth International Conference on Advanced Robotics. Vol. 1. Piscataway, IEEE Press 1991, p. 487.
[10] FERREIRA, A. P. L.–ENGEL, P. M.: In: Proceedings of the International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing. Piscataway, IEEE Press 1996, p. 440.
[11] WALTER, J. A.: In: Proceedings of the IEEE International Conference on Robotics & Automation. Vol. 2. Piscataway, IEEE Press 1998, p. 2054.


[12] WOO, M.–NEIDER, J.–DAVIS, T.: OpenGL Programming Guide. 2nd Edition. Boston, Addison Wesley Developers Press 1997.
[13] KEMPF, R.–FRAZIER, C.: OpenGL Reference Manual: The Official Reference Document to OpenGL, Version 1.1. Boston, Addison Wesley Developers Press 1997.

Received: 9.8.2002
Revised: 3.1.2003
