



Laboratoire d'Ingénierie des Systèmes de Versailles

Master 2
Capteurs, Systmes Electroniques et Robotiques


Project
CREATING A LABVIEW INTERFACE FOR MEASURING THE
POSITION AND ORIENTATION OF THE FOOT OF A HUMANOID
ROBOT BASED ON FOUR INFRARED SENSORS
by
DUONG Tan Quang


Supervisors
Mrs. Nelly NADJAR GAUTHIER
Mr. Olivier BRUNEAU



















February, 2014


Contents
Acknowledgement
1. Introduction
1.1. Overview
1.2. Sharp IR sensor
1.3. Labview
1.4. Problem statement
2. Analytical equations of the position and orientation
2.1. Establishing the equations
2.2. Validation
3. Measurement and Labview User Interface
3.1. Measurement
3.2. Programming the interface and validation
4. Analysis
4.1. Choosing the cut-off frequency
4.2. The quality of filtering
4.3. Comparing the accuracy at different distances
5. Conclusion & Future Developments
Bibliography


Acknowledgement

First, I would like to thank my supervisors, Mrs. Nelly NADJAR GAUTHIER and Mr. Olivier BRUNEAU, for giving me a great deal of knowledge about humanoid robot locomotion, robot dynamics, measurement with Labview, and DSP, and for their precious advice and ideas whenever I ran into problems during this project.
I would like to thank Mr. Eric MONACELLI for giving me the chance to do this project and for his valuable guidance on the presentations.
From the LISV, I would like to thank Mr. Barrois Olivier for providing the measurement devices and for his instructions on how to use them.
I would like to thank my friends in the CSER class. Thank you Sun, Linda and Massi, who gave me a hand with the measurements, and Zamourri Meher, who practised French conversation with me every day even though my French is not good.
Thank you so much! Without the help from all of you, I could not have finished this project.



1. Introduction
1.1. Overview
Walking control is still one of the hardest tasks in designing a humanoid robot. To control the movement of the foot precisely, it is necessary to know the position and orientation of the foot in the reference frame.
In a real environment, the robot's foot may be confronted with changes of terrain or with obstacles. It is therefore important to determine the distance between the foot and the terrain or the obstacle, in order to adjust the control strategy appropriately and in a timely manner.
1.2. Sharp IR sensor [1]
The Sharp IR sensor works by triangulation. A pulse of light (wavelength 850 nm ± 70 nm) is emitted and then reflected back (or not reflected at all). When the light returns, it comes back at an angle that depends on the distance of the reflecting object. Triangulation works by detecting this reflected beam angle; knowing the angle, the distance can then be determined.

Figure 1.1: Introduction of Sharp IR sensor
1.3. Labview [2]
NI LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a
graphical programming language designed for engineers and scientists to develop
test, control, and measurement applications. The intuitive nature of LabVIEW
graphical programming makes it easy for educators and researchers to incorporate
the software in a range of courses and applications. With LabVIEW, educators and
researchers can use a graphical system design approach to design, prototype, and
deploy embedded systems. It combines the power of graphical programming with
hardware to dramatically simplify and accelerate the development of designs.
1.4. Problem statement
The problem addressed in this project is to calculate the distance between any point on the robot's foot and the ground or an obstacle, as well as to determine the pitch angle and the roll angle of the foot. The measured results should be displayed in a computer interface.
The measurement is based on four Sharp GP2D120 infrared sensors attached at the four corners of the robot's foot. The sensor integrates signal processing and an analog voltage output, and measures distances from 4 cm to 30 cm.
The interface is built in Labview, software from National Instruments, together with the NI PCI-6221 acquisition card and a BNC-2120 connector box.
All of the results should be validated, first in the static state and later in the dynamic state of the humanoid robot's foot.
2. Analytical equations of the position and orientation
2.1. Establishing the equations

Figure 2.1: A model of the robot's foot in the reference frame R_0(OXYZ)

A model of the robot's foot in the reference frame R_0(OXYZ) is illustrated in figure 2.1. The foot is shown as a rectangle ABCD, on which the four sensors (1-4) are attached at the four corners (A-D). The local frame R_1(O_1X_1Y_1Z_1) is placed at point A.
The project's objective is to calculate the roll angle φ and the pitch angle θ, as well as the distance between any point M on the foot and the ground, in other words Z_{0M} in the reference frame R_0(OXYZ).

The equations are built from the analytical model shown in figure 2.2, as follows.

Figure 2.2: An analytical model for calculating

There are some important remarks about the above model:
- The light beam from each sensor is perpendicular to the foot plane, so the four beams d_1, d_2, d_3 and d_4 are parallel to one another.
- The first rotation, about the Y axis, is treated as the projection of AD and BC onto the ground, taking plane 1 to plane 2.
- The second rotation, about the X axis, is treated as the projection of AB and CD onto the ground, taking plane 2 to plane 3.
- The values d_1 to d_4 are measured by the sensors and the Labview software, and the dimensions of the robot's foot are known.
It is now easy to find the values of φ, θ and Z_{0A}:

\varphi = \arctan\frac{d_2 - d_1}{AB}\,; \qquad \varphi = \arctan\frac{d_3 - d_4}{AB} \qquad (1)

d_1' = d_1\cos\varphi\,;\quad d_2' = d_2\cos\varphi\,;\quad d_3' = d_3\cos\varphi\,;\quad d_4' = d_4\cos\varphi \qquad (2)

\theta = \arctan\frac{d_1' - d_4'}{AD}\,; \qquad \theta = \arctan\frac{d_2' - d_3'}{AD} \qquad (3)

From (2) and (3):

\theta = \arctan\left[\frac{d_1 - d_4}{AD}\cos\varphi\right]; \qquad \theta = \arctan\left[\frac{d_2 - d_3}{AD}\cos\varphi\right] \qquad (4)

In this case the pitch angle is negative. Therefore, in order not to lose generality, equation (4) is rewritten as follows:

\theta = \arctan\left[\frac{d_4 - d_1}{AD}\cos\varphi\right]; \qquad \theta = \arctan\left[\frac{d_3 - d_2}{AD}\cos\varphi\right] \qquad (5)

The value of Z_{0A} is simply determined by the following equation:

Z_{0A} = d_1'\cos\theta \qquad (6)

From (2) and (6):

Z_{0A} = d_1\cos\theta\cos\varphi \qquad (7)
The value of Z_{0M} is now calculated by the following analysis.
The movement of the robot's foot is considered as, first, one translation from the reference frame R_0(OXYZ) to the frame R_1(O_1X_1Y_1Z_1), then one rotation about the Y_1 axis by the pitch angle θ, and finally one rotation about the X_1 axis by the roll angle φ.
The homogeneous transformation matrices of this translation and these two rotations are, respectively, T_{OO_1}, T_{Y_1,\theta} and T_{X_1,\varphi}:

T_{OO_1} = \begin{pmatrix} 1 & 0 & 0 & X_{O_1} \\ 0 & 1 & 0 & Y_{O_1} \\ 0 & 0 & 1 & Z_{O_1} \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
T_{Y_1,\theta} = \begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
T_{X_1,\varphi} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi & 0 \\ 0 & \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
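For readers who want to check the composition of these transforms numerically, the following Python/NumPy sketch builds the three matrices above and applies them to a point of the foot. It is only an illustration: the values of θ, φ, d_1, X_{O_1}, Y_{O_1} and the point coordinates are assumed, not taken from the project.

# Sketch of the composition P_M|R0 = T_OO1 * T_Y1,theta * T_X1,phi * P_M|R1 (NumPy).
# theta, phi, d1 and the point (x1m, y1m) are illustrative values only.
import numpy as np

theta, phi = np.radians(-23.0), np.radians(30.0)   # pitch and roll (assumed)
d1 = 0.060                                         # distance from sensor 1 (assumed, metres)
x_o1, y_o1 = 0.0, 0.0                              # X_O1, Y_O1 (assumed)

T_OO1 = np.array([[1, 0, 0, x_o1],
                  [0, 1, 0, y_o1],
                  [0, 0, 1, d1 * np.cos(theta) * np.cos(phi)],
                  [0, 0, 0, 1]])
T_Y1 = np.array([[ np.cos(theta), 0, np.sin(theta), 0],
                 [ 0,             1, 0,             0],
                 [-np.sin(theta), 0, np.cos(theta), 0],
                 [ 0,             0, 0,             1]])
T_X1 = np.array([[1, 0,            0,            0],
                 [0, np.cos(phi), -np.sin(phi),  0],
                 [0, np.sin(phi),  np.cos(phi),  0],
                 [0, 0,            0,            1]])

P_M_R1 = np.array([0.05, 0.03, 0.0, 1.0])          # [X_1M, Y_1M, 0, 1] (assumed point)
P_M_R0 = T_OO1 @ T_Y1 @ T_X1 @ P_M_R1
print(P_M_R0[2])                                   # Z_0M, which should match equation (8)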
In the reference frame R_0, Z_{O_1}\big|_{R_0} = Z_A\big|_{R_0}. From (7): Z_A\big|_{R_0} = d_1\cos\theta\cos\varphi.
The coordinates of any point M on the robot's foot, expressed in the frames R_1 and R_0 respectively, are P_M|_{R_1} = [X_{1M}  Y_{1M}  0  1]^T and P_M|_{R_0} = [X_{0M}  Y_{0M}  Z_{0M}  1]^T.

Therefore, P_M|_{R_0} is calculated by the following formula:

P_M|_{R_0} = T_{OO_1}\, T_{Y_1,\theta}\, T_{X_1,\varphi}\, P_M|_{R_1}

\begin{pmatrix} X_{0M} \\ Y_{0M} \\ Z_{0M} \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 & X_{O_1} \\ 0 & 1 & 0 & Y_{O_1} \\ 0 & 0 & 1 & d_1\cos\theta\cos\varphi \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi & 0 \\ 0 & \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} X_{1M} \\ Y_{1M} \\ 0 \\ 1 \end{pmatrix}

\begin{pmatrix} X_{0M} \\ Y_{0M} \\ Z_{0M} \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 & X_{0A} \\ 0 & 1 & 0 & Y_{0A} \\ 0 & 0 & 1 & d_1\cos\theta\cos\varphi \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} \cos\theta & \sin\theta\sin\varphi & \sin\theta\cos\varphi & 0 \\ 0 & \cos\varphi & -\sin\varphi & 0 \\ -\sin\theta & \cos\theta\sin\varphi & \cos\theta\cos\varphi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} X_{1M} \\ Y_{1M} \\ 0 \\ 1 \end{pmatrix}

Z_{0M} = d_1\cos\theta\cos\varphi - X_{1M}\sin\theta + Y_{1M}\cos\theta\sin\varphi \qquad (8)

From equations (1), (5) and (8), the values of φ, θ and Z_{0M} are simply determined.
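As a compact summary of the result, the sketch below evaluates equations (1), (5) and (8) directly in Python. It is not the LabVIEW implementation used in the project; the foot dimensions AB, AD, the distances d_1 to d_4 and the point (X_{1M}, Y_{1M}) in the example are assumed values for illustration only.

# Direct evaluation of equations (1), (5) and (8); all numerical values are assumed.
import math

def foot_pose(d1, d2, d3, d4, AB, AD):
    """Return the roll angle phi and pitch angle theta (radians)."""
    phi = math.atan((d2 - d1) / AB)                      # equation (1)
    theta = math.atan((d4 - d1) / AD * math.cos(phi))    # equation (5)
    return phi, theta

def z_0m(d1, phi, theta, x_1m, y_1m):
    """Height of a point M of the foot above the ground, equation (8)."""
    return (d1 * math.cos(theta) * math.cos(phi)
            - x_1m * math.sin(theta)
            + y_1m * math.cos(theta) * math.sin(phi))

# Hypothetical distances (metres) and foot dimensions, for illustration only.
d1, d2, d3, d4 = 0.060, 0.075, 0.089, 0.074
AB, AD = 0.080, 0.140
phi, theta = foot_pose(d1, d2, d3, d4, AB, AD)
print(math.degrees(phi), math.degrees(theta), z_0m(d1, phi, theta, 0.05, 0.03))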


Figure 2.3: The model with the negative pitch angle and the positive roll angle
2.2. Validation
a. Validation based on the trigonometric formula
Consider the example in figure 2.3. Algebraically: φ > 0, θ < 0, O_1X_M > 0, O_1Y_M > 0, Z_{1M} = 0.

We have:

Z_{0M} = HE + KE \qquad (9)
HE = MH\cos\theta \qquad (10)
MH = Y_{1M}\sin\varphi \qquad (11)
KE = Z_{0A} - AK \qquad (12)
Z_{0A} = d_1\cos\theta\cos\varphi \qquad (13)
AK = X_{1M}\sin\theta \qquad (14)

From (9) to (14):

Z_{0M} = d_1\cos\theta\cos\varphi - X_{1M}\sin\theta + Y_{1M}\cos\theta\sin\varphi \qquad (15)

Equation (15) is identical to equation (8).
b. Validation based on the Cabri Geometry software [*]

1st case: the pitch angle is negative (θ < 0) and the roll angle is positive (φ > 0)

Figure 2.4: The first model of the robot's foot with φ > 0 and θ < 0

[*] Cabri Geometry is commercial interactive geometry software produced by the French company Cabrilog for teaching and learning geometry and trigonometry. In this project, its demo version is used.

Calculation of φ, θ and z_C for the 1st case (Cabri measurements vs. computed values):

Quantity                                                Value        Uncertainty Δx     Δx/x
x_{1C}                                                  -0.0389
y_{1C}                                                   0.02258
z_{0C} = d_1 cosθ cosφ - x_C sinθ + y_C sinφ cosθ        0.040339    Δz_C = 0.00002     0.052 %
φ_1 = arctan((d_2 - d_1)/AB)                             29.949°     Δφ_1 = 0.077°      0.26 %
φ_2 = arctan((d_3 - d_4)/AB)                             29.873°     Δφ_2 = 0.001°      0.003 %
θ_1 = arctan[((d_4 - d_1)/AD) cos φ_1]                  -22.949°     Δθ_1 = 0.02°       0.08 %
θ_2 = arctan[((d_3 - d_2)/AD) cos φ_1]                  -22.993°     Δθ_2 = 0.024°      0.1 %
θ_3 = arctan[((d_4 - d_1)/AD) cos φ_2]                  -22.965°     Δθ_3 = 0.0004°     0.001 %
θ_4 = arctan[((d_3 - d_2)/AD) cos φ_2]                  -23.008°     Δθ_4 = 0.039°      0.17 %

2nd case: the pitch angle is positive (θ > 0) and the roll angle is negative (φ < 0)

Figure 2.5: A model of the robot's foot with φ < 0 and θ > 0

Calculation of φ, θ and z_M for the 2nd case (Cabri measurements vs. computed values):

Quantity                                                Value        Uncertainty Δx     Δx/x
x_M                                                     -0.03530
y_M                                                      0.02188
z_{0M} = d_1 cosθ cosφ - x_M sinθ + y_M sinφ cosθ        0.030892    Δz_M = 0.000008    0.025 %
φ_1 = arctan((d_2 - d_1)/AB)                            -29.680°     Δφ_1 = 0.006°      0.02 %
φ_2 = arctan((d_3 - d_4)/AB)                            -29.699°     Δφ_2 = 0.013°      0.04 %
θ_1 = arctan[((d_4 - d_1)/AD) cos φ_1]                   18.468°     Δθ_1 = 0.011°      0.06 %
θ_2 = arctan[((d_3 - d_2)/AD) cos φ_1]                   18.456°     Δθ_2 = 0.001°      0.005 %
θ_3 = arctan[((d_4 - d_1)/AD) cos φ_2]                   18.465°     Δθ_3 = 0.008°      0.04 %
θ_4 = arctan[((d_3 - d_2)/AD) cos φ_2]                   18.453°     Δθ_4 = 0.004°      0.02 %

3. Measurement and Labview User Interface
3.1. Measurement [4]
a. Wiring Instruction
The wiring of the sensor is straightforward because the sensor already has a wire adapter that plugs directly into the screw terminals of the myDAQ. In order to power the sensor, it needs to be connected to a 4.5 to 5.5 VDC source via the red wire (Vcc). The myDAQ has a 5 V output that can supply up to 500 mA of current. The data sheet of the sensor states that the Sharp infrared proximity sensor typically draws 33 mA from the power source, which is well below the limit of the myDAQ. Once the red power wire is connected, the black wire (GND) needs to be connected to the ground of the myDAQ (this ensures that the sensor and the myDAQ share the same reference point). In this case the black wire is connected to AGND. The ground also needs to be connected to AI0-.
b. Measuring strategy
There are four steps in the measurement based on Labview (a Python sketch of this pipeline is given after the list):
1. Gather the raw data, in the form of voltage, from the sensor.
2. Filter and average the data to reject noise.
3. Interpolate the data to convert it into a distance.
4. Apply the analytical equations to determine the desired values.
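As a language-neutral summary of these four steps (the actual implementation in this project is a LabVIEW block diagram), the following Python sketch shows how the pipeline fits together. The acquisition function is a stub producing fake voltages, and volts_to_cm and foot_pose are placeholders for the interpolation and the analytical equations described elsewhere in this report.

# Sketch of the four-step measurement pipeline; not the project's LabVIEW code.
import numpy as np
from scipy.signal import butter, lfilter

def acquire_block(n=100):
    """Step 1 (stub): one block of raw voltages from one sensor channel."""
    return 1.85 + 0.01 * np.random.randn(n)

def filter_and_average(raw, fs=1000.0, fc=138.0):
    """Step 2: low-pass filter the block, then average it to a single voltage."""
    b, a = butter(2, fc / (fs / 2))
    return float(np.mean(lfilter(b, a, raw)))

def volts_to_cm(volts):
    """Step 3 (placeholder): spline interpolation, see section 3.1.b."""
    return volts

def foot_pose(distances_cm):
    """Step 4 (placeholder): equations (1), (5) and (8), see section 2."""
    return distances_cm

voltages = [filter_and_average(acquire_block()) for _ in range(4)]   # four sensors
print(foot_pose([volts_to_cm(v) for v in voltages]))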
In order to receive data at a consistent rate, use the continuous acquisition mode to
get a set amount of data utilizing the myDAQ onboard timing engine. The timing
engine will ensure that the time between samples is consistent for the duration of
the program.
Even though the data has been gathered, it will be slightly noisy; this is due to external noise that may be picked up on the wires or to stray light detected by the sensor. Although the sensor does have signal conditioning to reduce noise, it is not fully immune to it. In order to filter out the noise, a low-pass filter should be used to remove extraneous spikes.
The sensor actually sends back data in "time windows" which can be seen on the
graph as steps of voltage. In order to keep the timing and coding simple, the
next task is to average the sample set instead of trying to match the timing of the
myDAQ's sample set to the sample set of the sensor.
After filtering the data and averaging the sample set into a single value, this value is still in volts, not centimeters. In order to convert it to centimeters, it is necessary to map it onto the non-linear curve shown in figure 3.1 below and in the data sheet.

Figure 3.1: Distance vs voltage graph (from the sensor's datasheet)
The non-linearity is a problem, since we cannot use a linear approximation, but we can use the spline interpolation functions found in LabVIEW to fit the curve of the above figure to the data. In order to use interpolation, there has to be a set of known values. By using estimated points from the graph in figure 3.1, we can build a set of data that the spline interpolation can use as a guide to determine the centimeter or voltage value. In this project, the following table is considered.


Table 1: Approximate Centimeter to Volt Pairs
From table 1, the distance-vs-voltage graph and the interpolating equation can also be rebuilt in Excel, as can be seen in figure 3.2.

Figure 3.2: Distance vs voltage graph and the interpolating equation built in Excel
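Outside LabVIEW and Excel, the same voltage-to-distance conversion can be sketched with a cubic spline, as below. The (voltage, distance) pairs are approximate values read off the GP2D120 datasheet curve, not the exact entries of Table 1, so the block illustrates the method rather than the project's calibration.

# Cubic-spline interpolation from averaged voltage to distance (illustrative pairs only).
import numpy as np
from scipy.interpolate import CubicSpline

# Calibration points ordered by increasing voltage (i.e. from far to near).
volts_ref = np.array([0.45, 0.55, 0.70, 0.85, 1.10, 1.30, 1.55, 1.75, 2.00, 2.30, 2.75])
dist_cm   = np.array([30.0, 25.0, 20.0, 16.0, 12.0, 10.0,  8.0,  7.0,  6.0,  5.0,  4.0])

volts_to_cm = CubicSpline(volts_ref, dist_cm)

print(float(volts_to_cm(1.85)))   # distance in cm for an averaged reading of 1.85 V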

3.2. Programming the interface and validation [5]
There are two main blocks in the interface program. The first gathers the raw signal from the sensors, processes it, and interpolates the data to convert it into a distance. The second calculates the desired values.
The processing block is programmed as shown in figure 3.3:

Figure 3.3: The processing block
The validation of this block is checked by testing the result from one sensor, as can be seen in figure 3.4.

Figure 3.4: The front panel of the signal processing block

The calculating block is programmed as follows:

Figure 3.5: The calculating block
The block in figure 3.5 is built to calculate equations (1), (5) and (8). The validation is checked as follows.

Figure 3.6: Validation for the first case of 2.2.b
The result in figure 3.6 is the same as the result in the first case of 2.2.b.


Figure 3.7: Validation for the second case of 2.2.b
The result in figure 3.7 is the same as the result in the second case of 2.2.b.
Once the two blocks are validated, the signals from the four sensors are processed at the same time, and the final interface is shown in figure 3.8.

Figure 3.8: The Labview interface for the measurement

4. Analysis
4.1. Choosing the cut-off frequency [6]
The errors associated with the measurement are typically referred to as noise and are an undesirable portion of any kinematic waveform. Noise is traditionally lower in amplitude and associated with a different frequency range than that of the true signal (Jackson, 1979) and can typically be removed using a low-pass filter; the objective of any filtering technique is not only to attenuate noise but also to leave the true signal unaffected (Winter, 1990).
When using a low-pass filter, the cut-off frequency is selected so that the lower frequencies remain while the higher frequencies are attenuated. As the cut-off frequency increases, the influence of the filter on the data is reduced and the data will be similar to the raw signal, including some of the high-frequency noise (Robertson and Dowling, 2003). Determining the most appropriate cut-off frequency is essential and requires knowledge of the signal's characteristics.
A number of algorithms exist that are designed to define an objective criterion for determining an appropriate cut-off frequency (Jackson, 1979).
In this project, the chosen algorithm is a residual analysis (Wells and Winter, 1980). The term residual refers to the signal content that remains when the filtered data is subtracted from the raw data (Robertson, 2005).

Figure 4.1: Cut-off frequency vs residual voltage plot

Figure 4.1 shows the plot of cut-off frequency vs residual voltage, in which a range of trial cut-off frequencies is chosen, for example 0.001*fs, 0.002*fs, ... 0.01*fs, ... 0.48*fs (remember that 0.5*fs is the Nyquist frequency). The residual amplitude is an estimate of how different the filtered voltage is from the unfiltered voltage for the chosen cut-off frequency.
It can be concluded that the residual amplitude will always be greater at a lower cut-off frequency, since a lower cut-off frequency changes the raw data more than a higher one.
The line A represents the best estimate of the noise residual; it is positioned so that it follows the linear portion of the residual plot and intercepts the ordinate axis at a (0 Hz). The decision regarding the cut-off requires a compromise between the extent of signal attenuation and the amount of noise allowed to pass through. Typically, a horizontal line B is projected from a to intersect the residual plot at h, and the cut-off is selected at c, which here corresponds to a frequency of 138 Hz.
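A minimal version of this residual analysis can be sketched in Python as follows. The raw signal here is synthetic and the sampling rate is the one used in the project (1000 Hz); the knee of the resulting residual-vs-cut-off curve plays the role of point c in figure 4.1.

# Residual analysis: RMS difference between raw and filtered data for a range of trial cut-offs.
import numpy as np
from scipy.signal import butter, filtfilt

def residual(raw, fs, fc, order=2):
    b, a = butter(order, fc / (fs / 2))        # normalised cut-off (Nyquist = 1)
    return float(np.sqrt(np.mean((raw - filtfilt(b, a, raw)) ** 2)))

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
raw = 1.85 + 0.02 * np.sin(2 * np.pi * 2 * t) + 0.01 * np.random.randn(t.size)  # synthetic signal

cutoffs = fs * np.arange(0.001, 0.48, 0.005)   # trial cut-offs 0.001*fs ... 0.48*fs, as in the text
residuals = [residual(raw, fs, fc) for fc in cutoffs]
# Plotting residuals against cutoffs reproduces the shape of figure 4.1.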
4.2. The quality of filtering

Figure 4.2: Comparing the raw signal and filtered signal
The low-pass filter is chosen with the following specifications (a minimal Python sketch of an equivalent filter follows the list):
- Number of samples: 100
- Sampling rate: 1000 Hz
- Cut-off frequency: 138 Hz
- ADC resolution: 16 bits
- Infinite impulse response filter: Butterworth topology, order 2
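A scipy equivalent of this filter, applied to one block of 100 samples, could look like the sketch below; the raw block is synthetic and only illustrates the specification above.

# 2nd-order Butterworth IIR low-pass at 138 Hz for a 1000 Hz sampling rate (sketch).
import numpy as np
from scipy.signal import butter, lfilter

fs, fc, order, n = 1000.0, 138.0, 2, 100
b, a = butter(order, fc / (fs / 2))            # normalised cut-off frequency

raw_block = 1.85 + 0.01 * np.random.randn(n)   # one 100-sample voltage block (synthetic)
filtered = lfilter(b, a, raw_block)
voltage = float(np.mean(filtered))             # averaged value passed to the interpolation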
As shown in figure 4.2, the response time of the filter is 0.005 s, i.e. 5 ms. The signal has some peaks, which are caused by the multiple connection terminals.
To evaluate the filter, it is also necessary to consider the voltage attenuation, which results from the communication between devices and from the effect of the filtering. As can be seen in figure 4.3, the voltage attenuation is about (1.85 - 1.827)/1.85 x 100% ≈ 1.24% and the corresponding distance error is (6.663 - 6.6)/6.6 x 100% ≈ 0.95%.


Figure 4.3: The voltage attenuation in measuring
4.3. Comparing the accuracy at different distances
Reference   5.1 cm    8.2 cm    11.8 cm   13.1 cm   18.8 cm   22.8 cm   27.5 cm
            5.129     8.225     11.801    13.285    17.903    24.411    28.866
            5.099     8.300     11.855    13.345    18.893    24.136    32.405
            5.081     8.200     11.909    13.429    19.545    23.728    28.494
            5.110     8.400     12.273    13.024    18.596    22.872    30.019
            5.100     8.159     12.036    13.225    19.456    22.651    20.954
            5.148     8.221     11.895    13.498    21.309    25.456    29.868
            5.134     8.380     11.795    13.512    19.404    20.934    27.282
            5.032     8.300     12.184    13.670    18.460    23.881    29.858
            5.135     8.328     11.719    13.089    19.299    22.177    29.538
            5.156     8.380     11.955    13.345    19.221    23.810    26.190
Δx/x        0.24%     1.1%      1.2%      1.84%     2.17%     2.5%      3.08%

Table 2: Comparing the accuracy of the measurement at different distances
Table 2 provides some important remarks about measuring with the Sharp infrared sensors:
- The greater the distance, the less accurate the measurement is.
- From 4 to 30 cm, the relative measurement error increases from about 0.24% to about 3.08%. The error is acceptable (a short numerical check follows).
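The relative errors in Table 2 can be reproduced by comparing the mean of the ten repeated measurements with the reference distance; the check below does this in Python for the 5.1 cm column only.

# Relative error of the mean measurement with respect to the reference distance (5.1 cm column).
import numpy as np

reference = 5.1
measured = np.array([5.129, 5.099, 5.081, 5.110, 5.100,
                     5.148, 5.134, 5.032, 5.135, 5.156])
relative_error = abs(measured.mean() - reference) / reference * 100
print(f"{relative_error:.2f} %")   # about 0.24 %, as in the first column of Table 2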

5. Conclusion & Future Developments
5.1. Conclusion
The project succeeded in building a model for calculating the analytical equations, as well as in creating a Labview interface that is easy for users to manage. Besides, all of the results were well validated in different ways.
Infrared sensors emit infrared light, and therefore they cannot work accurately outdoors, or even indoors if there is direct or indirect sunlight. However, Sharp IR sensors work fairly accurately in ambient light, which makes them a good choice for measuring distances in the humanoid robot project.
5.2. Future Developments
In future research, the accuracy should be checked on different surfaces and under different lighting conditions.
Besides, this project has not yet evaluated the quality of the chosen filter for the dynamic case. To do so, starting from equations (1), (5) and (8), it is necessary to solve the inverse geometric model: the functions of φ, θ and Z_{0M} over a real walking cycle should be found first. Then, from the above equations, the simulated functions of d_1, d_2, d_3 and d_4 can be derived in the following form: d = A sin(ωt + ψ) + B = A sin((2π/T)t + ψ) + B, in which ψ is the phase and T is the period. The remaining problem is to make the period T as fast as required by the desired humanoid robot model and to adjust the specifications of the chosen filter so that the data can still be measured accurately and in a timely manner during the walking cycle.
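A simple way to prepare such a dynamic test is to generate the simulated distance signal numerically and feed it through the processing chain; the sketch below only generates the signal, with A, B, T and ψ chosen arbitrarily for illustration.

# Simulated sensor distance d(t) = A*sin(2*pi/T * t + psi) + B for a dynamic test (illustrative values).
import numpy as np

fs = 1000.0                                # sampling rate used in the project (Hz)
t = np.arange(0, 2.0, 1 / fs)              # two seconds of simulated walking
A, B, T, psi = 0.02, 0.10, 1.0, 0.0        # amplitude (m), offset (m), period (s), phase (rad)
d_simulated = A * np.sin(2 * np.pi / T * t + psi) + B
# d_simulated would then replace a static measurement in the chain
# (filtering -> interpolation -> equations) to evaluate the tracking error.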


Bibliography
[1] http://www.societyofrobots.com/sensors_sharpirrange.shtml
[2] http://www.ni.com/labview/
[3] Course "Bio-Mechanics and Locomotion", Mr. Olivier Bruneau, Université de Versailles Saint-Quentin.
[4] http://www.ni.com/example/31470/en/#toc1
[5] Course "Travaux pratiques - Découverte de Labview", Mrs. Nelly Gauthier, Université de Versailles Saint-Quentin.
[6] Jonathan Sinclair, Paul John Taylor, Sarah Jane Hobbs, "Digital Filtering of Three-Dimensional Lower Extremity Kinematics: an Assessment", Journal of Human Kinetics, volume 39/2013, pp. 25-36, DOI: 10.2478/hukin-2013-0065, Section I - Kinesiology.
