u = u_0 + k_u x,   v = v_0 + k_v y    (1)
where (u_0, v_0) are the principal point coordinates, (u, v) are the image coordinates of the corresponding points, (x, y) are the metric coordinates on the image plane, and (k_u, k_v) are the inverse effective pixel sizes in the u and v directions.
An accurate camera model must also account for the radial distortion of the lens system. A standard model is a transformation from the ideal, undistorted coordinates (u, v) to the real, distorted coordinates (u_D, v_D):
u_D = u_0 + (u - u_0)(1 + k_1 r_D^2)
v_D = v_0 + (v - v_0)(1 + k_1 r_D^2)    (2)
where
r_D^2 = ((u - u_0)/α_u)^2 + ((v - v_0)/α_v)^2,   α_u = f k_u,   α_v = f k_v    (3)
Eq. (3) expresses the focal length in terms of horizontal and vertical pixel units.
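As a minimal numeric sketch of the distortion model of Eqs. (2) and (3); all parameter values below are illustrative assumptions, not results from this calibration:

```python
# Sketch of the radial distortion model of Eqs. (2)-(3).
# All parameter values are illustrative assumptions.
u0, v0 = 320.0, 240.0               # principal point (pixels)
f = 8.0                             # focal length, assumed in mm
ku, kv = 120.0, 115.0               # inverse effective pixel sizes (pixels/mm)
alpha_u, alpha_v = f * ku, f * kv   # Eq. (3): focal length in pixels

def distort(u, v, k1):
    """Eq. (2): map ideal (u, v) to distorted (uD, vD)."""
    rD2 = ((u - u0) / alpha_u) ** 2 + ((v - v0) / alpha_v) ** 2
    uD = u0 + (u - u0) * (1.0 + k1 * rD2)
    vD = v0 + (v - v0) * (1.0 + k1 * rD2)
    return uD, vD
```

For a negative k_1 (barrel distortion) points move towards the principal point, while the principal point itself is a fixed point of the mapping.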
Fig. 8 Camera model.
To calibrate the distortion, we consider the distorted pixel coordinates (u_D, v_D) in the real image and the coordinates (u, v) of the same point on the calibration chessboard. Using Eq. (3), Eq. (2) becomes
u_D = u_0 + (u - u_0)[1 + k_1(((u - u_0)/α_u)^2 + ((v - v_0)/α_v)^2)]
v_D = v_0 + (v - v_0)[1 + k_1(((u - u_0)/α_u)^2 + ((v - v_0)/α_v)^2)]    (4)
where the unknown parameters are u_0, v_0, k_1, α_u and α_v.
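Once k_1 is known, the compensation can be sketched by numerically inverting the model of Eq. (4), for example with a fixed-point iteration; the parameter values below are illustrative assumptions:

```python
# Undistortion sketch: invert Eq. (4) by fixed-point iteration.
# All parameter values are illustrative assumptions.
u0, v0 = 320.0, 240.0          # principal point (pixels)
au, av = 960.0, 920.0          # alpha_u = f*ku, alpha_v = f*kv (pixels)
k1 = -0.25                     # radial distortion coefficient

def undistort(uD, vD, iters=20):
    """Recover the ideal (u, v) from the distorted (uD, vD)."""
    u, v = uD, vD              # initial guess: distortion is small
    for _ in range(iters):
        r2 = ((u - u0) / au) ** 2 + ((v - v0) / av) ** 2
        u = u0 + (uD - u0) / (1.0 + k1 * r2)
        v = v0 + (vD - v0) / (1.0 + k1 * r2)
    return u, v
```

Because the distortion term k_1 r^2 is small over the image, the iteration converges in a few steps.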
Once k_1 is known, the radial distortion in the acquired image can be compensated. To calculate the other intrinsic parameters we use the DLT method. Using Eqs. (1) and (4), the equation describing a standard camera model is obtained:
u = u_0 + f [a_1c(x - x_0) + b_1c(y - y_0) + c_1c(z - z_0)] / [a_3c(x - x_0) + b_3c(y - y_0) + c_3c(z - z_0)]
v = v_0 + f [a_2c(x - x_0) + b_2c(y - y_0) + c_2c(z - z_0)] / [a_3c(x - x_0) + b_3c(y - y_0) + c_3c(z - z_0)]    (5)
Here f is an intrinsic parameter; {a_i, b_i, c_i} are the elements of the rotation matrix; (x_0, y_0, z_0) are the coordinates of the projection centre; (x, y, z) are the spatial coordinates.
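A small sketch of the projection of Eq. (5); the rotation, projection centre and intrinsics below are illustrative assumptions, not calibrated values:

```python
import numpy as np

# Sketch of Eq. (5): collinearity equations of the standard camera model.
u0, v0, f = 320.0, 240.0, 1000.0   # intrinsics in pixels (illustrative)
R = np.eye(3)                      # rows hold the elements {a_i, b_i, c_i}
x0, y0, z0 = 0.0, 0.0, -500.0      # projection centre (illustrative)

def project(x, y, z):
    """Project a spatial point (x, y, z) to image coordinates (u, v)."""
    d = np.array([x - x0, y - y0, z - z0])
    den = R[2] @ d                 # a3(x-x0) + b3(y-y0) + c3(z-z0)
    return u0 + f * (R[0] @ d) / den, v0 + f * (R[1] @ d) / den
```

With the assumed identity rotation, a point on the optical axis maps to the principal point (u_0, v_0), as expected from the model.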
3.2 Projector Calibration
The projector calibration is also defined by the DLT method. The projector is treated as a reverse camera; note that in Eq. (5) the z coordinate is set to zero:
u = u_0 + f [a_1p(x - x_0) + b_1p(y - y_0)] / [a_3p(x - x_0) + b_3p(y - y_0)]
v = v_0 + f [a_2p(x - x_0) + b_2p(y - y_0)] / [a_3p(x - x_0) + b_3p(y - y_0)]    (6)
Dental Arch3D Direct Detection System from the Patients Mouth and Robot for Implant Positioning
In Eq. (6), (u_0, v_0, f) are the projector intrinsic parameters; (x_0, y_0) are the projector centre coordinates; (x, y) are the spatial coordinates; (u, v) are the image coordinates of the corresponding points; and {a_1p, b_1p, …} are the rotation matrix elements. To obtain the intrinsic and extrinsic parameters shown in Eq. (6), a new method was used. This method treats the projector as an inverse camera, through a dedicated algorithm that processes the in-plane coordinates of specific points in a projected calibration chessboard, acquired by the two micro-cameras. The two cameras record a set of several images, each representing a different position of the camera calibration chessboard. Once the image has been acquired by the two cameras, the planar calibration chessboard is covered with a sheet of white paper, and a green and black chessboard is projected onto it. The two cameras acquire the scene and the algorithm records these images. During the acquisition phase it is important not to change the chessboard's angulation.
3.3 Scanner Calibration
At this point all the calibration parameters are processed with an active triangulation system in order to determine the poses of the two 3D micro-cameras with respect to the projector pose, in terms of a roto-translational matrix transformation. These transformation operators allow all points of interest to be described with respect to a universal coordinate system.
In particular, the algorithm loads the camera and
projector calibration data. The camera reference
system does not coincide with the projector reference
system, so there is the need to introduce a rigid
transformation which links the two reference systems.
A coordinate change, composed of a rotation (R) followed by a translation (t), is introduced. If m_1c indicates the plane homogeneous coordinates in the camera reference system and m_1 the same plane homogeneous coordinates in the world reference system, we can write
m_1c = G_c m_1    (7)
where
G_c = [R_c  t_c; 0  1],   R_c = [r_1c^T; r_2c^T; r_3c^T],   t_c = [t_1c; t_2c; t_3c]
Having assigned the position with respect to the camera reference system, the same plane position with respect to the projector reference system is identified. Also in this case a coordinate change is needed to link the world reference system with the projector reference system. The relationship is the same used in Eq. (7), where m_1p are the plane homogeneous coordinates in the projector reference system:
m_1p = G_p m_1    (8)
Knowing Eqs. (7) and (8), the relative position between camera and projector can be calculated using the following relationship:
G_pc = G_c G_p^(-1)
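The composition of the two rigid transformations can be sketched with 4x4 homogeneous matrices; the rotation and translation values below are illustrative assumptions:

```python
import numpy as np

def G(R, t):
    """Build the homogeneous transform [R t; 0 1] of Eq. (7)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Illustrative poses: camera rotated 90 degrees about z, projector shifted.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
Gc = G(Rz, np.array([10.0, 0.0, 0.0]))        # world -> camera, Eq. (7)
Gp = G(np.eye(3), np.array([0.0, 5.0, 0.0]))  # world -> projector, Eq. (8)

# Relative pose between camera and projector.
Gpc = Gc @ np.linalg.inv(Gp)

# Consistency check: mapping a world point both ways must agree.
m1 = np.array([1.0, 2.0, 3.0, 1.0])           # homogeneous world point
assert np.allclose(Gpc @ (Gp @ m1), Gc @ m1)
```

The check at the end verifies that G_pc carries projector coordinates into camera coordinates, which is exactly what the relative-pose operator is used for.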
3.4 Acquisition Phase
The shape detection is achieved by the structured light method. To implement this technique, it is necessary to project a Gray-code pattern together with a phase shift, in order to solve the matching problem between points on the image plane of the micro-cameras and points on the image plane of the projector [16, 17]. This codifying process generates 2^n lines of bright and dark pixels, where n is the number of bits and depends on the projector resolution. In our particular case we projected 256 lines. We also used a particular pattern: the Gray-code pattern is followed by seven projections shifted by 1/8 of the phase. The shape detection is achieved by a multi-view scanning process, in which the two cameras simultaneously acquire one tooth side each (Fig. 9).
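The Gray-code stripes can be sketched as follows; with n = 8 bits this yields 256 distinguishable stripe columns, matching the 256 lines mentioned above (the layout is a simplified illustration of the technique, not the exact projected images):

```python
# Sketch: binary-reflected Gray code bit planes for 2**n stripe columns.
def gray_code(i):
    """Standard binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def bit_planes(n):
    """One bright/dark pattern per bit: planes[b][c] is bit b of column c."""
    return [[(gray_code(c) >> b) & 1 for c in range(2 ** n)]
            for b in reversed(range(n))]

planes = bit_planes(8)   # 8 bits -> 256 columns, as in the text
```

The property exploited by stripe decoding is that adjacent columns differ in exactly one bit, so a decoding error at a stripe boundary displaces the code by at most one column.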
3.5 Processing Phase
To obtain a CAD model it is necessary to correlate
the data acquired from the cameras with the projector
Fig. 9 Gray-code with phase shift.
data. The transformation (R, t) that changes the coordinates from the camera reference frame to the projector reference frame is derived from the calibration process and may be given by the following relation:
m_p = R m_c + t    (9)
where m_c are the point coordinates acquired by the camera and m_p are the point coordinates illuminated by the projector. A generic point m is described on the camera image plane by the point p_c in Eq. (10):
p_c = [u_c, v_c, 1]^T = [x_c/z_c, y_c/z_c, 1]^T = (1/z_c) m_c    (10)
The projected point's vertical coordinate is unknown. Let p_p be the coordinates of the point which illuminates m. The point m is projected on the point p_p on the projector image plane:
p_p = (1/z_p) m_p    (11)
Using Eqs. (9)-(11), the following vector equation is obtained:
z_p p_p = z_c R p_c + t    (12)
Decomposing Eq. (12) into a system of three linear equations, the depth of the point m is obtained [14]:
z_c = (t_1 - t_3 u_p) / ((u_p r_3^T - r_1^T) p_c)
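The depth recovery can be sketched numerically; the rotation, translation and point values below are illustrative assumptions:

```python
import numpy as np

# Sketch of depth recovery from Eq. (12): z_p * p_p = z_c * R p_c + t.
R = np.eye(3)                      # camera-to-projector rotation (illustrative)
t = np.array([100.0, 0.0, 0.0])    # baseline translation (illustrative)

def camera_depth(p_c, u_p):
    """Depth along the camera ray, from rows 1 and 3 of Eq. (12)."""
    r1, r3 = R[0], R[2]
    return (t[0] - t[2] * u_p) / ((u_p * r3 - r1) @ p_c)

# Synthetic check: pick a depth, simulate the projector coordinate,
# then recover the depth from the pair (p_c, u_p).
p_c = np.array([0.1, 0.2, 1.0])    # normalized camera point, Eq. (10)
z_true = 200.0
m_p = z_true * (R @ p_c) + t       # Eq. (9)
u_p = m_p[0] / m_p[2]              # projector u-coordinate, Eq. (11)
z_c = camera_depth(p_c, u_p)       # recovers z_true
```

Eliminating z_p between the first and third rows of Eq. (12) gives the closed-form depth used above, which is the core of the active triangulation step.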
4. Scanner Calibration Results
4.1 Camera Calibration
The cameras are calibrated using the Camera Calibration Toolbox for Matlab (http://www.vision.caltech.edu/boughetj/calib_doc/). We use a surface with a printed checkerboard pattern and take about 20 images of it in different positions (Fig. 10).
The results of this calibration are shown in Table 1, where the last element is the calibration error (ε_c).
4.2 Projector Calibration
To calibrate the projector with the novel algorithm,
only three images are needed, one of the chessboard, to
determine its position with respect to the camera and
two images of the projected pattern, a positive and a
negative (Figs. 11a-11b). The actual images that are
used for projector calibration are the result of the
subtraction of the two aforementioned images (Fig.
11c). The calibration of the projector is almost
Fig. 10 Specimen calibration in different positions.
Table 1 Results from traditional acquisition.
Parameters   Left cam     Right cam
u            498.24.00    295.76
v            467.09.00    552.39.00
u            -0.069       0.041
v            -0.066       -0.087
f            6,8861111    -0.3402
u_0          304.83       409
v_0          248          273
x_0          23.633       20.163
y_0          14.986       11.293
z_0          -0.9916      2,41875
x            -224.683     -1.213.483
y            16.953       -591.193
z            4.262.155    3.920.633
u            921          897
v            893          858
k_u          -2793.93     -2718.18
k_v          -3307.66     -3177.77
ε_c          1,52052      0,3132
(a) (b) (c)
Fig. 11 Projected images: (a) positive; (b) negative; (c)
subtracted.
independent of the camera calibration [18], except for the projected points' positions, which depend on the plane's orientation as determined by the camera. However, this dependence is rather weak.
Apart from that, the projector calibration process is similar to that used for the cameras. In fact, as in the camera calibration, 20 images are obtained, one for each chessboard position (Fig. 12). The projector parameters are obtained by correlating a target set of coordinates placed on a projected calibration specimen with the corresponding coordinates on the image plane. The results of this process are shown in Table 2.
4.3 Acquisition Phase
Once the scanner calibration parameters have been obtained, it is possible to move to the acquisition phase for the reconstruction of a model. Using the Gray code described in the previous paragraph, it is possible to obtain the following images (Fig. 13). The process starts with the 1-bit image and terminates with the 7-bit one. For each image, both a positive and a negative image are recorded. At the end of this process the various images are obtained in binary code (Fig. 14).
4.4 Surface Generation
Once the individual point cloud has been obtained in the camera reference system, it has to be converted into the mechanical slider reference system, in order to enable the generation of the entire point cloud.
To do so, a further calibration is needed, using a particular new chessboard (Fig. 15), by which it is possible to correlate the motion of the sliders with the reference systems established on the chessboard.
Fig. 12 Specimen projector calibration in different positions.
Table 2 Projector and camera calibration parameters with the novel algorithm.
Parameters   PROJ         CAM L        PROJ         CAM R
u            7.164.829    498.24.00    171.61       295.76
v            4.518.913    467.09.00    461.39.00    552.39.00
u            -0.06957     -0.069       0.03837      0.041
v            -0.01143     -0.066       -0.0467      -0.087
f            -0.451       6,8861111    -0.34        -0.3402
u_0          511.05.00    304.83       285          409
v_0          383.05.00    248          182          273
x_0          -17.055      23.633       20.163       20.163
y_0          -21.848      14.986       11.293       11.293
z_0          3,1326389    -0.9916      2,41875      2,41875
x            -510.221     -224.683     -1.213.483   -1.213.483
y            43.601       16.953       -591.193     -591.193
z            4.981.056    4.262.155    3.920.633    3.920.633
u            975.07.00    921          285.000      897
v            1615.03.00   893          182.000      858
k_u          -2956.66     -2793.93     -863.63      -2718.18
k_v          -5982.59     -3307.66     -674.74      -3177.77
ε_c          5,4548611                 3,73888889
Fig. 13 Projected gray-code.
Fig. 16 shows, as an example, the image recorded by the same camera after a displacement of the slides. Every scan is registered and properly aligned with respect to the universal reference coordinate system in order to obtain the whole three-dimensional dental arch model (Fig. 17).
Fig. 14 Images after the subtraction technique.
Fig. 15 Chessboard with identification marks.
Fig. 16 Two chessboard pictures showing the same point
observed from two different positions.
Fig. 17 CAD model of dental arch.
Once the three-dimensional model of the oral cavity is created, the doctor may decide the best position for the implants in order to obtain the desired results. A numerical control milling machine may produce a prosthesis, which will then be ready to be installed into the patient's oral cavity. The surgeon may then proceed with surgery, implanting the new artificial roots. Once this is done, special identifying abutments may finally be placed on the implants.
5. Implant Positioning
Once the dental CAD model is obtained, the dentist can decide where to place the implants in the patient's mouth, using a new system based on a serial/parallel robot with 2+1 DOF. This system is basically a four-bar linkage whose frame may rotate about its longitudinal axis, actuated by two stepper motors that, through two worm gears, move two rods acting in mutually parallel directions on the first bar of the linkage, while on the third bar a slide allows the doctor to manually control the motion of the implant micro-motor (Fig. 18). Two digital encoders measure the angles, in order to simplify the control. Thus this micro-robot is essentially a serial robot whose motion is controlled in a parallel-robot fashion, making it extremely strong while having a minimal impact on the patient's mouth, since all the gearing is external.
The procedure to use the system is the following. The first step is the acquisition of a CAT record of the patient's mouth. Then the doctor decides where to place the implants by navigating within the 3D representation of the patient's mouth, taking into account all problems connected with implantation in that particular mouth. Next, the intraoral mask for vestibular support is inserted in the mouth and fixed against a second mask placed under the chin, if the jaw is to undergo surgery; otherwise, for the upper denture, straps are simply put on the head to secure the mask against the upper vestibular surface.
Once this is done, a complete scanning of the mouth
is to be performed, in order to establish the
correspondence between the CAT representation and
the actual patient mouth, thus locating with precision
the positions in which the implants have to be fitted.
(a)
(b)
(c)
Fig. 18 Robotic system for guiding implant positioning: (a)
up and down; (b) left and right; (c) forward and backward.
Finally, the 3D scanner has to be detached from its base and substituted with the 2+1 DOF parallel robot; the system will then guide the doctor to the correct x, y position and angles that have been previously established, and the doctor may proceed with the implant fixation.
Once the implants are positioned, and relative
abutments installed, it is possible to repeat the scanning
process to determine both which shape the abutments
should assume, and consequently the final prosthesis
form.
6. Conclusions
The paper presents the first results of a new scanning
device for intra-oral determination of the mouth model
and the associated guiding robot for implant
positioning, both based on a platform which is fixed to
the patients mouth through a self balanced 6 DOF arm
bearing a vestibular supporting mask. More work is
needed to complete the system, but it will be the first
system that allows determining via software the best
position for an implant and immediately positioning it
into the patient mouth.
Acknowledgments
The present work was partially supported by
Tecnologica Srl of Crotone PIA grant C01/0612/P
46548-13.
The authors wish to acknowledge the precious help
of Basilio Sinopoli, Sebastiano Meduri, Diego Pulice
and of the Lab personnel of Dipartimento di Meccanica
of Calabria University for their contribution to this
work.
References
[1] T. Varady, R.R. Martin, J. Cox, Reverse engineering of geometric models - an introduction, Computer Aided Design 29 (1997) 255-268.
[2] S.M. Yamany, A.A. Farag, D. Tasman, A.G. Farman, A
3-D reconstruction system for the human jaw using a
sequence of optical images, IEEE Transaction on Medical
Imaging 19 (5) (2000) 538-547.
[3] F. Duret, J. Blouin, Process and apparatus for taking a
medical cast, US Patent Nr. 4952149, 1990.
[4] R. Massen, J. Gassler, Optical probe and method for the
three-dimensional surveying of teeth, US Patent Nr.
5372502, 1994.
[5] S. Witkowski, (CAD-)/CAM in dental technology,
Quintessence Dent Technol 28 (2005) 169-184.
[6] I.A. Pretty, G. Maupomé, A closer look at diagnosis in
clinical dental practice: Part 5. Emerging technologies for
caries detection and diagnosis, Journal of the Canadian
Dental Association 70 (8) (2004) 540a-540i.
[7] G. Arnetzl, D. Pongratz, Milling precision and fitting
accuracy of Cerec Scan milled restorations, International
Journal of Computerized Dentistry 8 (4) (2005) 273-281.
[8] W.H. Mörmann, The origin of the Cerec method: a
personal review of the first 5 years, Int. J. Comput. Dent. 7
(1) (2004) 11-24.
[9] D. Suttor, K. Bunke, S. Hoescheler, H. Hauptmann, G. Hertlein, LAVA - the system for all-ceramic ZrO2 crown and bridge frameworks, Int. J. Comput. Dent. 4 (3) (2001) 195-206.
[10] G. Orentlicher, M. Teich, Evolving implant design. The
nobelactive implant: discussion and case presentations,
Compendium of Continuing Education in Dentistry 31 (1)
(2010) 66-70, 72-77.
[11] Z. Zhang, A flexible new technique for camera calibration,
IEEE Transactions on Pattern Analysis and Machine
Intelligence 22 (11) (2000) 1330-1334.
[12] R. Tsai, A versatile camera calibration technique for
high-accuracy 3D machine vision metrology using
off-the-shelf TV cameras and lenses, IEEE Journal of
Robotics and Automation 3 (4) (1987) 323-344.
[13] J. Weng, P. Cohen, H. Marc, Camera calibration with
distortion models and accuracy evaluation, IEEE
Transaction on Pattern Analysis and Machine Intelligence
14 (1992) 965-980.
[14] L. Chen, C.W. Armstrong, D.D. Raftopoulos, An
investigation on the accuracy of three-dimensional space
reconstruction using the direct linear transformation
technique, J. Biomech 27 (1994) 493-500.
[15] H. Hatze, High-precision three-dimensional
photogrammetric calibration and object space
reconstruction using a modified DLT-approach, J.
Biomech 21 (1988) 533-538.
[16] J. Guhring, C. Brenner, J. Bohm, D. Fritsch, Data
processing and calibration of a cross-pattern stripe
projector, in: Proceedings of IAPRS Congress, Vol. 33,
2000, pp. 327-338.
[17] M. Trobina, Error model of a coded-light range sensor,
Technical Report BIWI-TR-164, ETH Zentrum, 1995.
[18] M. Kimura, M. Mochimaru, T. Kanade, Projector
calibration using arbitrary planes and calibrated camera, in:
IEEE Conference on Computer Vision and Pattern
Recognition (CVPR '07), Minneapolis, June 17-22, 2007.