
3D IMAGING SENSORS

CHAPTER 1
INTRODUCTION
1.1 Introduction
In recent years, demand for optical 3D imaging sensors has become increasingly relevant, and has led
to the development of instruments that are now commercially available. During the 1970s and 1980s
their development was mainly performed in research laboratories, and was aimed at designing new
techniques exploiting the use of light beams (both coherent and incoherent) instead of contact probes, in
view of their application in the mechanical manufacturing industry, for measurement and quality control
applications. Novel measurement principles were proposed, and suitable prototypes were developed and
characterized to prove their performance.
In parallel, we participated in a considerable effort directed towards the miniaturization and
integration of optical light sources, detectors and components, in the electronic equipment and the
mechanical structure of the sensors. In the last decade, the availability of techniques and components
has led to the production of a wide spectrum of commercially available devices, with measurement
resolution from a few nanometers to fractions of a meter, and ranges from microns to a few kilometers.
In recent times, the trend has been to produce devices at decreased costs and with increased ruggedness
and portability.
The problem of manipulating, editing, and storing the measured data was approached at the software
level, by developing very powerful suites of programs that import and manipulate the data files, and
output them in popular and well-standardized formats, such as the DXF and IGES formats for CAD
applications, the STL format for rapid prototyping machines, and the VRML and 3D formats for
visualization.
As a result, the interest in the use of 3D imaging sensors has increased. In the mechanical and
manufacturing industry, surface quality control, micro-profiling and macro-profiling are often carried out
on a contactless, optical basis. Optical probes and contact probes are used in combination: this occurs
whenever both the accuracy of the measurement and the efficiency of the process are the critical points.

It is significant that, today, one of the input requirements in the design of Coordinate Measuring
Machines (CMMs) is represented by the possibility of mounting both optical 3D measurement sensors
and contact probes.
In addition, 3D imaging sensors are becoming of interest in combination with two-dimensional (2D)
vision sensors, especially in robotic applications, for collision avoidance, and to solve removing-from-heap
and assembling problems. As a result, those companies traditionally focused on the development
of 2D vision systems are now enlarging their product spectrum by including 3D range sensors.
The use of optical depth sensors has gone beyond the mechanical field for which they were
originally intended. Examples of application fields are geology, civil engineering, archaeology, reverse
engineering, medicine, and virtual reality.
The aim of this paper is to briefly review the state of the art concerning 3D sensing techniques and
devices, and to present their applications in industry, cultural heritage, medicine and forensics. It focuses
in part on the research carried out at our Laboratory, and briefly describes related approaches developed
by other groups.
The paper is structured as follows: Section 2 overviews 3D imaging techniques, and evaluates the
different approaches to highlight which method can be used in which situation and what the main
applications are. Section 3 gives an insight into the results achieved in the mentioned applications. As far
as the industrial field is concerned, the topics of surface quality control, dimensional measurement and
reverse engineering are covered. Significant results in the cultural heritage field, achieved both by us
and by other research groups, are shown.
The approaches developed in the medical field are briefly reviewed, and the postmortem analysis of
lesions is cited as an example of the latest applications in this area. The last section is dedicated to the
use of 3D imaging techniques for the accurate, fast and non-invasive measurement and modeling of
crime scenes.

1.2 Overview of 3D imaging techniques


3D imaging sensors generally operate by projecting (in the active form) or acquiring (in the passive
form) electromagnetic energy onto/from an object, followed by recording the transmitted or reflected
energy. The most important example of transmission gauging is industrial computed tomography (CT),
which uses high-energy X-rays and measures the radiation transmitted through the object.
Reflection sensors for shape acquisition can be subdivided into non-optical and optical sensing.
Non-optical sensing includes acoustic sensors (ultrasonic, seismic), electromagnetic sensors (infrared,
ultraviolet, microwave radar, etc.) and others. These techniques typically measure distances to objects
by measuring the time required for a pulse of sound or microwave energy to bounce back from an
object.
In reflection optical sensing, light carries the measurement information. There is a remarkable
variety of 3D optical techniques, and their classification is not unique. In this section, we mention the
more prominent approaches, and we classify them as shown in Table 1.

Table 1. Classification of 3D imaging techniques. The table marks, for each technique (laser
triangulators, structured light, stereo vision, photogrammetry, time of flight, interferometry, Moiré
fringe range contours, shape from focusing, shape from shadows, texture gradients, shape from shading,
shape from photometry), the operating principle (triangulation, time delay, monocular images), whether
the method is active or passive, whether the measurement is direct or indirect, and whether the output
consists of range data or of surface orientation.


The 3D techniques are based on optical triangulation, on time delay, and on the use of monocular
images. They are classified into passive and active methods. In passive methods, the reflectance of the
object and the illumination of the scene are used to derive the shape information: no active device is
necessary. In the active form, suitable light sources are used as the internal vector of information. A
distinction is also made between direct and indirect measurements.

Direct techniques result in range data, i.e., in a set of distances between the unknown surface and
the range sensor. Indirect measurements are inferred from monocular images and from prior knowledge
of the target properties. They result either in range data or in surface orientation.

Excellent reviews of optical methods and range sensors for 3D measurement are presented in the
references, including a review of 20 years of development in the field of 3D laser imaging with
emphasis on commercial systems available in 2004. Comprehensive summaries of earlier techniques and
systems are provided in other publications, as are surveys of 3D imaging techniques in the wider context
of both contact and contactless 3D measurement techniques for the reverse engineering of shapes.

1.2.1 Laser triangulators


Both single-point triangulators and laser stripes belong to this category. They are all based on the
active triangulation principle. Figure 1.1 shows a typical system configuration. Points OP and OC are
the exit and the entrance pupils of a laser source and of a camera, respectively. Their mutual distance is
the baseline d. The optical axes zP and zC of the laser and the camera form the triangulation angle.

Fig 1.1 Schematic of the triangulation principle: laser source (exit pupil OP, axis zP), camera (entrance
pupil OC, axis zC, image plane), baseline d, light plane S intersecting the object along line AB, and
image point S'(i, j).
The laser source generates a narrow beam, impinging on the object at point S (single-point
triangulators). The backscattered beam is imaged at point S' on the image plane. The measurement of
the location (iS, jS) of image point S' defines the line of sight S'OC and, by means of simple geometry,
yields the position of S. The measurement of the surface is achieved by scanning. In a conventional
triangulation configuration, a compromise is necessary between the field of view (FOV), the
measurement resolution and uncertainty, and the shadow effects due to large values of the triangulation
angle.
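As a worked illustration of this geometry, the short Python sketch below recovers the depth of S from
the measured image coordinate. It is only a sketch: the pinhole camera model, the symbol theta for the
angle between the laser beam and the baseline, and all numerical values are assumptions introduced
here, not parameters given in the text.

    import math

    def triangulate_depth(u_px, f_px, baseline_m, theta_rad):
        """Depth of the laser spot S along the camera optical axis zC.

        u_px:       image coordinate of S' (pixels from the principal point,
                    measured along the baseline direction)
        f_px:       camera focal length expressed in pixels
        baseline_m: distance d between exit pupil OP and entrance pupil OC
        theta_rad:  angle between the laser beam and the baseline
        """
        denom = u_px + f_px / math.tan(theta_rad)
        if abs(denom) < 1e-12:
            raise ValueError("degenerate geometry: beam parallel to line of sight")
        return f_px * baseline_m / denom

    # Example: 8 mm lens with 5 um pixels (f = 1600 px), 100 mm baseline,
    # 60 degree triangulation angle, spot imaged 250 px off centre.
    z = triangulate_depth(250.0, 1600.0, 0.1, math.radians(60.0))
    print(f"estimated depth: {z * 1000:.1f} mm")

The relation z = f d / (u + f cot(theta)) also makes the quoted compromise visible: a larger angle
improves the depth sensitivity but increases the shadowed regions.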


To overcome this limitation, a method called synchronized scanning has been
proposed. Using the approach illustrated in a large field of view can be achieved with
a small angle, without sacrificing range measurement precision.

Laser stripes exploit the optical triangulation principle shown in Figure 1.1. However,
in this case, the laser is equipped with a cylindrical lens, which expands the light
beam along one direction. Hence, a plane of light is generated, and multiple points of
the object are illuminated at the same time. In the figure, the light plane is denoted by
S, and the illuminated points belong to the intersection between the plane and the
unknown object (line AB).

The measurement of the location of all the image points from A to B at the image
plane allows the determination of the 3D shape of the object in correspondence with
the illuminated points. For dense reconstruction, the plane of light must scan the
scene. An interesting enhancement of the basic principle exploits the Bi-Iris
principle: a laser source produces two stripes by passing through the Bi-Iris
component (a mask with two apertures).

The measurement of the point location at the image plane can be carried out on a
differential basis, and shows decreased dependence on the object reflectivity and on
the laser speckles.
One of the most significant advantages of laser triangulators is their accuracy, and
their relative insensitivity to illumination conditions and surface texture effects.
Single-point laser triangulators are widely used in the industrial field for the
measurement of distances, diameters and thicknesses, as well as in surface quality
control applications. Laser stripes are becoming popular for quality control, reverse
engineering, and modeling of heritage. Examples are the Vivid 910 system (Konica
Minolta, Inc.), the sensors of the SmartRay series (SmartRay GmbH, Germany) and
the ShapeGrabber systems (ShapeGrabber Inc., CA, USA).
3D sensors are designed for integration with robot arms, to implement pick-and-place
operations in de-palletizing stations. An example is the Ranger system (Sick,
Inc.). The NextEngine system (NextEngine, Inc., CA, USA) and the TriAngles
scanner (ComTronics, Inc., USA) deserve particular attention, since they show
how the field is progressing. These two systems, in fact, break the price barrier of
laser-based systems, which is a limiting factor in their diffusion.

1.2.2 Structured light


Structured light based sensors share the active triangulation approach mentioned
above. However, instead of scanning the surface, they project bi-dimensional
patterns of non-coherent light, and elaborate them to obtain the range information
for each viewed point simultaneously. A single pattern as well as multiple patterns can
be projected. In both cases, the single light plane S shown in Figure 1.1 is replaced by
a bundle of geometrical planes, schematically shown in Figure 1.2.
The planes are indexed along the LP coordinate at the projector plane. The depth
information at object point S is obtained as the intersection between the line of sight
S'S and the plane indexed by LP = LPS. The critical point is to guarantee that different
object points are assigned to different indexes along the LP coordinate.


Fig 1.2 Principle of measurement in structured light systems.


To this aim, a large number of projection strategies have been developed. The
projection of grid patterns, of dot patterns, of multiple vertical slits and of multicolor
patterns has been extensively studied. Particular research has been carried out on the
projection of fringe patterns, which are particularly suitable to maximize the
measurement resolution.
As an example, three fringe patterns are shown in Figure 1.3. The first one is formed
by fringes of sinusoidal profile, the second one is obtained by superposing two
patterns with sinusoidal fringes at different frequencies, and the last one is formed by
fringes of rectangular profile.
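The three projection schemes just listed are easy to reproduce. The following NumPy sketch
generates one row of each pattern and replicates it over the projector plane; the resolution and fringe
periods are arbitrary values assumed for illustration.

    import numpy as np

    width, height = 640, 480              # assumed projector resolution
    x = np.arange(width)

    # 1) fringes with sinusoidal profile (period: 32 pixels)
    sin_fringes = 0.5 + 0.5 * np.sin(2 * np.pi * x / 32)

    # 2) superposition of two sinusoidal patterns at different frequencies;
    #    the slow beat between the two periods helps to remove phase ambiguity
    two_freq = 0.5 + 0.25 * (np.sin(2 * np.pi * x / 32) + np.sin(2 * np.pi * x / 36))

    # 3) fringes with rectangular profile (binary stripes)
    rect_fringes = (np.sin(2 * np.pi * x / 32) >= 0).astype(float)

    # replicate each 1-D profile over all projector rows
    patterns = [np.tile(p, (height, 1)) for p in (sin_fringes, two_freq, rect_fringes)]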


Fig 1.3 Example of fringe projection schemes.


Recent trends in the development of structured light systems are aimed at
increasing the speed of projection of multiple patterns, in order to allow the real-time
acquisition of surfaces, especially for motion planning and human body
acquisition. The MobilCam 3D system (ViALUX GmbH, Germany) and the
Broadway scanner (Artec Group, Inc., CA, USA) belong to this category.

1.2.3 Stereo vision


The stereo vision method represents the passive version of structured light
techniques. Here, two (or more) cameras concurrently capture the same scene. In
principle, reconstruction by the stereo approach uses the following sequence of
processing steps: (i) image acquisition, (ii) camera modeling, (iii) feature extraction,
(iv) correspondence analysis and (v) triangulation.

No further equipment (e.g., specific light sources) and no special projections are
required. The remarkable advantages of the stereo approach are its simplicity and
low cost; the major problem is the identification of common points within the image
pairs, i.e., the solution of the well-known correspondence problem. Moreover, the
quality of the shape extraction depends on the sharpness of the surface texture
(affected by variations in surface reflectance). Stereo vision has significant
applications in robotics and in computer vision, where the essence of the problem is
not the accurate acquisition of high-quality data, but, rather, their interpretation (for
example for motion planning, collision avoidance and grasping applications).
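For a calibrated and rectified camera pair, the final triangulation step (v) reduces to a single
relation between the disparity found by the correspondence analysis and the depth of the point. The
sketch below assumes this standard rectified geometry; the numbers are illustrative only.

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Depth of a matched point for a rectified stereo pair.

        disparity_px: horizontal shift of the point between the left and
                      right images (output of the correspondence analysis)
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for visible points")
        return focal_px * baseline_m / disparity_px

    # Example: f = 1200 px, 12 cm baseline, 48 px disparity -> 3.0 m
    print(depth_from_disparity(48.0, 1200.0, 0.12))

The inverse dependence on disparity is the reason why depth uncertainty grows quickly with distance,
which matters little in the interpretation-oriented applications mentioned above.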
A variant of the stereo vision approach implies the use of multiple images from a
single camera, which captures the object in a sequence of images from different views
(shape from video). A basic requirement for this technique is that the scene does not
contain moving parts. 3D urban modeling and object detection are typical applications.
Stereo vision based on the detection of the object silhouette has recently become
very popular. The stereo approach is exploited, but the 3D geometry of the object is
retrieved by using the object contours under different viewing directions. One of the
most prominent applications of this method is the modeling of 3D shapes for heritage,
especially when the texture information is captured together with the range. Among
the devices on the market, the PromptStereo system (Solving3D GmbH, Germany)
shows high performance and simplicity of use.

1.2.4 Photogrammetry


Photogrammetry obtains reliable 3D models by means of photographs; in digital
photogrammetry, digital images are used. The elaboration pipeline consists basically
of the following steps: camera calibration and orientation, image point measurement,
3D point cloud generation, surface generation and texture mapping. Camera
calibration is crucial in view of obtaining accurate models. Image measurement can be
performed by means of automatic or semi-automatic procedures. In the first case, very
dense point clouds are obtained, even if at the expense of inaccuracy and missing
parts. In the second case, very accurate measurements are obtained over a
significantly smaller number of points, at increased elaboration and operator times.
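The calibration and orientation steps are commonly summarized by the pinhole projection model,
which maps 3D points to image points once the interior (K) and exterior (R, t) parameters have been
recovered. The sketch below is a generic illustration of that model with made-up parameter values, not
the pipeline of any specific package.

    import numpy as np

    def project(K, R, t, X):
        """Project (N, 3) world points into pixel coordinates.

        K: 3x3 intrinsic matrix (from camera calibration)
        R, t: exterior orientation (world-to-camera rotation and translation)
        """
        Xc = R @ X.T + t.reshape(3, 1)    # world frame -> camera frame
        uv = K @ Xc                       # camera frame -> homogeneous pixels
        return (uv[:2] / uv[2]).T         # perspective division

    K = np.array([[1500.0, 0.0, 320.0],
                  [0.0, 1500.0, 240.0],
                  [0.0, 0.0, 1.0]])       # assumed intrinsics
    R, t = np.eye(3), np.zeros(3)
    pts = np.array([[0.1, -0.05, 2.0], [0.0, 0.0, 3.0]])
    print(project(K, R, t, pts))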
Typical applications of photogrammetry, besides aerial photogrammetry, are in
close-range photogrammetry, for city modeling, medical imaging and heritage.
Reliable packages are now commercially available for complete scene modeling or
for sensor calibration and orientation. Examples are the Australis software
(Photometrix, Australia), PhotoModeler (Eos Systems, Inc., Canada), Leica
Photogrammetry Suite (Leica Geosystems GIS & Mapping), and Menci (Menci
Software srl, Italy).
Both stereo vision and photogrammetry share the objective of obtaining 3D
models from images and are currently used in computer vision applications. The
methods rely on the use of camera calibration techniques. The elaboration chain is
basically the same, even if linear models for camera calibration and simplified point
measurement procedures are used in computer vision applications. In fact, in computer
vision the automation level of the overall process is more important than the accuracy
of the models.

In recent times, the photogrammetric and computer vision communities have
proposed combined procedures for increasing both the measurement performance and
the automation level. Examples are the combined use of laser scanning and
photogrammetry, and the use of stereo vision and photogrammetry for facade
reconstruction and object tracking. On the market, combinations of structured light
projection with photogrammetry are available (ATOS II scanner, GOM GmbH,
Germany) for improved measurement performance.

1.2.5 Time of Flight

Surface range measurement can be performed directly using the radar time-of-flight
principle. The emitter unit generates a laser pulse, which impinges onto the target
surface. A receiver detects the reflected pulse, and suitable electronics measures the
round-trip travel time of the returning signal and its intensity. Single-point sensors
perform point-to-point distance measurement, and scanning devices are combined
with the optical head to cover large bi-dimensional scenes. Large-range sensors allow
measurement ranges from 15 m to 100 m (reflective markers must be put on the target
surfaces).
Medium-range sensors can acquire 3D data at shorter ranges. The measurement
resolution varies with the range. For large measuring ranges, time-of-flight sensors
give excellent results. On the other hand, for smaller objects, about one meter in size,
attaining 1 part per 1,000 accuracy with time-of-flight radar requires very high-speed
timing circuitry, because the time differences are in the picosecond range. A few
amplitude- and frequency-modulated radars have shown promise for close-range
distance measurement.
In many applications, the technique is range-limited by allowable power levels of
laser radiation, determined by laser safety considerations. Additionally, time-of-flight
sensors face difficulties with shiny surfaces, which reflect little back-scattered light
energy except when oriented perpendicularly to the line of sight.
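The timing argument can be made concrete in a few lines: range follows from half the round-trip
time, and millimeter-level resolution indeed demands picosecond timing. The values used below are
illustrative.

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(round_trip_s):
        """Distance from the measured round-trip travel time of the pulse."""
        return C * round_trip_s / 2.0

    # 1 mm of depth corresponds to ~6.7 ps of round-trip time difference,
    # hence the need for very high-speed timing circuitry at close range.
    print(f"{2 * 0.001 / C * 1e12:.1f} ps per mm")
    # A 100 ns round trip corresponds to the 15 m lower bound quoted above.
    print(f"{tof_distance(100e-9):.1f} m")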

1.2.6 Interferometry
Interferometric methods operate by projecting a spatially or temporally varying
periodic pattern onto a surface, followed by mixing the reflected light with a reference
pattern. The reference pattern demodulates the signal to reveal the variation in surface
geometry. The measurement resolution is very high, since it is a fraction of the
wavelength of the laser radiation. For this reason, surface quality control and
micro-profilometry are the most explored applications. The use of multiple
wavelengths and of double heterodyne detection lengthens the non-ambiguity range.
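A common way in which two wavelengths lengthen the non-ambiguity range is through the
synthetic (beat) wavelength, which replaces the optical wavelength in the phase-to-distance conversion.
The sketch below assumes this standard relation; the wavelength values are illustrative.

    def synthetic_wavelength(lambda1_nm, lambda2_nm):
        """Beat wavelength of a two-wavelength interferometer: the
        non-ambiguity range scales with this value instead of the much
        shorter optical wavelength."""
        return lambda1_nm * lambda2_nm / abs(lambda1_nm - lambda2_nm)

    # Two laser lines 1 nm apart around 633 nm give a ~0.4 mm beat wavelength
    print(f"{synthetic_wavelength(633.0, 634.0) / 1e6:.3f} mm")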
Systems based on this principle are successfully used to calibrate CMMs.
Holographic interferometry relies on mixing coherent illumination with different wave
vectors. Holographic methods typically yield range accuracies of a fraction of the light
wavelength over microscopic fields of view.

1.2.7 Moiré fringe range contours


Moiré fringe range contours are obtained by illuminating the scene with a light
source passing through a grating, and viewing the scene with a laterally displaced
camera through an identical second grating. The resulting interference patterns show
equidistant contour lines, but there is no information about the corresponding depth
values.
In addition, it is impossible to obtain direct information about the sign of the
concavities. Moiré methods can have phase discrimination problems when the surface
does not exhibit smooth shape variations.

1.2.8 Shape from focusing


Depth and range can be determined by exploiting the focal properties of a lens. A
camera lens can be used as a range finder by exploiting the depth-of-field
phenomenon: the target image is blurred by an amount proportional to the distance
between points on the object and the in-focus object plane. This technique has
evolved from the passive approach to an active sensing strategy. In the passive case,
surface texture is used to determine the amount of blurring; thus, the object must
have surface texture covering the whole surface in order to extract shape. The active
version of the method operates by projecting light onto the object to avoid
difficulties in discerning surface texture. Most prior work in active depth from focus
has yielded moderate accuracy, up to one part per 400 over the field of view. There is
a direct trade-off between depth of field and field of view; satisfactory depth
resolution is achieved at the expense of sub-sampling the scene, which in turn
requires some form of mechanical scanning to acquire range measurements over the
whole scene. Moreover, spatial resolution is non-uniform. Specifically, depth
resolution is substantially less than the resolution perpendicular to the observation
axis. Finally, objects not aligned perpendicularly to the optical axis and having a
depth dimension greater than the depth of field will come into focus at different
ranges, complicating the scene analysis and interpretation.
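A minimal sketch of the shape-from-focus idea follows: a stack of images taken at known focus
settings is scored with a per-pixel sharpness measure, and the best-focused slice gives the depth. The
squared-Laplacian focus measure is an assumption chosen for illustration, not a method prescribed by
the text.

    import numpy as np

    def depth_from_focus(stack, z_positions):
        """Per-pixel depth from a focal stack.

        stack:       (N, H, W) array of images taken at N focus settings
        z_positions: the N corresponding in-focus object distances
        """
        sharpness = []
        for img in stack:
            gy, gx = np.gradient(img)
            lap = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
            sharpness.append(lap ** 2)                  # blurred regions score low
        best = np.argmax(np.stack(sharpness), axis=0)   # index of sharpest slice
        return np.asarray(z_positions)[best]

Note how the sketch reflects the limitations listed above: it needs texture (uniform regions give a flat
focus measure) and one image per focus setting, i.e., a form of scanning.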

1.2.9 Shape from shadows


This technique is a variant of the structured light approach. The 3D model of the
unknown object is rebuilt by capturing the shadow of a known object projected onto
the target while the light source is moving. Low cost and simple hardware are the
main advantages of this approach, even if at the expense of low accuracy.

1.2.10 Shape from texture


The idea is to find possible transformations of texture elements (texels) able to
reproduce the object surface orientation. For example, given a planar surface, the
image of a circular texel is an ellipse whose axes vary with the surface orientation. A
review of the techniques developed in this area is given in the references. These
methods require simple, low-cost hardware.

1.2.11 Shape from shading


Shape-from-shading requires the acquisition of the object from one viewing angle
while varying the position of the light source, which results in varying the shading on
the target surface. A variant of this method implies the variation of the lighting
conditions. A review of the algorithms able to extract the shape information, starting
from the reflectance map of the image, is given in the references. The acquisition
hardware is very simple and low cost.
However, low accuracy is obtained, especially in the presence of external factors
influencing the object reflectance, and the interpretation of the measurements is
rather complex.

1.2.12 Strengths and weaknesses of 3D imaging techniques


The main characteristics of optical range imaging techniques are summarized in
Table 2. The comments in the table are quite general in nature, and a number of
exceptions are known. Strengths and weaknesses of the different techniques are
strongly application-dependent. The common attribute of being non-contact is an
important consideration in those applications characterized by fragile or deformable
objects, on-line measurements, or hostile environments.
Acquisition time is another important aspect, when the overall cost of the
measurement process suggests the reduction of the operator time, even at the expense
of deteriorating the quality of the data. On the other hand, active vision systems using
a laser beam that illuminates the object inherently involve safety considerations and
possible surface interaction effects.
In addition, the speckle effect introduces disturbances that represent a serious limit
to the accuracy of the measurements. Fortunately, recent advances in LED technology
yield a valuable solution to this problem, since LEDs are suitable candidates as light
projectors in active triangulation systems.
The dramatic evolution of CPUs and memories has led in the last few years to
elaboration performances that were by far impossible before, at low cost. For this
reason, techniques that are computationally demanding (e.g., passive stereo vision)
are now more reliable and efficient.
The selection of the sensor type to be used to solve a given depth measurement
problem is a very complex task, which must consider (i) the measurement time, (ii)
the budget, and (iii) the quality expected from the measurement.
In this respect, 3D imaging sensors may be affected by missing or bad-quality data.
The reasons are related to the optical geometry of the system, the type of projection
and/or acquisition optics, the measurement technique and the characteristics of the
target objects. The sensor performance may depend on the dimension, the shape, the
texture, the temperature and the accessibility of the object.
Relevant factors that influence the choice are also the ruggedness, the portability
and the adaptiveness of the sensor to the measurement problem, the easiness of the
data handling, and the simplicity of use of the sensor.

1.2.13 The software for the elaboration of 3D data


Besides the acquisition of 3D raw data, the construction of 3D models requires the
following steps:
o Registration of multiple images.
o Integration of the registered images into a fused single point cloud, and
determination of the mesh that models the surface.
o Acquisition of reflection data, for surface visualization.
The registration step consists of the alignment, in a common reference frame, of
all the views captured to cover the whole surface. A relevant number of techniques
have been proposed. ICP-based algorithms, for example, allow the pairwise
registration of adjacent views, while multi-view registration methods carry out the
alignment of multiple views following a global approach.
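As an illustration of pairwise registration, the sketch below implements one iteration of a basic
ICP scheme: nearest-neighbour correspondences followed by the SVD-based best-fit rigid transform. It
is a didactic sketch, not the algorithm of any particular package; real implementations add spatial
indexing, outlier rejection and convergence tests.

    import numpy as np

    def icp_step(source, target):
        """One ICP iteration aligning `source` towards `target`.

        source, target: (N, 3) and (M, 3) point clouds of two adjacent views.
        Returns R, t minimizing the distance between the source points and
        their current nearest neighbours in the target (Kabsch solution).
        """
        # brute-force nearest neighbours (fine for small clouds only)
        d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[np.argmin(d2, axis=1)]

        # best-fit rigid transform between the matched sets
        mu_s, mu_t = source.mean(0), matched.mean(0)
        H = (source - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        return R, t

    # A full pairwise registration applies the step repeatedly:
    #   R, t = icp_step(view, reference); view = view @ R.T + t
    # until the residual alignment error stops decreasing.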


The integration step has been widely studied by the computer graphics community,
with the aim of developing efficient methods for the creation of meshes. Volumetric
methods [64,65], mesh stitching [66,67], region-growing techniques [68,69], and
sculpting-based methods [70] are of utmost importance in this field.
Laser triangulators. Strengths: relative simplicity; performance generally
independent of ambient light; high data acquisition rate. Weaknesses: safety
constraints associated with the use of a laser source; limited range and measurement
volume; missing data in correspondence with occlusions and shadows; cost.
Structured light. Strengths: high data acquisition rate; intermediate measurement
volume. Weaknesses: safety constraints, if laser based; computationally
middle-complex; missing data in correspondence with occlusions and shadows;
performance generally dependent on ambient light; cost.
Stereo vision. Strengths: simple and inexpensive; high accuracy on well-defined
targets. Weaknesses: computationally demanding; sparse data covering; limited to
well-defined scenes; low data acquisition rate.
Photogrammetry. Strengths: high accuracy; simple. Weaknesses: cost.
Time-of-flight. Strengths: medium-to-large measurement range; good data
acquisition rate; performance generally independent of ambient light. Weaknesses:
accuracy is inferior to triangulation at close ranges.
Interferometry. Strengths: sub-micron accuracy. Weaknesses: measurement
capability limited to quasi-flat surfaces; micro-ranges.
Moiré fringe range contours. Strengths: simple and low cost. Weaknesses: short
ranges; limited applicability in the industrial field.
Table 2. Comparison of optical range imaging techniques
The acquisition of reflectance data is required for the correct visualization of the 3D
models. Typical applications are the construction of 3D models for virtual reality,
animation, and cultural heritage. The research developed in this area has led to
approaches based either on parametric reflectance models or on a set of colour images
of the object. Excellent reviews of these methods are given in the references.

There are a number of commercial packages available for the manipulation of 3D
data. Examples are PolyWorks (InnovMetric, Inc., Canada), Geomagic (Raindrop
Geomagic, Inc., USA), Rapidform (Inus Technology, Inc. & Rapidform, Inc.),
Rhinoceros (Rhino3D Inc., USA) and Surfacer (Powershape, Inc., UK). They provide
view importing, editing, alignment and fusion. Specific modules allow mesh editing,
compression, rendering, visualization and inspection. Some of them provide the tools
for the construction of the CAD model from the mesh. It is worth noting that, in
general, the operations are performed in a semi-automatic way, i.e., the operator skill
is crucial to optimize the various steps, and to keep under control the influence of the
alignment errors and of the editing operations on the 3D models.


CHAPTER 2
SENSORS FOR 3D IMAGING
2.1 COLOUR 3D IMAGING TECHNOLOGY
Machine vision involves the analysis of the properties of the luminous flux
reflected or radiated by objects. To recover the geometrical structure of these
objects, either to recognize them or to measure their dimensions, two basic vision
strategies are available.
Passive vision attempts to analyze the structure of the scene under ambient
light. Stereoscopic vision is a passive optical technique: two or more digital images
are taken from known locations, and the images are then processed to find the
correlations between them. As soon as matching points are identified, the geometry
can be computed.
Active vision attempts to reduce the ambiguity of scene analysis by structuring
the way in which images are formed. Sensors that capitalize on active vision can
resolve most of the ambiguities found with two-dimensional imaging systems.
Lidar-based or triangulation-based laser range cameras are examples of active vision
techniques. One digital 3D imaging system based on optical triangulation was
developed and demonstrated.
2.2 AUTO-SYNCHRONIZED SCANNER
The auto-synchronized scanner, depicted schematically in Figure 2.1, can
provide registered range and colour data of visible surfaces. A 3D surface map is
captured by scanning a laser spot onto a scene, collecting the reflected laser light, and
finally focusing the beam onto a linear laser spot sensor. Geometric and photometric
corrections of the raw data give two images in perfect registration: one with x, y, z
coordinates and a second with reflectance data. A laser beam composed of multiple
visible wavelengths is used for the purpose of measuring the colour map of the scene
(reflectance map).

Fig 2.1 Auto Synchronized Scanner

2.3 SYNCHRONIZATION CIRCUIT BASED UPON DUAL PHOTOCELLS


This sensor ensures the stability and the repeatability of range measurements
in environments with varying temperature. Discrete implementations of the so-called
synchronization circuits have posed many problems in the past; a monolithic version
of an improved circuit has been built to alleviate those problems.
2.4 LASER SPOT POSITION MEASUREMENT SENSORS
High-resolution 3D images can be acquired using laser-based vision systems.
With this approach, the 3D information becomes relatively insensitive to background
illumination and surface texture. Complete images of visible surfaces that are rather
featureless to the human eye or a video camera can be generated.

2.5 POSITION SENSITIVE DETECTORS


Many devices have been built or considered in the past for measuring the
position of a laser spot efficiently. One method of position detection uses a video
camera to capture an image of an object electronically; image processing techniques
are then used to determine the location of the object. For situations requiring the
location of a light source on a plane, a position sensitive detector (PSD) offers the
potential for better resolution at a lower system cost.
The PSD is a precision semiconductor optical sensor which produces output
currents related to the centre of mass of the light incident on the surface of the device.
While several design variations exist for PSDs, the basic device can be described as a
large-area silicon p-i-n photodetector. The detectors are available in single-axis and
dual-axis models.
2.6 DUAL-AXIS PSD
This particular PSD is a five-terminal device bounded by four collection
surfaces; one terminal is connected to each collection surface and one provides a
common return. The photocurrent generated by light falling on the active area of the
PSD is collected by these four perimeter electrodes.


Fig 2.2 A typical dual-axis PSD


The amount of current flowing between each perimeter terminal and the
common return is related to the proximity of the centre of mass of the incident light
spot to each collection surface. The difference between the up current and the down
current is proportional to the Y-axis position of the light spot; similarly, the right
current minus the left current gives the X-axis position. The designations up, down,
right and left are arbitrary; the device may be operated in any relative spatial
orientation.
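A minimal sketch of the position computation just described follows. The per-axis normalization
by the sum of the two opposing currents is an assumption introduced here so that the reading does not
depend on the total light intensity; a specific device may prescribe a different normalization.

    def psd_xy(i_up, i_down, i_left, i_right, half_size_mm):
        """Light-spot position on a dual-axis PSD from the four electrode
        currents; designations up/down/left/right are arbitrary."""
        x = (i_right - i_left) / (i_right + i_left) * half_size_mm
        y = (i_up - i_down) / (i_up + i_down) * half_size_mm
        total = i_up + i_down + i_left + i_right   # useful for gain control
        return x, y, total

    # Example with made-up currents on a 4 mm x 4 mm active area
    print(psd_xy(1.2e-6, 0.8e-6, 0.9e-6, 1.1e-6, half_size_mm=2.0))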


CHAPTER 3
LASER SENSORS FOR 3D IMAGING
The state of the art in laser spot position sensing methods can be divided
into two broad classes according to the way the spot position is sensed: continuous
response position sensitive detectors (CRPSD) and discrete response position
sensitive detectors (DRPSD).

3.1 CONTINUOUS RESPONSE POSITION SENSITIVE DETECTORS (CRPSD)


The category CRPSD includes the lateral effect photodiode.

Fig3.1 Basic structure of a single-axis lateral effect photodiode


Figure 3.1 illustrates the basic structure of a p-n type single-axis lateral effect
photodiode. Carriers produced by light impinging on the device are separated in the
depletion region and distributed to the two sensing electrodes according to Ohm's
law. Assuming equal impedances, the electrode that is farthest from the centroid of
the light distribution collects the least current. The normalized position of the
centroid, insensitive to fluctuations of the light intensity, is given by
P = (I2 - I1)/(I1 + I2).
The actual position on the detector is found by multiplying P by d/2, where d is
the distance between the two sensing electrodes. A CRPSD provides the centroid of
the light distribution with a very fast response time (in the order of 10 MHz). Theory
predicts that a CRPSD provides very precise measurement of the centroid versus a
DRPSD. By precision, we mean measurement uncertainty.
It depends among other things on the signal to noise ratio and the quantization
noise. In practice, precision is important but accuracy is even more important. A
CRPSD is in fact a good estimator of the central location of a light distribution.

Fig 3.2 Colour 3D imaging technology: triangulation technique


3.2 DISCRETE RESPONSE POSITION SENSITIVE DETECTORS (DRPSD)
DRPSD, on the other hand, comprise detectors such as Charge Coupled
Devices (CCD) and arrays of photodiodes equipped with a multiplexer for sequential
reading. They are slower, because all the photo-detectors have to be read sequentially
prior to the measurement of the location of the peak of the light distribution. DRPSDs
are very accurate, because of the knowledge of the full distribution, but slow.
Obviously, not all photo-sensors contribute to the computation of the peak: what is
required for the measurement of the light distribution peak is only a small portion of
the total array. Once the pertinent light distribution (after windowing around an
estimate of the peak) is available, one can compute the location of the desired peak
very accurately.

Fig 3.3 In a monochrome system, the incoming beam is split into two components

Fig 3.4 Artistic view of a smart sensor with colour capabilities


Figure 3.4 shows schematically the new smart position sensor for light spot
measurement in the context of 3D and colour measurement. In a monochrome range
camera, a portion of the reflected radiation entering the system is split into two
beams (Figure 3.3). One portion is directed to a CRPSD that determines the location
of the best window and sends that information to the DRPSD.
In order to measure colour information, a different optical element, e.g., a
diffractive optical element, is used to split the returned beam into four components
(Figure 3.4). The white zero-order component is directed to the DRPSD, while the
RGB first-order components are directed onto three CRPSDs, which are used for
colour detection. The CRPSDs are also used to find the centroid of the light
distribution impinging on them and to estimate the total light intensity. The centroid
is computed on chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2),
where I1 and I2 are the currents generated by that type of sensor.
The weighted centroid value is fed to a control unit that selects a sub-set
(window) of contiguous photo-detectors on the DRPSD. That sub-set is located
around the estimate of the centroid supplied by the CRPSD. Then the best algorithms
for peak extraction can be applied to the portion of interest.
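The whole measurement strategy can be sketched in a few lines: the coarse CRPSD centroid
selects the readout window on the DRPSD, and a sub-pixel peak is then extracted inside it, so that stray
light far from the window cannot bias the result. The 16-pixel window matches the prototype described
in the next chapter; the three-point parabolic refinement is one possible choice of peak-extraction
algorithm, assumed here for illustration.

    import numpy as np

    def hybrid_peak(drpsd_pixels, centroid_estimate, window=16):
        """Peak location on the DRPSD, guided by the CRPSD centroid.

        drpsd_pixels:      1-D array with all photo-detector readings
        centroid_estimate: coarse spot position (pixels) from the CRPSD
        """
        n = len(drpsd_pixels)
        start = int(np.clip(round(centroid_estimate) - window // 2, 0, n - window))
        roi = drpsd_pixels[start:start + window]   # only this window is read
        k = int(np.argmax(roi))
        if 0 < k < window - 1:                     # parabolic sub-pixel fit
            a, b, c = roi[k - 1], roi[k], roi[k + 1]
            denom = a - 2 * b + c
            if denom != 0:
                k = k + 0.5 * (a - c) / denom
        return start + k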


CHAPTER 4
PROPOSED SENSOR: COLORANGE
Many devices have been built or considered in the past for measuring the
position of a light beam. Among those, one finds continuous response position
sensitive detectors (CRPSD) and discrete response position sensitive detectors
(DRPSD). The category CRPSD includes lateral effect photodiodes and geometrically
shaped photodiodes (wedges or segmented).
DRPSD, on the other hand, comprise detectors such as Charge Coupled
Devices (CCD) and arrays (1-D or 2-D) of photodiodes equipped with a multiplexer
for sequential reading. One cannot achieve the high measurement speed found with
continuous response position sensitive detectors and keep, at the same time, the
accuracy of discrete response position sensitive detectors.

Fig 4.1 A typical situation where stray light blurs the measurement of the real but much
narrower peak.
Figure 4.1 illustrates the behaviour of the two basic types of devices that have been
proposed in the literature. A CRPSD provides the centroid of the light distribution
with a very fast response time (of the order of 10 MHz) [20]. On the other hand,
DRPSD are slower, because all the photo-detectors have to be read sequentially prior
to the measurement of the location of the real peak of the light distribution.
People try to cope with the detector that they have chosen; in other words, they
pick a detector for a particular application and accept the tradeoffs inherent in their
choice of a position sensor. Consider the situation depicted in Figure 4.1: a CRPSD
would provide A as an answer, whereas a DRPSD can provide B, the desired
response. This situation occurs frequently in real applications.
The elimination of all stray light in an optical system requires sophisticated
techniques that increase the cost of the system. Also, in some applications,
background illumination cannot be completely eliminated, even with optical light
filters. We propose to use the best of both worlds. Theory predicts that a CRPSD
provides a more precise measurement of the centroid than a DRPSD (because of its
higher spatial resolution).
By precision, we mean measurement uncertainty. It depends, among other
things, on the signal-to-noise ratio, the quantization noise, and the sampling noise. In
practice, precision is important, but accuracy is even more important; accuracy is
understood as closeness to the true value of a quantity. A CRPSD is in fact a good
estimator of the central location of a light distribution. DRPSD are very accurate,
because of the knowledge of the distribution.
Their slow speed is due to the fact that all the pixels of the array have to be
read, but they don't all contribute to the computation of the peak. In fact, what is
required for the measurement of the light distribution peak is only a small portion.

4.1 ARCHITECTURE
An object is illuminated by a collimated RGB laser spot, and a portion of the
reflected radiation entering the system is split into four components by a diffractive
optical element, as shown in Figure 4.2. The white zero-order component is directed
to the DRPSD, while the RGB first-order components are directed onto three
CRPSDs,
which are used for colour detection. The CRPSDs are also used to find the centroid
of the light distribution impinging on them and to estimate the total light intensity.
The centroid is computed on chip with the well-known current-ratio method, i.e.,
(I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor.
The weighted centroid value is fed to a control unit that selects a sub-set (window) of
contiguous photo-detectors on the DRPSD. That sub-set is located around the
estimate of the centroid supplied by the CRPSD. Then, the best algorithms for peak
extraction can be applied to the portion of interest.

Fig 4.2 The proposed architecture of the Colorange sensor


4.2 32-PIXEL PROTOTYPE CHIP

Figure 4.3 shows the architecture and preliminary experimental results of a first
prototype chip of a DRPSD with a selectable readout window. This is the first block
of a more complex chip that will include all the components.

Fig 4.3 Block diagram of the 32-pixel prototype array.


The prototype chip consists of an array of 32 pixels with related readout
channels and has been fabricated using a 0.mm commercial CMOS process. The
novelties implemented consist of a variable gain of the readout channels and a
selectable readout window of 16 contiguous pixels. Both features are necessary to
comply with the requirements of 3D single laser spot sensors, i.e., a linear dynamic
range of at least 12 bits and a high 3D data throughput. In the prototype, many of the
signals which, in the final system, are supposed to be generated by the CRPSDs are
now generated by means of external circuitry. The large dimensions of the pixels are
required, on the one hand, to cope with speckle noise and, on the other hand, to
facilitate system alignment. Each pixel is provided with its own readout channel for
parallel reading. The channel contains a charge amplifier and a correlated double
sampling
circuit (CDS). To span 12 bits of dynamic range, the integrating capacitor can assume
five different values. In the prototype chip, the proper integrating capacitor value is
selected by means of external switches C0-C4. In the final sensor, however, the
proper value will be set automatically by on-chip circuitry on the basis of the total
light intensity as calculated by the CRPSDs.
During normal operation, all 32 pixels are first reset at their bias value and
then left to integrate the light for a period of 1 ms. Within this time the CRPSDs and
an external processing unit estimate both the spot position and its total intensity, and
those parameters are fed to the window selection logic. After that, the 16 contiguous
pixels addressed by the window selection logic are read out in 5 ms, for a total frame
time of 6 ms. Future sensors will operate at full speed, i.e., an order of magnitude
faster. The window selection logic, LOGIC_A, receives the address of the central
pixel of the 16 pixels and calculates the addresses of the starting and ending pixels.
The analog value at the output of each charge amplifier within the addressed window
is sequentially put on the bit line by a decoding logic, DECODER, and read by the
video amplifier. LOGIC_A also generates synchronization and end-of-frame signals,
which are used by the external processing units. LOGIC_B, instead, is devoted to the
generation of the logic signals that drive both the charge amplifier and the CDS
blocks. To add flexibility, the integration time can also be changed by means of the
external switches.
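The two pieces of control logic lend themselves to a compact behavioural sketch: the window
address computation performed by LOGIC_A, and the automatic capacitor selection planned for the
final sensor. Everything below is a behavioural model; the capacitor values and thresholds are
assumptions, not the actual chip parameters.

    def select_window(center_addr, n_pixels=32, window=16):
        """Mimic LOGIC_A: from the address of the central pixel, compute the
        start and end addresses of the readout window, clamped to the array."""
        start = min(max(center_addr - window // 2, 0), n_pixels - window)
        return start, start + window - 1

    def select_capacitor(total_intensity, full_scale):
        """Mimic the planned automatic gain setting: pick one of the five
        integrating capacitors C0..C4 so that the charge amplifier stays in
        its linear range (assumed values in pF, factor-4 threshold steps)."""
        caps_pf = [0.1, 0.4, 1.6, 6.4, 25.6]
        level = total_intensity / full_scale
        for i, c in enumerate(caps_pf):
            if level <= 4.0 ** (i + 1 - len(caps_pf)):
                return i, c
        return len(caps_pf) - 1, caps_pf[-1]

    print(select_window(center_addr=3))         # (0, 15): clamped at the edge
    print(select_capacitor(0.2, full_scale=1))  # -> (3, 6.4)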


CHAPTER 5
FEATURES
5.1 ADVANTAGES

o Reduced size and cost.
o Better resolution at a lower system cost.
o High reliability, as required for high-accuracy 3D vision systems.
o Complete images of visible surfaces that are rather featureless to the human eye
or a video camera can be generated.

5.2 DISADVANTAGES

o The elimination of all stray light in an optical system requires sophisticated
techniques.

5.3 APPLICATIONS

o Intelligent digitizers capable of measuring colour and 3D shape accurately and
simultaneously.
o Development of hand-held 3D cameras.
o Multi-resolution random-access laser scanners for fast search and tracking of 3D
features.


CHAPTER 6
CONCLUSION
6.1 CONCLUSION
The results obtained so far have shown that optical sensors have reached a
high level of development and reliability that make them suited for high-accuracy 3D
vision systems. The availability of standard fabrication technologies and the acquired
know-how in the design techniques allow the implementation of optical sensors that
are application specific: Opto-ASICs.
The trend shows that the use of low-cost CMOS technology leads to
competitive optical sensors. Furthermore, post-processing modules, such as
anti-reflective coating deposition and RGB filter deposition to enhance sensitivity
and to allow colour sensing, are at the final certification stage and will soon be
available in standard fabrication technologies. The work on the Colorange sensor is
being finalized, and work has started on a new improved architecture.

6.2 FUTURE SCOPE


Anti-reflective coating deposition and RGB filter deposition can be used to
enhance sensitivity and to allow colour sensing.


REFERENCES

[1] L. Gonzo, A. Simoni, A. Gottardi, D. Stoppa, J.-A. Beraldin, "Sensors optimized for 3D
digitization", IEEE Transactions on Instrumentation and Measurement, vol. 52, no. 3, June 2003,
pp. 903-908.
[2] P. Schafer, R. D. Williams, G. K. Davis, R. A. Ross, "Accuracy of position detection using a
position sensitive detector", IEEE Transactions on Instrumentation and Measurement, vol. 47,
no. 4, August 1998, pp. 914-918.
[3] J.-A. Beraldin, "Design of Bessel type pre-amplifiers for lateral effect photodiodes",
International Journal of Electronics, vol. 67, pp. 591-615, 1989.
[4] A. M. Dhake, Television and Video Engineering, Tata McGraw-Hill.
[5] X. Arreguit, F. A. van Schaik, F. V. Bauduin, M. Bidiville, E. Raeber, "A CMOS motion
detector system for pointing devices", IEEE Journal of Solid-State Circuits, vol. 31,
pp. 1916-1921, December 1996.
[6] P. Aubert, H. J. Oguey, R. Vuilleumier, "Monolithic optical position encoder with on-chip
photodiodes", IEEE Journal of Solid-State Circuits, vol. 23, pp. 465-473, April 1988.
[7] K. Thyagarajan, A. K. Ghatak, Lasers: Theory and Applications, Plenum Press.
[8] N. H. E. Weste, K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective,
Low Pri
