
STEVEN ARROYO RAJOSHI BISWAS PETER HOKANSON HANA WANG

Project Goals
- Create a structured light program that generates a 3D point cloud (x, y, z coordinates)
- Create an embedded system capable of running our structured light program
- Display point cloud results using open-source 3D modeling software

What is structured lighting?


- An encoded set of pixels projected onto a scene
- The deformation created by a surface can be used to calculate depth and surface information of the object
- The output is a 3D point cloud

What does SL Solve?

- Eliminates the need for the correspondence search along scan lines used in stereo
- Handles textureless objects
- Handles occlusions and repetitive patterns
- Higher resolution than the Kinect

Portable Structured Light 3D Scanner

TI DLP Pico Projector


- Resolution: HVGA (480 x 320)
- Brightness: 7 lumens
- Dimensions: 44 x 67 x 14 mm
- Low power consumption
- Sync signal output for the camera
- Allows up to 2400 Hz SL patterns

VGA Camera Board


- Pixels: VGA (640 x 480)
- Hardware-compatible with the BeagleBoard
- Drivers included in the Angstrom distribution
- A 2x telephoto lens is used to help match the projector's focal length

BeagleBoard-xM

Narcissus Tool

http://narcissus.angstrom-distribution.org/ (Angstrom's online image builder)

OpenCV Setup

The pkg-config program provides the compiler and linker flags:

pkg-config --cflags opencv

pkg-config --libs opencv

Build with the native toolchain.
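For example, a single-file native build on the board might look like the following (the source file name scanner.c is illustrative):

gcc $(pkg-config --cflags opencv) scanner.c $(pkg-config --libs opencv) -o scanner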

Demo!

When would you not use SL?


- In bright or optically noisy environments
- With non-Lambertian, reflective, or dark objects

When would you use SL?

3D object scanning for use in:

- Reverse engineering
- Precision shape measurement for production control
- Classification of grinding materials
- Volume measurement
- Forensic science inspections
- Skin surface measurement
- Body shape measurement

Temporally encoded Structured Light

- Temporally encoded pattern: high resolution, but limited to static scenes
- Gray code: low decoding error, reversible

L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002

Simplest Case: Parallel images


- Image planes of the cameras are parallel to each other and to the baseline
- Camera centers are at the same height
- Focal lengths are the same
- Then the epipolar lines fall along the horizontal scan lines of the images

Ashok Veeraraghavan, Lecture 16

Depth from disparity


[Figure: two parallel cameras with centers O and O', focal length f, and baseline B, observing a scene point X at depth z]

Depth is inversely proportional to disparity!
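This follows from similar triangles in the parallel-camera geometry above (the standard derivation, with x and x' the image coordinates of X in the two views; the slide states only the conclusion):

$$
d \;=\; x - x' \;=\; \frac{B\,f}{z}
\qquad\Longrightarrow\qquad
z \;=\; \frac{B\,f}{d}
$$

Since B and f are fixed for a given rig, z is determined up to that constant by 1/d, which is why the decoding slides later report the unscaled depth as the reciprocal of the disparity.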


Ashok Veeraraghavan, Lecture 16

Decoding the Gray Code


- Find the difference between the positive and negative projected patterns
- Apply a threshold to the difference to remove unlit pixels
- Positive values are ones; negative values are zeros
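A minimal per-pixel sketch of this rule in C, assuming the positive and negative captures are 8-bit grayscale (the THRESH value and the function name are illustrative, not from the original code):

```c
#include <stdlib.h>

#define THRESH 20  /* illustrative intensity threshold for "unlit" pixels */

/* Decide one Gray-code bit for a pixel from the positive and negative
 * (inverted) pattern captures. Returns 1 or 0, or -1 if the difference
 * is too small and the pixel should be discarded. */
static int decode_bit(unsigned char pos, unsigned char neg)
{
    int diff = (int)pos - (int)neg;
    if (abs(diff) < THRESH)
        return -1;              /* unlit or ambiguous pixel */
    return diff > 0 ? 1 : 0;    /* positive difference => bit is 1 */
}
```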

Example: A Tale of Two Pixels

Two camera pixels are tracked through the pattern sequence: a yellow pixel at camera column 335 and a red pixel at camera column 392. We also project each pattern's inverse! Each decoded pattern/inverse pair appends one more bit to each pixel's Gray code:

Patterns decoded   Red pixel (col 392)   Yellow pixel (col 335)
1                  0                     1
2                  01                    11
3                  010                   110
4                  0100                  1101
5                  01000                 11010
6                  010000                110100
7                  0100001               1101001
8                  01000011              11010011
9                  010000111             110100111

Decoding the Example

Yellow pixel:
- Gray code: 110100111
- Binary: 100111010
- Projector column: 314
- Camera column: 335
- Disparity: 335 - 314 = 21
- Unscaled results: x = camera column = 335, y = camera row, z = 1/disparity = 0.0476

Red pixel:
- Gray code: 010000111
- Binary: 011111010
- Projector column: 250
- Camera column: 392
- Disparity: 392 - 250 = 142
- Unscaled results: x = camera column = 392, y = camera row, z = 1/disparity = 0.007042
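The Gray-to-binary conversion above is the standard prefix-XOR rule: each binary bit is the XOR of all Gray bits up to and including that position. A small C sketch (function name illustrative) that reproduces the slide's numbers:

```c
#include <stdio.h>

/* Convert an n-bit Gray code (bits[0] is the MSB) to its binary value. */
static unsigned gray_to_binary(const int bits[], int n)
{
    unsigned value = 0;
    int b = 0;                       /* running XOR = current binary bit */
    for (int i = 0; i < n; i++) {
        b ^= bits[i];
        value = (value << 1) | b;
    }
    return value;
}

int main(void)
{
    int yellow[9] = {1,1,0,1,0,0,1,1,1};  /* Gray 110100111 */
    int red[9]    = {0,1,0,0,0,0,1,1,1};  /* Gray 010000111 */
    /* Prints 314 and 250, the projector columns from the slide. */
    printf("%u %u\n", gray_to_binary(yellow, 9), gray_to_binary(red, 9));
    return 0;
}
```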

Process C Implementation for Image Reconstruction


1. Project the 18 patterns (nine Gray-code patterns, each followed by its inverse) and capture the deformed images.
2. Convert the captured images to grayscale.
3. For each pixel, compare its value in the pattern image and in the inverse image: the larger value decodes as 1 and the smaller as 0; if the two values are similar, the pixel is discarded.
4. Convert the Gray code to binary; the resulting decimal value is the projector column number.
5. Calculate disparity and depth, and plot the 3D point cloud (a condensed per-pixel sketch follows below).
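A condensed sketch of the per-pixel loop, combining the helpers sketched earlier (all buffer and function names here are illustrative, not from the team's code):

```c
/* pos[k] / neg[k]: grayscale captures (W*H bytes) of the k-th pattern and
 * its inverse; decode_bit() and gray_to_binary() are the helpers sketched
 * earlier; emit_point() stands in for point-cloud output. */
static void reconstruct(unsigned char *pos[9], unsigned char *neg[9],
                        int W, int H,
                        void (*emit_point)(float x, float y, float z))
{
    for (int row = 0; row < H; row++) {
        for (int col = 0; col < W; col++) {
            int i = row * W + col, bits[9], ok = 1;
            for (int k = 0; k < 9 && ok; k++) {
                bits[k] = decode_bit(pos[k][i], neg[k][i]);
                if (bits[k] < 0) ok = 0;      /* ambiguous: discard pixel */
            }
            if (!ok) continue;
            int disparity = col - (int)gray_to_binary(bits, 9);
            if (disparity <= 0) continue;     /* reject invalid matches */
            /* Unscaled point, as in the worked example: z = 1/disparity */
            emit_point((float)col, (float)row, 1.0f / disparity);
        }
    }
}
```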

Results with OpenGL


Not yet compatible with Angstrom on the BeagleBoard

Results from MeshLab


Issues from Last Time

- The difference in focal length and resolution between the camera and projector may introduce errors in the calculation, skewing the resulting image.
- The homography is currently unknown; therefore, the epipolar lines are not horizontal.
- The camera and projector are not calibrated yet, so the scaling is not accurate without knowledge of the focal lengths, and the images are still distorted.
- Redundant camera pixels in the captured images (due to the higher camera resolution) need to be eliminated.

Issues from Last Time

- Focal length and resolution differences between the camera and projector may skew the resulting image: the 2x telephoto lens cuts the focal-length difference in half, and resolutions are normalized before calculating disparity (one possible normalization scheme is sketched below).
- The homography is unknown, so the epipolar lines are not horizontal: by placing the camera parallel to the projector, we force the epipolar lines to be horizontal.
- The camera and projector are not calibrated, so images are not to scale: the project scope does not include recreating scale models.
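One simple way to normalize the resolutions before the disparity subtraction is to rescale camera columns into projector-column units. This is an assumption: the slides state only that normalization happens, not how.

```c
#define CAM_W  640   /* camera width in pixels (VGA) */
#define PROJ_W 480   /* projector width in pixels (HVGA) */

/* Map a camera column into projector-column units so the disparity
 * subtraction compares like with like. Assumed scheme, for illustration. */
static float normalized_disparity(int cam_col, int proj_col)
{
    return cam_col * ((float)PROJ_W / CAM_W) - (float)proj_col;
}
```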

Live Results!
