Vision Systems: Applications
Edited by Goro Obinata and Ashish Dutta
I-TECH Education and Publishing
Published by I-Tech Education and Publishing, Vienna, Austria
Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained inside. After this work has been published by Advanced Robotic Systems International, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.
© 2007 I-Tech Education and Publishing
A catalog record for this book is available from the Austrian Library
Vision Systems: Applications, Edited by Goro Obinata and Ashish Dutta
p. cm.
ISBN 978-3-902613-01-1
1. Vision Systems 2. Applications 3. Obinata & Dutta
Preface
Computer vision is the most important key to developing autonomous systems that navigate and interact with the environment. It also leads us to marvel at the functioning of our own vision system. In this book we have collected the latest applications of vision research from around the world. It contains both conventional research areas, like mobile robot navigation and map building, and more recent applications, such as micro vision.
The first seven chapters cover newer applications of vision, such as micro vision, grasping using vision, behavior-based perception, inspection of railways, and humanitarian demining. The later chapters deal with applications of vision in mobile robot navigation, camera calibration, object detection in visual search, map building, etc.
We would like to thank all the authors for submitting their chapters and the anonymous reviewers for their excellent work.
Sincere thanks are also due to the editorial members of Advanced Robotic Systems publications for all their help during the various stages of review, correspondence with authors, and publication.
We hope that you will enjoy reading this book and that it will serve both as a reference and as study material.
Editors
Goro Obinata
Centre for Cooperative Research in Advanced Science and Technology
Nagoya University, Japan
Ashish Dutta
Dept of Mechanical Science and Engineering
Nagoya University, Japan
Contents
Preface V
1 Micro Vision 001
Kohtaro Ohba and Kenichi Ohara
2 Active Vision based Regrasp Planning for
Capture of a Deforming Object using Genetic Algorithms 023
Ashish Dutta, Goro Obinata and Shota Terachi
3 Multi-Focal Visual Servoing Strategies 033
Kolja Kuehnlenz and Martin Buss
4 Grasping Points Determination Using Visual Features 049
Madjid Boudaba, Alicia Casals and Heinz Woern
5 Behavior-Based Perception for Soccer Robots 065
Floris Mantz and Pieter Jonker
6 A Real-Time Framework for the Vision
Subsystem in Autonomous Mobile Robots 083
Paulo Pedreiras, Filipe Teixeira, Nelson Ferreira and Luis Almeida
7 Extraction of Roads From Outdoor Images 101
Alejandro Forero Guzman and Carlos Parra
8 ViSyR: a Vision System for Real-Time Infrastructure Inspection 113
Francescomaria Marino and Ettore Stella
9 Bearing-Only Vision SLAM with Distinguishable Image Features 145
Patric Jensfelt, Danica Kragic and John Folkesson
10 An Effective 3D Target Recognition
Imitating Robust Methods of the Human Visual System 157
Sungho Kim and In So Kweon
11 3D Cameras: 3D Computer Vision of Wide Scope 181
Stefan May, Kai Pervoelz and Hartmut Surmann
12 A Visual Based Extended Monte Carlo
Localization for Autonomous Mobile Robots 203
Wen Shang and Dong Sun
13 Optical Correlator based Optical Flow
Processor for Real Time Visual Navigation 223
Valerij Tchernykh, Martin Beck and Klaus Janschek
14 Simulation of Visual Servoing Control and Performance
Tests of 6R Robot Using Image-Based and Position-Based Approaches 237
M H Korayem and F S Heidari
15 Image Magnification based on the Human Visual Processing 263
Sung-Kwan Je, Kwang-Baek Kim, Jae-Hyun Cho and Doo-Heon Song
16 Methods of the Definition Analysis of Fine Details of Images 281
S.V Sai
17 A Practical Toolbox for Calibrating Omnidirectional Cameras 297
Davide Scaramuzza and Roland Siegwart
18 Dynamic 3D-Vision 311
K.-D Kuhnert , M Langer, M Stommel and A Kolb
19 Bearing-only Simultaneous Localization and
Mapping for Vision-Based Mobile Robots 335
Henry Huang, Frederic Maire and Narongdech Keeratipranon
20 Object Recognition for Obstacles-free
Trajectories Applied to Navigation Control 361
W Medina-Meléndez, L Fermín, J Cappelletto, P Estévez,
G Fernández-López and J C Grieco
21 Omnidirectional Vision-Based Control From Homography 387
Youcef Mezouar, Hicham Hadj Abdelkader and Philippe Martinet
22 Industrial Vision Systems, Real Time and
Demanding Environment: a Working Case for Quality Control 407
J.C Rodríguez-Rodríguez, A Quesada-Arencibia and R Moreno-Díaz jr
23 New Types of Keypoints for
Detecting Known Objects in Visual Search Tasks 423
Andrzej Śluzek and Md Saiful Islam
24 Biologically Inspired Vision Architectures:
a Software/Hardware Perspective 443
Francesco S Fabiano, Antonio Gentile, Marco La Cascia and Roberto Pirrone
25 Robot Vision in the Language of Geometric Algebra 459
Gerald Sommer and Christian Gebken
26 Algebraic Reconstruction and Post-processing in
Incomplete Data Computed Tomography: From X-rays to Laser Beams 487
Alexander B Konovalov, Dmitry V Mogilenskikh,
Vitaly V Vlasov and Andrey N Kiselev
27 AMR Vision System for Perception,
Job Detection and Identification in Manufacturing 521
Sarbari Datta and Ranjit Ray
28 Symmetry Signatures for Image-Based Applications in Robotics 541
Kai Huebner and Jianwei Zhang
29 Stereo Vision Based SLAM Issues and Solutions 565
D.C Herath, K.R.S Kodagoda and G Dissanayake
30 Shortest Path Homography-Based
Visual Control for Differential Drive Robots 583
G López-Nicolás, C Sagüés and J.J Guerrero
31 Correlation Error Reduction of Images in Stereo
Vision with Fuzzy Method and its Application on Cartesian Robot 597
Mehdi Ghayoumi and Mohammad Shayganfar
1 Micro Vision
Kohtaro Ohba and Kenichi Ohara
National Institute of Advanced Industrial Science and Technology (AIST)
Most previous studies have focused on micro manipulation, but not on micro observation and measurement, which are nevertheless essential for operation. In fact, micro operation involves scale factors: in micro environments, van der Waals forces become larger than gravitational forces. Furthermore, micro vision has its own "optical scale factors", such as the small depth of a focus of the microscope, which prevents the operator from perceiving the micro environment intuitively.
For example, if the microscope is focused on an object, the actuator hands cannot be observed in the same view at the same time; conversely, if the focus is on the actuator hands, the object cannot be observed. Figure 2 shows a simple 3D construction example: a micro scarecrow 20 µm in height, assembled from six 4 µm glass balls and one glass bar on a micro wafer. Figure 3 shows two typical microscopic views while the second glass ball is placed onto the first one. In Fig. 3(a) the first glass ball is in focus, but the gripper, at almost the same position, is blurred because of the difference in depth; in Fig. 3(b) the gripper is in focus. The operator therefore has to keep changing the focal distance of the microscope in order to observe the scene while simultaneously operating the micro actuator.
Figure 1 Micro Factory
Figure 2 Micro Scarecrow
Figure 3 Typical Microscopic Images with Micro Manipulation: (a) glass ball in focus; (b) gripper in focus
Despite the large efforts devoted to micro vision for micro operation, there has been little computer vision research aimed specifically at micro environments.
Micro vision systems can be categorized into two areas:
1. Micro measurement techniques, which measure the positions of micro objects for micro operation.
2. Micro observation techniques, which present a 3D image to the human operator so that the objects of interest can be seen.
These two areas are summarized in the following two sections.
2 Measurement Method for Micro Objects in Micro Environment
In the field of computer vision there are several 3D modelling criteria, the so-called "shape from X" problem. Most of these criteria fall into a few well-known categories, and each method has particular characteristics in terms of algorithm, speed, range, and resolution of measurement.
In macro environments, triangulation is most commonly applied in robotics because it is simple to use. However, it requires at least one pair of cameras or laser devices, calibration of both hardware and software, and a large computational cost for correlating the two images. Furthermore, in micro environments the small depth of field restricts the measurement range to the region where the in-focus depth ranges of the two cameras intersect.
Generally speaking, the small depth of a focus is one of the drawbacks of micro operation with a microscope, as mentioned before. As a matter of fact, however, this small depth of a focus is a big benefit for a vision system, because it yields good depth resolution under the "shape from focus/defocus" criteria. In this section, a measurement method based on these characteristics of the micro environment is discussed first. Then, the real-time micro VR camera system is reviewed with respect to its two main factors.
2.1 Optics
The "depth from focus" criterion is based on simple optical theory, centred on the depth of the focal range in the optical characteristics. Theoretically, the "depth of a focus" and the "depth of a photographic subject" are different, as shown in Fig. 4. In this section, the optical criteria are briefly reviewed.
The fundamental optical equation

$$\frac{1}{X} + \frac{1}{x} = \frac{1}{f} \qquad (1)$$

is well known as the Gaussian lens law. In this equation, X, x, and f denote the object distance, the image distance, and the focal length of the lens, respectively; equation (1) holds when the image is in focus on the focal plane, as shown in Fig. 4(a).
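As a quick numerical check of equation (1), with purely illustrative values (not from the original text): a lens with focal length $f = 10$ mm imaging an object at $X = 50$ mm focuses at

$$\frac{1}{x} = \frac{1}{f} - \frac{1}{X} = \frac{1}{10} - \frac{1}{50} = \frac{4}{50}, \qquad x = 12.5\ \text{mm}.$$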
In equations (2) and (3), which give the depth of a focus, D and D' denote the diameters of the lens and the iris, respectively. The depth of a focus obviously depends on the radius δ of the circle of confusion, which is determined by the resolution of the camera's sensor device. The depth of a photographic subject depends on the distance between the object and the lens, as shown in Fig. 4(b), and is the object-side range over which sharpness holds on the focal plane:
$$\Delta X = \frac{2\,X f D\,\delta\,(X - f)}{D^{2} f^{2} - \delta^{2}\,(X - f)^{2}} \qquad (4)$$
In this equation, the depth of a photographic subject obviously depends on the focal length f and the distance X between the object and the lens. Equation (2) or (3) determines the resolution of the object distance under the depth-of-focus criterion, while equation (4) is used in the calibration between the object distance and the image-plane distance.
Figure 4 Depth of a focus and Depth of Photographic Subject
Figure 5 Concept Image of how to obtain the all-in-focus image
2.2 Depth from Focus Criteria
Figure 5 shows the concept of the "depth from focus" criterion based on the optics above. Each image point obeys the optical equation (1). If a number of images are captured at different image distances, the optimal in-focus point at each image location can be determined from the optical criterion. The 3D structure of the object can then be recovered through equation (1) from the image distance x or the focal distance f at each pixel. In addition, a synthesized in-focus image can be obtained by combining the in-focus areas or points; we call this the "all-in-focus image".
This criterion is quite simple but useful, especially in microscopic environments. However, its poor resolution at long distances (more than 2 m) is well known, because of the large depth of a photographic subject in an ordinary optical configuration at large object distances.
Furthermore, the "depth from defocus" criterion is well known for estimating 3D structure from several blurred images together with an optical model. It does not require changing the focal distance during 3D reconstruction, but it cannot produce the all-in-focus image, which may be important for 3D virtual environments. This section therefore concentrates on the depth-from-focus theory.
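As a minimal sketch of this pipeline (not from the chapter: the function name and the simple 3×3 local-contrast focus measure are illustrative stand-ins for the IQM defined in the next section), the per-pixel selection of the sharpest frame can be written as follows:

```python
import numpy as np

def depth_from_focus(stack, focus_positions):
    """Sketch of depth from focus: pick the sharpest frame per pixel.

    stack: (N, H, W) array of grayscale images taken at N focus settings.
    focus_positions: the N focal settings (e.g. object distances) used.
    Returns (depth_map, all_in_focus), both of shape (H, W).
    """
    focus_positions = np.asarray(focus_positions, dtype=np.float64)
    N, H, W = stack.shape
    sharpness = np.empty((N, H, W), dtype=np.float64)
    for k in range(N):
        img = stack[k].astype(np.float64)
        # 3x3 local-contrast focus measure: |pixel - local mean|
        # (a stand-in for the chapter's IQM of equation (5)).
        pad = np.pad(img, 1, mode="edge")
        local_mean = sum(
            pad[dy:dy + H, dx:dx + W] for dy in range(3) for dx in range(3)
        ) / 9.0
        sharpness[k] = np.abs(img - local_mean)

    best = sharpness.argmax(axis=0)          # sharpest frame index per pixel
    depth_map = focus_positions[best]        # depth from the in-focus setting
    rows, cols = np.indices((H, W))
    all_in_focus = stack[best, rows, cols]   # each pixel from its sharpest frame
    return depth_map, all_in_focus
```

With, say, 21 frames over a 35 mm focus range (as in the first prototype below), this selection rule directly yields the roughly 1.67 mm depth quantization reported in Section 2.4.1.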
2.3 Image Quality Measurement
In the previous section we saw how the optimal focal distance for a particular object follows from the optical criteria. In this section, the criterion for deciding the optimal focal distance by image processing is reviewed.
To decide the optimal focal distance from images, the Image Quality Measure (IQM), which detects the in-focus area in an image, is defined by the following equation:

$$IQM = \frac{1}{D} \sum_{x = x_i}^{x_f} \sum_{y = y_i}^{y_f} \left[\, \sum_{p = -L_c}^{L_c} \sum_{q = -L_r}^{L_r} \bigl|\, I(x, y) - I(x + p,\; y + q) \,\bigr| \right] \qquad (5)$$

where $I(x, y)$ is the image intensity at pixel $(x, y)$, the region from $(x_i, y_i)$ to $(x_f, y_f)$ is the analysis area, and $L_c$ and $L_r$ set the evaluation window sizes for the frequency and the smoothing, respectively [7]. $D$ is the total number of pixels, which normalizes the image quality measure by the number of pixels in the area.
As the focus value is varied, once a peak of the IQM value is detected at a particular pixel position, the optimal in-focus point at that pixel is easily defined. The focus value and the corresponding local intensity value at the peak then give the depth map and the all-in-focus image, respectively.
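A direct transcription of equation (5) might look as follows; this is a sketch written for clarity rather than speed (the quadruple loop is naive), and it assumes the analysis area stays at least $L_c$ and $L_r$ pixels away from the image border. The variable names mirror the symbols of equation (5).

```python
import numpy as np

def iqm(image, x_range, y_range, Lc=1, Lr=1):
    """Image Quality Measure of equation (5), written out literally.

    image: 2D grayscale array (x is treated as the row index here);
    x_range, y_range: inclusive (start, end) bounds of the analysis
    area, which must stay Lc (resp. Lr) pixels inside the image;
    Lc, Lr: half-sizes of the evaluation window.
    """
    I = image.astype(np.float64)
    (xi, xf), (yi, yf) = x_range, y_range
    total = 0.0
    for x in range(xi, xf + 1):
        for y in range(yi, yf + 1):
            for p in range(-Lc, Lc + 1):
                for q in range(-Lr, Lr + 1):
                    total += abs(I[x, y] - I[x + p, y + q])
    D = (xf - xi + 1) * (yf - yi + 1)  # number of pixels in the area
    return total / D
```

Replacing the simple local-contrast measure in the earlier depth_from_focus sketch with this per-pixel IQM (using a one-pixel analysis area around each pixel) reproduces the peak-selection rule described above.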
2.4 Real-Time Micro VR Camera System
To realize the real-time micro VR camera system with the depth-from-focus criterion above, two big issues have to be solved: high-speed image capture and processing, and high-speed control of the focal distance.
Unfortunately, most vision systems are based on the video frame rate of 30 frames/sec. This frame rate is good enough for human vision, but not good enough for a sensor system.
To realize a real-time micro VR camera with the depth-from-focus criterion mentioned before, a high-speed image capture and processing system is required. For example, if eight images are used to obtain the depth map and the all-in-focus image at 30 frames/sec, an image sequence of 240 frames/sec must be captured and processed.
Furthermore, a motor control system could be used to change the focal distance of the microscope, but the bandwidth of such a motor system is not sufficient for the real-time micro VR camera system.
In the next sections, we show some prototypes of the real-time micro VR camera system; finally, the specification of the micro VR camera system as a product is given.
2.4.1 First Prototype
First, we developed the micro VR camera system shown in Fig. 6, using a dynamic focusing lens [8] (Fig. 7) and a smart sensor, an IVP CMOS vision chip (MAPP2200) with a resolution of 256×256 pixels, a column-parallel ADC architecture, and DSP processing. A sample object (Fig. 8) and its images at four particular focal distances are shown in Fig. 9. The objects for the demonstration were constructed in a four-step pyramidal shape, 3 mm to 5 mm in size. In real usage, with objects smaller than 1 mm, the IQM value can be obtained from the original texture of the object without any artificial texture.
Figure 6 Micro VR Camera System
Figure 7 Dynamic Focusing Lens
Figure 8 Sample Object for Evaluation of Micro VR Camera System
Figure 9 Sample of the Single Focus Image
The spatial resolution depends on the optical setting. For this demonstration, the view area is almost a 16 mm square imaged with 256 pixels, so the spatial resolution is 62.5 µm. The depth resolution is 1.67 mm (21 frames over a 35 mm depth range, stepping the input voltage that charges the PZT by 3 V from −30 V to +30 V); it depends directly on the number of input frames over the variable-focus range. The "all-in-focus image" and the micro VR environment obtained from one image sequence are shown in Figs. 10 and 11, respectively. The all-in-focus image gives a clear view of the whole object. However, the depth resolution without any interpolation (Fig. 11) does not seem sufficient. A simple way to increase it is to capture more images at additional focal distances, which in turn raises the computation cost.
(a) Processing Part
The processing time with the IVP chip is almost 2 sec per final VR output. This is because the ADC/processing performance of the MAPP2200 vision chip is not good enough for gray-level intensities; MAPP2200 actually performs very well on binary images, at more than 2000 frames/sec.
(b) Optical Part
Changing the focus with a usual optical configuration is quite difficult to actuate quickly because of its dynamics. We therefore developed a compact, quick-response dynamic focusing lens, consisting of a PZT bimorph actuator and a glass diaphragm, as shown in Fig. 7. This lens can act as a convex or a concave lens depending on the voltage driving the PZT bimorph, and its robustness was evaluated at high frequencies of more than 150 Hz; see [10] for details. We used this lens in combination with a micro zoom lens.
Figure 10 Sample of All-in-Focus image
Figure 11 Sample of the Depth Image for Sample Object with Texture Based on All-in-Focus Image
2.4.2 Second Prototype
This section describes the second prototype of the micro VR camera system.
(a) Processing Part
Recently, large-scale FPGAs (Field Programmable Gate Arrays) have dramatically improved in performance and are widely used because of their programmability. In the second system, shown in Fig. 12, one FPGA (APEX EP20K600E, Altera) and SDRAM on an image-processing test board (iLS-BEV800, INNOTECH Co.) compute the IQM of equation (5) at every pixel of a 512×480 pixel, 8-bit image at 240 Hz. The board has a TMDS (Transition-Minimized Differential Signaling) interface connecting the sensor part and the processing part, as shown in Fig. 13. The image data (512×480) are captured over two parallel interfaces, realizing a high-speed transmission of 60 Mbyte/sec (512×480×240 Hz) from the HSV (high-speed vision) part to the dual-port SDRAM. As a result, the FPGA easily computes the IQM value at 240 Hz, and the total execution uses less than 20% of the FPGA's capacity.
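As a quick arithmetic check of the quoted bandwidth (our own back-of-the-envelope, assuming one byte per 8-bit pixel):

$$512 \times 480 \times 1\ \text{byte} \times 240\ \text{Hz} \approx 5.9 \times 10^{7}\ \text{byte/sec} \approx 60\ \text{Mbyte/sec}.$$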
Figure 12 Second Prototype System