
National Kaohsiung University of Science and Technology

Department of Mechanical Engineering, Doctoral Program

Doctoral Dissertation

Development of machine vision systems on object classification and measurement for robot manipulation


Development of machine vision systems on object classification and measurement for robot manipulation

Advisor: Prof. Quang-Cherng Hsu

A Dissertation Submitted to the Department of Mechanical Engineering, National Kaohsiung University of Science and Technology, in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Mechanical Engineering

January 2019
Kaohsiung, Taiwan, Republic of China


Development of machine vision systems on object classification and measurement for robot manipulation

Graduate student: Ngo Ngoc Vu  Advisor: Prof. Quang-Cherng Hsu

Department of Mechanical Engineering, Doctoral Program, National Kaohsiung University of Science and Technology

CHINESE ABSTRACT

This study presents machine vision systems for object classification and measurement for robot manipulation. First, a machine vision system for the automatic classification and measurement of metal parts was developed under different lighting conditions and applied to a robot arm with six degrees of freedom (DOF). To obtain accurate positioning information, the overall image is captured by a CMOS camera mounted above the working platform. The effects of different lighting conditions on the system were also investigated: four different front-lighting configurations were tested, and for each condition, global and local thresholding were used to obtain good image quality. Zhang's method, a linear transformation function, and a quadratic transformation function were used to relate image coordinates to world coordinates.

Experimental results show that object centers obtained under back-lighting are more accurate than under front-lighting, and that the quadratic transformation function is more accurate than the other calibration methods. Using the quadratic transformation function, the maximum positive calibration deviations in the X and Y directions are 0.48 mm and 0.38 mm, and the maximum negative deviations are -0.34 mm and -0.43 mm, respectively.

This study also developed a machine vision system for a six-degree-of-freedom (DOF) robot arm for object color classification and coordinate measurement. The overall image is captured by two cameras mounted above the working platform to obtain accurate positioning information, and the quadratic transformation function and perspective projection method are used in the two-dimensional (2-D) and three-dimensional (3-D) calibration processes to relate image and world coordinates. For 2-D calibration, the maximum positive deviations in the X and Y directions are 1.29 mm and 1.12 mm, and the maximum negative deviations are -1.48 mm and -0.97 mm. For 3-D calibration, the deviations in the X, Y, and Z directions are 0.07, -0.418, and -0.063 mm, respectively. The proposed vision recognition system can classify objects and obtain their 3-D coordinates.

Keywords: machine vision, robot arm, camera calibration, image analysis, object classification, lighting source


Development of machine vision systems on object classification and measurement for robot manipulation

Graduate student: Ngo Ngoc Vu  Advisor: Prof. Quang-Cherng Hsu

Department of Mechanical Engineering, National Kaohsiung University of Science and Technology

ABSTRACT

This research presents the development of machine vision systems for object classification and measurement for robot manipulation. First, a machine vision system for automatic metal part classification and measurement was developed under different lighting conditions and applied to the operation of a robot arm with 6 degrees of freedom (DOF). To obtain accurate positioning information, the overall image is captured by a CMOS camera mounted above the working platform. The effects of back-lighting and front-lighting conditions on the proposed system were investigated. Under front-lighting, four different conditions were tested, and for each condition, global and local threshold operations were used to obtain good image quality. The relationship between the image coordinates and the world coordinates was determined through Zhang's method, a linear transformation, and a quadratic transformation during the calibration process. Experimental results show that in a back-lighting environment the image quality is improved, such that the positions of the centers of objects are more accurate than in a front-lighting environment. According to the calibration results, the quadratic transformation is more accurate than the other methods. From the calibration deviation computed using the quadratic transformation, the maximum positive deviation is 0.48 mm and 0.38 mm in the X and Y directions, respectively, and the maximum negative deviation is -0.34 mm and -0.43 mm. The proposed system is effective, robust, and can be valuable to industry.

Second, a machine vision system for color object classification and measurement with a six-degree-of-freedom (DOF) robot arm was developed. To obtain accurate positioning information, the overall image is captured by two cameras, a C615 and a C525, mounted above the working platform. The relationship between the image coordinates and the world coordinates is established through a calibration procedure: the quadratic transformation and the generalized perspective transformation algorithms were used to transform coordinates in the 2-D and 3-D calibration processes, respectively. According to the calibration results, with 2-D calibration the maximum positive deviation is 1.29 mm and 1.12 mm in the X and Y directions, respectively, and the maximum negative deviation is -1.48 mm and -0.97 mm. With 3-D calibration, the deviation is 0.07 mm, -0.418 mm, and -0.063 mm in the X, Y, and Z directions, respectively. The proposed system can obtain the three-dimensional coordinates of an object and perform automatic classification and assembly operations using data from the visual recognition system.

Keywords: machine vision, robot arm, camera calibration, image analysis, object recognition, lighting source


ACKNOWLEDGMENTS

The fulfillment of over three years of study at National Kaohsiung University of Science and Technology (NKUST) has brought me into closer relations with many enthusiastic people who wholeheartedly devoted their time, energy, and support to help me during my studies. This is my opportunity to acknowledge my great debt of thanks to them.

I wish to express my thanks and gratitude to my academic supervisor, Prof. Dr. Quang-Cherng Hsu, for his continuous guidance, valuable advice, and helpful support during my studies. He has always been supportive of my research work and gave me the freedom to fully explore the different research areas related to my study.

I wish to express my deepest thanks to the Vietnam Ministry of Education and the Taiwan Ministry of Education for giving me a great opportunity and the necessary scholarships to study at NKUST through the VEST500 scholarship, a cooperation between the Vietnamese and Taiwanese governments, and for much enthusiastic help during my time at NKUST. I am also particularly grateful to Thai Nguyen University of Technology (TNUT), which provided me with unflagging encouragement and continuous help and support to complete this course.

My gratitude also goes to all of the teachers, the Dean, and the staff of the Department of Mechanical Engineering at NKUST for their devoted teaching, great help, and thoughtful service during my study.

I would also like to express my sincere gratitude to all of my colleagues at the Precision and Nano Engineering Laboratory (PANEL), Department of Mechanical Engineering, NKUST.

I want to express my sincere thanks to all my Vietnamese friends at NKUST for their helpful sharing and precious help over the past years.

I also wish to express my gratitude to all those who directly or indirectly helped me during my study at NKUST.

Finally, my special thanks go to my dad Ngo The Long and my mom Vu Thi Hai, to my older sister Ngo Thi Phuong, to my adorable wife Duong Thi Huong Lien, and to my two lovely little daughters Ngo Duong Anh Thu and Ngo Phuong Linh, who have been my greatest motivation during my years in Taiwan!


CONTENTS

CHINESE ABSTRACT……….i

ABSTRACT……… ii

ACKNOWLEDGMENTS……… iv

CONTENTS……… vi

LIST OF FIGURES……… xii

LIST OF TABLES……… xvi

NOMENCLATURE……… xvii

Chapter 1 Introduction……….1

1.1 Motivation of the research………1

1.2 Scopes of the research……… 8

1.3 Contributions………9

1.4 Organization of the dissertation……… 9

Chapter 2 Theory of image processing and machine vision………12

2.1 Image processing system………12

2.1.1 Basics in image processing……… 13

2.1.1.1 Pixels……… 13

2.1.1.2 Resolution of image……… 13


2.1.1.3 Gray level……… 14

2.1.1.4 Histogram……… 15

2.1.1.5 Image presentation……….16

2.1.1.6 Color models……… 17

2.1.1.7 Neighbors of pixels………18

2.1.2 Morphological image processing ……… 19

2.1.2.1 Erosion operation……… 19

2.1.2.2 Dilation operation……… 19

2.1.2.3 Opening and closing operations ………20

2.1.3 Blob analysis……… 21

2.1.3.1 Goal of blob analysis ………21

2.1.3.2 Feature extraction ………21

2.1.3.3 Steps to perform blob analysis ………22

2.2 Machine vision system ……….22

2.2.1 Lighting design ……… 23

2.2.2 Lens ……… … 25

2.2.3 Image sensors ……… … 27

Chapter 3 Coordinate calibration methods and camera calibration………….30

3.1 Two-Dimensional coordinate calibration……… 30


3.1.1 Linear transformation ……… 30

3.1.2 Quadratic transformation……….32

3.2 Three-Dimensional coordinate calibration ………36

3.2.1 Stereo imaging ……… 36

3.2.2 The generalized perspective transformation ……… 37

3.2.3 The perspective transformation with camera model ……… 39

3.3 Camera calibration……… 44

3.3.1 Intrinsic parameter ………44

3.3.2 Extrinsic parameter ……… 45

3.4 Lens Distortions ……… 46

3.4.1 Radial distortion ……….46

3.4.2 Tangential distortion ……… 47

3.5 Camera calibration using Matlab tool……….48

Chapter 4 Development of a metal part classification and measurement system under different lighting conditions………51

4.1 Materials and Experimental Setup……… 51

4.1.1 Platform Description ……… 51

4.1.2 Robot arm ……… 53

4.2 Research methodology………54


4.2.1 Image processing……….54

4.2.2 Camera calibration……… 55

4.2.3 Coordinate calibration method………57

4.2.4 Resolution of the Measurement System…….……….58

4.2.5 Calculating the Coordinate Deviation ……… 58

4.3 Experimental Procedures………59

4.3.1 Illumination conditions ……… 59

4.3.2 Calibration work ……….59

4.4 Image Segmentation……… 61

4.4.1 Using backlighting ……… 61

4.4.2 Using front-lighting ……… 63

4.5 Algorithm for object classification……… 65

4.6 Algorithm for the bolt head determination ……… 66

4.7 Results and Discussion ……….68

4.7.1 Calibration results…… ……… 68

4.7.2 Coordinate deviation among different lighting conditions ……….70

4.8 Implementation ……… ………73

Chapter 5 Development of a color object recognition and measurement system… 77

5.1 Platform Description……… 77


5.1.1 Experimental setup … ……… 77

5.1.2 The implementation process of the proposed system……… 79

5.2 Calibration work……….80

5.2.1 2-D coordinate calibration using the quadratic transformation 80

5.2.2 3-D calibration coordinate using the perspective transformation 81

5.3 Image analysis and object segmentation……….83

5.4 Algorithm for color object classification………86

5.4.1 Algorithm for solid classification………86

5.4.2 Algorithm for hole classification……….87

5.5 Determination of orientation for triangular holes……… 90

5.6 Results and discussion………93

5.6.1 2-D calibration and spatial position measurement results……… 93

5.6.2 3-D calibration and spatial position measurement results……… 95

5.6.3 Determination process for the first points of triangular holes………….97

5.7 Implementation……… 98

Chapter 6 Conclusions and future works……… 100

6.1 Conclusions……… 100

6.1.1 Conclusion for the classification and measurement system for metal parts 100

6.1.2 Conclusion for the classification and measurement system for color objects 101


6.2 Future works……….102

List of publications………103

References……… 105

Appendices……….113


LIST OF FIGURES

Figure 1.1 Flowchart of dissertation 10

Figure 2.1 Flowchart of image processing system 12

Figure 2.2 Positions and gray-scale values within an image 15

Figure 2.3 Histogram between intensity and frequency 15

Figure 2.4 The conversion from gray-scale image into binary image 16

Figure 2.5 Representation of image 17

Figure 2.6 Relationship between pixels 18

Figure 2.7 Erosion operation 19

Figure 2.8 Dilation operation 20

Figure 2.9 Closing operation 20

Figure 2.10 Result of blob analysis 21

Figure 2.11 Architecture of machine vision system 23

Figure 2.12 Front lighting source 24

Figure 2.13 Back lighting source 24

Figure 2.14 Side lighting source 25

Figure 2.15 Lens model 25

Figure 2.16 Depth of field 26


Figure 2.17 Field of view 27

Figure 2.18 Chip size 29

Figure 3.1 The schematic diagram of a stereo imaging system 36

Figure 3.2 The specific perspective projection model 37

Figure 3.3 Pinhole model 38

Figure 3.4 The generalized perspective projection model 39

Figure 3.5 Converting from object to camera coordinate system 45

Figure 3.6 Radial distortion 46

Figure 3.7 Tangential distortion 47

Figure 3.8 Chessboard pattern 49

Figure 4.1 Structure of experimental system 52

Figure 4.2 Articulated and Cartesian coordinate system 53

Figure 4.3 The Camera Calibration using Matlab 60

Figure 4.4 Calibration board 61

Figure 4.5 Binary thresholding operation 62

Figure 4.6 Classification and determining coordinates of centers of objects 62

Figure 4.7 Setting the global threshold and binary threshold result 64

Figure 4.8 Setting the local threshold 65

Figure 4.9 Result of local threshold process 65


Figure 4.10 Applying the closing operation three times 65

Figure 4.11 Flowchart of algorithm for object classification process 66

Figure 4.12(a) Determining the head of bolts, left side 67

Figure 4.12(b) Determining the head of bolts, right side 67

Figure 4.13 Result of determining the head of bolts 67

Figure 4.14 The calibration error, using Matlab toolbox 69

Figure 4.15 Diagram of calibration error of the linear transformation method 69

Figure 4.16 Diagram of calibration error of the quadratic transformation 70

Figure 4.17 Diagram of deviation percentage of bolts 71

Figure 4.18 Diagram of deviation percentage of washers 72

Figure 4.19 Diagram of deviation percentage of nuts 72

Figure 4.20 Flowchart of the implementation process 74

Figure 4.21 Model of robot arm 74

Figure 4.22 End effector of robot arm 75

Figure 4.23 Experiment result for top view 75

Figure 4.24 Experiment result for front view 76

Figure 5.1 Experimental system 78

Figure 5.2 Flowchart of the implementation process of the proposed system 80

Figure 5.3 2-D calibration board 81


Figure 5.4 3-D calibration pattern 82

Figure 5.5 Determining coordinates of 3-D calibration part on CMM 82

Figure 5.6 Flowchart of algorithm for shape classification process 87

Figure 5.7 Flowchart of algorithm for hole classification process 88

Figure 5.8 Assembly parts 88

Figure 5.9 Recognition results of assembly parts 89

Figure 5.10 Assembly models 89

Figure 5.11 Recognition results of assembly models 89

Figure 5.12 Scan line to find the first point of triangular holes 91

Figure 5.13 All orientations of triangular holes 91

Figure 5.14 Determination of first point for triangular holes of the first world orientation 91

Figure 5.15 Determination of first point for triangular holes of the second world orientation 92

Figure 5.16 Determination of first point for triangular holes (Left image plane) 92

Figure 5.17 Diagram of calibration error 94

Figure 5.18 The calibration system verification screen 95

Figure 5.19 Calibration parameters and calibration accurate checking 96

Figure 5.20 Determining the orientations of triangular holes 97

Figure 5.21 The robot arm implemented in this work 98


LIST OF TABLES

Table 2.1 The type of the image forming device 28

Table 2.2 Advantages and shortcomings of scan modes 28

Table 4.1 Specifications of the CMOS camera (C910) 52

Table 4.2 Specifications of the robot arm 54

Table 4.3 Determination of bolt orientation 68

Table 5.1 Specifications of the CMOS camera (C525) 79

Table 5.2 Specifications of the CMOS camera (C615) 79

Table 5.3 Specifications of CMM 83

Table 5.4 The world coordinates of calibration points measured by CMM 83

Table 5.5 Color recognition algorithms 85

Table 5.6 Threshold values used in this study 86

Table 5.7 The world coordinates of shapes 93

Table 5.8 The world coordinates of holes 96

Table 5.9 The world coordinates of the first points of triangular holes 97


NOMENCLATURE

WCS The World Coordinate System

CCS Camera Coordinate System

ICS Image Coordinate System

CCDs Charge Coupled Devices

CMOS Complementary Metal Oxide Semiconductor

2-D Two-dimensional space

3-D Three-dimensional space

CMM Coordinate Measuring Machine

ATOS Advanced Topo-metric Optical Sensor

SCARA Selective Compliance Assembly Robot Arm

CMYK Cyan, Magenta, Yellow, and Black

HSB Hue, Saturation, and Brightness

DFOV Diagonal Field Of View


LSM Least Square Method

DLT Direct Linear Transformation

VB 6.0 Visual Basic 6.0

MIL Matrox Imaging Library

ppi Pixels per inch

dpi Dots per inch

bit Binary digit

h_w The world coordinate system

h_c The camera coordinate system

θ The angle between the x-axis of the camera's image plane and the X-axis of the world coordinate system

α The angle between the z-axis of the camera's image plane and the Z-axis of the world coordinate system


C, G The translation matrix

P The perspective specific transformation matrix

T The extrinsic matrix

M The intrinsic matrix

t_ij The parameters of the extrinsic matrix

α The angle between the z and Z axes

θ The angle between the x and X axes

f_x The focal length in the x-direction of the lens

f_y The focal length in the y-direction of the lens

γ The parameter describing the skew of the two image axes

u0 The coordinates of the principal point in the x-direction

v0 The coordinates of the principal point in the y-direction

k1 The first order distortion coefficient

k2 The second order distortion coefficient

k3 The third order distortion coefficient

p1 The first order tangential distortion coefficient

p2 The second order tangential distortion coefficient

LED Light Emitting Diode


 The percentage of deviation (%)

l_j The real size of the objects

x_n The coordinates of the center points of objects in the x-direction

y_n The coordinates of the center points of objects in the y-direction

x_i The coordinates of the targeted centers of the objects in the x-direction

y_i The coordinates of the targeted centers of the objects in the y-direction

Pixel(X) The pixel values between the first point and the last point of the calibration image in the x-direction

Pixel(Y) The pixel values between the first point and the last point of the calibration image in the y-direction

X The world coordinates between the first point and the last point in the x-direction

Y The world coordinates between the first point and the last point in the y-direction


Chapter 1 Introduction

1.1 Motivation of the research

Nowadays, flexible automatic assembly systems are among the most useful tools for automated manufacturing processes. These systems are expected to integrate machine vision for various applications within automation systems and production lines. Machine vision is therefore considered a valuable technology for automated processes in industry.

In recent years, the development of machine vision has helped to significantly improve both quality and productivity in manufacturing. Machine vision uses illumination, image processing, and blob analysis to obtain the features and positions of objects; the technology is modeled on human vision for detecting objects in the physical world. The information is processed by a personal computer with image processing software installed, and the world coordinates of the objects can then be determined.
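The processing chain just described, binarizing an image with a threshold and then using blob analysis to locate object centers, can be sketched in a few lines. The following is an illustrative pure-Python sketch, not the Visual Basic 6.0/MIL implementation used in this dissertation; the toy image and the threshold value of 128 are invented for demonstration.

```python
# Sketch of a minimal machine vision pipeline: global thresholding
# followed by blob (connected-component) analysis to find centroids.

def threshold(image, level):
    """Binarize a gray-scale image: pixel >= level -> 1, else 0."""
    return [[1 if px >= level else 0 for px in row] for row in image]

def find_blobs(binary):
    """Label 4-connected foreground regions and return their centroids."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill one blob, accumulating its pixel coordinates.
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                n = len(pixels)
                centroids.append((sum(p[1] for p in pixels) / n,   # x
                                  sum(p[0] for p in pixels) / n))  # y
    return centroids

# A toy 6x8 gray-scale image with two bright objects on a dark background.
img = [[10] * 8 for _ in range(6)]
for y in range(1, 3):
    for x in range(1, 3):
        img[y][x] = 200          # object 1: 2x2 square
for y in range(3, 5):
    for x in range(5, 7):
        img[y][x] = 180          # object 2: 2x2 square
blobs = find_blobs(threshold(img, 128))
print(blobs)                     # two blob centroids in image coordinates
```

In practice, a library such as MIL or OpenCV performs the same two steps, thresholding and connected-component labeling, on real camera frames, and the centroids are then mapped to world coordinates through calibration.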

Machine vision is applied in many fields, such as quality inspection [1, 2, 3], optical measurement systems [4, 5], industrial automation [6], electronic semiconductors [7, 8], medicine [9, 10], and defect inspection [11, 12, 13, 14]. Among the applications of machine vision in manufacturing and industry, Dworkin, S. B. and Nye, T. J. [15] investigated a machine vision algorithm to measure parts in a hot forming process. In that study, charge-coupled device (CCD) cameras were used to acquire images with visible light; after the images were captured, thresholding operations were performed, and the images were then analyzed and processed. Derganc, J. et al. [16] presented a machine vision system for automatically measuring the eccentricity of bearings. The proposed system achieves 100% inspection of bearings and is valuable for testing the quality of high-end products. Shuxia, G. et al. [17] presented a machine vision system that measured the cutting diameter and the maximum rotating diameter of a mini-milling machine. A Sony XC-55 CCD camera was used for capturing images. To identify the edges of the mini milling cutter, a sub-pixel threshold segmentation algorithm based on the gray-level histogram was applied; these features are robustly and accurately determined by the Hough transform and linear regression, and their system was fast and accurate. Hsu, Q. C. et al. [18] presented an automatic optical inspection (AOI) system for defect detection of dental floss picks. In that study, five webcams and seven sets of white LED modules providing the lighting environment were used to perform image acquisition. With this system, the dental floss picks were measured and then classified.

Furthermore, machine vision has also been applied to non-contact precision measurement, which is very important in manufacturing technology. Traditional measurement methods include contact and non-contact measurement. For contact measurement, a coordinate measuring machine (CMM) is normally used; this equipment has very high precision and is widely used in research institutes, factories, and universities. For non-contact measurement, white-light interference methods such as ATOS (Advanced Topo-metric Optical Sensor) offer high precision and are used for parts with complex geometries. However, both CMM and ATOS are very expensive and time-consuming. Today, with the development of science and technology, automated optical measurement using machine vision is a suitable choice for precision measurement with low cost and high performance. In a study by Ngo, N. V. et al. [19], a machine vision system was developed for non-contact measurement of three-dimensional (3-D) sizes. That study used a 6-point calibration algorithm to transform image coordinates into world coordinates. Image data from a double CMOS camera was sent to a computer to measure the sizes. With this system, part sizes are determined quickly and accurately; moreover, the system is not complex and is easy to operate. The study by Ali, M. H. et al. [20] is very useful for improving precision measurement. Their system measured gear profiles to replace traditional methods, which can be dangerous during the measurement process and are either time-consuming or expensive. Experimental results of the proposed system were compared with existing systems and showed great advantages in practical application. In a study by Rejc, J. et al. [21], the authors measured the dimensions of a protector using an automated visual inspection system. They used linear and polynomial approximations to define the edges of selected structures of the protector, and pixel-to-metric-unit transformation was performed using a higher-order polynomial approximation. The measurement accuracy of the proposed system is within ±0.02 mm, and the measurement time is less than a second; however, it cannot replace the current measurement system and can only be used in the company's testing laboratory. In a study by Martínez, S. et al. [22], a quality inspection system for machined metal parts using an image fusion technique was presented. The machine vision system can detect flaws on textured surfaces and works effectively with a low rate of false rejections. In a study by Abhinesh, B. et al. [23], an automatic inspection algorithm for identifying stamping defects on lead-frames was presented. Blob analysis using morphological closing and image subtraction was performed to detect the stamping defects, and the vision-based system was integrated with a conveyor. The experimental results showed that the system can detect stamping defects on lead-frames with a success rate close to 98.7%. In research by Quinn, M. K. et al. [24], an automated inspection system for pressure-sensitive paint (PSP) was developed using machine vision camera systems. That research presented the relevant imaging characteristics and the applicability of such imaging technology for PSP, and the experimental results show that the machine vision system is suitable for quantitative measurements of a PSP system. In a study by Zhang, X. et al. [25], a high-precision quality inspection system for steel bars using machine vision was presented, in which the sub-pixel boundary location method (SPBLM) and fast stitch method (FSM) were proposed. Steel bar diameter, spacing, and quantity were detected, and the results show that the system has high accuracy for measuring diameter and spacing.

Among the applications of machine vision in agriculture, Cubero, S. et al. [26] developed a machine vision system to classify the objects that reach the line into four categories, finding broken fruit based essentially on the shape of the fruit. The image acquisition system used two cameras, a computer, and a back-lighting source. By extracting morphological features, pieces of skin and other raw material were automatically identified, and the machine-vision-based classification correctly identified 93.2% of sound segments. In a study by Jarimopas, B. and Jaisin, N. [27], an experimental machine vision sorting system for sweet tamarind pods based on image processing techniques was developed. The sorting system used a CCD camera, a TV card, microcontrollers, sensors, and a microcomputer, and could separate Sitong tamarind pods at an average sorting efficiency of 89.8%.


In recent years, machine vision has also been applied to robots. Various types of image processing hardware have been studied to increase the performance of vision systems. In particular, a machine vision feedback procedure can be used to guide an object held by a robot manipulator, whose movement instructions are programmed with a general instruction set. In research by Blasco, J. et al. [28], two vision systems were installed on a machine: one to identify weeds and send their coordinates to the robot arm controller, and the other to calibrate inertial perturbations induced in the position of the gripper. These systems were demonstrated to properly locate 84% of weeds and 99% of lettuces. Di Fulvio, G. et al. [29] proposed a stereoscopic vision system for dimensional measurement in an industrial robotics application. In their research, a camera mounted on a six-axis robotic manipulator was able to calculate the world coordinates of the target objects, and experimental results showed that the accuracy of the measurements was strongly influenced by the accuracy of the calibration. Phansak, N. et al. [30] developed a flexible automatic assembly system for a SCARA (Selective Compliance Assembly Robot Arm) using machine vision. A prototype of the flexible automatic pick-and-place assembly system was designed; with this vision system, the robot could pick each correct assembly part and place it into the assembly location perfectly. In research by Tsarouchi, P. et al. [31], to detect randomly placed objects for robotic handling, the authors proposed a 2-D machine vision system with a high-resolution camera mounted at a fixed position above the conveyor. In their study, Matlab software was used for image processing, determination of object poses, and calculation and transformation of the 2-D coordinates. Among the applications of machine vision for robot arms, Iscimen, B. et al. [32] developed smart robot arm motion using machine vision. The system could detect objects in images automatically and implement given tasks; notably, an artificial neural network was used to recognize objects, achieving 98.30% overall recognition accuracy. In addition, Rai, N. et al. [33] used a computer vision system to detect and autonomously manipulate objects placed randomly on a target surface, controlling an educational robot arm with 6 degrees of freedom (DOF) to pick each object up and place it at the target location. The system applied center-of-mass computation, filtering, and a color segmentation algorithm to determine the target and the position for movement of the robot arm. In other research, Tsarouchi, P. et al. [34] also used a vision system to recognize objects located randomly in a robot's workspace. That research proposed a method, implemented with a Matlab tool, to calculate the coordinates of objects in the world system from images captured by a camera, with shaver-handle feeding used as a case study. Juang, J. et al. [35] presented an application of optical word recognition and fuzzy control to an automatic smartphone test system. The proposed system consists of a robot arm and two cameras; the cameras capture and recognize the commands and the words on the monitors of the control panel and the tested smartphone. Object coordinates provided by the camera observing the tested smartphone are sent to the robot arm, which then moves to the target positions and presses the desired buttons.

Within a machine vision system, calibration is very important, since system performance depends on the accuracy of the calibration process. Shin, K. Y. et al. [36] proposed a new calibration method for a multi-camera setup using a wand-dance procedure. With this method, the 3-D frame parameters were first estimated using the Direct Linear Transformation (DLT) method; the parameters estimated in the first step were then improved iteratively through nonlinear optimization using the wand-dance procedure. Experimental results showed that the proposed calibration method holds great promise for increasing the overall accuracy of the DLT algorithm while providing better user convenience. Y. Ji et al. [37] proposed an automatic calibration method for a camera sensor network using given 3-D texture map information of the environment. A new image descriptor was presented, based on quantized line parameters in the Hough space (QLH), to perform a particle-filter-based matching process between line features and the 3-D texture map information. Deng, Li, et al. [38] investigated a relationship model for camera calibration in which the geometric parameters and the lens distortion effect of the camera were taken into account. The results show that the proposed algorithm can avoid local optima and complete visual identification tasks accurately. You, X. et al. [39] presented a two-vanishing-point calibration method for a roadside camera that is accurate and practical. To improve the accuracy of the calibration results, they also employed multiple observations of the vanishing points and presented another, dynamic calibration method for the camera's parameters, which is suitable for outdoor use. Ivo, S. et al. [40] presented a novel structured-light pattern for a 3-D structured-light scanner to obtain accurate anthropometric body-segment parameters in a fast and reliable manner; volumetric parameters of both an artificial object and a human body segment were obtained through 3-D scanning.

Beyond the applications mentioned, machine vision provides innovative solutions in the direction of industrial automation [41] and brings benefits in industrial applications. These applications include the manufacturing of electronic components [42], textile production [43], metal products [44], glass manufacturing [45], machined products [46], printing products [47], automatic optical inspection of granite quality [48], integrated circuit (IC) advanced manufacturing technology [49] and many others. Machine vision technology improves both productivity and quality management and provides many advantages to industries.

In summary of the literature survey, we can see that machine vision has been applied in many fields, especially in the area of industrial automation. With machine vision technology, we can obtain an effective solution towards improving both the flexibility of performance and the accuracy of measurement in various applications. Therefore, in this study, an attempt has been made to develop object classification and measurement systems using machine vision. The proposed systems use high-resolution CMOS cameras and personal computers with Visual Basic 6.0 and MIL (Matrox Imaging Library). Coordinate calibration algorithms were investigated and compared with traditional algorithms.

1.2 Scopes of the research

The scope of this research includes two main objectives. The first objective is the application of machine vision to a metal part classification and measurement system in 2-D space under different lighting conditions for robot manipulation, using a quadratic transformation algorithm in the calibration process. The second objective is the application of machine vision to a color object classification and measurement system in 3-D space using different calibration algorithms, including the quadratic transformation for 2-D and the perspective transformation for 3-D.

These vision systems use high-resolution CMOS cameras and a personal computer running Visual Basic 6.0 with MIL for image processing and for extracting an object's features within an image.

1.3 Contributions

Contributions of this study can be described as follows:

The classification and measurement system for metal parts including bolts, nuts and washers in a 2-D coordinate system was developed successfully under different lighting conditions. The system can work under different lighting conditions, such as those found in industrial environments, which can affect the recognition and measurement of target objects. For this system, the quadratic transformation, which is a plane-to-plane transformation, was applied in the calibration process. The proposed calibration method is accurate and simple to use in industrial environments. To assess the accuracy of the proposed system, camera calibration using the Camera Calibrator tool of Matlab and the traditional calibration methods of Tsai and Zhang were also investigated and compared.
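As a sketch of the quadratic (plane-to-plane) idea, the calibration can be posed as a least-squares fit from image coordinates (u, v) to world coordinates (x, y) over second-order polynomial terms. The grid points and coefficient values below are synthetic illustrations, not the dissertation's actual calibration data:

```python
import numpy as np

def fit_quadratic_map(uv, xy):
    """Least-squares quadratic plane-to-plane transformation.
    uv: (N, 2) image coordinates; xy: (N, 2) world coordinates; N >= 6."""
    u, v = uv[:, 0], uv[:, 1]
    # Design matrix with second-order terms: 1, u, v, u^2, u*v, v^2
    A = np.column_stack([np.ones_like(u), u, v, u**2, u*v, v**2])
    coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)  # (6, 2): one column per world axis
    return coeffs

def apply_quadratic_map(coeffs, uv):
    u, v = uv[:, 0], uv[:, 1]
    A = np.column_stack([np.ones_like(u), u, v, u**2, u*v, v**2])
    return A @ coeffs

# Synthetic calibration grid with a mild quadratic distortion (illustrative values)
rng = np.random.default_rng(0)
uv = rng.uniform(0, 640, size=(25, 2))
true = np.array([[1.0, -2.0], [0.1, 0.0], [0.0, 0.1],
                 [1e-5, 0.0], [0.0, 2e-5], [2e-5, 1e-5]])
xy = apply_quadratic_map(true, uv)

coeffs = fit_quadratic_map(uv, xy)
resid = np.abs(apply_quadratic_map(coeffs, uv) - xy).max()
print(resid < 1e-6)  # noise-free points are recovered to numerical precision
```

With real calibration targets the fit residual reflects lens distortion and detection noise rather than dropping to machine precision.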

Along with the above vision system, a classification and measurement system for color objects in a 3-D coordinate system was also presented. This system can recognize and exactly classify differently colored objects such as spheres and square, hexagonal and triangular cubes in red, green, blue and yellow in 2-D physical space. Besides, circular, triangular and square holes in 3-D physical space were also recognized and classified by this system. Both the perspective transformation and plane-to-plane transformation algorithms were used in the calibration processes.

These systems were used for robot manipulation in determining the position and shape of 2-D and 3-D objects in the physical world.

1.4 Organization of dissertation

In this dissertation, the research procedure is organized as shown in Figure 1.1


Chapter 1 presents the brief content and background of the research. The motivation, scope of the research work, purposes and contributions of the project are also discussed.

Chapter 2 describes the theory of image processing and machine vision, including the main components of a machine vision system and the basic concepts.

Chapter 3 presents coordinate calibration methods, which are important in machine vision. The generalized perspective transformation and plane-to-plane transformation algorithms, which determine the relationship between the image coordinate system and the world coordinate system, are shown. Besides, camera calibration work is also presented.

Figure 1.1 Flowchart of dissertation


Chapter 4 gives the development of a metal part classification and measurement system under different lighting conditions, which has been applied to robot manipulation.

Chapter 5 presents the development of an automatic classification and measurement process for color objects during the working processes of a robot arm with six degrees of freedom (DOF).

Chapter 6 shows the conclusions of this research and suggestions for future work.


Chapter 2 Theory of image processing and machine vision

The purpose of this chapter is to introduce several concepts related to digital images and machine vision. This chapter is divided into two main sections. Section 2.1 briefly summarizes the basics of digital images and image processing. Section 2.2 discusses the components of a machine vision system: lights, lenses and image sensors.

2.1 Image processing system

Image processing techniques have developed tremendously during the past six decades. They have become increasingly important because digital devices related to automated vision detection, inspection and object recognition are widely used today [50]. Typical image processing systems usually include the following components:

Figure 2.1 Components of an image processing system

a. Image acquisition: Image acquisition is the first stage of any image processing system. Images can be received through a color camera or a black-and-white camera. Normally, these images are digital images.


b. Image preprocessing: After the image acquisition stage, images may be noisy and their contrast low. Therefore, images need to be processed to enhance their quality using image preprocessing.

c. Image segmentation: The image segmentation operation subdivides an image into constituent regions or objects for analysis. Segmentation is one of the most difficult tasks in image processing.

d. Image representation: After an image has been segmented into regions, the resulting aggregates of segmented pixels are usually represented and described in a form suitable for further computer processing.

e. Image recognition and interpretation: Image recognition and interpretation are the processes that identify image content by comparison with previously stored images.
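The five stages above can be sketched end to end. The synthetic test image, the 3×3 mean filter and the global threshold value below are illustrative choices only, not the methods developed later in this dissertation:

```python
import numpy as np

# a. Acquisition: a synthetic 8-bit grayscale image with a bright square "object"
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200
img = img + np.random.default_rng(1).integers(0, 20, img.shape).astype(np.uint8)  # sensor noise

# b. Preprocessing: a 3x3 mean filter suppresses the noise
pad = np.pad(img.astype(np.float32), 1, mode='edge')
smooth = sum(pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0

# c. Segmentation: a global threshold separates object from background
binary = smooth > 100

# d. Representation: describe the segmented region by its centroid and area
ys, xs = np.nonzero(binary)
centroid = (ys.mean(), xs.mean())
area = int(binary.sum())

# e. Recognition: compare the features against a stored description (expected area)
is_square_part = 350 < area < 450
print(centroid, area, is_square_part)
```

Each stage feeds the next: the mean filter keeps the global threshold from picking up isolated noise pixels, and the centroid/area pair is the minimal feature set a robot would need to locate and classify the part.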

2.1.1 Basics of image processing

2.1.1.1 Pixels

In digital imaging, a pixel (picture element) is a physical point in a raster image. It is the smallest controllable element of a picture represented on the screen. Each pixel is a sample of an original image, and the intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities, such as red, green and blue, or cyan, magenta, yellow and black.

2.1.1.2 Resolution of image

Resolution refers to the number of pixels per unit length within an image, usually expressed as pixels per inch. Resolution can be divided into the following three categories:

a. Image resolution [53] is the number of pixels contained in a unit of length within a picture or an image. For example, 300 ppi means 300 pixels per inch.

b. Display resolution: It indicates the range that a display can show. For example, if the size of the Windows Desktop is changed from 640×480 to 800×600, the screen can accommodate more content, but the displayed items become smaller.

c. Printer resolution: It indicates the print quality of a printer. For example, 720 dpi means that 720 dots can be printed in one inch. Image resolution affects the print quality but does not affect the display on the screen.

The printer resolution does not affect the print size. The print size is determined by the resolution of the image, as shown in Equation 2.1; increasing printer resolution improves the print quality.

Print size (inches) = Number of pixels / Image resolution (ppi)    (2.1)

For example, for an 800×600 pixel image at a resolution of 300 ppi, the width is 800/300 ≈ 2.67 inches and the height is 600/300 = 2 inches, so the print size must be 2.67×2 inches.
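Equation 2.1 can be verified in a few lines; the 800×600 pixel image at 300 ppi matches the example above:

```python
# Print size from pixel dimensions and image resolution (Equation 2.1):
#   print size (inches) = number of pixels / image resolution (ppi)
def print_size(width_px, height_px, ppi):
    return width_px / ppi, height_px / ppi

w_in, h_in = print_size(800, 600, 300)
print(round(w_in, 2), round(h_in, 2))  # 2.67 2.0
```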

2.1.1.3 Gray level

Each pixel, as shown in Figure 2.2, has a position and a gray value. Grayscale images include different shades of gray [51]. If each pixel has 8 bits for its gray value, there are 2⁸ = 256 variations, from 0 to 255, where 0 is black and 255 is white.


Figure 2.2 Positions and gray-scale values within an image

2.1.1.4 Histogram

Figure 2.3 Histogram between intensity and frequency

The histogram of an image is the frequency distribution of its digital values; the gray-level distribution gives the number of pixels appearing at each gray level. The horizontal axis of the grayscale distribution is the grayscale value (0–255), and the vertical axis represents the total number of pixels having that grayscale value, as shown in Figure 2.3.
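A gray-level histogram is simply a count of pixels per gray value; the tiny image below is an illustrative example, not data from this work:

```python
import numpy as np

# Gray-level histogram: count of pixels at each gray value 0-255
img = np.array([[0,   0,   128],
                [128, 128, 255],
                [255, 255, 255]], dtype=np.uint8)  # tiny illustrative image

hist = np.bincount(img.ravel(), minlength=256)  # index = gray value, entry = pixel count
print(hist[0], hist[128], hist[255])  # 2 3 4
```

The counts sum to the total number of pixels, which is why the histogram is a convenient basis for choosing a global threshold.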

2.1.1.5 Image representation

a. Binary image: Binary images are the simplest type of image. They display two colors, black and white, or '0' and '1' [51]. Each pixel of a binary image has a spatial representation of 1 bit, because it takes only 1 binary digit to represent. Binary images are often used in machine vision applications and are often created from gray-scale images. Thresholding converts a grayscale image into a binary image; the binary image cannot be converted back to a grayscale image. The creation of a binary image from a gray-scale image is shown in Figure 2.4.

Figure 2.4 The conversion from gray-scale image into binary image
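Thresholding as described above amounts to a single comparison per pixel; the image values and the threshold of 100 below are illustrative:

```python
import numpy as np

# Thresholding: the one-way conversion from grayscale to binary
gray = np.array([[ 12,  40, 200],
                 [ 90, 130, 250],
                 [  5, 180,  60]], dtype=np.uint8)

threshold = 100                               # global threshold (illustrative value)
binary = (gray > threshold).astype(np.uint8)  # 1 = white, 0 = black
print(binary.tolist())  # [[0, 0, 1], [0, 1, 1], [0, 1, 0]]
```

The conversion discards the original intensities, which is why it cannot be reversed.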

b. Gray-scale images: Gray-scale images are referred to as monochrome, or one-color, images. They contain brightness information only, no color information. Each pixel of a gray-scale image has a spatial representation of 8 bits, which allows 256 (0–255) different brightness levels. There are no red, green or other colors in a gray-scale image.

c. Color images: Color images can be modeled as three-band monochrome image data, where each band corresponds to a different color. Each pixel within a color image is a mixture of R (red), G (green) and B (blue) colors. Each of the three primary colors is represented by 8 bits, so each pixel has 24 bits, allowing 2²⁴ = 16,777,216 possible colors.

Figure 2.5 shows a color image, a gray image and a binary image.

a Color image b Gray image c Binary image

Figure 2.5 Representation of image

2.1.1.6 Color models

The purpose of a color model is to facilitate the specification of colors in some standard way. A color model is a specification of a coordinate system and a subspace within that system where each color is represented by a single point [52]. Common color models include RGB (Red, Green, Blue), CMY (Cyan, Magenta, Yellow), CMYK (Cyan, Magenta, Yellow, Black) and HSI (Hue, Saturation, Intensity).

a. The RGB model:

This color model is based on a Cartesian coordinate system. The three primary colors of this model are red, green and blue. The synthesis of colored light follows an additive principle: the more colors are mixed, the higher the brightness.
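The additive principle can be demonstrated numerically. The luminance weights in the second function are the common ITU-R BT.601 values, an assumption for illustration rather than something specified in this dissertation:

```python
# Additive mixing in the RGB model: combining primaries raises brightness
red    = (255, 0, 0)
green  = (0, 255, 0)
yellow = tuple(min(r + g, 255) for r, g in zip(red, green))  # red + green light
print(yellow)  # (255, 255, 0)

# A common luminance weighting (ITU-R BT.601, assumed here) for RGB-to-gray conversion
def to_gray(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(to_gray(*yellow)))  # 226
```

The mixed color is brighter than either primary alone (its gray value 226 exceeds the 76 of pure red or the 150 of pure green), which is the additive property in numbers.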

b. The CMY and CMYK models:
