Development of vision-based systems for
structural health monitoring
Ho Hoai Nam
A Dissertation Submitted to the Department of Civil and Environmental
Engineering, and the Graduate School of Sejong University
in partial fulfillment of the requirements
for the degree of Doctor of Philosophy
December 2013
Approved by
Jong-Jae Lee
Major Advisor
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER 1 INTRODUCTION
1.1 Introduction
1.2 Scope and Objectives
1.3 Organization

CHAPTER 2 FUNDAMENTALS OF DIGITAL IMAGE PROCESSING AND PATTERN RECOGNITION
2.1 Digital image formation and representation
2.1.1 Image formation
2.1.2 Image representation
2.1.3 Coordinate convention
2.2 Color fundamentals
2.2.1 Basic concepts
2.2.2 Color models
2.3 Basic image processing operations
2.3.1 Elementary image processing operators
2.3.2 Binary image processing operations
2.3.3 Geometric transform
2.4 Edge detection algorithms
2.4.1 Roberts Cross edge detector
2.4.2 Sobel edge detector
2.4.3 Prewitt gradient edge detector
2.4.4 Laplacian edge detector
2.4.5 Canny edge detection
2.5 Image segmentation algorithms
2.5.1 Threshold techniques
2.5.2 Edge-based methods
2.5.3 Region-based techniques
2.5.4 Connectivity-preserving relaxation methods
2.6 Pattern recognition
2.6.1 Principal component analysis (PCA)
2.6.2 Support Vector Machines (SVM)
2.6.3 Artificial Neural Networks (ANN)

CHAPTER 3 ADVANCED VISION-BASED SYSTEMS FOR DISPLACEMENT AND ROTATION ANGLE MEASUREMENT
3.1 Introduction
3.2 Advanced vision-based displacement measurement system
3.2.1 Development of the advanced vision-based system
3.2.2 Displacement measurement of high-rise buildings
3.3 High resolution rotation angle measurement system
3.3.1 A vision-based rotation angle measurement system
3.3.2 Verification tests
3.3.3 Testing on a single-span bridge

CHAPTER 4 AN EFFICIENT VISION-BASED 3-D MOTION MONITORING SYSTEM FOR CIVIL STRUCTURES
4.1 Introduction
4.2 Development of 3-D vision-based motion monitoring system
4.3 Experimental verifications

CHAPTER 5 AN IMAGE-BASED SURFACE DAMAGE DETECTION SYSTEM FOR CABLE-STAYED BRIDGES
5.1 Introduction
5.2 Cable inspection system
5.3 Damage detection algorithm
5.4 Experimental verifications

CHAPTER 6 SONAR IMAGE ENHANCEMENT FOR UNDERWATER CIVIL STRUCTURES
6.1 Fundamentals of Sonar
6.1.1 Beam pattern
6.1.2 Resolution
6.1.3 Backscatter
6.1.4 Distortions
6.2 Reading sonar image file
6.3 Sonar image enhancement techniques
6.3.1 Median filter
6.3.2 Morphological filter
6.3.3 Wiener filter
6.3.4 Frost filter
6.3.5 Contrast enhancement
6.4 Efficient image enhancement algorithm

CHAPTER 7 CONCLUSIONS AND FUTURE STUDY
7.1 Conclusions
7.1.1 Synchronized real-time vision-based systems for displacement and rotation measurement
7.1.2 A 3D motion measurement system for civil infrastructures
7.1.3 An efficient image-based damage detection system for cable surfaces in cable-stayed bridges
7.1.4 Sonar image enhancement for underwater civil structures
7.2 Recommendations for future study

REFERENCES
APPENDICES
LIST OF TABLES

Table 3.1: List of hardware components
Table 3.2: Static test results
Table 3.3: Commercial tilt sensors
Table 3.4: Static test results
Table 4.1: Static displacement test results
Table 4.2: Static rotation test results
Table 5.1: Typical camera specifications [95]
Table 5.2: Laboratory test results
Table 6.1: Rules for dilation and erosion
LIST OF FIGURES

Figure 2.1: Model of a digital image formation system
Figure 2.2: Combination and variant of RGB color
Figure 2.3: Coordinate convention of digital images
Figure 2.4: Energy spectrum [37]
Figure 2.5: Mixture of light and mixture of pigments [36]
Figure 2.6: The wavelengths of the visible spectrum
Figure 2.7: RGB color cube
Figure 2.8: Example of color, gray and binary image
Figure 2.9: CMY color cube
Figure 2.10: HSI color cube
Figure 2.11: Image rotation
Figure 2.12: Shear along horizontal axis
Figure 2.13: Roberts Cross convolution kernels
Figure 2.14: Results of applying Roberts Cross convolution kernels
Figure 2.15: Sobel convolution kernels
Figure 2.16: Outputs of Sobel and Roberts edge detectors
Figure 2.17: Masks for the Prewitt gradient edge detector
Figure 2.18: Three small common kernels of the Laplacian edge detector
Figure 2.19: Comparison of the edge detection methods
Figure 2.20: Main components (PC1 and PC2)
Figure 2.21: PCA example [63]
Figure 2.22: SVM concept [67]
Figure 2.23: Linear classifier using SVM
Figure 2.24: Fundamental block of ANN
Figure 2.25: Multilayer neural network
Figure 3.1: Schematic of the system
Figure 3.2: Flowchart of the software
Figure 3.3: Software interface
Figure 3.4: Binary image conversion
Figure 3.5: Target recognition
Figure 3.6: Time synchronization process
Figure 3.7: Processing time per frame
Figure 3.8: Shaking table and target size
Figure 3.9: Experimental location setup
Figure 3.10: Shaking table test results
Figure 3.11: Time synchronization test
Figure 3.12: Time lag between the subsystems
Figure 3.13: Displacement with excitation frequencies of 1 Hz and 4 Hz
Figure 3.14: Partitioning method proposed by Park et al. [74]
Figure 3.15: Camera and target location
Figure 3.16: Experimental setup
Figure 3.17: Dynamic test results
Figure 3.18: Experimental setup of the field test
Figure 3.19: Measured and estimated data under sinusoidal excitation
Figure 3.20: Measured and estimated data under random excitation
Figure 3.21: Vision-based dynamic rotation angle measurement
Figure 3.22: Application examples
Figure 3.23: Experimental setup
Figure 3.24: Dynamic test results
Figure 3.25: Testing plan
Figure 3.26: Static load test results
Figure 3.27: Dynamic load test results
Figure 4.1: Binary image conversion
Figure 4.2: Results of binary conversion
Figure 4.3: Target and camcorder installation for 3-D measurement
Figure 4.4: Processing time per frame
Figure 4.5: Software interface
Figure 4.6: Testing setup
Figure 4.7: Dynamic test results
Figure 5.1: An overview of the cable inspection system
Figure 5.2: PCA-based damage detection algorithm
Figure 5.3: PCA training process
Figure 5.4: Three cable specimens for laboratory tests and example images for training
Figure 5.5: Online damage detection
Figure 5.6: Post-processing damage detection
Figure 5.7: A closed box with embedded cameras and an LED lighting system
Figure 6.1: Sidescan sonar survey scenario [111]
Figure 6.2: Geometry of sidescan sonar and definitions of some basic parameters
Figure 6.3: Typical beam pattern of a sidescan sonar [114]
Figure 6.4: Thickness of a sonar pulse [115]
Figure 6.5: The across-track resolution of a sonar increases with range [115]
Figure 6.6: The along-track resolution [115]
Figure 6.7: The horizontal resolution of a sidescan sonar
Figure 6.8: Towfish instabilities, which degrade the quality of the sonar data [112]
Figure 6.9: Slant-range distortion
Figure 6.10: Structure of the XTF data file
Figure 6.11: Median filter
Figure 6.12: Applying median filter
Figure 6.13: Operation of morphological filters
Figure 6.14: Morphological operation
Figure 6.15: Wiener filter
Figure 6.16: Frost filter algorithm
Figure 6.17: Frost filter result
Figure 6.18: Histogram equalization process
Figure 6.19: Histogram equalization result
Figure 6.20: Unsharp masking result
Figure 6.21: Sonar image enhancement algorithm
Figure 6.22: Results of the proposed algorithm
ABSTRACT
Development of vision-based systems for
structural health monitoring
Advanced vision-based systems are developed to remotely measure the displacement and rotation of large civil structures. Each system consists of software, a master PC and slave PCs. The data of each slave system, including its system time, are wirelessly transferred to the master PC and vice versa. In addition, a synchronization process is carried out to guarantee time synchronization among the subsystems.
An efficient image-based damage detection system for cable-stayed bridges that can automatically identify damage to the cable surface through image processing techniques and pattern recognition is also successfully developed. Initially, image data from the cameras are wirelessly transmitted to a server computer located on a stationary cable support. To improve the overall quality of the images, the system utilizes an image enhancement method together with a noise removal technique. The input images are then projected into the feature space, and the Mahalanobis squared distance is used to evaluate the distances between the input images and the training pattern data; the smallest distance is taken as the match for an input image. The proposed damage detection system is verified through laboratory tests on three types of cables. Test results show that the proposed system can be used to detect damage to bridge cables.

Finally, an efficient sonar image enhancement algorithm is proposed. The algorithm consists of two steps: noise removal and image sharpening. The Wiener and median filters are utilized to suppress noise, and the filtered image is then deblurred and enhanced using unsharp masking and histogram equalization. The proposed algorithm is successfully applied to many sonar images.
CHAPTER 1
INTRODUCTION

1.1 Introduction

Two general approaches to structural health monitoring are local detection techniques and global vibration-based techniques. Local methods (which usually utilize many kinds of sensing techniques such as X-rays, radiography, acoustics, strain gauges, optical fibers, etc.) focus on the detection of local damage in structures; they require prior knowledge of the area where possible damage will occur, and the structural location to be examined should be easily accessible. Hence, local techniques might not be efficient solutions for damage diagnosis, especially for large-scale structures subject to cost limitations, access difficulties and structural complexity. Global vibration-based methods investigate the health condition or potential damage of a structure through changes in dynamic characteristics such as natural frequencies, damping ratios and mode shapes. Integrated with appropriate identification techniques,
these methods are able to detect damage at an early stage, estimate damage locations, quantify damage severity and predict the remaining service life of the structure. Other main advantages of the global methods are that they are ready for automatic processing and rely little on engineers' judgment or on an analytical model of the structure. Studies in the field of structural health monitoring are extensively surveyed in Refs. [1, 2].
With the advent of inexpensive, high-performance image capture devices and associated image processing techniques, vision-based systems have become popular tools of great interest for a wide range of applications in medicine, archaeology, biomechanics, astronomy, public safety, etc. Thanks to this remarkable recent evolution in cost-effective, high-performance image capture devices, vision-based systems are also regaining popularity in various engineering disciplines.
In civil engineering, vision-based systems have been successfully applied to a wide range of applications such as detection of unsafe actions of construction workers [3], extraction of cracks from concrete beams [4], finding cracks on a bridge deck [5, 6], road pavement assessment [7], filling holes in the road [8], detection of building cracks [9, 10], etc.
In civil structures, structural members are exposed to severe environmental conditions over their life cycle, such as earthquakes, tornadoes and typhoons, so structural health monitoring plays a key role in sustaining the long service life of structures. Structural health monitoring starts with the inspection of structural members to determine their present condition. For a long time, this inspection has been carried out by field inspectors and/or using contact-type sensors [11, 12]. In reality, the inspection of large structures suffers from high traffic volumes and/or difficult access conditions, which make the inspection time-consuming, difficult and expensive [13]. In many cases, the inspection process also exposes inspectors to dangerous situations when they need to access desired locations using a lifting vehicle or lifting trolley [13, 14]. These issues are the main motivations for the development of image-based detection systems, which can be used to inspect structural conditions remotely and without contact, reducing inspection time and cost while increasing the effectiveness and safety of structural health monitoring.

Displacement is an important physical quantity to be monitored in the framework of structural health monitoring. Vision-based displacement measurement is one of the most common non-contact measurement techniques in civil engineering applications [15-20]. In addition, rotation angle measurement systems are widely used nowadays to monitor deformations of large-scale civil structures such as long-span bridges, tunnels, dams and high-rise buildings, using commercial tilt-meter sensors or high-resolution rotation measurement systems [21, 22].
In a cable-stayed bridge, cables are key structural members and face severe environmental conditions, including natural hazards such as typhoons and earthquakes, so the cables should be regularly and carefully inspected for any possible defect. Some vision-based systems for cable damage assessment [23-26] have been introduced to reduce manual inspection time and increase the reliability of damage inspection.
The SONAR (SOund Navigation And Ranging) system was initially created for underwater sensing [27]. Sonar systems are important and powerful tools for surveying underwater civil structures, where optical capture devices are difficult or impossible to deploy. However, sonar images are often corrupted by a signal-dependent noise called speckle [28]; thus, several de-noising methods [29-32] have been introduced to enhance the images for visualization and/or classification.
1.2 Scope and Objectives
This study concerns vision-based systems for the non-contact assessment of civil structures, which have low natural frequencies. The main objective of this study is to develop vision-based systems for deformation measurement and damage detection of structures. The specific objectives of this study are:
• Develop an efficient synchronized multi-point vision-based displacement measurement system for civil structures
• Develop a high-resolution rotation angle measurement system for civil structures
• Extend the multi-point vision-based displacement system to real-time 3D measurement of structures
• Develop an effective image-based system for damage detection of bridge cables, and propose an efficient sonar image enhancement algorithm for underwater civil structures
1.3 Organization

The remainder of this dissertation is organized as follows. Chapter 2 reviews the fundamentals of digital image processing and pattern recognition.
Chapter 3 develops the vision-based displacement and rotation measurement systems for large civil structures. The vision-based measurement system consists of software and hardware (low-cost PCs, commercial frame grabbers, camcorders and a wireless LAN access point). The image data of targets are captured by camcorders and streamed into the PCs via frame grabbers. The final results are then calculated using image processing techniques with pre-measured calibration parameters.
Chapter 4 presents an efficient real-time 3D motion measurement system. The proposed system is composed of hardware (industrial camcorders, a commercial PC and a frame grabber) and software. An efficient software and measurement scheme and an automatic optimal threshold selection algorithm are also introduced to obtain dynamic motion with six degrees of freedom (DOF) in real time.
Chapter 5 develops a surface damage detection system for cable-stayed bridges using image processing and pattern recognition techniques. Image data captured by three cameras moving along the target cable are wirelessly transmitted to a server computer located at the cable support. Surface damage of cables is detected by deploying principal component analysis.

Chapter 6 briefly reviews some fundamental knowledge and important image enhancement techniques for sonar images, then proposes an efficient sonar image enhancement algorithm. The proposed algorithm consists of two steps: noise removal and image sharpening. Initially, the noise is removed using Wiener and median filters; the filtered image is then sharpened by applying unsharp masking and histogram equalization.
Chapter 7, the final chapter, concludes this dissertation and recommends future studies.
CHAPTER 2
FUNDAMENTALS OF DIGITAL IMAGE PROCESSING
AND PATTERN RECOGNITION
2.1 Digital image formation and representation
2.1.1 Image formation
Digital image formation is the initial step in any digital image processing application. Basically, a digital image formation system consists of an optical system, a sensor and a digitizer. The optical signal is usually transformed into an electrical signal by a sensing device (e.g., a CCD sensor), and the analog signal is then transformed into a digital one by a video digitizer.

An image is the optical representation of an object illuminated by a radiating source. Thus, the image formation process comprises the following elements: an object, a radiating source and an image formation system.
Figure 2.1: Model of a digital image formation system
2.1.2 Image representation
Any method chosen for image representation should fulfill two requirements:
• It should facilitate convenient and efficient processing by means of a computer
• It should encapsulate all information that defines the relevant characteristics of the image
The digital image can be conveniently represented by an M×N matrix i of the form:

$$ i(x,y) = \begin{bmatrix} i(0,0) & i(0,1) & \cdots & i(0,N-1) \\ i(1,0) & i(1,1) & \cdots & i(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ i(M-1,0) & i(M-1,1) & \cdots & i(M-1,N-1) \end{bmatrix} $$
A typical color digital image is mostly represented by a triplet of values, one for each of the color channels, as in the frequently used RGB color scheme. The letters R, G and B stand for red, green and blue. The individual color values are usually 8-bit values, resulting in a total of 24 bits (or 3 bytes) per pixel. This yields a three-fold increase in storage requirements compared with a gray-scale image. A color image can thus be written as a triplet of such matrices:
$$ i(x,y) = \bigl[\; r(x,y) \quad g(x,y) \quad b(x,y) \;\bigr] $$

where each component image has the same M×N form as above, e.g.

$$ r(x,y) = \begin{bmatrix} r(0,0) & \cdots & r(0,N-1) \\ \vdots & \ddots & \vdots \\ r(M-1,0) & \cdots & r(M-1,N-1) \end{bmatrix} $$

and similarly for g(x,y) and b(x,y).
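For illustration, the following minimal Python/NumPy sketch (the array sizes are arbitrary examples) shows how a gray-scale image maps to an M×N array and a color image to three stacked M×N planes:

```python
import numpy as np

M, N = 480, 640                    # M rows, N columns (arbitrary example sizes)

# Gray-scale image: one 8-bit intensity value i(x, y) per pixel.
gray = np.zeros((M, N), dtype=np.uint8)

# Color image: a triplet (r, g, b) per pixel, i.e. three M x N planes.
rgb = np.zeros((M, N, 3), dtype=np.uint8)    # 24 bits (3 bytes) per pixel

# Normalized values 0 <= r, g, b <= 1 are obtained by dividing each
# 8-bit value by its maximum intensity value, 255.
rgb_normalized = rgb.astype(np.float64) / 255.0

print(gray.nbytes, rgb.nbytes)     # storage grows three-fold for color
```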
The three primary colors may be used to synthesize any one of $2^{24}$ (approximately 16 million) colors. In each color square of Figure 2.2, the red and blue color values are varied in increments of 1/3, with the amount of green increasing in the same manner from left to right. The normalized value range $0 \le r, g, b \le 1$ is used for specifying color values.
Figure 2.2: Combination and variant of RGB color
Binary images are images whose pixels have only two possible intensity values; they are normally displayed as black and white [33]. In reality, binary images are used in many applications since they are the simplest to process, but they are such an impoverished representation of the image information that their use is not always possible [33].
Binary images are often produced by thresholding a gray-scale or color image in order to separate objects in the image from the background. The color of the object (usually white) is considered the foreground color; the remaining color (usually black) is referred to as the background color.
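As a brief sketch of this thresholding operation (the threshold value 128 is an arbitrary example, not one prescribed here):

```python
import numpy as np

def to_binary(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Threshold an 8-bit gray-scale image into a binary image.

    Pixels at or above the threshold become foreground (white, 255);
    the remaining pixels become background (black, 0).
    """
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

# Example: a horizontal gradient image thresholded at mid-intensity.
gray = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
binary = to_binary(gray)
```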
2.1.3 Coordinate convention
The result of sampling and quantization is a matrix of real numbers [34]. Assume that an image i(x, y) is sampled so that the resulting image has M rows and N columns. The image is then said to be of size M×N, and the values of the coordinates (x, y) are discrete quantities. In many image processing books, the image origin is defined to be at (x, y) = (0, 0).
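In array terms, this convention places the origin at the top-left element, with the first index running over the M rows and the second over the N columns; a minimal NumPy illustration:

```python
import numpy as np

image = np.arange(12, dtype=np.uint8).reshape(3, 4)   # M = 3 rows, N = 4 columns

origin = image[0, 0]    # pixel at (x, y) = (0, 0): the top-left corner
last = image[2, 3]      # pixel at (M-1, N-1): the bottom-right corner
```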
2.2 Color fundamentals

2.2.1 Basic concepts

A body that reflects light that is balanced in all visible wavelengths appears white to the observer. On the other hand, a body that favors reflectance in a limited range of the visible spectrum exhibits some shades of color [36]. For example, green objects reflect light with wavelengths
primarily in the 500 nm to 570 nm range, while absorbing most of the energy at other wavelengths.
Figure 2.4: Energy spectrum [37]
Because of the structure of the human eye, all colors are seen as variable combinations of the three so-called primary colors red (R), green (G) and blue (B) [38]. For the purpose of standardization, the following specific wavelength values were assigned to the three primary colors:
blue = 435.8 nm, green = 546.1 nm and red = 700 nm. However, using the word primary has been widely misinterpreted to mean that the three standard primaries, when mixed in various intensity proportions, can produce all visible colors. This interpretation is not correct unless the wavelength is also allowed to vary.
The primary colors can be added to produce the secondary colors of light:
+ Magenta (red + blue),
+ Cyan (green + blue), and
+ Yellow (red + green)
Figure 2.5: Mixture of light and mixture of pigments [36]
Mixing the three primaries, or a secondary with its opposite primary color, in the right intensities produces white light [36]. This result is shown in Figure 2.5, which also illustrates the three primary colors and their combinations to produce the secondary colors.

Differentiating between the primary colors of light and the primary colors of pigments or colorants is important. A primary color of pigment is defined as one that subtracts or absorbs a primary color of light and reflects or transmits the other two [35, 36, 39]. Thus the primary colors of pigments are magenta, cyan and yellow, while the secondary colors are red, green and blue. These colors are shown in Figure 2.5. A proper combination of the three pigment primaries, or a secondary with its opposite primary, produces black [39].
The characteristics used to distinguish one color from another are:

Brightness: the achromatic notion of intensity.

Hue: the dominant wavelength in the mixture of light waves.

Saturation: the relative purity, or the amount of white light mixed with a hue; for example, colors such as pink (red plus white) are less saturated.

Hue and saturation together are called chromaticity, so chromaticity and brightness can completely characterize a color. The amounts of red, green and blue needed to form any particular color are called tri-stimulus values. Some basic definitions:

Luminance (measured in lumens): a measure of the energy an observer perceives from a light source.

Radiance (measured in watts): the total amount of energy that flows from the light source.
2.2.2 Color models
A color can be specified as the sum of three colors, and Figure 2.6 shows the amounts of the three primaries needed to match all the wavelengths of the visible spectrum [40].
Figure 2.6: The wavelengths of the visible spectrum
Human perception of color is a function of the response of three types of cones [34]. Because of that, color systems are based on three numbers, called tri-stimulus values. There are numerous color spaces based on tri-stimulus values. The tri-stimulus values are proportional to the numbers of units of the primary colors required to match a given luminance; a negative value indicates that some colors cannot be produced exactly by adding up the primaries. The purpose of a color model is to facilitate the specification of colors in some standard, generally accepted way [41]. A color model is in essence a specification of a 3-D coordinate system and a subspace within that system where each color is represented by a single point [42].
Most color models in use today are oriented either toward hardware, such as color monitors and printers, or toward applications where color manipulation is a goal, such as the creation of color graphics for animation.
The hardware-oriented models most commonly used in practice are:
• RGB (red, green, blue) model for color monitors and a broad class of color video
cameras,
• CMY (cyan, magenta, yellow) model for color printers,
• YIQ model, which is the standard for color TV broadcasting. Y corresponds to luminance; I and Q are two chromatic components called in-phase and quadrature, respectively.
Among the models frequently used for color image manipulation are:
• HSI (hue, saturation, intensity) model,
• HSV (hue, saturation, value) model
2.2.2.1 RGB color model

In the RGB model, each color appears in its primary spectral components of red, green and blue. Color values are often normalized to the range [0, 1], which is accomplished by dividing the color by its maximum intensity value. For example, an 8-bit color is normalized by dividing by 255.
Figure 2.7: RGB color cube
The RGB model simplifies the design of computer graphics systems, but it is not an ideal solution for all practical applications. As Figure 2.7 suggests, the red, green and blue color components are highly correlated. This makes the model difficult to utilize in some image processing algorithms, since many processing techniques, such as histogram equalization, mainly work on the intensity component of an image. These processes are much more easily implemented using the HSI color model.
Images in the RGB color model consist of three independent image planes, one for each primary color. When an RGB image is fed into an RGB monitor, these three images combine on the phosphor screen to produce a composite color image [39]. Thus the use of the RGB model for image processing makes sense when the images themselves are naturally expressed in
Trang 33terms of three-color planes Alternatively, most color cameras used for acquiring digital images utilize the RGB format [43], which alone makes this an important model in image processi
In many real applications, it
for example for hardcopy on a black and white printer
To convert an image from RGB color to gray scale, we can use the following equation:
This equation comes from the NTSC standard for luminance
from RGB color to gray scale is a simple average:
The equation is used in many applications
easy to set normalized values less than 0.5 to black and all others to white This is simple but does not produce the best quality
(a) Color image
Figure 2
17
color planes Alternatively, most color cameras used for acquiring digital images
, which alone makes this an important model in image processi
it is necessary to convert an RGB image into a gray scale image, for example for hardcopy on a black and white printer
To convert an image from RGB color to gray scale, we can use the following equation:
Gray scale intensity = 0.299R + 0.587G + 0.114B
This equation comes from the NTSC standard for luminance, another common conversion from RGB color to gray scale is a simple average:
Gray scale intensity = 0.333R + 0.333G + 0.333B
is used in many applications To further reduce the color to black and white, set normalized values less than 0.5 to black and all others to white This is simple but
e the best quality
2.8: Example of color, gray and binary image
color planes Alternatively, most color cameras used for acquiring digital images
, which alone makes this an important model in image processing necessary to convert an RGB image into a gray scale image,
To convert an image from RGB color to gray scale, we can use the following equation:
(2.3) another common conversion
(2.4)
To further reduce the color to black and white, it is set normalized values less than 0.5 to black and all others to white This is simple but
(c) Binary image
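The two conversions in Eqs. (2.3) and (2.4) can be sketched as follows (the function names are illustrative only):

```python
import numpy as np

def rgb_to_gray_ntsc(rgb: np.ndarray) -> np.ndarray:
    """Luminance-weighted conversion, Eq. (2.3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

def rgb_to_gray_average(rgb: np.ndarray) -> np.ndarray:
    """Simple-average conversion, Eq. (2.4)."""
    return rgb.mean(axis=-1).astype(np.uint8)

# Example usage on a random 8-bit color image.
rgb = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
gray = rgb_to_gray_ntsc(rgb)
```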
2.2.2.2 YIQ color model
The YIQ model is widely used in color TV broadcasting. Basically, YIQ is a recoding of RGB for transmission efficiency and for maintaining compatibility with monochrome TV standards [44].
The Y component of the YIQ system provides all the video information required by a monochrome television set. The RGB to YIQ conversion is defined as:

$$ \begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} $$

Because the human visual system is more sensitive to changes in luminance than in color, more bandwidth (bits) can be assigned to representing Y, and less bandwidth (bits) to representing I and Q.
In addition to being a widely supported standard, the principal advantage of the YIQ model in image processing is that the luminance (Y) and color information (I and Q) are decoupled. Luminance is proportional to the amount of light perceived by the eye. The importance of this decoupling is that the luminance component of an image can be processed without affecting its color content.
For example, as opposed to the problem with the RGB model mentioned above, we can apply histogram equalization to a color image represented in YIQ format simply by applying histogram equalization to its Y component. The relative colors in the image are not affected by this process.
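A minimal NumPy sketch of this idea, using the conversion matrix given above and a plain histogram equalization of the Y channel (helper names are illustrative):

```python
import numpy as np

RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def equalize_luminance(rgb: np.ndarray) -> np.ndarray:
    """Histogram-equalize only the Y channel of an RGB image."""
    yiq = rgb.astype(np.float64) / 255.0 @ RGB_TO_YIQ.T

    # Equalize Y via its cumulative distribution function.
    y = (yiq[..., 0] * 255).astype(np.uint8)
    hist = np.bincount(y.ravel(), minlength=256)
    cdf = hist.cumsum() / y.size
    yiq[..., 0] = cdf[y]                       # remapped luminance in [0, 1]

    # Convert back to RGB; I and Q (the color content) are untouched.
    rgb_eq = yiq @ np.linalg.inv(RGB_TO_YIQ).T
    return (np.clip(rgb_eq, 0.0, 1.0) * 255).astype(np.uint8)

rgb = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
out = equalize_luminance(rgb)
```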
2.2.2.3 CMY/CMYK color model
The CMY color space consists of cyan, magenta and yellow, and provides a model for subtractive colors. In fact, it is the complement of the RGB model, since cyan, magenta and yellow are the complements of red, green and blue, respectively. Cyan, magenta and yellow are known as the subtractive primaries: these primaries are subtracted from white light to produce the desired color.
Figure 2.9: CMY color cube
Most people are familiar with the additive primary mixing used in the RGB color space; there, for example, red plus green produces yellow.

As indicated previously, cyan, magenta and yellow are the secondary colors of light or, alternatively, the primary colors of pigments. It is easy to see that when a surface coated with cyan pigment is illuminated with white light, no red light is reflected from the surface: cyan subtracts red light from reflected white light, which itself is composed of equal amounts of red, green and blue light. Most devices that deposit colored pigments on paper, such as color printers and copiers, require CMY data input or perform an RGB to CMY conversion internally. Since RGB and CMY are complements, it is easy to convert between the two color spaces using the equation below:
$$ \begin{bmatrix} C \\ M \\ Y \end{bmatrix} = \begin{bmatrix} 1.0 \\ 1.0 \\ 1.0 \end{bmatrix} - \begin{bmatrix} R \\ G \\ B \end{bmatrix} $$
In printing, equal amounts of the three pigment primaries should in principle produce black, but in practice a fourth color, black (K), is added, since black ink produces a more pure black than the combination of the other three colors [34]. Pure black provides greater contrast. There is also the added benefit that black ink is cheaper than colored ink. A common way to make the conversion from CMY to CMYK is:

$$ K = \min(C, M, Y), \qquad C' = \frac{C-K}{1-K}, \qquad M' = \frac{M-K}{1-K}, \qquad Y' = \frac{Y-K}{1-K} $$

with the convention $C' = M' = Y' = 0$ when $K = 1$ (pure black).
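A small NumPy sketch of this conversion, assuming normalized inputs in [0, 1] and following the common formula above (the exact variant used in the original text is not recoverable):

```python
import numpy as np

def rgb_to_cmyk(rgb: np.ndarray) -> np.ndarray:
    """Convert a normalized RGB image (values in [0, 1]) to CMYK."""
    cmy = 1.0 - rgb                      # CMY is the complement of RGB
    k = cmy.min(axis=-1, keepdims=True)  # K = min(C, M, Y)

    # Rescale C, M, Y; avoid division by zero where K = 1 (pure black).
    denom = np.where(k < 1.0, 1.0 - k, 1.0)
    cmy = np.where(k < 1.0, (cmy - k) / denom, 0.0)
    return np.concatenate([cmy, k], axis=-1)

rgb = np.random.rand(64, 64, 3)
cmyk = rgb_to_cmyk(rgb)
```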
2.2.2.4 HSI color model
Hue is a color attribute that describes a pure color (pure yellow, orange or red); saturation gives a measure of the degree to which a pure color is diluted by white light [34, 36].
Figure 2.10: HSI color cube
The HSI color model owes its usefulness to two principal facts. First, the intensity component, I, is decoupled from the color information in the image. Second, the hue and saturation components are intimately related to the way in which human beings perceive color [34]. These features make the HSI model an ideal tool for developing image processing algorithms based on some of the color sensing properties of the human visual system [36].

Examples of the usefulness of the HSI model range from the design of imaging systems for automatically determining the ripeness of fruits and vegetables to systems for matching color samples or inspecting the quality of finished color goods.
The HSI model is represented with cylindrical coordinates; see Figure 2.10. The hue (H) is represented as the angle θ, varying from 0° to 360°; saturation (S) corresponds to the radius, varying from 0 to 1; and intensity (I) varies along the z-axis, with 0 being black and 1 being white. When S = 1, the color lies on the boundary of the top cone base; the greater the saturation, the farther the color is from white/gray/black, depending on the intensity. Adjusting the hue varies the color from red at 0°, through green at 120° and blue at 240°, and back to red at 360°. When I = 0, the color is black and therefore H is undefined; when S = 0, the color is gray-scale and H is again undefined.

By adjusting I, a color can be made darker or lighter. By maintaining S = 1 and adjusting I, shades of that color are created.
The following formulae show how to convert from RGB space to HSI:

$$ \theta = \cos^{-1}\left\{ \frac{\tfrac{1}{2}\,[(R-G)+(R-B)]}{\left[(R-G)^2 + (R-B)(G-B)\right]^{1/2}} \right\}, \qquad H = \theta $$

If B is greater than G, then H = 360° − H. The saturation and intensity components are

$$ S = 1 - \frac{3}{R+G+B}\,\min(R, G, B), \qquad I = \frac{1}{3}(R+G+B) $$
To convert from the HSI model to the RGB model, the process mainly depends on which color sector H lies in. For the RG sector (0° ≤ H ≤ 120°):

$$ B = I(1-S), \qquad R = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right], \qquad G = 3I - (R + B) $$
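A per-pixel NumPy sketch of the RGB-to-HSI conversion above, assuming normalized RGB values in [0, 1] (the function name is illustrative):

```python
import numpy as np

def rgb_to_hsi(rgb: np.ndarray) -> np.ndarray:
    """Convert normalized RGB (values in [0, 1]) to HSI.

    Returns H in degrees [0, 360), and S, I in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12                               # guards against division by zero

    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))

    h = np.where(b > g, 360.0 - theta, theta)   # H = 360 - theta if B > G
    s = 1.0 - 3.0 * rgb.min(axis=-1) / (r + g + b + eps)
    i = (r + g + b) / 3.0
    return np.stack([h, s, i], axis=-1)

hsi = rgb_to_hsi(np.random.rand(32, 32, 3))
```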