A New Histogram Modification Based Reversible Data Hiding Algorithm Considering the Human Visual System

Seung-Won Jung, Le Thanh Ha, and Sung-Jea Ko
Abstract—In this letter, we propose an improved histogram modification based reversible data hiding technique. In the proposed algorithm, unlike the conventional reversible techniques, the data embedding level is adaptively adjusted for each pixel with a consideration of the human visual system (HVS) characteristics. To this end, an edge and the just noticeable difference (JND) values are estimated for every pixel, and the estimated values are used to determine the embedding level. This pixel level adjustment can effectively reduce the distortion caused by data embedding. The experimental results and a performance comparison with other reversible data hiding algorithms are presented to demonstrate the validity of the proposed algorithm.
Index Terms—Data hiding, human visual system, just noticeable
difference, lossless watermarking.
I. INTRODUCTION
REVERSIBLE data embedding, which is often referred to as lossless or invertible data embedding, is a technique that embeds data into an image in a reversible manner. In many applications including art, medical, and military images, this reversibility is a very desirable characteristic, and thus a considerable amount of research has been done over the last decade [1]–[6]. In the conventional works, extensive efforts have been devoted to increasing the embedding capacity without deteriorating the visual quality of the embedded image.

A key task of reversible data embedding is to find an embedding area in an image by exploiting the redundancy in the image content. The early reversible algorithm of [1] uses lossless data compression to find an extra area that can contain the to-be-embedded data. In order to expand the extra space, more recent algorithms reduce the redundancy by performing pixel value prediction [2]–[5] and/or by utilizing the image histogram [5], [6]. The state-of-the-art techniques [4], [5] exhibit high embedding capacity without severely degrading the visual quality of the embedded result.
Manuscript received October 06, 2010; revised November 16, 2010; accepted November 20, 2010. Date of publication December 03, 2010; date of current version December 20, 2010. This work was supported by a Korea University Grant, by the Seoul Future Contents Convergence (SFCC) Cluster established by the Seoul R&BD Program (10570), and by the Mid-career Researcher Program through a National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (2010-0000449).
S.-W. Jung and S.-J. Ko are with the Department of Electrical Engineering, Korea University, Seoul, Korea (e-mail: jungsw@dali.korea.ac.kr; sjko@korea.ac.kr).
L. T. Ha is with the Department of Information Technology, University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam (e-mail: ltha@dali.korea.ac.kr).
Digital Object Identifier 10.1109/LSP.2010.2095498
However, since the subjective visual quality is not taken into account in the conventional methods, the quality of the resultant embedded images is often not satisfactory.

In this letter, we propose a histogram modification based reversible data embedding algorithm considering the human visual system (HVS). In the proposed algorithm, a local causal window is used to predict a pixel value and estimate an edge. Then, by adopting the concept of the just noticeable difference (JND) [7], [8], the pixels in the smooth and edge regions are treated differently to reduce the perceptual distortion. Experimental results demonstrate that, compared to conventional algorithms, the proposed algorithm produces subjectively higher quality embedded images while providing a similar embedding capacity.

This letter is organized as follows. In Section II, the proposed scheme is described. The performance of the proposed algorithm is evaluated and compared with the conventional algorithms in Section III, and finally, we conclude the letter in Section IV.
II. PROPOSED ALGORITHM
The proposed algorithm is based on histogram modification. The conventional histogram modification methods embed a message bit into the histogram of pixel values [6] or the histogram of the pixel differences [5]. Since Tai et al.'s method [5] outperforms Ni et al.'s method [6], Tai et al.'s method is chosen as our basic framework. Compared to Tai et al.'s work, our major contribution is to use a causal window to predict a pixel value, an edge, and the JND, and to exploit these predicted values when performing data embedding.
Let x and y denote the original and the embedded image, respectively. For each pixel coordinate (i, j), the pixel value is predicted by

$$\hat{x}(i,j)=\frac{1}{|W(i,j)|}\sum_{(m,n)\in W(i,j)}x(m,n) \qquad (1)$$

where W(i, j) represents a causal window surrounding (i, j) and |·| returns the cardinality of the set. For instance, the causal window of size 5 × 5 shown in Fig. 1 contains 12 pixel positions, and the average of the pixel values at these positions is used as the predicted value. Then we calculate the pixel difference between the original and predicted values by

$$d(i,j)=x(i,j)-\hat{x}(i,j) \qquad (2)$$

where d(i, j) is the difference value used in the data embedding process.
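As an illustration, the following Python sketch implements the causal-window prediction in (1) and the difference computation in (2). The 5 × 5 window shape (two full rows above plus two pixels to the left, 12 positions in total) follows the description of Fig. 1, and flooring the mean to an integer is an assumption made so that all quantities stay integer-valued. Pixels in the first rows and columns, whose window would leave the image, are left unmodified by the scheme and are never passed to these functions.

```python
import numpy as np

def causal_window(i, j, k=2):
    """Causal (already scanned) neighbours of (i, j) in a (2k+1) x (2k+1) window.
    For k = 2 this gives the 12 positions of Fig. 1; k = 1 gives the 4-pixel
    window of size 3 used in the experiments."""
    offsets = [(m, n) for m in range(-k, 0) for n in range(-k, k + 1)]  # rows above
    offsets += [(0, n) for n in range(-k, 0)]                           # pixels to the left
    return [(i + m, j + n) for (m, n) in offsets]

def predict(x, i, j, k=2):
    """x_hat(i, j): average of the causal neighbours, cf. (1).
    The floor keeps the prediction integer (an assumption)."""
    vals = [int(x[m, n]) for (m, n) in causal_window(i, j, k)]
    return int(np.floor(np.mean(vals)))

def difference(x, i, j, k=2):
    """d(i, j) = x(i, j) - x_hat(i, j), cf. (2)."""
    return int(x[i, j]) - predict(x, i, j, k)
```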
Fig. 1. Causal window for computing E(i, j), Jnd(i, j), and x̂(i, j).

Fig. 2. Visibility threshold against background luminance [7].
In the proposed method, the perceptual characteristics of the HVS are exploited to alleviate the quality degradation caused by data embedding. To this end, an edge indicator is simply estimated for each pixel as follows:

$$E(i,j)=\begin{cases}1, & \text{if } \sigma^{2}(i,j)>T_{E}\\ 0, & \text{otherwise,}\end{cases} \qquad (3)$$

where E(i, j) indicates whether the pixel is an edge pixel or not, σ²(i, j) represents the variance of the pixel values in W(i, j), and T_E is an edge threshold. Since the HVS is known to perceive only differences above the JND, the JND value is estimated after edge detection as follows:
$$Jnd(i,j)=\max\{T_{l}(i,j),\,T_{a}(i,j)\} \qquad (4)$$

where T_l(i, j) and T_a(i, j) are two thresholds representing the luminance adaptation and the activity masking characteristics of the HVS, respectively [7]. In order to estimate T_l(i, j), the background luminance is first measured by taking the average value of the local neighborhood. Then, the piecewise linear approximation of the visibility threshold shown in Fig. 2 is used; its three parameters are described in [7], and different parameter values are assigned to nonedge pixels and to edge pixels (i.e., when E(i, j) = 1). In addition, T_a(i, j) is defined as the maximum pixel difference value in the local neighborhood. Note that when computing σ²(i, j) in (3), the background luminance, and T_a(i, j) in (4), only the pixels in the causal window are used, because only these pixels are available at the data extraction stage due to the raster scan order processing.
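A minimal sketch of the edge and JND estimation follows, reusing causal_window() from the previous sketch. The edge threshold T_E = 200 is the value used in Section III. The piecewise linear luminance curve below only mimics the shape of Fig. 2 (its breakpoints are illustrative placeholders, not the parameters of [7]), the activity term is taken as the max-min range of the causal neighbours, and combining the two thresholds with max() is an assumption.

```python
import numpy as np

T_E = 200.0  # edge threshold used in the experiments (Section III)

def edge_flag(x, i, j, k=2):
    """E(i, j) = 1 if the variance of the causal-window pixels exceeds T_E, cf. (3)."""
    vals = [int(x[m, n]) for (m, n) in causal_window(i, j, k)]
    return 1 if np.var(vals) > T_E else 0

def luminance_threshold(bg):
    """Visibility threshold against background luminance bg (shape of Fig. 2).
    The breakpoints and slopes below are illustrative placeholders."""
    if bg <= 127:
        return 17.0 - 14.0 * bg / 127.0          # decreases towards mid-grey
    return 3.0 + 3.0 / 128.0 * (bg - 127.0)      # slowly increases above mid-grey

def jnd(x, i, j, k=2):
    """Jnd(i, j) combining luminance adaptation T_l and activity masking T_a, cf. (4)."""
    vals = [int(x[m, n]) for (m, n) in causal_window(i, j, k)]
    bg = float(np.mean(vals))                    # background luminance
    t_l = luminance_threshold(bg)
    t_a = float(max(vals) - min(vals))           # maximum pixel difference in the window
    return max(t_l, t_a)                         # max() combination is an assumption
```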
Actual data embedding is performed by increasing the difference value and finding the extra space that can contain the to-be-embedded bits. Thus, the overflow and underflow problem can happen when the embedded value exceeds the pixel value bound (0 to 255 in 8-bit images). To solve this problem, the original image histogram is shrunk from both sides according to the embedding level Δ. To realize reversible data embedding, the overhead information describing this preprocessing is losslessly compressed and embedded together with the pure payload data; a detailed description of the preprocessing can be found in [5]. For notational simplicity, from now on, let x denote the preprocessed version of the original image; (1)–(4) are then applied to the preprocessed image.
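One common way to realize this preprocessing is sketched below: pixel values that lie within `shift` grey levels of the range bounds are moved inwards, and location maps of the changed pixels are kept as the overhead information (which [5] compresses losslessly before embedding). The choice of `shift` as a function of the embedding level, and the map-based overhead encoding, are assumptions made for illustration.

```python
import numpy as np

def shrink_histogram(x, shift):
    """Narrow the histogram by `shift` grey levels from both sides so that later
    pixel modifications of at most `shift` levels cannot under- or overflow.
    The location maps of the changed pixels form the overhead information."""
    x = x.astype(np.int32)
    low = x < shift                  # pixels that must be moved up
    high = x > 255 - shift           # pixels that must be moved down
    y = x.copy()
    y[low] += shift
    y[high] -= shift
    return y, (low, high)            # preprocessed image and overhead maps
```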
Unlike Tai et al.'s method, which adopts a fixed embedding level Δ, we adaptively adjust the embedding level Δ(i, j) for each pixel according to the local image characteristics. For a non-edge pixel, i.e., if E(i, j) = 0, the maximum possible embedding level is chosen under the constraint that the resulting pixel value change stays below the JND value, because distortion above the JND in the smooth region is perceptually disturbing. On the other hand, for an edge pixel, i.e., if E(i, j) = 1, the minimum possible embedding level whose pixel value change exceeds the JND is chosen, so that a sufficient amount of data can be embedded. This is because it is difficult to find the extra space with an embedding level lower than the JND, since the difference values in the edge region are high. Besides, an increase above the JND in the edge region does not severely deteriorate the visual quality; indeed, an intentional increase of contrast beyond the JND in the edge region is sometimes employed in image enhancement algorithms [9].
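The following sketch shows one way to realize this per-pixel level selection. It assumes a Tai-style framework in which an embedding level L changes a pixel value by at most 2^L grey levels; the exact relation between the level and the pixel change used in the paper may differ.

```python
def embedding_level(jnd_value, is_edge, max_level=5):
    """Per-pixel embedding level.  Assumes a level L alters a pixel by at most
    2**L grey levels (Tai-style framework); this mapping is an assumption."""
    if not is_edge:
        # non-edge: the largest level whose worst-case change stays below the JND
        candidates = [L for L in range(max_level + 1) if 2 ** L < jnd_value]
        return max(candidates) if candidates else 0
    # edge: the smallest level whose worst-case change reaches the JND
    for L in range(max_level + 1):
        if 2 ** L >= jnd_value:
            return L
    return max_level
```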
After estimating the edge, the JND, and finally the embedding level, we can try to embed a message bit at each pixel. If the difference value d(i, j) lies within the embeddable range determined by Δ(i, j), the message bit is embedded by modifying d(i, j), and the output pixel value y(i, j) is formed from the prediction x̂(i, j) and the modified difference. Otherwise, data embedding is not performed, but the difference value is expanded by a level-dependent amount to discriminate this pixel from the embedded pixels, and the output pixel value is obtained accordingly.
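A sketch of the per-pixel embedding step under the same Tai-style assumption: differences whose magnitude is below 2^L carry one bit through expansion (d → 2d ± b), while larger differences are enlarged by 2^L so that the extractor can tell the two cases apart. These concrete rules are an assumption consistent with the description above, not the paper's verbatim equations.

```python
def embed_pixel(x_val, x_hat, level, bit):
    """Embed one bit at a pixel, or shift it if its difference is too large.
    Returns (embedded_pixel_value, bit_was_embedded)."""
    d = x_val - x_hat
    t = 2 ** level
    if abs(d) < t:                                    # embeddable difference
        new_d = 2 * d + bit if d >= 0 else 2 * d - bit
        return x_hat + new_d, True
    new_d = d + t if d > 0 else d - t                 # shifted, carries no bit
    return x_hat + new_d, False
```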
Since the pixel value range was narrowed at the preprocessing stage, underflow or overflow is prevented. This data embedding process continues until all to-be-embedded bits are inserted, and the resultant embedded image can deliver the embedded information. Given only the embedded image and the embedding level Δ, the original image is recovered and the embedded bits are obtained at the data extraction process. Because the pixel value prediction, the edge and JND computation, and the embedding level estimation are performed using the causal window, the same values can be derived at the extractor.
Fig. 3. Test images of 256 × 256 × 8 bits. In the upper row, from left to right: Airplane, Baboon, Boat. In the lower row, from left to right: Candy, Lena, Peppers.
Here, the pixels at the upper and left image boundaries are not modified, in order to satisfy the reversibility. At the extractor, the pixels are visited in the same raster order and the difference between the embedded pixel value and its prediction is examined. If this difference identifies an embedded pixel, the message bit is extracted from it and the original pixel value is recovered, where a ceiling operation is used when undoing the expansion of the difference. Otherwise, the expanded difference value of the non-embedded pixel is compensated so that its original value is restored. Since the overhead information bits describing the preprocessing are also extracted, the original image is finally recovered by shifting back the image histogram.
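For completeness, a sketch of the corresponding extraction step, inverting embed_pixel() under the same Tai-style assumption. The floor/ceiling halving of the expanded difference corresponds to the ceiling operation mentioned above, and the compensation branch undoes the 2^L shift of the non-embedded pixels.

```python
import math

def extract_pixel(y_val, x_hat, level):
    """Recover the original pixel value and, if present, the embedded bit.
    Inverse of embed_pixel(); returns (original_value, bit_or_None)."""
    d2 = y_val - x_hat                       # expanded or shifted difference
    t = 2 ** level
    if abs(d2) < 2 * t:                      # expanded: this pixel carries a bit
        bit = abs(d2) % 2
        d = math.floor(d2 / 2) if d2 >= 0 else math.ceil(d2 / 2)
        return x_hat + d, bit
    d = d2 - t if d2 > 0 else d2 + t         # shifted: compensate, no bit
    return x_hat + d, None
```

Together with predict(), edge_flag(), jnd(), and embedding_level() from the earlier sketches, embed_pixel() and extract_pixel() give an end-to-end, pixel-by-pixel realization of the scheme under the stated assumptions; because the extractor visits pixels in the same raster order, the causal window always contains already-restored values and reproduces the embedder's predictions exactly.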
III. EXPERIMENTAL RESULTS
In order to evaluate the performance of the proposed algorithm, six commonly used grayscale images shown in Fig. 3 are used [10]. First, the capacity versus distortion performance of the proposed algorithm is illustrated in Fig. 4. Here, the distortion is measured by the structural similarity (SSIM), which effectively assesses the perceptual visual quality of the image [11], and the capacity is represented by the average number of embedded bits per pixel (bpp). The edge threshold and the causal window size are empirically set to 200 and 3, respectively.
For all test images, more bits can be embedded by increasing the embedding level at the expense of quality degradation. Because data embedding depends on the redundancy in the image content, images containing large smooth areas, such as Candy, can embed a large number of bits, whereas images with complicated textures, such as Baboon, can contain a relatively small number of bits.

Fig. 4. SSIM versus watermark capacity for the test images.

Fig. 5. Performance comparison among Tai's, Hu's, and the proposed methods for the Lena image: (a) watermark capacity versus PSNR, (b) watermark capacity versus SSIM.
Fig. 6. Magnified regions of the original (first column) and watermarked (second and third columns) images: (a) Lena, (b) Tai's (PSNR: 28.98 dB, SSIM: 0.860, capacity: 0.829 bpp), (c) the proposed (PSNR: 31.04 dB, SSIM: 0.922, capacity: 0.802 bpp), (d) Peppers, (e) Tai's (PSNR: 29.70 dB, SSIM: 0.861, capacity: 0.885 bpp), (f) the proposed (PSNR: 31.03 dB, SSIM: 0.915, capacity: 0.807 bpp), (g) Airplane, (h) Tai's (PSNR: 34.20 dB, SSIM: 0.927, capacity: 0.776 bpp), (i) the proposed (PSNR: 34.02 dB, SSIM: 0.931, capacity: 0.773 bpp).
Fig. 5 shows the performance comparison results of the proposed, Hu et al.'s [4], and Tai et al.'s [5] algorithms. The PSNR results in Fig. 5(a) reveal that the capacity versus distortion performance of the proposed algorithm is comparable to that of the conventional ones in the low capacity region, and the proposed algorithm exhibits slightly improved performance in the high capacity region. In addition, the performance is degraded when the causal window of size 5 × 5 is used. This is because the simple average prediction in (1) does not perform well for large window sizes. Further improved performance is expected by using a higher-order prediction or by changing the window size adaptively.
The SSIM comparison results in Fig. 5(b) show more clearly that the proposed algorithm outperforms the conventional methods. A subjective visual quality evaluation is also performed in Fig. 6. To facilitate comparison, magnified regions of the original image and of the two watermarked images obtained by Tai et al.'s and the proposed algorithms are shown. At a similar watermark capacity, we can see that the proposed algorithm provides higher quality embedded images without producing annoying artifacts.
Since the proposed algorithm produces perceptually improved watermarked images, a user who has no knowledge of the original image can hardly recognize the existence of the watermark. Thus, the proposed algorithm is suitable for the conventional applications of reversible data hiding, such as art, medical, and military imaging. In addition, note that the proposed algorithm produces embedded images exhibiting sharper image details compared to the original images. Therefore, even though image enhancement is not a primary concern in reversible data hiding, the embedded image can replace the original image in applications where sharp image details are preferred. In such applications, the proposed algorithm can be used to perform image enhancement and reversible data hiding at the same time.
IV. CONCLUSION
In this letter, we have presented an improved histogram modification based reversible data hiding technique. In the proposed algorithm, unlike the conventional reversible techniques, the HVS characteristics are extensively exploited to alleviate the distortion caused by data embedding. The edge and JND values are estimated using the causal window, and thus no additional overhead needs to be embedded. Using the estimated values, the embedding level is adaptively adjusted for each pixel. The experimental results demonstrated that this pixel-level adaptive embedding method provides superior visual quality of the embedded images.

The proposed technique effectively exploits the well-known HVS characteristics for reversible image data embedding. When the proposed algorithm is applied to reversible video data embedding, video-related HVS characteristics such as motion blur and motion sharpening can additionally be considered to produce perceptually pleasant video sequences.
REFERENCES

[1] J. Fridrich, M. Goljan, and R. Du, "Lossless data embedding-new paradigm in digital watermarking," EURASIP J. Appl. Signal Process., vol. 2002, no. 2, pp. 185–196, Feb. 2002.
[2] J. Tian, "Reversible data embedding using a difference expansion," IEEE Trans. Circuits Syst. Video Technol., vol. 13, pp. 890–896, Aug. 2003.
[3] S. Weng, Y. Zhao, J.-S. Pan, and R. Ni, "Reversible watermarking based on invariability and adjustment on pixel pairs," IEEE Signal Process. Lett., vol. 15, pp. 721–724, Nov. 2008.
[4] Y. Hu, H.-K. Lee, and J. Li, "DE-based reversible data hiding with improved overflow location map," IEEE Trans. Circuits Syst. Video Technol., vol. 19, pp. 250–260, Feb. 2009.
[5] W.-L. Tai, C.-M. Yeh, and C.-C. Chang, "Reversible data hiding based on histogram modification of pixel differences," IEEE Trans. Circuits Syst. Video Technol., vol. 19, pp. 906–910, Nov. 2009.
[6] Z. Ni, Y. Q. Shi, N. Ansari, and W. Su, "Reversible data hiding," IEEE Trans. Circuits Syst. Video Technol., vol. 16, pp. 354–362, Mar. 2006.
[7] W. Lin, L. Dong, and P. Xue, "Visual distortion gauge based on discrimination of noticeable contrast changes," IEEE Trans. Circuits Syst. Video Technol., vol. 15, pp. 900–909, Jul. 2005.
[8] I. Höntsch and L. Karam, "Adaptive image coding with perceptual distortion control," IEEE Trans. Image Process., vol. 11, no. 3, pp. 213–222, Mar. 2002.
[9] A. Polesel, G. Ramponi, and V. J. Mathews, "Image enhancement via adaptive unsharp masking," IEEE Trans. Image Process., vol. 9, no. 3, pp. 505–510, Mar. 2000.
[10] CVG-UGR Image Database. [Online]. Available: http://decsai.ugr.es/cvg/dbimagenes
[11] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.