
Adaptive reversible image watermarking algorithm based on IWT and level set

Abstract

In order to improve the robustness, imperceptibility, and anti-malicious-extraction capability of reversible image watermarking, an adaptive reversible image watermarking algorithm based on the integer wavelet transform (IWT) and the level set method is proposed in this paper. Firstly, a stable edge contour is extracted using the Laplace operator and the level set method. Secondly, a unit circle within the stable edge contour is determined. Finally, the inscribed square of the determined unit circle is divided into non-overlapping blocks, IWT is performed on each sub-block, and the human visual system (HVS) characteristics are used to embed the watermark adaptively. The simulation results show that the algorithm has good invisibility and can resist various attacks. The algorithm not only has strong robustness but also can losslessly recover the original image.

1 Introduction

As an effective means of copyright protection, digital watermarking technology has attracted wide attention. Traditional digital watermarking [1] causes some distortion of the original image after information is embedded. In some special applications, however, the original image must be restored exactly; reversible digital watermarking [2] has therefore emerged to meet this need. Different from traditional digital watermarking, reversible watermarking can recover the original host information without distortion after the watermark information is extracted, which is badly needed in areas such as military intelligence, medical records, and legal evidence, so it has been widely developed in recent years.

Reversible image watermarking can be traced back to Barton’s patent in 1997 [3] and Honsinger’s patent in 1999 [4]. Barton’s algorithm obtains additional space to carry the watermark payload through lossless compression of the original data, and it improves the authentication ability of digital media, including images and video encoded with the JPEG and MPEG compression standards. Honsinger’s patent describes another early lossless data hiding scheme for fragile authentication; it uses modular addition in the spatial domain, and its disadvantage is that a salt-and-pepper noise phenomenon appears in the watermarked image. Both patented technologies embed the watermark reversibly: the extraction algorithm can not only extract the previously embedded watermark but also completely restore the original image. Reversible watermarking has been widely and deeply studied in recent years. It is mainly divided into three categories: (1) reversible watermarking based on data compression, (2) reversible watermarking based on difference expansion, and (3) reversible watermarking based on histogram shifting.

An ideal reversible watermarking algorithm for copyright protection should have good imperceptibility, considerable embedding capacity, and strong robustness against various attacks. At present, the vast majority of reversible image watermarking schemes are fragile [5] and are mainly used for image authentication and secret communication. Robust reversible image watermarking allows the watermark to be extracted even after attacks [6], which is badly needed in applications where one wants to verify the authenticity and integrity of the carrier while reversibly recording the copyright information of the carrier’s owner. A robust reversible watermarking algorithm with good performance can perfectly fulfil both the authentication and the copyright-recording functions. Robust reversible watermarking technology that resists various attacks therefore plays an increasingly important role in reversible watermarking research. Attacks, especially geometric attacks, can destroy the synchronization between watermark embedding and extraction, making it difficult to detect and extract the watermark. Anti-attack reversible image watermarking has thus become an important and difficult research topic in recent years.

In order to improve the robustness, imperceptibility, anti-attack ability, and anti-malicious-extraction ability of reversible image watermarking, we propose an adaptive reversible image watermarking algorithm based on IWT and the level set method. Firstly, we find a stable edge contour by using the Laplace operator and the level set method. Secondly, a unit circle within the stable edge contour is determined. Finally, the inscribed square of the determined unit circle is divided into non-overlapping blocks, IWT is performed on each sub-block, and the HVS is used to embed the watermark adaptively. The experimental results show that the algorithm has good visual quality and not only can effectively resist geometric attacks but also is strongly robust to conventional signal processing.

2 Related research

In general, a watermark causes some distortion of the original image during embedding or extraction. In areas where higher requirements are placed on image quality and no change to the original image is allowed after watermark extraction, lossless reconstruction of the original image is essential. Given this, early research on reversible image watermarking mainly focused on embedding rate and imperceptibility, for example algorithms based on histogram shifting [7, 8], compression [9, 10], and difference expansion [11–13]. Ni et al. [7] proposed an algorithm that embeds the watermark by adjusting the image histogram. It utilizes the zero or minimum points of the image histogram and slightly modifies the pixel grayscale values to embed the watermark. The algorithm is simple and fast and the embedding distortion is small, but the embedding capacity is affected by the characteristics of the original carrier image. Li et al. [8] proposed a reversible data hiding scheme based on histogram shifting of n-bit planes (nBPs). This scheme extracts nBPs from the 8-bit planes of each pixel to generate a bit plane truncation image (BPTI), and block division is then applied to the BPTI. These operations concentrate the peak point of the block histogram and increase the probability of a zero point in the block histogram. The scheme achieves higher hiding capacity than previous histogram-based schemes, and its visual quality is very satisfactory. Mohammad Awrangjeb [10] used arithmetic coding to compress the recovery information and then concatenated it with the watermark information, both of which were embedded into the original image. Tian [11] proposed a reversible data embedding algorithm based on difference expansion, which provides larger embedding capacity. The difference expansion technique is flexible and can be applied to different kinds of differences, such as integer Haar wavelet coefficients and image prediction errors, and can therefore be turned into embedding algorithms suited to different purposes. Literature [12] proposed a method combining difference expansion with difference histogram shifting. This method greatly improves image quality at the same embedding rate and discusses the difference expansion and prediction error expansion methods in detail through five versions. The algorithm can choose an appropriate threshold T depending on the payload size and adjust the embedding capacity, so that the image quality is the best achievable under the current payload. The algorithm needs to store the lowest bits of the outer pixels, and these pixels are replaced by LSB substitution; the substitution reduces the quality of the carrier image but does not bring any capacity increase. Literature [13] presented a novel algorithm that improved a generalized integer-transform-based reversible watermarking scheme. Two main improvements are achieved: adaptive thresholding and efficient location map encoding. With adaptive thresholding, a suitable threshold t is selected adaptively, which ensures enough embedding capacity for the watermark while keeping the introduced distortion as low as possible. Efficient location map encoding helps reduce the location map size, down to 0.4 of the unmodified size on average. The algorithm has higher embedding capacity and better visual quality, but it is not robust. In addition, Xiang et al. [14] proposed a non-integer expansion embedding technique that processes non-integer prediction errors (NIPE) for embedding data into an audio or image file by expanding only the integer part of a prediction error while keeping its fractional part unchanged. The advantage of the NIPE technique is that it can bring a predictor into full play by estimating a sample/pixel in a non-causal way in a single pass, since there is no rounding operation. A new non-causal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The experimental results show that the NIPE technique with the new non-causal prediction strategy can reduce the embedding distortion for the same embedding payload, but it is not robust.

Robust reversible image watermarking is very important for some applications precisely because of its robustness. A robust reversible watermarking algorithm with good performance can perfectly fulfil the functions of authentication and copyright protection, and its research is of great significance.

In research on robust reversible image watermarking, a representative scheme is that of De Vleeschouwer et al. [15], which is based on hash theory. The scheme has a certain ability to resist Joint Photographic Experts Group (JPEG) lossy compression, but pixel values may jump, producing a visually disturbing phenomenon similar to salt-and-pepper noise.

Ni et al. [16] proposed a robust reversible image watermarking scheme in which the watermark is embedded into the image based on the arithmetic mean value of each block. Compared with the scheme in literature [15], the proposed scheme has strong robustness and a high peak signal-to-noise ratio. Zeng et al. [17] proposed a robust reversible image watermarking scheme that divides the image into non-overlapping blocks and calculates the arithmetic difference of each block. When the information bit “1” is embedded, the arithmetic difference of the block is modified; when the information bit “0” is embedded, the arithmetic difference is left unchanged. The scheme obtains some robustness, and compared with the above robust watermarking algorithms, its robust performance is improved. An et al. [18–20] further studied several robust reversible image watermarking schemes. Among them, literature [18] embeds the watermark in the frequency domain by modifying the average of the block wavelet coefficients, which effectively improves robustness. Literature [19] presented a novel statistical quantity histogram shifting and clustering-based robust lossless data hiding (RLDH) method, or SQH-SC for short. The scheme is completely reversible and has strong robustness against lossy compression and random noise as well as good imperceptibility and high capacity.

Sahraee et al. [21] proposed a robust blind watermarking algorithm for copyright protection based on quantization of the distance among wavelet coefficients. Wavelet coefficients are divided into blocks, and the first, second, and third maximum coefficients in each block are obtained. Then, the first and second maximum coefficients are quantized according to the binary watermark bits. Using this block-based watermarking, the watermark can be extracted without the original image or watermark. The experimental results show that the proposed method is quite robust under both geometric and non-geometric attacks. The algorithm overcomes the defects of traditional algorithms, but it is not reversible.

Gu et al. [22] proposed a novel reversible robust watermarking algorithm based on a chaotic system. The algorithm realizes robust reversible watermark embedding by searching for the best embedding position and threshold value. However, the reversibility of the watermark strongly depends on the selected threshold, and an inappropriate threshold may directly make the watermark irreversible.

Agoyi et al. [23] proposed a novel watermarking scheme based on the discrete wavelet transform (DWT) in combination with the chirp z-transform (CZT) and the singular value decomposition (SVD). Firstly, the image is decomposed into its frequency sub-bands by a one-level DWT. Then, the high-frequency sub-band is transformed into the z-domain by the CZT. Afterward, by SVD, the watermark is added to the singular matrix of the transformed image. Finally, the watermarked image is obtained by the inverse CZT and inverse DWT. This scheme combines the advantages of all three transforms and is imperceptible and robust to several attacks and signal processing operations, but it is not reversible.

Thabit et al. [24] presented a novel robust reversible watermarking scheme that uses the Slantlet transform matrix to transform small blocks of the original image and hides the watermark bits by modifying the mean values of the carrier sub-bands. The overflow/underflow problem is avoided by a histogram modification process. Extensive experiments on 100 general images and 100 medical images demonstrate the efficiency of the proposed scheme. It is robust against different kinds of attacks, and the results prove that it is completely reversible with improved capacity, robustness, and invisibility.

Choi et al. [25] proposed a robust lossless digital watermarking scheme based on a generalized integer transform in the spatial domain. In this method, data bits are hidden in the cover media by a reversible generalized integer transform with bit plane manipulation. Because the transform is reversible, data can be hidden in the cover media and the stego media can be restored to its original form after extraction of the hidden data. In the embedding procedure, adaptive bit plane manipulation is applied to increase the robustness of the algorithm while keeping good visual quality. To further increase robustness, watermark bits are embedded repeatedly and majority voting is used to decode the hidden information during extraction. Furthermore, a threshold is introduced to choose regions with lower variance for embedding, as such regions are more robust against JPEG compression.

Ansari et al. [26] proposed a robust reversible watermarking scheme based on the Slantlet transform and the artificial bee colony algorithm. The scheme uses the mean value coefficients of the HL and LH bands for watermark embedding, which provides very good imperceptibility and robustness. The study further noted the need to find an optimal embedding strength that trades off imperceptibility against robustness, so artificial bee colony optimization is used to find the optimal embedding strength. The proposed scheme is completely reversible and has a high data hiding capacity as well as anti-attack ability.

On the whole, reversible image watermarking technologies have made continuous progress in recent years, but the research is still not fully adequate. Several problems remain: the performance of reversible image watermarking (especially its robustness) needs to be further improved, the poor resistance to geometric attacks should be addressed, and the potential application fields of reversible image watermarking need to be widened further.

3 Preliminary

This paper studies a reversible image watermarking algorithm and focuses on robustness and anti-attack ability. Specifically, we aim to improve the robustness and anti-attack ability of image watermarking while still guaranteeing the watermark embedding capacity and good visual quality. Based on this, the main preparatory work of this paper is as follows.

3.1 Image feature analysis based on HVS

The human visual system (HVS) exhibits a texture masking effect, a frequency masking effect, a spatial masking effect, a luminance masking effect, and so on [27]. For an image, the sensitivity of the HVS differs across locations. In this paper, based on the masking effects of the HVS, the image is classified by combining the texture masking effect and the luminance masking effect. According to the complexity of its texture, an image block can be classified as texture smooth (G1), texture moderately complex (G2), or texture complex (G3). The texture masking effect shows that the HVS is not sensitive to changes in regions with complex texture; the higher the visual threshold of a region, the more secret information can be embedded there. In smooth regions of high sensitivity, the visual threshold is low and only a small amount of secret information can be embedded.

According to the gray-level characteristics of the image, a block can be classified as low gray (H3), middle gray (H2), or high gray (H1). The luminance masking effect shows that the human eye has different sensitivity to pixels of different gray values: it is most sensitive to the middle gray range, where less secret information can be embedded, while its ability to resolve changes in the high and low gray ranges is reduced, so changes of pixel values there are not easy to distinguish.

In summary, more secret information can be embedded in G3, less in G2, and the least in G1, so the embedding capacity satisfies G3 > G2 > G1; more secret information can be embedded in H1 and H3 and less in H2, so the embedding capacity satisfies H1 > H2 and H3 > H2.

According to the above two visual masking effects, the cover image is divided into non-overlapping sub-blocks of size m × n, and a different amount of secret information is embedded in each sub-block. By computing the information entropy of each block, a block is classified as a smooth, moderately complex, or complex texture block, using 50 and 75% of the maximum entropy as thresholds. According to the non-linear gray-level characteristics, blocks are also divided into low gray, middle gray, and high gray blocks. For 256-gray-level images, 80 is taken as the boundary between the low and middle gray levels, and 180 as the boundary between the middle and high gray levels.

For a 4 × 4 sub-block, in order to obtain higher visual quality, it is assumed that a smooth texture block can embed a 1-bit watermark, a moderately complex texture block a 2-bit watermark, and a complex texture block a 3-bit watermark, while a low gray block can embed a 2-bit watermark, a middle gray block a 1-bit watermark, and a high gray block a 2-bit watermark. The watermark bits are therefore embedded in the corresponding sub-blocks as shown in Table 1.

Table 1 HVS masking effect
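As an illustration only, the following Python sketch classifies a single 4 × 4 block along the lines described above; the use of the theoretical maximum block entropy as the reference for the 50 and 75% thresholds, the additive combination of the texture and luminance contributions (consistent with the 5-bit per-block maximum mentioned in Section 4.1), and the function interface are assumptions rather than details given in the paper.

```python
import numpy as np

def block_capacity(block, max_entropy=4.0):
    """Estimate the watermark capacity (in bits) of a 4 x 4 gray block from
    its texture (information entropy) and luminance (mean gray) features.
    max_entropy = log2(16) is the largest entropy a 16-pixel block can have."""
    # texture feature: information entropy of the block's gray values
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()
    entropy = float(-(p * np.log2(p)).sum())

    # texture class: thresholds at 50 and 75% of the maximum entropy
    if entropy < 0.50 * max_entropy:        # smooth block (G1)
        texture_bits = 1
    elif entropy < 0.75 * max_entropy:      # moderately complex block (G2)
        texture_bits = 2
    else:                                   # complex texture block (G3)
        texture_bits = 3

    # luminance class: boundaries at 80 and 180 for 256-gray-level images
    mean_gray = float(block.mean())
    if mean_gray < 80:                      # low gray block (H3)
        gray_bits = 2
    elif mean_gray <= 180:                  # middle gray block (H2)
        gray_bits = 1
    else:                                   # high gray block (H1)
        gray_bits = 2

    # combined capacity (assumed additive, at most 3 + 2 = 5 bits)
    return texture_bits + gray_bits
```

For example, a smooth, middle-gray block would receive 1 + 1 = 2 bits under this rule.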

3.2 Image contour extraction

The edge is the most basic feature of an image and concentrates most of the image information. Edge detection is the most basic step of all edge-based segmentation methods [28]. The Laplace operator is a commonly used second-order derivative operator that is independent of the edge direction [29]. For a continuous function f(x, y), its Laplacian at position (x, y) is given by Eq. (1); for a digital image, the corresponding discrete edge response at pixel (i, j) is given by Eq. (2):

$$ {\nabla}^2 f\left( x, y\right)=\frac{\partial^2 f\left( x, y\right)}{\partial {x}^2}+\frac{\partial^2 f\left( x, y\right)}{\partial {y}^2} $$
(1)
$$ G\left( i, j\right)=\left|4 f\left( i, j\right)- f\left( i+1, j\right)- f\left( i-1, j\right)- f\left( i, j+1\right)- f\left( i, j-1\right)\right| $$
(2)

The Laplace operator is a scalar rather than a vector and has linearity and rotational invariance. It produces a steep zero-crossing at an edge. Its advantage is that it highlights the corner lines and isolated points of an image, as shown in Figs. 1 and 2. Upon completion of the image edge detection, the level set can be used for image segmentation.
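For illustration, Eq. (2) can be evaluated for a whole gray image with a few lines of numpy; the function name and the replicated-border handling are assumptions of this sketch.

```python
import numpy as np

def laplace_edge(img):
    """Discrete Laplacian edge response G(i, j) of Eq. (2) for a gray image."""
    f = img.astype(np.float64)
    p = np.pad(f, 1, mode="edge")            # replicate borders
    return np.abs(4 * f
                  - p[2:, 1:-1]              # f(i + 1, j)
                  - p[:-2, 1:-1]             # f(i - 1, j)
                  - p[1:-1, 2:]              # f(i, j + 1)
                  - p[1:-1, :-2])            # f(i, j - 1)
```

Thresholding this response gives a binary edge map such as the one shown in Fig. 2.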

Fig. 1 Original Lena image

Fig. 2 Lena image generated by edge detection

The level set method was first proposed by Osher and Sethian [30]. It is a geometric deformation model and a numerical method for contour tracking and surface evolution. The basic idea is not to operate on the contour directly but to represent an n-dimensional contour as the zero level set of an (n + 1)-dimensional function. This higher dimensional function is called the level set function φ(X, t) and evolves according to a differential equation. At time t, the moving contour is obtained by extracting the zero level set c(X, t) = {X | φ(X, t) = 0}. Even though this formulation makes the problem slightly more complicated, it has many advantages: the level set can naturally handle topological changes (merging or splitting) of the closed curve implied by the level set function and yields the unique weak solution satisfying the entropy condition.

In the two-dimensional case, for example, the level set method views the closed curve C(t) in the two-dimensional plane as the zero level set {φ = 0} of a continuous function surface φ in three-dimensional space, namely

$$ C(t)=\left\{\left( x, y\right)\Big|\varphi \left( x, y, t\right)=0\right\} $$
(3)

where t represents time. Taking the partial derivative with respect to time on both sides of the equation gives

$$ \frac{\partial \varphi}{\partial t}+\frac{\partial \varphi}{\partial x}\,\frac{\partial x}{\partial t}+\frac{\partial \varphi}{\partial y}\,\frac{\partial y}{\partial t}=0 $$
(4)

In order to solve this equation, suppose the movement speed function in the normal direction of the surface is F(x, y)

$$ F\left( x, y\right)=\left[\frac{\partial x}{\partial t},\frac{\partial y}{\partial t}\right]\cdot n $$
(5)

where n is the unit normal vector.

$$ n=-\frac{\nabla \varphi}{\left|\nabla \varphi \right|},\qquad \nabla \varphi =\left[\frac{\partial \varphi}{\partial x},\frac{\partial \varphi}{\partial y}\right] $$
(6)

where ∇φ is the gradient of φ in the two-dimensional plane; then

$$ \left[\frac{\partial x}{\partial t},\frac{\partial y}{\partial t}\right]\cdot \left[\frac{\partial \varphi}{\partial x},\frac{\partial \varphi}{\partial y}\right]=- F\left|\nabla \varphi \right| $$
(7)

Hence,

$$ \frac{\partial \varphi}{\partial t}= F\left|\nabla \varphi \right| $$
(8)

This is the level set equation. The curve evolution problem thus reduces to solving Eq. (8) with the initial condition

$$ \varphi \left( x, y, t=0\right)=\pm d\left( x, y\right) $$
(9)

In Eq. (9), d(x, y) is the signed distance function, i.e., the shortest distance from the pixel (x, y) to the closed curve C(t). Its sign is determined by the position of the pixel: if the pixel falls inside the closed curve, it is positive; otherwise, it is negative. At any moment, the points on the curve form the set of points where the distance function is 0, which is the zero level set.

When pixels fall within C(t), φ(x, y, t) > 0;

When pixels fall outside of C(t), φ(x, y, t) < 0;

When pixels fall upon C(t), φ(x, y, t) = 0.

Finally, the image segmentation contour is acquired by the zero level set on the level set function surface (Fig. 3).

Fig. 3 Contour extracted by level set
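The following minimal sketch iterates Eq. (8) explicitly; it assumes a constant normal speed F and simple central differences, whereas a practical level set segmentation such as the one used here couples F to the image edge information and adds upwind differencing and reinitialization.

```python
import numpy as np

def evolve_level_set(phi, F=1.0, dt=0.1, steps=100):
    """Explicit iteration of the level set equation d(phi)/dt = F * |grad(phi)|.

    phi is a 2-D level set function, e.g. a signed distance function that is
    positive inside the initial contour and negative outside (cf. Eq. (9))."""
    phi = phi.astype(np.float64).copy()
    for _ in range(steps):
        gy, gx = np.gradient(phi)              # central differences
        phi += dt * F * np.hypot(gx, gy)       # Eq. (8)
    return phi

# The segmentation contour C(t) is the zero level set of the evolved phi,
# e.g. the boundary of the region where evolve_level_set(phi0) > 0.
```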

3.3 Unit circle position determination

Traditional digital watermarking algorithms have not found an effective way to determine the unit circle. For example, in the scheme proposed by Kim [31], the inscribed circle of the image is used as the unit circle; when the image is subjected to a cropping attack, an accurate unit circle cannot be found, leading to failure of watermark extraction. In the scheme proposed by Xin [32], both embedding and extraction need to store the location information of the unit circles. When extracting the watermark, a search algorithm is needed to recover the location of each unit circle; only if the recovered position matches the position of the original embedding unit circle is the circle accepted and the watermark extracted. This method not only reduces the watermark extraction efficiency but also affects the watermark embedding rate. In this paper, a method for determining the position of the unit circle by means of the level set is proposed, which makes up for the above defects.

Because the image’s stable edge contour is determined by the image content, its shape is little affected by rotation and cropping attacks. Figure 4 shows the contour shapes obtained by iterating the geometric contour model for different types of images. Figure 5 shows the stable edge contours obtained by the same iteration when the 512 × 512 Lena image is rotated by 10°, 30°, and 45° and subjected to various cropping attacks. As can be seen from the figures, the shape of the iteration result remains stable for different types of images and different rotation angles.

Fig. 4 Iterative results of the geometric contour model for different types of images. a Original Pepper image. b Pepper cut 5% from the left. c Original Girl image. d Girl cut 10% from the left and the top, respectively. e Original Baboon image. f Baboon rotated 30°. g Original Lena image. h Lena rotated 45°

Fig. 5 Iterative results of the Lena image based on the geometric active contour model under rotation and cropping attacks. a Original Lena image. b Lena rotated 10°. c Lena rotated 30°. d Lena rotated 45°. e Lena cut 10% from the left. f Lena cut 10% from the right. g Lena cut 10% from the top. h Lena cut 10% from the left and the top, respectively

Therefore, the center and radius of the unit circle are determined with the help of the image’s stable edge contour. Firstly, a Gaussian filter is used to remove weak edges of the image and preserve the stable edges. Then, as described in Section 3.2, a stable edge contour is obtained with the level set method. The barycenter of the contour is taken as the center of the unit circle. The barycenter of an image is geometrically invariant under scaling, translation, and rotation attacks, so if the stable edge contour of the image is viewed as a sub-image, the barycenter of the contour also remains unchanged as long as the outline shape remains unchanged under geometric attacks. The barycenter of the contour is calculated as follows:

$$ \left\{\begin{array}{l} {C}_x={M}_{10}/{M}_{00}\\ {C}_y={M}_{01}/{M}_{00}\end{array}\right. $$
(10)

where M_{pq} in formula (10) is the (p + q)-order geometric moment of the image edge contour Ω, defined as follows:

$$ {M}_{pq}=\sum_{\left( x, y\right)\in \varOmega}{x}^p{y}^q f\left( x, y\right),\qquad p, q = 0,1,2,\dots $$
(11)

where f(x, y) represents the gray value of the point (x, y).

Finally, the radius of the unit circle is determined from the distances between the barycenter and the edge contour. Many experiments show that the iteration result of the geometric contour model produces a small error under attack. Therefore, when calculating the unit circle radius, the longest and shortest distances between the barycenter and the contour edge are computed first and their average value r is obtained. Then, to suppress this error, the decimal part of r is discarded and the units digit is rounded, i.e., r is rounded to the nearest ten, which gives the unit circle radius R. Table 2 lists the calculated average value r and unit circle radius R for the Lena image under rotation and cropping attacks. As can be seen from the table, the radius R of the unit circle can still be calculated accurately after different attacks.

Table 2 The calculation results of average value r and radius R under different attacks
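As a sketch, the barycenter of Eqs. (10) and (11) and the radius R can be computed as follows from a binary contour mask and the gray image; the input format and the function name are assumptions, and the interpretation of the rounding step as rounding r to the nearest ten follows the numerical example in Section 5.3.

```python
import numpy as np

def unit_circle(contour_mask, gray):
    """Center (Cx, Cy) and radius R of the unit circle from a binary edge
    contour mask and the gray image f(x, y)."""
    ys, xs = np.nonzero(contour_mask)            # contour points of Omega
    f = gray[ys, xs].astype(np.float64)

    # geometric moments, Eq. (11), and barycenter, Eq. (10)
    m00, m10, m01 = f.sum(), (xs * f).sum(), (ys * f).sum()
    cx, cy = m10 / m00, m01 / m00

    # average of the longest and shortest barycenter-to-contour distances
    d = np.hypot(xs - cx, ys - cy)
    r = (d.max() + d.min()) / 2.0

    # suppress small iteration errors: round r to the nearest ten
    R = int(round(r / 10.0) * 10)
    return (cx, cy), R
```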

4 Algorithm design

In this paper, the carrier image is decomposed by IWT into a low-frequency sub-band and three detail sub-bands, and the three detail sub-bands are then fused to obtain the detail image. The contour of the detail image is extracted by means of the level set to better determine the unit circle. Within the determined unit circle, watermark embedding and extraction are finally carried out. The watermarking algorithm mainly consists of two parts: watermark embedding and watermark extraction.
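The paper does not state which integer wavelet is used; as a hedged illustration, the sketch below performs a one-level 2-D integer Haar (S-transform) decomposition, a common reversible IWT, and fuses the three detail sub-bands with a maximum-absolute-value rule (the fusion rule is likewise an assumption). The transform maps integers to integers and is exactly invertible by reversing the lifting steps, which is what reversibility requires.

```python
import numpy as np

def iwt_haar_2d(img):
    """One-level 2-D integer Haar (S-transform) decomposition.

    Returns (LL, HL, LH, HH); img must have even height and width."""
    a = img.astype(np.int64)

    # lifting along the rows: detail d = even - odd, approximation s
    d_r = a[:, 0::2] - a[:, 1::2]
    s_r = a[:, 1::2] + d_r // 2               # s = floor((even + odd) / 2)

    # lifting along the columns of both half-images
    d_s = s_r[0::2, :] - s_r[1::2, :]
    LL = s_r[1::2, :] + d_s // 2
    LH = d_s                                  # vertical detail
    d_d = d_r[0::2, :] - d_r[1::2, :]
    HL = d_r[1::2, :] + d_d // 2              # horizontal detail
    HH = d_d                                  # diagonal detail
    return LL, HL, LH, HH

def fuse_details(HL, LH, HH):
    """Fuse the detail sub-bands into one detail image in [0, 255]
    (maximum-absolute-value rule, assumed here for illustration)."""
    fused = np.maximum(np.abs(HL), np.maximum(np.abs(LH), np.abs(HH)))
    return np.clip(fused, 0, 255)
```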

4.1 Watermark embedding

Assume the carrier image I is an 8-bit gray-scale image of size 512 × 512 and the watermark W is a binary image of size 32 × 32. The image watermark embedding process is shown in Fig. 6.

Fig. 6 The flow chart of image watermark embedding

  1.

    Arnold transform is used to scramble the watermark information W, and the scrambled watermark W′ is transformed into a one-dimensional vector.

    The traditional Arnold scrambling transform is simple [33]; under malicious attack, it is easy to decrypt and the original watermark can be restored, so its confidentiality and robustness are not strong enough. In order to enhance the robustness and security of the digital image watermarking system, this paper improves the traditional Arnold scrambling transform (a code sketch of the improved scrambling is given after the embedding steps below). The improved scrambling method is as follows:

    $$ \left(\begin{array}{c} x^{\prime}\\ y^{\prime}\end{array}\right)={\left(\begin{array}{cc} 1 & 1\\ 1 & 2\end{array}\right)}^c{\left(\begin{array}{cc} 2 & 1\\ 1 & 1\end{array}\right)}^d\left(\begin{array}{c} x\\ y\end{array}\right)\bmod M,\qquad x^{\prime}, y^{\prime}\in \left\{0,1,2,\cdots, M-1\right\} $$
    (12)

    where (x, y) and (x′, y′) are the pixel positions in the image before and after the transform, M denotes the order of the image matrix, and c and d denote the numbers of scrambling iterations. The scrambling is conducted in two directions: the image is scrambled c times in the horizontal direction and d times in the vertical direction. The Arnold transform is a one-to-one mapping, and the transform parameters c and d are randomly generated. Compared with the traditional Arnold transform, the improved transform is not easy to decode, which enhances the robustness and security of the whole digital image watermarking system.

  2.

    In order to improve the accuracy of image contour extraction, the carrier image I is transformed into a low-frequency sub-band LL and three detail sub-bands HL, LH, and HH by one-level IWT. Then, the three detail sub-bands are fused to get the detail image I 1 (0 ≤ I 1(x, y) ≤ 255). This is shown in Fig. 7b.

  3.

    The edge detection of the detail sub-image of the carrier image is carried out by the Laplace operator (shown in Fig. 7c). The image contour of the detail sub-image after the edge detection is extracted based on the idea of the level set (shown in Fig. 7d).

    Fusing the three detail sub-bands into a single high-frequency image and detecting its edges with the Laplace operator allows the level set to extract the image contour more accurately and reduces the computation time.

  4.

    With the aid of the extracted stable edge contour in detail sub-image and the method described in Section 3.3, the circle center (C x , C y ) and radius R of the unit circle are determined.

  5.

    The inscribed square of the unit circle in the original carrier image is selected as the watermark embedding domain. The inscribed square area is divided into non-overlapping blocks of size 4 × 4; if the area cannot be divided evenly, the region edge is padded with zeros. Assuming that the inscribed square area is N × N, the number of blocks is \( S=\left\lceil N/4\right\rceil \cdot \left\lceil N/4\right\rceil \), where \( \left\lceil \cdot \right\rceil \) denotes rounding up.

  6.

    The method described in Section 3.1 is used to analyze the characteristics of human visual system (HVS) of each sub-block in inscribed square region so as to determine the best embedding capacity of each sub-block.

  7.

    IWT is performed on each sub-block in which watermark information is to be embedded, and singular value decomposition (SVD) is then applied to each resulting sub-band. The watermark is embedded into a series of concatenated singular values of the sub-bands by parity quantization, yielding new singular values of the IWT coefficients. To embed the watermark, the first singular value is compared in turn with each of the remaining singular values in the concatenated string. Because the singular values of the four sub-band coefficient matrices LL, HL, LH, and HH are available, up to 7 bits could be embedded per block; however, the HVS analysis of each sub-block shows that the maximum embedding capacity of a sub-block is 5 bits, so only the singular values of LL, HL, and LH are used. Assume that the singular values s1, s2, s3, …, s6 are concatenated in the order LL, HL, LH (if the rank of a 2 × 2 coefficient matrix is two, there are two singular values; if the rank is smaller than two, this paper still uses two singular values and assigns zero to the second one). A sketch of this parity quantization is given after the embedding steps below. Then

    When \( \left\lfloor \alpha {s}_i/{s}_1\right\rfloor \) is even,

    $$ {s}_i=\left\{\begin{array}{ll}\left\lceil {s}_1\left(\left\lfloor \frac{\alpha {s}_i}{s_1}\right\rfloor +1\right)/\alpha \right\rceil & {W}_i=1\\ \left\lceil {s}_1\left\lfloor \frac{\alpha {s}_i}{s_1}\right\rfloor /\alpha \right\rceil & {W}_i=0\end{array}\right. $$
    (13)

    When \( \left\lfloor \alpha {s}_i/{s}_1\right\rfloor \) is odd,

    $$ {s}_i=\left\{\begin{array}{ll}\left\lceil {s}_1\left\lfloor \frac{\alpha {s}_i}{s_1}\right\rfloor /\alpha \right\rceil & {W}_i=1\\ \left\lceil {s}_1\left(\left\lfloor \frac{\alpha {s}_i}{s_1}\right\rfloor +1\right)/\alpha \right\rceil & {W}_i=0\end{array}\right. $$
    (14)

    In (13) and (14), \( \left\lfloor \cdot \right\rfloor \) denotes the floor function and \( \left\lceil \cdot \right\rceil \) the ceiling function, Wi denotes the watermark bit, α is an embedding adjustment factor, and si denotes the i-th singular value (the range of i is self-adaptively determined by the HVS features of the sub-block). The value of α is determined by the ratio of s1 to the selected minimum singular value: assuming sai is the minimum singular value (excluding singular values equal to 0), α is set to \( \left\lceil {s}_1/{s}_{ai}\right\rceil \). The embedding strength factor α therefore differs from sub-block to sub-block.

  8.

    Each sub-block containing watermark information is reconstructed by the inverse SVD and the inverse IWT. After removing the zero-filled region along each sub-block edge and combining the result with the unprocessed parts of the carrier image, the final watermarked image is obtained.
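Two of the steps above can be sketched in code. First, the improved Arnold scrambling of Eq. (12) (step 1); the function names, the use of integer matrix powers, and the assumption of moderate c and d are illustrative choices, not part of the original description.

```python
import numpy as np

def arnold_scramble(w, c, d):
    """Improved Arnold scrambling of Eq. (12) for a square M x M image w.
    c and d are assumed small enough that the matrix powers fit in int64."""
    M = w.shape[0]
    A = np.linalg.matrix_power(np.array([[1, 1], [1, 2]], dtype=np.int64), c)
    B = np.linalg.matrix_power(np.array([[2, 1], [1, 1]], dtype=np.int64), d)
    T = (A @ B) % M

    ys, xs = np.indices((M, M))
    new = (T @ np.vstack([xs.ravel(), ys.ravel()])) % M   # (x', y') positions
    out = np.empty_like(w)
    out[new[1], new[0]] = w[ys.ravel(), xs.ravel()]       # one-to-one mapping
    return out
```

Second, the parity quantization of Eqs. (13) and (14) (step 7), sketched for the concatenated singular values of one sub-block; the treatment of zero singular values and the exact interface are assumptions.

```python
import math

def embed_bits(s, bits):
    """Embed watermark bits into singular values s = [s1, s2, ...] so that
    the quantization index floor(alpha * si / s1) carries the bit parity
    (odd for 1, even for 0), as in Eqs. (13) and (14)."""
    s = list(s)
    s1 = s[0]
    s_min = min(v for v in s if v > 0)        # smallest non-zero singular value
    alpha = math.ceil(s1 / s_min)             # embedding adjustment factor
    for k, w in enumerate(bits, start=1):     # bits go into s2, s3, ...
        q = math.floor(alpha * s[k] / s1)
        if (q % 2 == 0) == (w == 1):          # flip parity only when needed
            q += 1
        s[k] = math.ceil(s1 * q / alpha)
    return s, alpha
```

The corresponding extraction rule is sketched at the end of Section 4.2.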

Fig. 7 a Original Lena image. b Fused detail image. c Edge detected image. d Contour extracted by level set

4.2 Watermark extraction

Watermark extraction can be regarded as the inverse of the watermark embedding process. The image watermark extraction process is shown in Fig. 8.

  1.

    The watermarked image I is transformed into a low-frequency sub-band LL and three detail sub-bands HL, LH, and HH by one-level IWT, and the three detail sub-bands are then fused to get the detail image I1 (0 ≤ I1(x, y) ≤ 255) of the watermarked image.

  2.

    The edge detection of the detail sub-image of the watermarked image is carried out by the Laplace operator. The image contour of the detail sub-image after the edge detection is extracted based on the idea of the level set.

  3.

    With the aid of the extracted stable edge contour in detail sub-image and the method described in Section 3.3, the circle center (C x , C y ) and radius R of the unit circle are determined.

  4.

    The inscribed square of the unit circle in the watermarked image is selected as the watermark embedding domain. The inscribed square area is divided into non-overlapping blocks of size 4 × 4; if the area cannot be divided evenly, the region edge is padded with zeros. Assuming that the inscribed square area is N × N, the number of blocks is \( S=\left\lceil N/4\right\rceil \cdot \left\lceil N/4\right\rceil \), where \( \left\lceil \cdot \right\rceil \) denotes rounding up.

  5.

    The method described in Section 3.1 is used to analyze the human visual system (HVS) characteristics of each sub-block in the inscribed square region so as to determine the best embedding capacity of each sub-block.

  6.

    IWT is performed on each sub-block, and the singular value decomposition of each sub-band is then computed. The watermark is extracted from a series of concatenated singular values of the sub-bands by the parity quantization method.

    To extract the watermark, the first singular value is compared in turn with each of the remaining singular values of the sub-block. The watermark bit “0” is extracted when \( \left\lfloor \alpha {s}_i/{s}_1\right\rfloor \) is even, and the bit “1” is extracted when it is odd. According to the HVS analysis of each sub-block, the number of embedded watermark bits is selected adaptively for extraction (a sketch of this extraction rule is given after these steps).

  7.

    The extracted watermark information is transformed into the original watermark information by reverse Arnold scrambling.

  8.

    The new singular value is obtained after extracting the watermark information by parity quantization. Then, the new LL, HL, and LH values are obtained by inverse SVD. The inscribed square area is restored by reverse IWT.

  9.

    Finally, by combining the restored square area with the other, unprocessed parts of the watermarked image, the original image is recovered.
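A matching sketch of the extraction side: the parity rule of step 6 and the inverse of the improved Arnold scrambling. How α is made available to the extractor (recomputed from the block or carried as side information) is not detailed in the paper, so it is passed in explicitly here as an assumption.

```python
import math
import numpy as np

def extract_bits(s, n_bits, alpha):
    """Parity rule of step 6: floor(alpha * si / s1) even -> bit 0, odd -> bit 1."""
    s1 = s[0]
    return [math.floor(alpha * s[k] / s1) % 2 for k in range(1, n_bits + 1)]

def arnold_unscramble(w, c, d):
    """Inverse of the improved Arnold scrambling of Eq. (12)."""
    M = w.shape[0]
    A = np.linalg.matrix_power(np.array([[1, 1], [1, 2]], dtype=np.int64), c)
    B = np.linalg.matrix_power(np.array([[2, 1], [1, 1]], dtype=np.int64), d)
    T = (A @ B) % M
    ys, xs = np.indices((M, M))
    new = (T @ np.vstack([xs.ravel(), ys.ravel()])) % M
    out = np.empty_like(w)
    out[ys.ravel(), xs.ravel()] = w[new[1], new[0]]       # undo the permutation
    return out
```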

Fig. 8 The flow chart of image watermark extraction

5 Experimental results and analysis

In this paper, Lena, Girl, and other standard images of size 512 × 512 are used as test images, as shown in Fig. 9. The watermark is a binary image of size 32 × 32, as shown in Fig. 10. All experiments are carried out on the Windows XP operating system with MATLAB R2012b as the experimental platform.

Fig. 9 Experimental test images. a Lena. b Baboon. c Girl. d Pepper

Fig. 10 Watermark image

5.1 Integrity assessment

Generally, a reversible watermarking algorithm requires that the carrier image can be completely recovered after the watermark is extracted. This is measured here by the normalized correlation (NC) between the original carrier image and the carrier image restored after watermark extraction, calculated as in Eq. (15):

$$ {\mathrm{NC}}_1=\frac{\sum_{i=0}^{L-1}\sum_{j=0}^{K-1} I\left( i, j\right){I}^{\prime}\left( i, j\right)}{\sum_{i=0}^{L-1}\sum_{j=0}^{K-1}{\left[ I\left( i, j\right)\right]}^2} $$
(15)

where I(i, j) and I′(i, j) denote, respectively, the pixel values at (i, j) of the original image and of the carrier image restored after watermark extraction, and L and K denote the numbers of rows and columns of the image. For the restored carrier image to be considered a complete recovery of the original, the NC1 value is required to be 1.
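As a small illustration, NC1 (and NC2 in Eq. (18), which has the same form) can be computed as follows; the function name is an assumption.

```python
import numpy as np

def normalized_correlation(ref, test):
    """Normalized correlation of Eqs. (15)/(18): sum(ref * test) / sum(ref^2)."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    return float((ref * test).sum() / (ref ** 2).sum())
```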

Table 3 shows the integrity results of four different types of watermarked images without any attack under the proposed algorithm. The original images can be recovered completely when no attack occurs, which indicates that the algorithm is reversible.

Table 3 Integrity assessment without attack

5.2 Robustness experiment of unit circle

The key to the effectiveness of the algorithm is that the edge contour extracted by the level set yields a robustly determined unit circle. In this experiment, the average value r and the unit circle radius R of Fig. 9a–d are calculated under rotation attacks; the results are listed in Table 4. It can be seen from Table 4 that the unit circle radius R of each image is still calculated accurately after a rotation attack, which illustrates that the unit circle determination algorithm of Section 3.3 is robust.

Table 4 Average value of r and radius R under rotation attack

5.3 Scale invariance

In this experiment, the watermarked Lena image is subjected to scaling attacks with scaling factors ranging from 0.4 to 2. Before the watermark is extracted, the watermarked image is restored to 512 × 512. The experimental results are shown in Fig. 11.

Fig. 11 Relationship between the scaling factor and the bit error rate

In this paper, bilinear interpolation is used to scale the image [34]. Bilinear interpolation requires a relatively large amount of computation, but the scaled image quality is high and discontinuous pixel values do not appear. Bilinear interpolation has a low-pass filtering nature that damages high-frequency components, so it may blur the image contour to some extent; consequently, the edge contour extracted by the Laplace operator and the level set is only slightly affected by image scaling. As described in Section 3.3, the radius of the unit circle is determined from the distances between the barycenter and the edge contour: the longest and shortest distances are averaged to obtain r, and r is then rounded to the nearest ten to obtain the unit circle radius R. For scaling factors in [0.4, 2], the unit circle radius obtained from the image does not change except in special cases. In such a special case, the average value r obtained from the original image with the level set may be, for example, 75.6, giving a unit circle radius R of 80, whereas under a scaling attack r may change to 74.6, giving R = 70. In this case, the extracted watermark is greatly affected.

The quality of the image inside the unit circle is also affected when the image is scaled. For scaling factors in [0.4, 2], it follows from the derivation of the bilinear interpolation method that when the scaling factor is large, the image quality is almost unaffected and the watermark can be extracted very well (the derivation is omitted owing to space limitations). When the scaling factor is small, the scaling has some effect on the image quality.

To sum up, the relationship between the scaling factor and the bit error rate differs for different types of images under scaling attacks. The experimental results show that, when the watermarked Lena image is subjected to scaling attacks, the extracted watermark is completely correct as long as the scaling factor is greater than or equal to 0.85.
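A hedged sketch of the scaling attack and of the restoration to 512 × 512 performed before extraction, using first-order (bilinear) spline resampling from scipy; the function name and the rounding/clipping back to 8-bit values are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def scaling_attack_and_restore(watermarked, factor):
    """Scale the watermarked image by `factor` with bilinear interpolation,
    then resize it back to its original size before watermark extraction."""
    attacked = zoom(watermarked.astype(np.float64), factor, order=1)
    h, w = watermarked.shape
    restored = zoom(attacked, (h / attacked.shape[0], w / attacked.shape[1]),
                    order=1)
    return np.clip(np.round(restored), 0, 255).astype(np.uint8)
```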

5.4 Other signal processing attacks

This paper takes the Lena image as an example and applies cropping, filtering, and other signal processing attacks. Because these attacks (except severe cropping) have little influence on the shape of the edge contour, they do not affect the normal extraction of the watermark.

The data in Table 5 show that the scheme has strong robustness as long as the attack does not affect the extraction of the image edge contour.

Table 5 The results under other signal processing attacks

5.5 Algorithm performance evaluation

The proposed algorithm embeds 32 × 32 binary images into the 512 × 512 carrier images. Watermark embedding is carried out with the proposed algorithm and with the algorithm of literature [21], respectively. The PSNR and SSIM values are shown in Table 6 (each value is the average of 20 tests).

Table 6 Comparison of PSNR (dB) and SSIM

Currently, the peak signal-to-noise ratio (PSNR) is one of the main indicators used to evaluate the visual quality of reversible watermarking. The greater the PSNR value, the smaller the image distortion and, correspondingly, the better the visual quality of the watermarked image; the smaller the PSNR, the larger the distortion and the more serious the loss of visual quality. The PSNR is calculated as in Eq. (16):

$$ \mathrm{PSNR}=10 \log \left(\frac{255^2}{\frac{1}{ MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}{\left[{I}^{\prime}\left( i, j\right)- I\left( i, j\right)\right]}^2}\right) $$
(16)

where I(i, j) and I′(i, j) denote, respectively, the pixel values at (i, j) of the original image and of the watermarked image, and M and N represent the numbers of rows and columns of the image. It is generally believed that if the PSNR of the reconstructed image is not less than 30 dB, its visual quality is still acceptable even though some image quality is lost.
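A minimal numpy sketch of Eq. (16):

```python
import numpy as np

def psnr(original, watermarked):
    """PSNR of Eq. (16), in dB, for two 8-bit gray images of the same size."""
    diff = original.astype(np.float64) - watermarked.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")                   # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```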

Although the PSNR has been widely used in stego image quality assessment, its mathematical definition has some limitations, and the pixel correlations and human visual system characteristics are not taken into account. The structural similarity (SSIM) has been used to evaluate the quality of the stego images. It is defined as (17)

$$ \mathrm{SSIM}\left( x, y\left| w\right.\right)=\frac{\left(2{\overline{w}}_x{\overline{w}}_y+{C}_1\right)\left(2{\sigma}_{w_x{w}_y}+{C}_2\right)}{\left({\overline{w}}_x^2+{\overline{w}}_y^2+{C}_1\right)\left({\sigma}_{w_x}^2+{\sigma}_{w_y}^2+{C}_2\right)} $$
(17)

where C 1 and C 2 are the small constants, \( {\overline{w}}_x \) is the mean value of the region w x , and \( {\overline{w}}_y \) is the mean value of the region w y . \( {\sigma}_{w_x}^2 \) is the variance of w x , and \( {\sigma}_{w_x{w}_y} \) is the covariance between the two regions. The higher value of the SSIM implies a higher quality level for the stego image.

The value range of SSIM is [0, 1]. SSIM = 0 indicates that the corresponding two images are completely different. SSIM = 1 indicates that the corresponding two images are exactly the same. In the evaluation of the reversible information hiding algorithm, the SSIM value of the hidden image is close to 1 as much as possible.

Compared with the algorithm in literature [21], the proposed algorithm achieves a PSNR of up to 49.34 dB on the above four carrier images, which means that it has better invisibility; at the same time, its SSIM is also higher than the results of literature [21]. From Table 6, it is easy to see that the proposed algorithm outperforms that of literature [21] at the same payload capacity, with good SSIM and PSNR values. These results demonstrate that the proposed algorithm significantly increases the quality of the stego images. Specific visual and extraction effects are shown in Table 7.

Table 7 Visual effects of the algorithm experiments

As observed from the figures, human eyes cannot perceive the existence of the watermark information in the watermarked images. The corresponding PSNR values show that the watermarked images have good imperceptibility for different types of images. With a good visual effect of the watermarked image, the average PSNR value reaches 48.18 dB. From Tables 6 and 7, we can see that the algorithm has good imperceptibility for images of different textures and can meet the requirements of visual perception.

Table 8 gives the performance results of the four watermarked images under conventional attacks with the proposed algorithm. The PSNR value of the watermarked image and the NC2 value of the extracted watermark are listed.

Table 8 The performance evaluation under the conventional attacks

As can be seen from Table 8, when the watermarked image is not subjected to any attack, it has high visual quality; in particular, the PSNR of the Pepper watermarked image reaches 49.34 dB. When the watermarked image is subjected to a variety of attacks, it still has high visual quality and good imperceptibility.

The NC2 (normalized correlation, NC) value is a measure of watermarking robustness. The watermarked image will inevitably be disturbed by noise during transmission, which affects the quality and accuracy of watermark extraction; this requires the digital watermark to have a certain ability to resist noise. The experiments use NC as the measure of watermarking robustness.

Define w(i, j) and w′(i, j) as the pixel values at (i, j) of the original watermark image and the extracted watermark image, respectively. L and K denote, respectively, the rows and columns of the watermark image. The computation formula is shown as Eq. (18):

$$ {\mathrm{NC}}_2=\frac{\sum_{i=0}^{L-1}\sum_{j=0}^{K-1} w\left( i, j\right){w}^{\prime}\left( i, j\right)}{\sum_{i=0}^{L-1}\sum_{j=0}^{K-1}{\left[ w\left( i, j\right)\right]}^2} $$
(18)

NC2 describes the quality of the extracted watermark. The closer the NC2 value is to 1, the more robust the watermarking algorithm is. When NC2 < 0.5, the extraction is considered to have failed, and the watermark information can hardly be extracted.

Table 8 shows that this algorithm has good robustness against common attacks, and its results can be used to objectively evaluate the performance of the algorithm. In particular, when the watermarked Lena image is subjected to Gaussian filtering, the PSNR is 47.24 dB and the NC2 of the watermark under the corresponding attack is 0.982. The algorithm not only has strong robustness but also extracts the watermark accurately.

Table 9 gives the performance results obtained after common geometric attacks (rotation, cropping, scaling) are applied to the four different types of watermarked images using the proposed algorithm. The PSNR value of the watermarked image and the NC2 value of the extracted watermark are listed.

Table 9 The performance evaluation under the geometric attacks

As can be seen from Table 9, the algorithm has high robustness against common geometric attacks. Especially when the Lena image is enlarged by 20%, the PSNR is 47.81 dB and the NC2 of the watermark under the corresponding attack is 1. This shows that the algorithm has strong robustness and can extract the watermark completely and accurately under some geometric attacks.

For the same embedding capacity, the robustness of the proposed algorithm is superior to those of literatures [25] and [22]. Under the premise of keeping good visual quality, literature [25] improves robustness by using adaptive bit plane operations to embed the watermark data bits repeatedly during embedding and a majority voting strategy to extract the hidden watermark information during extraction. Literature [22] enhances robustness by searching for the best embedding position and threshold value, but such an algorithm strongly depends on the selected threshold, and an inappropriate threshold may directly make the watermark irreversible. In this paper, a stable unit circle is selected, within which the watermark is embedded by IWT and SVD. The proposed algorithm is less affected by attacks and has stronger anti-attack ability.

In addition, the algorithm of literature [24] also has high robustness. It uses the Slantlet transform matrix to transform small blocks of the original image and hides the watermark bits by modifying the mean values of the carrier sub-bands. The scheme is robust against different kinds of attacks, but its robustness is slightly lower than that of the proposed algorithm. Additionally, for the same embedding capacity, the visual quality of our algorithm is much higher than that of literature [24].

Under the same embedding capacity, the imperceptibility of our algorithm is comparable to that of the algorithm in literature [25]. In this paper, the HVS characteristics of the image are used to embed the watermark adaptively, giving a good visual effect. The algorithm mainly aims to improve robustness as far as possible under the premise of ensuring good visual quality, with the emphasis on improving the anti-attack ability. Since the visual quality of the algorithm is already very good, further improving its imperceptibility would not be significant.

6 Conclusions

In order to make reversible image watermarking more robust and imperceptible, an adaptive reversible image watermarking scheme based on IWT and the level set is proposed. The main contributions are as follows. A stable edge contour is obtained by using the geometric active contour model based on the level set method; since the extracted stable edge contour is determined by the image content, the contour shape is little affected by attacks, which provides strong robustness. Since IWT has good time-frequency localization and multi-resolution analysis characteristics and SVD has stability and anti-interference ability, watermark embedding and extraction are implemented based on the HVS features of the image by using IWT and SVD. This effectively improves the stability and robustness of the algorithm and guarantees good visual quality.

The experimental results show that the scheme not only makes up for the shortcoming of traditional schemes that need to embed the location information of the unit circle but also has better invisibility. The algorithm can resist various attacks with strong robustness and can restore the original image losslessly.

The algorithm consumes a certain amount of time when using the level set method to find a stable edge contour. How to further reduce the running time while keeping the high performance of the algorithm will be studied in the next step.

References

  1. W Lu, S Wei, L Hongtao, Novel robust image watermarking based on sub-sampling and DWT. Multimed Tools Appl 60, 31–46 (2012)


  2. S Weng, J-S Pan, Reversible watermarking based on multiple prediction modes and adaptive watermark embedding. Multimed Tools Appl 72, 3063–3083 (2014)


  3. JM Barton, Method and apparatus for embedding authentication information within digital data. US Patent 5646997, 1997

  4. CW Honsinger, PW Jones, M Rabbani, Lossless recovery of an original image containing embedded data. US Patent 6278791, 2001

  5. S Priyanka, A Suneeta, An efficient fragile watermarking scheme with multilevel tamper detection and recovery based on dynamic domain selection. Multimed Tools Appl 75(14), 8165–8194 (2016)


  6. Y Navneet, S Kulbir, Robust image-adaptive watermarking using an adjustable dynamic strength factor. SIViP 9, 1531–1542 (2015)


  7. ZC Ni, YQ Shi, N Ansari, Reversible data hiding. IEEE Trans on Circuits Syst for Video Technol 16(3), 354–362 (2006)


  8. L Li, C Chin-Chen, W Anhong, Reversible data hiding scheme based on histogram shifting of n-bit planes. Multimed Tools Appl 75(18), 11311–11326 (2016)


  9. J Fridrich, M Golijan, R Du, Invertible authentication. Proc SPIE 4314, 197–208 (2001). Society of Photo-Optical Instrumentation Engineers, Bellingham


  10. M Awrangjeb, MS Kankanhalli, Lossless watermarking considering the human visual system [EB/OL]. http://link.springer.com/chapter/10.1007%252F978-3-540-24624-4_47. Accessed 20 Nov 2009.

  11. J Tian, Reversible data embedding using a difference expansion. IEEE Trans Circuits Syst Video Technol 13(8), 890–896 (2003)


  12. DM Thodi, JJ Rodriguez, Expansion embedding techniques for reversible watermarking. IEEE Trans Image Process 16(3), 721–730 (2007)


  13. C-M Pun, K-C Choi, Generalized integer transform based reversible watermarking algorithm using efficient location map encoding and adaptive thresholding. Computing 96, 951–973 (2014)


  14. X Shijun, W Yi, Non-integer expansion embedding techniques for reversible image watermarking. EURASIP Journal Adv in Signal Processing 2015, 56 (2015). doi:10.1186/s13634-015-0232-z


  15. C de Vleeschouwer, JF Delaigle, B Macq, Circular interpretation of bijective transformations in lossless watermarking for media asset management. IEEE Trans Multimedia 5(1), 97–105 (2003)


  16. ZC Ni, YQ Shi, N Ansari, W Su, QB Sun, X Lin, Robust lossless image data hiding designed for semi-fragile image authentication. IEEE Trans Circuits Syst Video Technol 18(4), 497–509 (2008)


  17. TZ Zeng, LD Ping, XZ Pan, A lossless robust data hiding scheme. Pattern Recogn 43(4), 1656–1667 (2010)


  18. LL An, XB Gao, XL Li, Robust reversible watermarking via clustering and enhanced pixel-wise masking. IEEE Trans Image Process 21(8), 3598–3611 (2012)


  19. LL An, XB Gao, Y Yuan, DC Tao, Robust lossless data hiding using clustering and statistical quantity histogram. Neurocomputing 77(1), 1–11 (2013)


  20. LL An, XB Gao, Y Yuan, Content adaptive reliable robust lossless data embedding. Neurocomputing 79(3), 1–11 (2012)


  21. MJ Sahraee, S Ghofrani, A robust blind watermarking method using quantization of distance between wavelet coefficients. SIViP 7, 799–807 (2013)


  22. QL Gu, TG Gao, A novel reversible robust watermarking algorithm based on chaotic system. Digital Signal Processing 23(1), 213–217 (2013)


  23. A Mary, C Erbug, A Gholamreza, A watermarking algorithm based on chirp z-transform, discrete wavelet transform, and singular value decomposition. SIViP 9(3), 735–745 (2015)


  24. R Thabit, BE Khoo, Robust reversible watermarking scheme using Slantlet transform matrix. J Syst Softw 88(1), 74–86 (2014)


  25. C Ka-Cheng, P Chi-Man, Robust lossless digital watermarking using integer transform with bit plane manipulation. Multimed Tools Appl 75(11), 6621–6645 (2016)


  26. IA Ansari, M Pant, CW Ahn, Artificial bee colony optimized robust-reversible image watermarking. Multimedia Tools and Applications. (2016). doi:10.1007/s11042-016-3680-z.

  27. T Sridevi, SS Fathima, Watermarking algorithm based using genetic algorithm and HVS. Int J of Comput Appl 74(13), 26–30 (2013)


  28. P Melin, CI Gonzalez, JR Castro, Study on semi-fragile edge-detection method for image processing based on generalized type-2 fuzzy logic. IEEE Trans Fuzzy Syst 22(6), 1515–1525 (2014)


  29. H Shijie, W Meng, H Richang, Spatially guided local Laplacian filter for nature image detail enhancement. Multimed Tools Appl 75(3), 1529–1542 (2014)


  30. S Osher, J Sethian, Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. J Comput Phys 79(1), 12–49 (1988)


  31. HS Kim, HK Lee, Invariant image watermark using Zernike moments. IEEE Trans Circuits Syst Video Technol 13(8), 766–775 (2003)


  32. YQ Xin, S Liao, M Pawlak, Circularly orthogonal moments for geometrically robust image watermarking. Pattern Recogn 40(12), 3740–3752 (2007)


  33. N Anil Kumar, M Haribabu, C Hima Bindu, Novel image watermarking algorithm with DWT-SVD. Int J Comput Appl 106(1), 12–17 (2014)


  34. L Yu-Chi, W Hsien-Chu, Y Shyr-Shen, Adaptive DE-based reversible steganographic technique using bilinear interpolation and simplified location map. Multimed Tools Appl 52(2), 263–276 (2011)



Acknowledgements

This work is supported by the Natural Science Foundation of Jiangsu Province, China (no. BK20131069) and by the National Natural Science Foundation of China (NSFC) (no. 61402192).

Authors’ contributions

ZWZ and LFW conceived and designed the study; ZWZ and SZX performed the experiments; LFW, SZX, and SBG analyzed the data; and ZWZ, SZX, and SBG jointly prepared the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Correspondence to Zhengwei Zhang.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhang, Z., Wu, L., Xiao, S. et al. Adaptive reversible image watermarking algorithm based on IWT and level set. EURASIP J. Adv. Signal Process. 2017, 15 (2017). https://doi.org/10.1186/s13634-017-0450-7
