
A reduced-reference perceptual image and video quality metric based on edge preservation

Abstract

In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. Since the original image/video sequence, prior to compression and transmission, is not usually available at the receiver side, the receiver must rely on an objective video quality metric that requires no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to edge and contour information of an image underpins our proposal of a reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric.

1 Introduction

For recent and emerging multimedia systems and applications, such as modern video broadcasting systems (including DVB/DVB-H, IPTV, webTV, HDTV,...) and telemedical applications, user requirements go beyond connectivity: users now expect the services to meet their requirements on quality. In recent years, the concept of quality of service (QoS) has been augmented towards the new concept of quality of experience (QoE), as the former only focuses on network performance (e.g., packet loss, delay, and jitter) without a direct link to the perceived quality, whereas QoE reflects the overall experience of the consumer accessing and using the provided service. The main target in the design of modern multimedia systems is thus the improvement of the (video) quality perceived by the user. Such quality improvement crucially depends on the availability of an objective quality metric that represents human perception well. Objective quality assessment methods aligned with subjective measurements rely either on a perceptual model of the human visual system (HVS) [1], or on a combination of relevant parameters tuned with subjective tests [2, 3].

It is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one [4]. For closed-loop optimisation of video transmission, the video quality measure can be provided as feedback information to a system controller [5]. Since the original video sequence, prior to compression and transmission, is not usually available at the receiver side, the receiver must rely on an objective video quality metric that requires no reference, or only minimal reference, to the original video sequence. Figure 1 reports a schematic representation of an image/video processing system, consisting of a video encoder and/or a transmission network, with the calculation of a reduced-reference (RR) quality metric: reference features are extracted from the original image/video sequence and are then compared with the same features extracted from the impaired video to obtain the RR quality metric.

Figure 1. RR scheme.

We propose here an RR video quality metric well correlated with the perceived quality, based on the comparison of edge information between the distorted image and the original one. The human eye is in fact very sensitive to the edge and contour information of an image: this information gives a good indication of the structure of an image and is critical for a human to capture the scene [6].

Some works in the literature have proposed considering edge structure information. For instance, in [7] the structural information error between the reference and the distorted image is computed based on the statistics of the spatial position error of the local modulus maxima in the wavelet domain. In [1] a parameter is considered to detect a decrease or loss of spatial information (e.g., blurring); this parameter uses a 13-pixel spatial information filter (SI13) to measure edge impairments rather than Sobel filtering. Differently from [1], we consider here the Sobel operator [8] for edge detection, since it is one of the most widely used methods to obtain edge information due to its simplicity and efficiency. Further details on this choice are reported in the following section.

A few RR metrics have been proposed, with different characteristics in terms of complexity, correlation with subjective quality, and overhead associated with the transmission of side information.

The ITS/NTIA (Institute for Telecommunication Sciences/National Telecommunications and Information Administration) has developed a general video quality model (VQM) [1] that was selected by both ANSI and ITU as a video quality assessment standard based on its performance. However, this general model requires a bit rate of several Mbps for the quality features (more than 4 Mbps for 30 fps CIF video) used in the calculation of the VQM value, which prevents its use as an RR metric in practical systems. The possibility of using spatial-temporal features/regions was considered in [9] in order to provide a trade-off between the correlation with subjective values and the overhead for side information. Later on, a low-rate RR metric based on the full-reference one ("10 kbits/s VQM") was developed by the same authors [10]. A subjective data set was used to determine the optimal linear combination of the eight video quality parameters in the metric. The performance of the metric was presented in terms of a scatter plot with respect to subjective data, although numerical performance results are not provided in [10].

The quality index in [4] is based on features which describe the histograms of wavelet coefficients. Two parameters describe the distribution of the wavelet coefficients of the reference image using a generalized Gaussian density (GGD) model, hence only a relatively small number of RR features is needed for the evaluation of image quality.

The RR objective picture quality measurement tool for compressed video in [11] is based on a discriminative analysis of the harmonic strength computed from edge-detected pictures, creating harmonics gain and loss information that can be associated with the picture. The authors compare their results with a VQEG RR metric [9, 12], showing comparable performance with a lower overhead; the overall reduction of overhead with respect to full-reference metrics is 1024:1. The focus is on the detection of blocking and blurring artifacts. Like our proposed metric, this metric relies on edge detection; in [11], however, edge detection is performed over the whole image, and edge information is not used as side information but only as a step in further processing of the image for the extraction of different side information.

The quality criterion presented in [13] relies on the extraction, from an image represented in a perceptual space, of visual features that can be compared to those used by the HVS (perceptual color space, CSF, psychophysical subband decomposition, masking effect modeling). A similarity metric then computes the objective quality score of a distorted image by comparing the features extracted from this image to those extracted from its reference image. The performance is evaluated on three different databases with respect to three full-reference metrics. The size of the side information is flexible. The main drawback of this metric is its complexity, since the HVS model (an essential part of the proposed image quality criterion) requires high computational complexity.

In [14] an RR objective perceptual image quality metric for use in wireless imaging is proposed. Specifically, the normalized hybrid image quality metric (NHIQM) and a perceptual relevance weighted Lp-norm are designed, based on the observation that the HVS is trained to extract structural information from the viewing area. Image features are identified and measured based on the extent to which individual artifacts are present in a given image. The overall quality measure is then computed as a weighted sum of the features. The authors did not rely on public databases for performance evaluation, but performed their own subjective tests. The performance of this metric is evaluated with respect to full-reference metrics.

The metric in [15] is based on a divisive normalization image representation. No assumptions are made about the type of impairment. This metric requires training: before the proposed algorithm can be applied for image quality assessment, five parameters need to be learned from the data. These parameters are cross-validated with different selections of the training and testing data. Results are compared with the RR metric in [14] and with the peak signal-to-noise ratio (PSNR).

In this article we propose a low-complexity RR metric based on edge preservation which can be calculated in real time in practical image/video processing and transmission systems, performs comparably with the most widely used full-reference metrics, and requires a limited overhead for the transmission of side information.

The remainder of this article is organized as follows. Edge detection methodologies are introduced in Section 2. Section 3 presents the proposed RR image and video quality metric. Simulation set-up and results are reported in Section 4. Conclusions about the novelty and performance of the metric are then reported in Section 5.

2 Edge detection

There are many methods to perform edge detection, the majority of which may be grouped into two categories: gradient and Laplacian. Gradient methods detect edges by finding the maxima and minima in the first derivative of the image; this approach is characteristic of the gradient filter family of edge detectors and includes the Sobel method. A pixel location is declared an edge location if the value of the gradient exceeds a threshold: edge pixels exhibit higher gradient values than the pixels surrounding them. When the first derivative is at a maximum, the second derivative is zero; an alternative way to locate an edge is therefore to find the zeros of the second derivative. This method is known as the Laplacian.

The aforementioned methods can be extended to the 2D case. The Sobel operator performs a 2D spatial gradient measurement on an image. Typically, it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. The Sobel edge detector uses a pair of 3 × 3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows). The masks are then slid over the image, processing one square block of pixels at a time.

The Sobel operator detects edges by calculating partial derivatives in a 3 × 3 neighborhood. The main reasons for using the Sobel operator are that it is relatively insensitive to noise and that its masks are relatively small compared with those of other operators such as the Roberts operator and the second-order Laplacian operator.

The partial derivatives in x and y directions are given as:

$$S_x = \left[\, f(x+1,y-1) + 2f(x+1,y) + f(x+1,y+1) \,\right] - \left[\, f(x-1,y-1) + 2f(x-1,y) + f(x-1,y+1) \,\right] \qquad (1)$$

and

$$S_y = \left[\, f(x-1,y+1) + 2f(x,y+1) + f(x+1,y+1) \,\right] - \left[\, f(x-1,y-1) + 2f(x,y-1) + f(x+1,y-1) \,\right] \qquad (2)$$

The gradient magnitude of each pixel is calculated according to $g(x,y) = \sqrt{S_x^2 + S_y^2}$ and a threshold value t is selected. If g(x, y) > t, the point is regarded as an edge point.

The Sobel operator can also be expressed in the form of two masks, as shown in Figure 2: the two masks are used to calculate $S_y$ and $S_x$, respectively. A code sketch of the whole procedure is given after Figure 2.

Figure 2. Sobel masks.
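As an illustrative sketch (assuming NumPy and SciPy, and a grayscale image normalized to [0, 1]; the function name is ours, and the scale of the threshold depends on the chosen normalization), the Sobel edge map of Equations (1)-(2) can be computed as follows:

```python
import numpy as np
from scipy.ndimage import convolve

# 3 x 3 Sobel masks (Figure 2): GX estimates the derivative along x
# (Eq. 1), GY along y (Eq. 2); the sign convention does not affect
# the gradient magnitude.
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
GY = GX.T

def sobel_edge_map(image, t=0.001):
    """Return a binary map with 1 where g(x, y) = sqrt(Sx^2 + Sy^2) > t.

    `image` is assumed to be a grayscale array normalized to [0, 1];
    the threshold scale depends on this normalization (an assumption
    of this sketch).
    """
    img = np.asarray(image, dtype=float)
    sx = convolve(img, GX)           # partial derivative S_x
    sy = convolve(img, GY)           # partial derivative S_y
    g = np.sqrt(sx ** 2 + sy ** 2)   # gradient magnitude g(x, y)
    return (g > t).astype(np.uint8)  # 1 = edge point, 0 = no edge
```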

3 Proposed metric

Since structural distortion is tightly linked with edge degradation, we propose an RR quality metric which compares edge information between the distorted image and the original one. We propose to apply Sobel filtering locally, only to some blocks of the entire image, after subsampling the images.

Images are divided into sub-windows, as shown in Figure 3. For instance, if images have size 512 × 768, we could subsample by a factor of 2 and consider a 16 × 16 grid of macroblocks of size 16 × 24 pixels each, or we could subsample by a factor of 1.5 and consider an 18 × 16 grid of macroblocks of size 19 × 32 pixels each. The example in Figure 3 reports the second option. The block size is chosen to be sufficiently large to account for vertical and/or horizontal activities within each block, but small enough to limit complexity and the size of the side information. In addition, sub-windows are non-coincident with macroblocks, to enable a better detection of DCT artifacts in the case of DCT-compressed images and video.

Figure 3. Example of block pattern selected based on VA models.

In order to reduce the overhead associated with the transmission of side information, only 12 blocks are selected to represent the different areas of the images. The block pattern utilized in our tests was chosen after several investigations based on visual attention (VA). Various experiments aiming at the detection of salient regions in an image have been proposed in the literature; models of VA are often developed and validated through eye-tracking experiments recording visual fixation patterns [16, 17]. In [18] a framework is proposed to extend existing image quality metrics with a simple VA model. A subjective region of interest (ROI) experiment was performed on seven images, in which the viewers' task was to select within each image the region that drew most of their attention; for simplicity, only rectangular-shaped ROIs were allowed. Considering the obtained ROI as a random variable, it is possible to calculate its mean value and standard deviation. It was observed that the ROI's center coordinates are close to the image center for most of the images, and the mean ROI dimensions are very similar in the x and y directions. This confirms that the salient region, which includes the most important informative content of the image, is often placed at the center of the picture.

Following these guidelines, we have chosen the block pattern as a subset of the ROI with central symmetry, minimizing the number of blocks to reduce the overhead associated with the transmission of side information. Figure 3 shows an example of such a block pattern; the sketch below illustrates how a pattern of this kind can be generated.
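The exact pattern of Figure 3 is not reproduced here; the following sketch merely illustrates how a centrally symmetric set of 12 blocks inside a central ROI could be generated on the 18 × 16 block grid of the earlier example (the offsets are a hypothetical choice, not the authors' pattern):

```python
def symmetric_block_pattern(rows=18, cols=16):
    """Return 12 (row, col) block indices, symmetric about the grid centre.

    The offsets below are a hypothetical choice inside a central ROI;
    each offset and its mirror image are both included.
    """
    cr, cc = rows // 2, cols // 2
    offsets = [(-4, -4), (-4, 0), (-4, 4), (0, -5), (-1, -1), (-2, 2)]
    pattern = []
    for dr, dc in offsets:
        pattern.append((cr + dr, cc + dc))
        pattern.append((cr - dr, cc - dc))  # centrally symmetric partner
    return pattern
```

Pixel coordinates are obtained by multiplying the grid indices by the block size (e.g., 19 × 32 in the second subsampling option above).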

For the assessment of the quality of the corrupted image, the edge structure of the blocks of the corrupted image should be compared to the structure of the corresponding blocks in the original image. For the identification of edges we use Sobel filtering, applied locally in these selected blocks.

For each pixel in each block we obtain a bit value, where one represents an edge and zero means that there is no edge. If m and n are the block dimensions, we denote the corresponding blocks l in the original and in the possibly corrupted image as the m × n matrices $O_l$ and $C_l$, respectively, and their Sobel-filtered versions as the m × n binary matrices $SO_l = S(O_l)$, with elements $so_{l,ij}$, and $SC_l = S(C_l)$, with elements $sc_{l,ij}$, where i = 1, ..., m, j = 1, ..., n, and S denotes the Sobel operator. The similarity of two images can be assessed based on the similarity of their edge structures, i.e., by comparing the matrices $SO_l$, associated with the filtered versions of the blocks in the original image, and $SC_l$, associated with the filtered versions of the blocks in the possibly corrupted image.

We can check whether the edges of the reference image are kept simply by counting the zeros and ones which are unchanged after compression or lossy transmission of the image. Hence, for each block l of image s, the similarity index can be computed as

$$I_{s,l} = n_l / p_l \qquad (3)$$

where

$$n_l = p_l - \sum_{i=1}^{m} \sum_{j=1}^{n} \left| sc_{l,ij} - so_{l,ij} \right| \qquad (4)$$

is the number of zeros and ones unchanged in the l-th block, and $p_l = m \times n$ is the total number of pixels in the l-th block.

If $N_b$ is the number of blocks in the selected block pattern, the similarity index $I_s$ for image s is defined here as

$$I_s = \frac{1}{N_b} \sum_{l=1}^{N_b} I_{s,l} \qquad (5)$$

For images decomposed into blocks of equal size, as considered here, the proposed quality index is thus:

$$I_s = \frac{1}{N_b} \sum_{l=1}^{N_b} \left( 1 - \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left| sc_{l,ij} - so_{l,ij} \right| \right) \qquad (6)$$
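A minimal sketch of Equation (6), reusing the `sobel_edge_map` function above; subsampling is omitted here, and the block list is assumed to come from the selected pattern (both simplifications of this example):

```python
import numpy as np

def similarity_index(original, corrupted, blocks, m=19, n=32, t=0.001):
    """RR similarity index I_s of Eq. (6) over the selected blocks.

    `blocks` lists the top-left (row, col) pixel of each selected block;
    m x n is the block size (19 x 32 in the example of Section 3).
    """
    scores = []
    for r, c in blocks:
        so = sobel_edge_map(original[r:r + m, c:c + n], t)   # SO_l
        sc = sobel_edge_map(corrupted[r:r + m, c:c + n], t)  # SC_l
        # Fraction of pixels whose edge/no-edge label is unchanged,
        # i.e., 1 - (1/mn) * sum |sc - so| as in Eqs. (3)-(4).
        scores.append(1.0 - np.mean(so != sc))
    return float(np.mean(scores))                            # Eq. (5)/(6)
```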

3.1 Threshold selection

The threshold value is an important parameter that depends on a number of factors, such as image brightness, contrast, level of noise, and even edge direction. The selection of the threshold in Sobel filtering determines the sensitivity of the filter to edges: the lower the threshold, the higher the sensitivity. If the threshold is too high, edges which are important for quality assessment are not detected; on the other hand, if the threshold is too small, large parts of the image are treated as edges even though they are irrelevant for quality assessment. The threshold can be selected following an analysis of the gradient image histogram. Based on these considerations and on the analysis of Sobel filtering performance for the images of the considered databases, the selected threshold value is t = 0.001.

Figure 4 reports the correlation coefficient between our proposed metric and the DMOS values in the LIVE [19] image quality assessment database. The correlation coefficient is calculated for different selections of the threshold, for the different types of impairments considered in the database: fast fading (FF), white noise (WN), Gaussian blur (GB), JPEG compression (JP), and JPEG2000 compression (JP2K). We can observe that the performance drops for threshold values above approximately 0.005; for lower values, the dependence of the performance on the threshold is very limited.

Figure 4. Correlation coefficient (proposed metric vs. DMOS) versus threshold value in Sobel filtering, LIVE [19] image database.

3.2 Complexity

The selection of Sobel filtering results in a low-complexity metric: the Sobel algorithm is characterized by a low computational complexity and consequently a high calculation speed. In [20] several edge detection techniques are compared for an application using a DSP implementation: the Sobel filter exhibits the best performance in terms of edge detection time in comparison with the wavelet-based edge detectors considered. Sobel filtering has been implemented in hardware and used in different areas, often where real-time performance is required, such as real-time volume rendering systems and video-assisted transportation systems [21, 22]. This makes the proposed metric suitable for real-time implementation, an important aspect when an image/video metric is used for the purpose of "on the fly" system adaptation, as in the scenario considered here.

3.3 Overhead

In order to perform the proposed edge comparison, we should transmit the matrices composed of ones and zeros for the reference blocks. By considering the pattern in Figure 3, this would result, for images of resolution 512 × 768, in the transmission of 19 × 32 × 12 = 7.29 kbits per image. Note that the size of the original (uncompressed) image is 3 × 512 × 768 × 8 = 9.4 Mbits.

In the worst case (side information not compressed), our metric thus reduces the needed reference with respect to FR metrics by a factor of 1290:1. As a comparison, the RR metric in [11] reduces it by a factor of 1024:1 and the metric in [12] by 64:1.

Since the side information is in our case composed of a large number of zeros appearing in long runs, it is possible to further reduce the overhead by compressing the relevant data, e.g., through run-length encoding (as sketched below), or by transmitting only the positions of the ones in the matrix.
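As an illustration of this idea, a minimal run-length sketch for the binary side information (the coding format is an assumption of this example; the article does not specify one):

```python
import numpy as np

def run_length_encode(edge_map):
    """Encode a binary edge map as a list of run lengths.

    Runs alternate starting from zeros (a leading 0 is emitted if the
    map starts with a one); long runs of zeros compress very well.
    """
    bits = np.asarray(edge_map).flatten()
    runs, current, count = [], 0, 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs
```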

Furthermore, in the case of video, quality assessment can be performed on only a fraction of the transmitted frames (e.g., five frames per second) in order to reduce the side information overhead needed for the calculation of the quality metric.

4 Simulation set-up and results

In order to test the performance of our quality assessment algorithm, we considered publicly available databases.

The first one is provided by the Laboratory for Image & Video Engineering (LIVE) of the University of Texas at Austin (in collaboration with the Department of Psychology at the same university). An extensive experiment was conducted to obtain scores from human subjects for a number of images distorted with different distortion types. The database contains 29 high-resolution (typically 768 × 512) original images (see Figure 5), altered with five types of distortions at different distortion levels: besides the original images, the database includes images corrupted by JPEG2000 compression, JPEG compression, white noise, Gaussian blur, and JPEG2000 compression with subsequent transmission over a fast-fading Rayleigh channel. The latter set of images is particularly interesting since it enables the assessment of the quality of images impaired by both compression and transmission errors. Our quality metric is tested against the subjective quality values provided in the database. The subjective results reported in the database were obtained by observers providing their quality score on a continuous linear scale divided into five equal regions marked with the adjectives bad, poor, fair, good, and excellent. Two test sessions, with about half of the images in each session, were performed. Each image was rated by 20-25 subjects. No viewing distance restrictions were imposed, and normal indoor illumination conditions were provided. The observers received a short training before the session. The raw scores were converted into difference scores (between the test and the reference image), then converted to Z-scores [23], scaled back to the 1-100 range, and finally a difference mean opinion score (DMOS) was obtained for each distorted image.

Figure 5. Images in the LIVE [19] database.

The second database, IRCCyN/IVC [24], was developed by the Institut de Recherche en Communications et Cybernétique de Nantes. It is a database of 512 × 512 pixel color images, composed of ten original images and 235 distorted images generated by four different processing methods/impairments (JPEG, JPEG2000, LAR coding, and blurring). Subjective evaluations were made at a viewing distance of six times the screen height, using a double stimulus impairment scale (DSIS) method with five categories and 15 observers. The images in the database are reported in Figure 6.

Figure 6. Images in the IRCCyN/IVC [24] database.

Finally, for video we consider the database in [25–27]. The database is composed of ten video sequences in high definition (HD) YUV 4:2:0 format, downsampled to a resolution of 768 × 432 pixels. All videos are 10 s long, except one which is 8.68 s long. The frame rate is 25 frames per second for seven sequences and 50 frames per second for three sequences. Example frames from the video sequences in the database are reported in Figure 7. For each video sequence, 15 distorted versions are present, with four types of distortion: wireless distortion, IP distortion, H.264 compression, and MPEG-2 compression. For MPEG-2, the reference software available from the International Organization for Standardization (ISO) was used to compress the videos; four compressed MPEG-2 videos spanning the desired range of visual quality were selected for each reference video. For H.264, the JM reference software (version 12.3) was used; the procedure for selecting the videos was the same as for MPEG-2, with compression rates varied from 200 kbps to 5 Mbps. For "IP distortion", three IP videos corresponding to each reference are present in the database, created by simulating IP losses on an H.264 compressed video stream; four IP error patterns supplied by the Video Coding Experts Group (VCEG), with loss rates of 3, 5, 10, and 20%, were used. Since losses in different portions of the video stream may result in different visual effects, the authors viewed and selected a diverse set of videos suffering from different types of observed artifacts. For the "wireless" scenario, the video streams were encoded according to the H.264 standard using multiple slices per frame, where each packet contained one slice; errors in the wireless environment were simulated using bit error patterns with packet error rates varied between 0.5 and 10%. The differential mean opinion score (DMOS) value is provided for each impaired video sequence, on a scale from 1 to 100.

Figure 7. Sample frames from video sequences in the LIVE video database [25].

With the aid of the databases above, we compare the performance of our metric against subjective tests with that of the most popular full-reference metrics and of the best-performing RR metrics whose results are directly comparable or reproducible.

Namely, we consider:

  • MSSIM [2] (full reference);

  • PSNR (full reference);

  • [14] (reduced reference);

  • [15] (reduced reference);

  • [13] (reduced reference);

  • Proposed Sobel-based metric (reduced reference).

To apply the MSSIM metric, the images have been modified according to [28].

We report our results in terms of scatter plots, where each symbol in the plot refers to a different image: Figures 8, 9, 10, and 11 report scatter plots for the metrics above in the case of compression according to the JPEG2000 standard and subsequent transmission over a fast-fading channel.

Figure 8. Fast fading, LIVE image database [19], proposed metric. Above: scatter plot between DMOS and the proposed metric. Below: residuals for the linear approximation and norm of residuals.

Figure 9. Fast fading, LIVE image database [19], RR metric in [4]. Above: scatter plot between DMOS and the metric in [4]. Below: residuals for the linear approximation and norm of residuals.

Figure 10. Fast fading, LIVE image database [19], MSSIM. Above: scatter plot between DMOS and MSSIM. Below: residuals for the linear approximation and norm of residuals.

Figure 11. Fast fading, LIVE image database [19], PSNR. Above: scatter plot between DMOS and PSNR. Below: residuals for the linear approximation and norm of residuals.

Besides the scatter plots, the figures report the linear approximation best fitting the data in the least-squares sense, the residuals, and the norm of residuals L for the linear model, i.e., $L = \sqrt{\sum_{i=1}^{N} d_i^2}$, where the residual $d_i$ is the difference between the predicted quality value and the experimental subjective quality value for image i, and N is the number of considered images. The values of the norms of residuals enable a simple numerical comparison among the different metrics. Note that in the case of the MSSIM metric we have provided a non-linear approximation, which better fits the data.

A summary of the results for the LIVE image database [19] in terms of norms of residuals is reported in Table 1. Tables 2 and 3 report a summary of the results for the LIVE image database in terms of correlation coefficient, since this is more commonly used and enables an easier comparison with other metrics, and in terms of Spearman rank. We have also reported results for two slightly different versions, (a) and (b), of the recent RR metric [15], whose performance results available in the literature can be compared with ours for some of the impairments included in the LIVE database. A sketch of how these performance indicators can be computed is given after the table captions below.

Table 1 Norm of residuals versus DMOS, LIVE image database [19]
Table 2 Correlation coefficient versus DMOS, LIVE image database [19]
Table 3 Spearman rank versus DMOS, LIVE image database [19]
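For reference, the three performance indicators used in this section can be computed as below (a sketch assuming SciPy; the score and DMOS arrays are hypothetical placeholders, not values from the tables):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical placeholder data: metric scores and subjective DMOS values.
scores = np.array([0.91, 0.83, 0.75, 0.60, 0.42])
dmos = np.array([25.0, 38.0, 47.0, 62.0, 81.0])

r, _ = pearsonr(scores, dmos)     # Pearson correlation coefficient
rho, _ = spearmanr(scores, dmos)  # Spearman rank (monotonicity indicator)

# Norm of residuals L for the least-squares linear fit.
slope, intercept = np.polyfit(scores, dmos, 1)
residuals = dmos - (slope * scores + intercept)
L = np.sqrt(np.sum(residuals ** 2))
```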

We can observe that our metric correlates well with subjective tests, with results comparable to those achieved by full-reference metrics. For the images in the LIVE database our metric outperforms the considered state-of-the-art RR metrics in all the considered scenarios, except for the case of WN, where the metric in [15] performs better at the expense of a higher complexity, and the case of JPEG2000, where the benchmark RR metric [4], based on the wavelet transform, provides a better performance in terms of norm of residuals.

However, for the same type of impairment (JPEG2000 compression) our metric performs slightly better than the benchmark one when the images in the IRCCyN/IVC database [24] are considered. The relevant results are reported in Tables 4 and 5, while Figures 12, 13, 14, and 15 present in detail the results for the case of JPEG compression.

Table 4 Norm of residuals versus MOS, IRCCyN/IVC image database [24]
Table 5 Correlation coefficient versus MOS, IRCCyN/IVC image database [24]
Figure 12. JPEG compression, IRCCyN/IVC image database [24], proposed metric. Above: scatter plot between mean opinion score and the proposed metric. Below: residuals for the linear approximation and norm of residuals.

Figure 13. JPEG compression, IRCCyN/IVC image database [24], RR metric in [4]. Above: scatter plot between mean opinion score and the metric in [4]. Below: residuals for the linear approximation and norm of residuals.

Figure 14. JPEG compression, IRCCyN/IVC image database [24], MSSIM. Above: scatter plot between mean opinion score and MSSIM. Below: residuals for the linear approximation and norm of residuals.

Figure 15. JPEG compression, IRCCyN/IVC image database [24], PSNR. Above: scatter plot between mean opinion score and PSNR. Below: residuals for the linear approximation and norm of residuals.

Figures 16, 17, 18, and 19 report example results for the LIVE video database [25], where our metric is applied to all video frames: Figure 16 reports the scatter plot of our metric versus DMOS when the video sequences in the database are compressed according to the MPEG-2 standard; Figure 17 the corresponding plot for H.264 compression; Figure 18 for H.264 compression with IP distortions; and Figure 19 for H.264 compression and transmission over a wireless channel. In all cases our metric matches the subjective results well.

Figure 16. MPEG-2 compression, LIVE video database [25]. Above: scatter plot between differential mean opinion score and the proposed metric. Below: residuals for the linear approximation and norm of residuals.

Figure 17. H.264 compression, LIVE video database [25]. Above: scatter plot between differential mean opinion score and the proposed metric. Below: residuals for the linear approximation and norm of residuals.

Figure 18. IP distortion, LIVE video database [25]. Above: scatter plot between differential mean opinion score and the proposed metric. Below: residuals for the linear approximation and norm of residuals.

Figure 19. Wireless distortion, LIVE video database [25]. Above: scatter plot between differential mean opinion score and the proposed metric. Below: residuals for the linear approximation and norm of residuals.

Table 3 reports results in terms of Spearman rank, an indicator of monotonicity, for the LIVE image database. With this criterion, our metric outperforms the full-reference PSNR metric for all impairments except white noise, and the RR metric in [4] for all the reported cases except fast fading. The more complex RR metric in [15] is outperformed in the case of GB.

Tables 4 and 5 report the results for the IVC image database in terms of norm of residuals and correlation coefficient, respectively. We observe that our metric outperforms the full-reference PSNR metric and the RR metric in [4] in all cases. Considering the Spearman rank, reported in Table 6, our metric outperforms both the full-reference PSNR metric and the RR metric in [4] in all cases except for PSNR in the case of JPEG2000 compression. Note that with this database the gain obtained with our metric with respect to the others is higher, probably because the metric in [4] was tailored to the LIVE database. For completeness, we also report the results in terms of correlation coefficient for the metric in [13]: this metric has a very high correlation with subjective results, but it is too complex when real-time implementation is required.

Table 6 Spearman rank versus MOS, IRCCyN/IVC image database [24]

The results obtained for the video sequences in the LIVE video database are summarised in Table 7 for the correlation coefficient and in Table 8 for the Spearman coefficient. We can observe that our metric outperforms the full-reference PSNR metric in most cases.

Table 7 Correlation coefficient versus DMOS, LIVE video database [25, 27]
Table 8 Spearman rank versus DMOS, LIVE video database [25, 27]

Note that for video sequences, in order to reduce the overhead, it is possible to apply the metric only to selected frames, for instance every 5, 10, 25, or 50 frames. How frequently the metric needs to be calculated depends on the motion characteristics of the video sequence.

We can observe that the performance of our metric is comparable with that of the considered full-reference metrics: our metric outperforms PSNR in the case of both MPEG-2 and H.264 compression, and also in the "IP distortion" case, i.e., for H.264 video transmitted over a network. Our metric also outperforms the MSSIM metric in terms of correlation coefficient with subjective data for the case of MPEG-2 compressed video.

4.1 Comparison between the full-reference edge-based metric and the RR one

We found it interesting to perform a comparative evaluation of our metric, where edges are compared for a selected set of blocks (RR), and the metric obtained through the comparison of full edge maps (a Sobel-based full-reference metric), which we define as:

$$I_{fr} = \frac{1}{N_{tot}} \sum_{l=1}^{N_{tot}} \left( 1 - \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left| sc_{l,ij} - so_{l,ij} \right| \right) \qquad (7)$$

where the notation is as defined in Section 3 and $N_{tot}$ is the total number of blocks in the image; a usage sketch reusing the earlier code is given below.
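Under the same assumptions as the earlier sketches, the full-reference variant simply evaluates the same index over a tiling of the whole (2D grayscale) image:

```python
def full_reference_index(original, corrupted, m=19, n=32, t=0.001):
    """Sobel-based full-reference index of Eq. (7): every block is used."""
    rows, cols = original.shape  # assumes a 2D grayscale array
    blocks = [(r, c)
              for r in range(0, rows - m + 1, m)
              for c in range(0, cols - n + 1, n)]
    return similarity_index(original, corrupted, blocks, m, n, t)
```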

We found that, although the correlation with subjective results is higher for the full-reference metric, the difference with respect to our proposed metric is very small. The results are reported in Table 9. This confirms that the selected pattern represents the ROI of the image well and enables reliable quality assessment, despite the very limited overhead for the transmission of side information.

Table 9 Norm of residuals versus DMOS, full reference versus RR edge-based metric, LIVE image database

5 Conclusion

We proposed in this article a perceptual RR image and video quality metric which compares edge information between portions of the distorted image and of the original one by using Sobel filtering. The algorithm is simple and has a low computational complexity. Results highlight that the proposed metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with state-of-the-art RR metrics.

References

1. Pinson MH, Wolf S: A new standardized method for objectively measuring video quality. IEEE Trans Broadcast 2004, 50(3):312-322.

2. Wang Z, Bovik A, Sheikh H, Simoncelli E: Image quality assessment: from error measurement to structural similarity. IEEE Trans Image Process 2004, 13(4):600-612.

3. Sheikh HR, Sabir MF, Bovik AC: A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans Image Process 2006, 15(11):3440-3451.

4. Wang Z, Simoncelli EP: Reduced-reference image quality assessment using a wavelet-domain natural image statistic model. In Proc SPIE Human Vision and Electronic Imaging. Volume 5666. San Jose, CA; 2005:149-159.

5. Martini MG, Mazzotti M, Lamy-Bergot C, Huusko J, Amon P: Content adaptive network aware joint optimization of wireless video transmission. IEEE Commun Mag 2007, 45:84-90.

6. Marr D, Hildreth E: Theory of edge detection. Proc R Soc Lond Ser B 1980, 207:187-217.

7. Zhang M, Mou X: A psychovisual image quality metric based on multi-scale structure similarity. In Proc IEEE International Conference on Image Processing (ICIP). San Diego, CA; 2008:381-384.

8. Woods J: Multidimensional Signal, Image and Video Processing and Coding. Elsevier, Amsterdam; 2006.

9. Wolf S, Pinson M: In-service performance metrics for MPEG-2 video systems. In Proc Made to Measure 98--Measurement Techniques of the Digital Age Technical Seminar, International Academy of Broadcasting (IAB). ITU and Technical University of Braunschweig, Montreux, Switzerland; 1998:12-13.

10. Wolf S, Pinson MH: Low bandwidth reduced reference video quality monitoring system. In Video Processing and Quality Metrics for Consumer Electronics. Scottsdale, AZ; 2005:23-25.

11. Gunawan I, Ghanbari M: Reduced-reference video quality assessment using discriminative local harmonic strength with motion consideration. IEEE Trans Circ Syst Video Technol 2008, 18(1):71-83.

12. VQEG: Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment, Phase II. 2003.

13. Carnec M, Le Callet P, Barba D: Objective quality assessment of color images based on a generic perceptual reduced reference. Signal Process Image Commun 2008, 23(4):239-256.

14. Engelke U, Kusuma M, Zepernick H, Caldera M: Reduced-reference metric design for objective perceptual quality assessment in wireless imaging. Signal Process Image Commun 2009, 24:525-547.

15. Li Q, Wang Z: Reduced-reference image quality assessment using divisive normalization-based image representation. IEEE J Sel Top Signal Process 2009, 3(2):202-211.

16. Yarbus AL: Eye Movements and Vision. Plenum Press, New York; 1967.

17. Privitera CM, Stark LW: Algorithms for defining visual regions-of-interest: comparison with eye fixations. IEEE Trans Pattern Anal Mach Intell 2000, 22(9):970-982.

18. Engelke U, Zepernick HJ: Framework for optimal region of interest-based quality assessment in wireless imaging. J Electron Imaging 2010, 19(1):1-13.

19. Sheikh HR, Wang Z, Cormack L, Bovik AC: LIVE image quality assessment database. 2008. [http://live.ece.utexas.edu/research/quality]

20. Musoromy Z, Bensaali F, Ramalingam S, Pissanidis G: Comparison of real-time DSP-based edge detection techniques for license plate detection. In Sixth International Conference on Information Assurance and Security. Atlanta, GA; 2010:323-328.

21. Zhou W, Xie Z, Hua C, Sun C, Zhang J: Research on edge detection for image based on wavelet transform. In Proceedings of the 2009 Second International Conference on Intelligent Computation Technology and Automation. Washington, DC, USA; 2009:686-689.

22. Kazakova N, Margala M, Durdle NG: Sobel edge detection processor for a real-time volume rendering system. In Proc of the 2004 International Symposium on Circuits and Systems (ISCAS '04). Vancouver, Canada; 2004:913-916.

23. van Dijk AM, Martens JB, Watson AB: Quality assessment of coded images using numerical category scaling. In Proc SPIE. Volume 2451. Amsterdam; 1995:99-101.

24. Le Callet P, Autrusseau F: Subjective quality assessment IRCCyN/IVC database. 2005. [http://www.irccyn.ec-nantes.fr/ivcdb/]

25. Seshadrinathan K, Soundararajan R, Cormack LK, Bovik AC: LIVE video quality assessment database. 2010. [http://live.ece.utexas.edu/research/quality/livevideo.html]

26. Seshadrinathan K, Soundararajan R, Bovik AC, Cormack LK: Study of subjective and objective quality assessment of video. IEEE Trans Image Process 2010, 19(6):1427-1441.

27. Seshadrinathan K, Soundararajan R, Bovik AC, Cormack LK: A subjective study to evaluate video quality assessment algorithms. In Proc SPIE Human Vision and Electronic Imaging. Volume 7527. 2010.

28. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: The SSIM index for image quality assessment. 2008. [http://www.ece.uwaterloo.ca/z70wang/research/ssim/#usage]

29. Tourancheau S, Autrusseau F, Sazzad ZMP, Horita Y: Impact of subjective dataset on the performance of image quality metrics. In IEEE International Conference on Image Processing (ICIP). San Diego, CA; 2008:365-368.


Acknowledgements

This work was partially supported by the European Commission (FP7 projects OPTIMIX and CONCERTO).

Author information

Correspondence to Maria G Martini.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MM conceived the work, proposed the edge-based metric, processed the data and analyzed the results, supervised the whole work, and wrote the article. BV, in the framework of her internship at Kingston University, finalized the metric definition by proposing all the details, including the selection of the block pattern and of the threshold in Sobel filtering; she performed all the simulations in the article and produced all the scatter plots; she also contributed to the processing of the data and the analysis of the results. FF contributed to the literature review and supported BV in the selection of the final block pattern by taking into account visual attention models. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Martini, M.G., Villarini, B. & Fiorucci, F. A reduced-reference perceptual image and video quality metric based on edge preservation. EURASIP J. Adv. Signal Process. 2012, 66 (2012). https://doi.org/10.1186/1687-6180-2012-66
