
Iris Recognition: The Consequences of Image Compression

Abstract

Iris recognition for human identification is one of the most accurate biometrics, and its use is expanding globally. Portable iris systems, particularly in law enforcement applications, are increasingly common. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full-resolution image (e.g., VGA) is desired, to ensure sufficient pixels across the iris for confident recognition results. Compressing the iris image reduces its file size, and hence the time needed to transmit it over such a channel. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allocated for it; here, too, image compression is the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these effects to image quality. Using the ICE 2005 iris database, we find that recognition performance is minimally affected even under significant compression.

1. Introduction

Iris recognition is gaining popularity as the method of choice for human identification in society today. The iris, the colored portion of the eye that surrounds the pupil, contains unique patterns which are prominent under near-infrared illumination. These patterns are relatively permanent, remaining stable from a very young age, barring trauma or disease. They allow accurate identification with a very high level of confidence.

Commercial iris systems are used in a number of applications such as access to secure facilities or other resources, and even criminal/terrorist identification in the Global War on Terror. The identification process begins with enrollment of an individual into a commercial iris system, requiring the capture of one or more images from a video stream. Typically, the database for such a system does not contain actual iris images; rather, it stores a binary file that represents the distinctive information contained in each enrolled iris (called the template). Most commercial iris systems today use the Daugman algorithm [1-3]. In the Daugman algorithm, the template is stored as 512 bytes per eye.

Data compression is beginning to play a part in the deployment of iris recognition systems. Law enforcement agencies, such as the Border Patrol, the Coast Guard, and even the Armed Forces, are using portable wireless iris recognition devices. When a device must query a master database for identification, it may be required to transmit captured images or templates over a narrow-bandwidth communication channel. In this case, minimizing the amount of data to transmit (which compression makes possible) minimizes the transmission time and saves precious battery power. Other iris applications require a full-resolution iris image to be carried on a smart card within a small, fixed storage allocation. An example is the Registered Traveler Interoperability Consortium (RTIC) standard, where only 4 kB is allocated on the RT smart card for the iris image [4]. Since the standard iris image used for recognition is VGA resolution (640 × 480, grayscale), it contains 307,200 bytes; compression of roughly 75 : 1 would therefore be required to fit a VGA iris image into 4 kB. Applications of this nature serve as the primary motivation for this research.

This paper explores whether image compression can be employed while maintaining recognition accuracy, and quantifies its effects on performance. We evaluate the effects of image compression on recognition using JPEG-2000 compression along with a commercial implementation of the Daugman recognition algorithm [5]. The database used in this research is described in the following section.

2. Data

Iris images used in this paper are available from the National Institute of Standards and Technology (NIST). The database used in this research is the Iris Challenge Evaluation (ICE) 2005 database [6], composed of a total of 2953 iris images collected from 132 subjects. Of these, 1425 are images of right eyes from 124 different individuals and 1528 are images of left eyes from 120 individuals. The images are all VGA resolution, 480 rows by 640 columns, with 8-bit grayscale resolution.

This database contains images with a wide range of visual quality; some images appear near perfect, while others are very blurry, have irises that extend beyond the edge of the image, contain significantly occluded irises, and/or have video interlace artifacts. All of these factors impair recognition performance. Several examples of images from this database are shown in Figures 1, 2, and 3.

Figure 1

An example image from the ICE 2005 database (image no. 245596). The visual quality is very good.

Figure 2

An example image from the ICE 2005 database (image no. 245795). Note the extent of the occlusion, including eyelashes.

Figure 3

An example image from the ICE 2005 database (image no. 243843). Note the extent of blurriness and the video interlace artifacts.

3. Image Compression

The JPEG-2000 algorithm is published by the Joint Photographic Experts Group (JPEG) as one of its still-image compression standards [7]. JPEG-2000 uses state-of-the-art compression techniques based on wavelets, unlike the more popular JPEG standard, which is based on the discrete cosine transform (DCT). Like JPEG, JPEG-2000 offers both lossless and lossy compression of imagery. With any lossy compression technique, some information is lost; the amount and type of information lost depend on several factors, including the compression algorithm, the amount of compression desired (which determines the size of the compressed file), and special options offered by the algorithm such as Region-of-Interest (ROI) processing. In ROI processing, selected regions of the image are deemed more important than other areas, so that less information is lost in those regions.

The effect of image compression on iris recognition system performance has been addressed previously [8, 9]. In [8], iris images were compressed up to 50 : 1 using both JPEG-2000 and JPEG. In [9], Daugman and Downing used a portion of the ICE-2005 iris database with JPEG-2000 compression, employing its Region-of-Interest (ROI) capability to reach compression ratios of up to 145 : 1. They used segmentation methods to completely isolate the iris, reducing the images from 480 × 640 to 320 × 320, and then completely discarded the regions of the smaller image that did not include the iris. Since the images were reduced to contain only the segmented iris, higher compression ratios were obtained with minimal effects on recognition performance. However, storing iris database images in this manner precludes testing of alternate segmentation methods. In our research, we opted to compress entire images rather than just the iris region. This allows a more general approach to algorithm development research using a compressed iris database.

For this paper, we used the entire ICE-2005 database to obtain our results. We compressed the images using JPEG-2000, with the default parameters and options available in the JasPer implementation [10]; the source code is freely available from the JasPer Project. We did not use the ROI capability, so entire images were compressed as a whole and segmentation testing could be performed on the compressed images.
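For illustration, the sketch below performs this kind of batch compression using the Pillow library's JPEG-2000 support (OpenJPEG backend) rather than the JasPer codec used in the paper; the directory layout and file extensions are hypothetical.

```python
# A minimal sketch of batch JPEG-2000 compression at a target ratio.
# Assumes Pillow built with OpenJPEG; the paper used JasPer, so exact
# rate control and artifacts will differ. Paths are hypothetical.
from pathlib import Path
from PIL import Image

TARGET_RATIO = 25  # the paper also used 50, 75, and 100

src_dir = Path("ice2005/originals")
dst_dir = Path(f"ice2005/jp2_{TARGET_RATIO}to1")
dst_dir.mkdir(parents=True, exist_ok=True)

for src in src_dir.glob("*.tiff"):
    img = Image.open(src).convert("L")   # 8-bit grayscale, 640x480
    # 'rates' mode interprets quality_layers as compression ratios,
    # treated as targets rather than exact guarantees.
    img.save(dst_dir / (src.stem + ".jp2"),
             quality_mode="rates",
             quality_layers=[TARGET_RATIO],
             irreversible=True)          # lossy (9/7 wavelet)
```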

Figure 4 displays an original iris image from the ICE-2005 database before and after compression to a ratio of 100 : 1 using JPEG-2000. This is image number 245596, the same image displayed in Figure 1. Close comparison of the two images in Figure 4 reveals some detectable differences, primarily in areas of high-frequency content (high detail) such as the eyelashes, where compression artifacts or smoothing can be seen. Statistically, the two images are not very different: the maximum difference between corresponding pixels of the two images is 22, and the average gray-level difference is essentially zero (0.02) with a standard deviation of 1.56. Figure 5 shows a zoomed-in view of the upper-left portion of the iris in Figure 4. Overall, JPEG-2000 does a remarkable job of maintaining the detail information even at a compression ratio of 100 : 1.
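These difference statistics can be reproduced with a few lines of NumPy; the filenames below are hypothetical.

```python
# Sketch of the image-difference statistics quoted above, comparing an
# original image with its 100:1 JPEG-2000 version.
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("245596.tiff").convert("L"), dtype=np.int16)
comp = np.asarray(Image.open("245596_100to1.jp2").convert("L"), dtype=np.int16)

diff = orig - comp
print("max |difference|:", np.abs(diff).max())   # 22 for this image
print("mean difference :", diff.mean())          # ~0.02
print("std deviation   :", diff.std())           # ~1.56
```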

Figure 4

The iris image from Figure 1 after compression to 100 : 1.

Figure 5

Zoomed view of the iris image from Figure 4. At this level of zoom, the compression artifacts are noticeable, particularly around areas of high frequency (such as eyelashes). Also note the smoothed out areas throughout the iris.

For this research, four compressed databases were created from the 2953 ICE images using JPEG-2000. To create each database, every original image was lossily compressed to one of four target ratios: approximately 25 : 1, 50 : 1, 75 : 1, or 100 : 1. For example, the first database consisted of all ICE images compressed to 25 : 1. The JPEG-2000 engine is not designed to achieve the specified compression ratio exactly; rather, it treats the ratio as a target that may be exceeded but should be close to the desired value. For these 2953 images, the average compression ratios achieved are shown in Table 1. The next section discusses the quality measure that was used to relate compression ratio to performance and quality.
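The achieved ratios reported in Table 1 can be verified from file sizes, since each uncompressed VGA grayscale image occupies 640 × 480 = 307,200 bytes; a sketch follows (directory layout hypothetical).

```python
# Sketch: measure achieved (as opposed to requested) compression ratios.
import numpy as np
from pathlib import Path

RAW_BYTES = 640 * 480  # 307,200 bytes per uncompressed image

ratios = [RAW_BYTES / p.stat().st_size
          for p in Path("ice2005/jp2_25to1").glob("*.jp2")]
print(f"mean achieved ratio: {np.mean(ratios):.1f} : 1")
```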

Table 1 Desired and actual compression ratios.

4. Quality Metric

The information distance-based quality measure is used to evaluate iris image quality [11, 12]. Prior to applying the quality measure, the iris is first segmented and transformed to polar coordinates. The quality measure comprises three parts: the Feature Correlation Measure, the Occlusion Measure, and the Dilation Measure, which are then combined into a single quality score. These three parts and their fusion into the quality score are described below.

(1) Feature Correlation Measure (FCM)

The compression process will introduce artificial iris patterns, which may have low correlation with the true patterns. Using this property, we applied the information distance (see [13]) between adjacent rows of the unwrapped image to measure the correlation within regions of the iris.

Suppose the row length is L, with a given starting location. The filtered magnitude values (from feature extraction) of the L pixels in the row form a vector whose probability mass function (pmf) is p1; p2 is the pmf of the neighboring row [13]. The information distance of this portion is J(p1, p2), which can be calculated by

\[
J(p_1, p_2) = D(p_1 \,\|\, p_2) + D(p_2 \,\|\, p_1)
\tag{1}
\]

where D(p ‖ q) is the Kullback-Leibler information distance, \(D(p \,\|\, q) = \sum_{x} p(x) \log_2 \frac{p(x)}{q(x)}\). In our algorithm, values that do not appear within the selected portions of rows are not considered in the pmfs, to prevent a divide-by-zero condition in (1).

The feature correlation measure (FCM) of an iris image is then calculated by

\[
\mathrm{FCM} = \frac{1}{N} \sum_{i=1}^{N} J_i
\tag{2}
\]

where \(J_i\) is the representative information distance of the ith row and N is the total number of rows used for the feature information calculation.
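A minimal sketch of (1)-(2) as reconstructed above is given below; the 32-bin histogram used as the pmf estimate, and the demo data, are assumptions rather than details from the paper.

```python
# Sketch of the feature correlation measure (FCM): symmetric
# Kullback-Leibler distance between pmfs of filtered-magnitude values
# in adjacent rows of the unwrapped iris.
import numpy as np

def pmf(row, bins=32, rng=(0.0, 1.0)):
    """Histogram-based pmf estimate of one row's filtered magnitudes."""
    hist, _ = np.histogram(row, bins=bins, range=rng)
    return hist / hist.sum()

def info_distance(p, q):
    """Symmetric KL distance per (1); empty bins are dropped to avoid
    divide-by-zero, a simplification of the paper's description."""
    keep = (p > 0) & (q > 0)
    p, q = p[keep], q[keep]
    return np.sum(p * np.log2(p / q)) + np.sum(q * np.log2(q / p))

def fcm(unwrapped):
    """unwrapped: 2-D array of filtered magnitudes (rows x angular bins)."""
    dists = [info_distance(pmf(unwrapped[i]), pmf(unwrapped[i + 1]))
             for i in range(unwrapped.shape[0] - 1)]
    return float(np.mean(dists))           # mean over rows, per (2)

print(fcm(np.random.rand(64, 256)))        # demo on synthetic data
```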

(2) Occlusion Measure (O)

The total amount of invalid iris patterns can affect recognition accuracy. The occlusion measure (O) measures the percentage of the iris area that is invalid due to eyelids, eyelashes, and other noise.

(3) Dilation Measure (D)

The dilation of the pupil can also affect recognition accuracy. The dilation measure (D) is calculated as the ratio of the pupil radius to the iris radius.

(4) Score Fusion (Q)

The three measures are then combined into one quality score based on FCM, O, and D. Rather than simply multiplying the raw measures, we first normalize each of them:

\[
Q = f(\mathrm{FCM}) \cdot g(O) \cdot h(D)
\tag{3}
\]

where f, g, and h are normalization functions.

The function f normalizes the FCM score to the range 0 to 1, and is defined as follows:

\[
f(\mathrm{FCM}) = \min\left(\beta \cdot \mathrm{FCM},\; 1\right)
\tag{4}
\]

In (4), α = 0.005 and β = 1/α. The value of α was chosen experimentally: for most original images the FCM scores were above 0.005, while for most compressed images the scores were lower than 0.005. The value β is the normalization factor that ensures that when FCM = α, f(FCM) = 1.

We analyzed the relationship between the available iris patterns and iris recognition accuracy to determine the normalization functions empirically; this relationship is closer to exponential than linear. Based on [14, 15], the g function is calculated as

\[
g(O) = c_1 e^{-c_2 O}
\tag{5}
\]

In (5), c1 and c2 are empirically chosen constants. Like occlusion, dilation also relates nonlinearly to recognition accuracy. The h function is calculated as

\[
h(D) = e^{-c_3 \left(D - D_0\right)^2}
\tag{6}
\]

Here, c3 is an empirically chosen constant. For dilation, D0 is selected based on the dilation functionality of a normal eye.
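The following sketch implements the fusion of (3)-(6) as reconstructed above. All constant values except α = 0.005, and the exact forms of g and h, are placeholder assumptions, not the paper's values.

```python
# Sketch of the quality-score fusion Q = f(FCM) * g(O) * h(D).
import numpy as np

ALPHA = 0.005          # FCM saturation point (from the paper)
BETA = 1.0 / ALPHA     # ensures f(ALPHA) = 1
C1, C2 = 1.0, 3.0      # assumed occlusion constants
C3, D0 = 10.0, 0.4     # assumed dilation constants; D0 ~ normal eye

def f(fcm_score):      # (4): linear up to ALPHA, then saturates at 1
    return min(BETA * fcm_score, 1.0)

def g(occlusion):      # (5): exponential penalty on occluded fraction
    return C1 * np.exp(-C2 * occlusion)

def h(dilation):       # (6): penalty for abnormal pupil/iris radius ratio
    return np.exp(-C3 * (dilation - D0) ** 2)

def quality(fcm_score, occlusion, dilation):   # (3)
    return f(fcm_score) * g(occlusion) * h(dilation)

print(quality(fcm_score=0.004, occlusion=0.15, dilation=0.45))
```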

Figure 6 shows two sample images from the ICE database, along with each image compressed to ratios of 25 : 1, 50 : 1, 75 : 1, and 100 : 1. A zoomed-in portion of the iris is also displayed for visual evaluation of the quality, and the resulting quality score is annotated for each image. Additional quality results are included in the following section.

Figure 6

Two sample images and their compressed versions from the ICE database. Image quality is annotated for each image.

5. Results

This section is divided into two parts: performance results first, then quality results. In many iris recognition algorithms, including the Daugman algorithm used in this research, two iris templates are compared using the fractional Hamming distance (HD) as the measure of dissimilarity, defined by

\[
\mathrm{HD} = \frac{\left\| \left(\mathrm{codeA} \otimes \mathrm{codeB}\right) \cap \mathrm{maskA} \cap \mathrm{maskB} \right\|}{\left\| \mathrm{maskA} \cap \mathrm{maskB} \right\|}
\tag{7}
\]

The ⊗ operator is the Boolean XOR operation; it detects disagreements between pairs of phase code bits in the two templates (called IrisCodes in the Daugman algorithm, here designated codeA and codeB). MaskA and maskB identify the locations in each IrisCode that are not believed to be corrupted by artifacts such as eyelids/eyelashes and specularities. The ∩ operator is the Boolean AND, and the ‖·‖ operator sums the number of "1" bits within its argument. The denominator of (7) ensures that only valid phase code bits are included in the calculation, after any artifacts are discounted. A value of HD = 0 indicates a perfect match between the IrisCodes, and HD = 1 indicates that none of the bits match. Daugman provides an alternate measure of dissimilarity, the normalized fractional Hamming distance, which incorporates the number of bits actually compared [16]; it reduces the chance of a false match and is discussed later in this paper. The standard fractional Hamming distance in (7) is used here to derive the performance curves shown in this section. A few images were of poor enough quality that at higher compression ratios they failed to segment and thus did not produce templates for comparison; an example is shown in Figure 7. One image failed to produce a template in its original form and at every compression ratio; it is displayed in Figure 8.

Figure 7

This image failed to generate an iris template at 75 : 1 and 100 : 1 compression (image no. 245561). (a) Original image. (b) 100 : 1 compression.

Figure 8

This image failed to generate an iris template in its original form and at all compression ratios (image no. 242451).

5.1. Performance Curves

The quality of the images in the database did play a role in performance, as demonstrated in Figure 9. Here, two images of different eyes have been segmented (the segmentation is shown in the images), and both segmentations are poor. Still, successful segmentation allowed template generation, so each image was represented by a template that could be compared. When the templates of these two different eyes were compared, only one valid bit entered the Hamming distance computation, resulting in HD = 0. There were two other such comparisons of different eyes with a low number of valid bits (3 bits and 9 bits), both also resulting in HD = 0. All three of these cases would produce false matches. The issue of a low number of compared bits and its effect on Hamming distance was addressed by Daugman in [16], in which he compared performance using the raw Hamming distance (as we use here) and the normalized Hamming distance, defined as

\[
\mathrm{HD}_{\mathrm{norm}} = 0.5 - \left(0.5 - \mathrm{HD}_{\mathrm{raw}}\right)\sqrt{\frac{n}{911}}
\tag{8}
\]
Figure 9

Segmentation images. (a) Image no. 247076 compressed to 25 : 1. (b) Image no. 246215. These two images from different eyes generated templates, but their quality resulted in poor segmentation. As a result, only one valid bit was compared between the templates, producing a false match using raw HD (HD = 0.0); normalized HD would have resulted in HD ≈ 0.5 (no false match). The iCAP software version used in this research was an early version without capability for off-axis or partially out-of-frame images, so the failure of these images to segment properly is to be expected.

Here, HD_raw is the Hamming distance computed using (7), and n is the number of valid bits in the comparison. The value 911 is a scaling factor based on a typical number of bits used in comparisons, and the normalization accounts for the number of valid bits actually used in computing the Hamming distance. In [16], the minimum number of bits used in the results is 400, and this is the minimum number of bits we allow in determining our results.
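A sketch of (8), including the 400-bit floor used in this research:

```python
# Sketch of the normalized fractional Hamming distance of (8).
import math

MIN_BITS = 400   # minimum valid bits accepted in this paper

def normalized_hd(hd_raw, n_bits):
    if n_bits < MIN_BITS:
        return None                          # too few bits for a decision
    return 0.5 - (0.5 - hd_raw) * math.sqrt(n_bits / 911.0)

print(normalized_hd(0.0, 1))      # None: 1-valid-bit "perfect match" rejected
print(normalized_hd(0.30, 911))   # 0.30: unchanged at the reference bit count
```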

The size and number of subjects in the ICE database, the number of images that successfully segmented so that a template could be formed, and the number of valid bits used in comparing two templates all determined the number of actual comparisons that were made. Recall that five databases were used in this research: one for the uncompressed images and one for each compression ratio. The number of comparisons made (genuine or imposter) differed when comparing different databases. Part of the difference arises because, when comparing a database to itself, we do not count comparisons of an image to itself (HD = 0 in that case); when comparing two different databases, the difference in compression ratios means there are no identical images, which allows additional valid comparisons. The number of comparisons also varies because a few images do not generate templates, so some databases had fewer templates than others: the original, 25 : 1, and 50 : 1 databases held 2952 templates, while the 75 : 1 and 100 : 1 databases held 2950. Finally, we only compared templates if at least 400 bits were valid in the comparison. The resulting numbers of genuine, imposter, and total comparisons are shown in Table 2.

Table 2 Number of comparisons for each database pairing.

The performance curves that follow are derived from the probability mass functions (PMFs) of fractional Hamming distance scores; the PMF estimates the underlying probability distribution using the histogram of HD values. An example of the effects of compression on the PMFs of the genuine and imposter distributions is shown in Figure 10, where the database of 25 : 1 compressed image templates is compared to the original image templates. Compression does not appreciably change the imposter distribution, but as the compression ratio increases, the genuine distribution moves closer to the imposter distribution, which reduces performance. In the comparisons between this database and the original, and between this database and itself (25 : 1 versus 25 : 1), there is a distinct second peak in the PMF close to a Hamming distance of 0; we attribute this to comparisons of images that are only slightly different (i.e., the compression does not change the iris template much). At higher compression ratios, more change is induced in the templates, resulting in higher Hamming distances when comparing them.
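The PMF estimate behind Figure 10 is simply a normalized histogram of HD scores; a sketch with synthetic score samples (the distribution parameters are illustrative assumptions):

```python
# Sketch: PMF estimates of genuine and imposter HD distributions.
import numpy as np

def hd_pmf(scores, bins=100):
    """Normalized histogram over [0, 1]: an estimate of the PMF."""
    counts, edges = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    return counts / counts.sum(), edges

# synthetic genuine/imposter samples for demonstration only
rng = np.random.default_rng(1)
genuine_hds = np.clip(rng.normal(0.12, 0.05, 10000), 0.0, 1.0)
imposter_hds = np.clip(rng.normal(0.45, 0.02, 10000), 0.0, 1.0)
pmf_genuine, _ = hd_pmf(genuine_hds)
pmf_imposter, _ = hd_pmf(imposter_hds)
```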

Figure 10

Probability mass function curves for compression 25 : 1, as a function of Hamming distance (25 : 1 versus original, 25 : 1 versus 25 : 1, 25 : 1 versus 50 : 1, 25 : 1 versus 75 : 1 and 25 : 1 versus 100 : 1).

Since the imposter distributions are relatively unchanged as compression ratios increase, we further analyze the changes in the genuine distributions. Here, we investigate the changes in HD values as the compression ratio increases, compared to the original images. Statistics were gathered for five database comparisons: original versus original, original versus 25 : 1, original versus 50 : 1, original versus 75 : 1, and original versus 100 : 1. The minimum, average, and maximum HDs were recorded for each comparison. We expected all of these values to increase with compression ratio, since more of the original data is lost in the compression. These results are included in Table 3. We note that the minimum HD is 0.0 for the comparisons between the original and 25 : 1 databases and between the original and 50 : 1 databases. We attribute this to the efficiency of JPEG-2000: its impact on the iris and the iris template is minimal, so the template of a given iris image is generally close to the template of the same image compressed to 25 : 1 or 50 : 1. As mentioned earlier, for comparisons of a database with itself, comparisons of an image with itself are excluded because they trivially give a Hamming distance of zero. This is why the minimum HDs in Table 3 for the original versus 25 : 1 and original versus 50 : 1 comparisons are lower than the minimum in the first row (original versus original).

Table 3 Minimum, mean, and maximum HDs.

Figure 11 is an example of the performance curves created for this research. Each pair of curves (False Rejection Rate (FRR) and False Accept Rate (FAR)) represents the comparison of one compressed database against the original database; an original versus original comparison is included as a baseline. As the compression ratio increases, the FAR curve remains virtually unchanged, while the FRR curves move further to the right. This increases the Equal Error Rate (EER, where FAR = FRR) and the total number of errors (false accepts + false rejects), which reduces overall system accuracy. Overall results are included in Table 4, where we record: (1) the best accuracy achieved, obtained by varying the decision threshold to minimize the total number of errors; (2) the EER point, in percent; (3) the FRR when the FAR is fixed at 0.001 (one false accept in 1000 imposter comparisons); and (4) the FRR when the FAR is fixed at 0.0001 (one false accept in 10,000 imposter comparisons). This table reflects all possible comparisons of the databases used (original and compressed), where the number of valid bits is ≥400.
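The FRR/FAR curves and the EER can be derived by sweeping a decision threshold along the HD axis, as in this sketch (reusing the synthetic score arrays from the PMF sketch above):

```python
# Sketch: FAR/FRR curves and equal error rate from HD score samples.
import numpy as np

def far_frr(genuine, imposter, thresholds):
    """Accept a comparison as a match when HD <= threshold."""
    genuine, imposter = np.asarray(genuine), np.asarray(imposter)
    frr = np.array([(genuine > t).mean() for t in thresholds])    # genuines rejected
    far = np.array([(imposter <= t).mean() for t in thresholds])  # imposters accepted
    return far, frr

thresholds = np.linspace(0.0, 1.0, 1001)
far, frr = far_frr(genuine_hds, imposter_hds, thresholds)
i = int(np.argmin(np.abs(far - frr)))      # closest point to FAR = FRR
print(f"EER ~ {(far[i] + frr[i]) / 2:.4%} at HD threshold {thresholds[i]:.3f}")
```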

Table 4 Summary of performance results.
Figure 11

Performance curves for each compression ratio versus original images.

5.2. Quality Measure

The quality measure described in Section 4 was computed for every image used (original and compressed). In most cases, as the compression ratio increases, the quality degrades. An example is the quality of image number 243843, displayed in Figure 3. For this image, Table 5 shows the quality of the original and compressed versions, the Hamming distance (HD) when each is compared with the original image, and the number of valid bits used in the comparison. In addition, it shows the decidability of the two distributions, defined as

\[
d' = \frac{\left| \mu_{g} - \mu_{i} \right|}{\sqrt{\left(\sigma_{g}^{2} + \sigma_{i}^{2}\right)/2}}
\tag{9}
\]

This equation combines the means and standard deviations of the pmfs of the genuine and imposter distributions into a measure of how well separated the two probability mass functions are from each other [9]. A larger decidability value indicates greater separation between the distributions, which should lead to improved recognition performance.
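A sketch of (9) from genuine and imposter score samples:

```python
# Sketch of the decidability d' of (9).
import numpy as np

def decidability(genuine, imposter):
    """Separation of the genuine and imposter HD distributions."""
    mu_g, mu_i = np.mean(genuine), np.mean(imposter)
    var_g, var_i = np.var(genuine), np.var(imposter)
    return abs(mu_g - mu_i) / np.sqrt((var_g + var_i) / 2.0)

print(decidability(genuine_hds, imposter_hds))  # larger = better separated
```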

Table 5 Image quality and Hamming distances.

We note that for this image the measured quality decreases and the Hamming distance increases as the compression ratio increases; this is the general trend when using a large database of images. The number of valid bits compared does not follow this trend. We attribute this to the artifacts introduced by compression, which alter the spatial makeup of the image and are therefore reflected in changes to the masks used in computing the Hamming distances. Overall, the mean qualities of the databases used are shown in Table 6.

Table 6 Database qualities.

6. Conclusions

As expected, and as shown in other research, recognition performance degrades as iris images are compressed more heavily. The FAR remains fairly unaffected by changes in the image data, while the FRR is noticeably affected. Compression introduces artifacts into the iris images that alter the distinct patterns present in the originals, making the compressed images more dissimilar. In some cases, the changes introduced by compression were small enough that an original image and its compressed version produced the same template; the cases of zero Hamming distance between compression ratios came about through a combination of small changes in the phase and mask bits such that none of the changed phase bits were actually counted. In general, though, the net effect is that comparing compressed images of the same eye yields higher HDs, shifting the performance curve to the right and resulting in higher FRRs.

The importance of correct segmentation cannot be overemphasized. Poor segmentation leads to poor results, and in fact can lead to false matches if too few bits are compared in computing the raw Hamming distance (7). The normalized Hamming distance (8) was developed to avoid this occurrence. Controls can also be built into code to preclude this possibility whenever the number of bits compared between two templates is below some minimum.

In general, when images are not compressed, higher-quality images generate higher recognition accuracy, as should be expected. When images are compressed, the original patterns within the iris are suppressed and new artificial compression artifacts/patterns are added, which tends to decrease recognition accuracy; as the compression ratio increases, recognition accuracy decreases. When using a small database, however, this effect may not be reflected in the recognition results. For some images in a small database, the compression process can introduce stable, unique patterns that in some cases increase recognition accuracy. This is why we see fluctuations in recognition accuracy across different compression ratios, as well as fluctuations in the number of bits compared. In addition, different iris images react differently to compression due to the characteristics of their patterns: the quality of some images may be reduced dramatically by compression, while others are barely affected.

Overall, the iris images in this research were subjected to considerable compression, yet recognition performance was only minimally affected. This is significant, particularly when compared to the FBI's wavelet scalar quantization (WSQ) compression of fingerprint images: in the FBI standard, fingerprints can be lossily WSQ-compressed to a maximum ratio of 15 : 1 [17], while in this research the images were compressed up to 100 : 1. This demonstrates the effectiveness of JPEG-2000 compression and its ability to preserve the important information through the compression process. Of further note, the iris images here were compressed without the benefit of the region-of-interest options available in JPEG-2000, which might allow even twice the compression with comparable results.

References

  1. Daugman J: How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology 2004, 14(1):21-30. doi:10.1109/TCSVT.2003.818350
  2. Daugman JG: High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence 1993, 15(11):1148-1161. doi:10.1109/34.244676
  3. Daugman J: The importance of being random: statistical principles of iris recognition. Pattern Recognition 2003, 36(2):279-291. doi:10.1016/S0031-3203(02)00030-4
  4. Registered Traveler Interoperability Consortium (RTIC) Technical Interoperability Standard, Version 1.2. http://www.rtconsortium.org/_docpost/RTICTIGSpec_v1.2.pdf
  5. Hong J, Hwang J, Shah S, Meyerhoff T: The iCAP and SDKs are licensed commercial products.
  6. NIST's Iris Challenge Evaluation (ICE). http://iris.nist.gov/ICE/
  7. The JPEG-2000 Standard. May 2010, http://www.jpeg.org/jpeg2000/index.html
  8. Ives R, Broussard R, Kennell L, Soldan D: Effects of image compression on iris recognition system performance. Journal of Electronic Imaging 2008, 17(1).
  9. Daugman J, Downing C: Effect of severe image compression on iris recognition performance. IEEE Transactions on Information Forensics and Security 2008, 3(1):52-61.
  10. The JasPer Project. June 2007, http://www.ece.uvic.ca/~mdadams/jasper/
  11. Belcher C, Du Y: A selective feature information approach for iris image-quality measure. IEEE Transactions on Information Forensics and Security 2008, 3(3):572-577.
  12. Zhou Z, Du Y, Belcher C: Transforming traditional iris recognition systems to work on non-ideal situations. IEEE Transactions on Industrial Electronics 2009, 56(8):3203-3213.
  13. Cover T, Thomas J: Elements of Information Theory. John Wiley & Sons, New York, NY, USA; 1991.
  14. Du Y, Ives R, Bonney B, Etter D: Analysis of partial iris recognition. Biometric Technology for Human Identification II, March 2005, Orlando, Fla, USA, Proceedings of SPIE 5779:31-40.
  15. Du Y, Bonney B, Ives RW, Etter DM, Schultz R: Analysis of partial iris recognition using a 1D approach. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), March 2005, 2:961-964.
  16. Daugman J: New methods in iris recognition. IEEE Transactions on Systems, Man, and Cybernetics 2007, 37(5).
  17. The Federal Bureau of Investigation (FBI)'s Forensic Handbook. http://www.fbi.gov/hq/lab/handbook/forensics.pdf


Acknowledgments

For the iCAP software implementation of the Daugman algorithm and advice on the use of the SDK, the authors gratefully acknowledge Dr. Jun Hong, Chief Scientist, Mr. Joseph Hwang, Senior Software Engineer, Mr. Samir Shah, Senior Software Engineer, and Mr. Tim Meyerhoff, Project Manager, LG Electronics U.S.A. Inc., Iris Technology Division. This work was supported in part by the Department of Defense and the National Institute of Justice (Award no. 2007-DE-BX-K182). This work was conducted under USNA IRB approval no. USNA.2007.0004-CR01-EM4-A.

Author information

Correspondence to Robert W. Ives.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Ives, R.W., Bishop, D.A., Du, Y. et al. Iris Recognition: The Consequences of Image Compression. EURASIP J. Adv. Signal Process. 2010, 680845 (2010). https://doi.org/10.1155/2010/680845