
A New User Dependent Iris Recognition System Based on an Area Preserving Pointwise Level Set Segmentation Approach

Abstract

This paper presents a new user-dependent approach to iris recognition. In the proposed method, the consistent bits of the iris code are determined, based on the user's specifications, using a user-specific mask. Another contribution of our work lies in the iris segmentation phase, where a new pointwise level set approach with area-preserving capability is used to determine the inner and outer iris boundaries, both obtained in a single step. Thanks to the special properties of this segmentation technique, there is no constraint on the head tilt angle. Furthermore, we show that this algorithm is robust in noisy situations and can locate irises that are partly occluded by eyelids and eyelashes. Experimental results on three renowned iris databases (CASIA-IrisV3, Bath, and Ubiris) show that our method outperforms several existing methods in terms of both accuracy and response time.

1. Introduction

The demand for high-confidence authentication of human identity has grown steadily since the beginning of organized society. Identification systems that exploit the unique patterns of the human iris play an important role in this field. In comparison with other biometrics, iris recognition systems have many advantages. Since the degree of freedom of iris textures is extremely high, the probability of finding two identical irises is close to zero; therefore, iris recognition systems are very reliable and can be used in highly secure facilities [1–3].

A typical iris recognition system consists of several major steps, including image acquisition, iris localization, feature extraction, and matching and classification. In this paper, we use standard iris datasets; therefore, we do not focus on the image acquisition phase. The other parts of an iris recognition system are discussed below.

One of the most important steps in iris recognition systems is iris localization, which consists in detecting the exact location and contour of the iris in an image. Obviously, the performance of the identification system is closely related to the precision of the iris localization step [1, 2]. For iris localization, existing methods mainly use circular edge detectors or other standard image processing techniques, detecting the iris location with derivative operators that calculate the sum of gray level differences along a vertical arc. Since the upper and lower parts of the outer iris boundary are usually obstructed by eyelids, a complete circle often cannot be used to represent the iris boundary, and two vertical arcs are used instead. In these methods, the result of the localization algorithm depends on the tilt angle of the iris and on the quality of the boundaries [1, 2, 4]. For example, if some parts of the boundaries are occluded by eyelids and eyelashes, the performance of these algorithms degrades considerably, and in some cases they fail. Another source of error is the presence of other parts of the face in the input image.

In [1], Daugman introduces a circular edge detection operator for iris localization, which tries to find the circle in the image with the maximum gray level difference with its neighbors. In his method, thanks to the significant contrast between the iris and the pupil region, the inner boundary is localized first. Then, the outer boundary is detected using the same operator with different radii and parameters. In order to remove eyelids, Daugman changes the curve of integration to find an arc which accurately follows the iris boundaries. As features, he uses the signs of the real and imaginary parts of the Gabor wavelet coefficients of the iris image. In the matching phase, the Hamming distance between the binary code of the query iris and the codes of the irises in the database is calculated. In his recent work [5], Daugman proposed four modifications of his algorithm: (1) using active contour models (the Snake model) for iris localization, (2) handling off-axis gaze samples using Fourier-based methods, (3) using statistical methods for detecting eyelashes, and (4) score normalization for large databases.

An alternative for iris segmentation and localization has been proposed by Camus and Wildes [3], which is based on an edge detection operator followed by a Hough transform. This method has a high computational cost, since it searches among all of the potential candidates. For eyelid detection, Wildes uses some constraints to locate the true edge points.

The Snake approach has been used for iris localization in [6]. Using this technique, the boundary of the iris is located without any circularity constraint. In [7], an easy-to-difficult strategy has been used for iris localization: first, the high-contrast parts of the boundary are determined, and then the outer boundary and eyelids are detected. Because of the lower SNR, each step is more challenging than the previous one. For exact inner boundary detection, the authors used a Haar wavelet transform followed by a modified Hough transform. In the next step, the outer boundary is localized with integro-differential operators. Since the search space for the center and radius of the inner boundary can be limited, the speed of the algorithm is considerably improved. In the last step, eyelids are detected in the image using a method based on texture segmentation.

Sun et al. [8] proposed iris localization using texture segmentation. First, they use the low-frequency information of the wavelet transform of the iris image for pupil segmentation and localize the iris with an integro-differential operator. Then, they detect the upper eyelid along with eyelash segmentation. Finally, the lower eyelid is localized using parabolic curve fitting based on gray level segmentation.

Huang et al. [9] used a new noise removal approach based on the fusion of edge and region information. The whole procedure includes three steps: rough localization and normalization, edge information extraction based on phase congruency, and the fusion of edge and region information. They perform iris segmentation by simple filtering, edge detection, and the Hough transform. This method is specifically proposed for removing eyelash and pupil noise. Boles and Boashash [10] and Lim et al. [11] mainly focused on iris image representation and feature matching without introducing a new segmentation method.

Tisse et al. [12] proposed a segmentation method based on integro-differential operators combined with the Hough transform. This approach reduces the computation time and excludes potential centers outside of the eye image. Eyelash and pupil noise are not considered in this method either.

Kong and Zhang [13] presented a method for eyelash detection. Separable and multiple eyelashes are detected using 1D Gabor filters and the variance of intensity, respectively. In this work, specular reflection regions in the eye image are localized using a predetermined threshold value. Thornton et al. [14] used a general probabilistic framework for matching iris patterns, which improves pattern matching performance when the iris tissue is subject to in-plane warping.

Monro et al. [15] present a novel iris coding algorithm based on differences of Discrete Cosine Transform (DCT) coefficients of overlapped angular patches from the normalized iris image. Iris localization relies on the assumed circular shape of the iris boundaries.

Other methods exist for iris localization, including [12, 16]; however, the above-mentioned techniques are the most cited in the literature. There are also a few papers surveying the iris recognition literature; among them, Bowyer et al. [2] is one of the most comprehensive.

We used an active contour-based localization method in [4]. In this paper, we improve this method and test its performance on three well-known databases, namely, CASIA-IrisV3 [17], Bath [18], and the Ubiris database of Proença and Alexandre [19]. The results show the superiority of our proposed method in comparison with other methods, including the method proposed in [6], which is also based on geodesic active contours for iris localization. The details are discussed in Section 2.

In [19], new approaches for localization have been introduced. In their paper, the authors use a dataset of irises with heterogeneous characteristics, simulating the dynamics of a noncooperative environment. Their method builds a feature set from the pixel position and intensity and applies a fuzzy clustering algorithm to cluster the pixels. In Section 4, we compare our proposed method with their results.

Considering the above-mentioned methods, we can state the following important remarks on the drawbacks of the existing approaches.

  1. Usually, the iris inner and outer boundaries are detected using circle fitting techniques (except the recent works of Daugman [5] and Ross and Shah [6]). This is a source of error, since the iris boundaries are not exactly circles.

  2. In almost all of these methods, the inner and outer boundaries, eyelashes, and eyelids are detected in separate steps, causing a considerable increase in the processing time of the system.

  3. The results of circle fitting methods are sensitive to image rotation, particularly if the angular rotation of the input image is more than 10 degrees.

  4. In noisy situations, the outer boundary of the iris does not have sharp edges.

  5. After detecting the iris boundaries, the resulting iris area is mapped into a size-independent rectangular area.

  6. None of these methods takes the user specifications into account.

Considering these remarks, we propose a new user-specific iris recognition system with the following contributions.

  (i) We use a pointwise area-preserving level set approach for iris localization, which guarantees correct segmentation of the iris, even in noisy environments and regardless of head tilt and occlusion. Although active contours have also been used for localization in [5, 6], our proposed method has many advantages over those approaches (we discuss these advantages in detail in Section 2).

  (ii) We propose a new user-dependent method which improves the recognition performance of the system.

In [4], we explained how to use the pointwise level set with area-preserving capability for iris localization purposes. We also introduced a method for mapping the initial coordinates to polar space based on the estimated location of the pupil center. In this paper, in order to reduce the complexity of the polar mapping calculations, we propose an improved version of the above-mentioned method, based on the point trajectories of the moving contour. We show the results of the new method on the CASIA-IrisV3, Bath, and Ubiris datasets.

The rest of the paper is organized as follows. Section 2 briefly describes the theory of pointwise level set approach with area preserving capability. Section 3 is dedicated to the user dependency in iris recognition systems. Experimental results are presented in Section 4 and Section 5 concludes the paper.

2. Iris Localization with Pointwise Level Set Approach

In this approach, the moving front is defined as the zero level of a higher-dimensional potential function [20]. Consequently, the curve corresponding to the zero level set of this potential function can handle topological changes, such as splitting and merging. Furthermore, it is not necessary to initialize the algorithm very close to the final contour, as is the case with the Snake model. According to the level set model, the initial curve is deformed using the following evolutionary equation:

$$\frac{\partial C}{\partial t} = F\,\vec{N}, \qquad\qquad (1)$$

where $F$ is any intrinsic quantity that does not depend on the curve parameterization, $\vec{N}$ is the normal vector, and $\phi$, as the implicit representation of the curve $C$, is defined as

$$\phi\big(C(t),\,t\big) = 0. \qquad\qquad (2)$$

A signed distance measure can be used for initializing the potential function $\phi$: each point of the three-dimensional potential function is initialized with the minimum distance of that point to the contour. More details on this subject are available in [20]. The evolution of $\phi$ is such that the movement of its zero level corresponds to the deformation of the initial curve. This evolution may be described by the following equation:

$$\frac{\partial \phi}{\partial t} + F\,|\nabla \phi| = 0. \qquad\qquad (3)$$

This equation shows that the rate of change of the potential function in time depends on the speed parameter $F$ and the magnitude of the gradient of $\phi$. The speed $F$ has three components: a balloon force (which causes all parts of the contour to move), a curvature-based speed, and a gradient-based speed [20]. Due to the high performance of active contour-based models for localization purposes, several references in the literature are based on these models [4–6]. As briefly mentioned in Section 1, Daugman [5] proposed a method for iris segmentation using the Snake model [21]. Despite the advantages of Snakes over traditional object recognition algorithms, the model has some important drawbacks due to its Lagrangian formulation. In the Snake model, contour initialization is a crucial point: if the initial contour is far from the target, it may never reach it. Another important disadvantage of this model is the performance reduction caused by the point-based structure of the contour, where a few unwanted pixels can lead to wrong localization results. In order to overcome these drawbacks, new models have been introduced based on Eulerian formulations [20]. These models consider the moving contour as a level set of a higher-dimensional function, which is reshaped during the iterations. Briefly speaking, the Eulerian formulation connects the derivatives in time and space [20]. Because of this property, if noisy pixels cause some parts of the contour to stop, the other moving parts prevent the whole contour from stopping. Another advantage of this approach is its robustness to contour initialization: because of the combination of the different forces that drive the contour, almost any initialization leads to the same result (Figure 1).
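For illustration only, the discrete update implied by (3) can be sketched as follows in Python/NumPy. This is a rough sketch under simplifying assumptions, not the implementation used in this work; a production solver would use upwind differencing and periodic reinitialization of the signed distance function, as discussed in [20].

```python
import numpy as np

def level_set_step(phi, F, dt=0.1):
    """One explicit update of the level set PDE  phi_t + F * |grad(phi)| = 0.

    phi : 2-D array, signed-distance-like potential function
    F   : 2-D array or scalar speed (balloon, curvature-based, and
          gradient-based terms, combined by the caller as in [20])
    """
    # spatial gradient of phi by central differences (a real solver would
    # choose an upwind scheme according to the sign of F)
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    # the zero level of phi moves along its normal direction with speed F
    return phi - dt * F * grad_mag
```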

Figure 1: (a) Three-dimensional function of the level set approach; (b) result of applying the zero level set method to an iris image taken from CASIA-IrisV3.

Another related work is that of Ross and Shah [6], who use geodesic active contour models for iris segmentation. The structures of geodesic active contours and level set methods are similar; therefore, both can handle noisy situations and initialization problems properly. The major difference between Ross's method and the method proposed in this paper is the following. Due to the structure of geodesic active contours, they lack the point correspondence property; therefore, it is impossible to find the corresponding points on the initial and final contours. We use a point-correspondence level set approach [22], which, in addition to the regular abilities of level sets, keeps the point correspondence during the iterations [4]. This ability enables us to perform both localization and mapping to the dimensionless coordinate system in a single phase, an interesting property which improves the performance of the whole system. Another advantage of our proposed method, in comparison with Ross's work, is that we use an area-preserving method [23] in our level set formulation, which makes our method robust to blurred images. If the boundaries of an iris image are blurred, a plain level set method is not able to determine the exact location of the blurred parts of the boundaries and stop moving there; in our proposed method, thanks to its area-preserving property, even if some parts of the boundaries are blurred, the rest of the contour prevents unwanted local movement of the contour in the blurred image. This property allows us to determine the exact target boundaries (Figure 2). This is done by defining an application-specific normal motion, combined with an adequate tangential speed. More details are available in [23].
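The following sketch illustrates the point-correspondence idea of [22] under our own simplifying assumptions (it is not the authors' code): explicit marker points seeded on the initial contour are advected with the same normal speed field that drives $\phi$, so that when the zero level settles on the iris boundary, each marker still carries its original angular index and the polar mapping is obtained for free.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect_markers(points, phi, F, dt=0.1):
    """Move explicit (row, col) marker points with the normal speed field
    F * grad(phi)/|grad(phi)| that also evolves the implicit contour.

    points : (K, 2) float array of (row, col) marker positions
    phi, F : the same arrays used in the level set update
    """
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    vy, vx = F * gy / norm, F * gx / norm      # velocity field components
    rows, cols = points[:, 0], points[:, 1]
    # sample the velocity field at the sub-pixel marker positions
    dy = map_coordinates(vy, [rows, cols], order=1)
    dx = map_coordinates(vx, [rows, cols], order=1)
    return points + dt * np.stack([dy, dx], axis=1)
```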

Figure 2: Iris segmentation of noisy samples (a) without and (b) with the area-preserving capability.

3. Template Generation with User Dependency

According to Hollingsworth et al. [24], it is possible to use weighted iris codes during the Hamming distance estimation phase. This means that different bits in an iris code do not have the same importance. Based on this idea, we propose a new user-dependent method for iris recognition. After mapping the segmented iris area to the dimensionless polar coordinates, as explained in Section 2, the iris texture is transformed into a binary code, using the signs of the real and imaginary parts of the log-Gabor wavelet coefficients of the iris image. As can be seen in Figure 3, considering the quadrant of each log-Gabor coefficient in the real-imaginary plane, a two-bit binary code can be assigned to each coefficient.

Figure 3: Real and imaginary axes and the related binary codes.

Figure 4: Iris features in the real/imaginary plane. The features near the axes are more inconsistent than the others.

Gabor filters are a traditional choice for obtaining localized frequency information, and thanks to their similarity to the human vision system [1], these filters are widely used in the iris feature extraction phase. However, they suffer from two major drawbacks: (1) the maximum bandwidth of a Gabor filter is limited to approximately one octave, and (2) Gabor filters are not optimal if one is seeking broad spectral information with maximal spatial localization. Considering these points, we used log-Gabor filters [25] for feature extraction. Equation (4) shows this filter:

$$G(f) = \exp\!\left(\frac{-\big(\log(f/f_0)\big)^2}{2\big(\log(\sigma/f_0)\big)^2}\right), \qquad\qquad (4)$$

where $f_0$ is the filter's center frequency. To obtain constant-shape-ratio filters, the term $\sigma/f_0$ must be held constant for different values of $f_0$.
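For concreteness, a minimal 1-D log-Gabor construction in the frequency domain is sketched below. The center frequency of 1/18 cycles per pixel and the ratio $\sigma/f_0 = 0.5$ are common illustrative choices, not parameters reported in this paper.

```python
import numpy as np

def log_gabor(n, f0, sigma_over_f0=0.5):
    """Frequency response G(f) = exp(-(log(f/f0))^2 / (2*(log(sigma/f0))^2)).

    n             : number of samples in one row of the normalized iris strip
    f0            : center frequency in cycles/sample (0 < f0 < 0.5)
    sigma_over_f0 : kept constant to obtain constant-shape-ratio filters
    """
    f = np.fft.fftfreq(n)
    G = np.zeros(n)
    pos = f > 0                        # no response at DC or negative frequencies
    G[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) /
                    (2 * np.log(sigma_over_f0) ** 2))
    return G

# two feature bits per coefficient: the signs of the real and imaginary parts
row = np.random.rand(256)              # stand-in for one normalized iris row
coeff = np.fft.ifft(np.fft.fft(row) * log_gabor(256, f0=1 / 18))
bits = np.stack([coeff.real >= 0, coeff.imag >= 0], axis=1).astype(np.uint8)
```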

It must be mentioned that using these filters is not an original contribution of this work (see [26]). Considering the real and imaginary parts of the filter outputs, the iris texture can be mapped to an iris code, and, as mentioned in [24], depending on the distance of the coefficients from the axes, it is possible to estimate the probability of bit consistency. For each user, the iris codes of different samples are calculated, and by comparing these codes, the probability of change of each bit is determined. By choosing a threshold, it is possible to judge the consistency of each bit. Details about the consistency of bits in iris codes can be found in [27].

In [27], the existence of fragile bits in the iris code has been theoretically proved, and the effects of the applied filters, image rotation, and iris alignment have been discussed in detail. In our work, we used their idea about bit consistency in the iris code and developed an applied method for iris recognition systems. Figure 5 shows the performance of the proposed method for different thresholds when only the consistent bits are used in the iris code generation phase. As can be seen, the best results are obtained with a threshold of 35%. In addition, the comparison between the performance of our system considering all bits of the iris code and the same system considering only the consistent bits shows the positive effect of masking the fragile bits. For each user, the proper rectangular region is calculated, and the features inside this region are eliminated from the iris code generation process.
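One simple way to realize this per-user masking, sketched under our own assumptions about the data layout (aligned boolean iris codes, one row per enrollment sample), is shown below; the 35% threshold follows the operating point of Figure 5.

```python
import numpy as np

def consistency_mask(enroll_codes, threshold=0.35):
    """Mark a bit as consistent if it disagrees with the per-bit majority
    vote in at most `threshold` of the user's enrollment samples.

    enroll_codes : (n_samples, n_bits) boolean array of aligned iris codes
    returns      : boolean mask, True = consistent bit kept for matching
    """
    codes = np.asarray(enroll_codes, dtype=bool)
    majority = codes.mean(axis=0) >= 0.5
    flip_fraction = (codes != majority).mean(axis=0)
    return flip_fraction <= threshold

def masked_hamming(code_a, code_b, mask):
    """Hamming distance restricted to the user's consistent bits."""
    m = np.asarray(mask, dtype=bool)
    return np.count_nonzero(code_a[m] != code_b[m]) / max(int(m.sum()), 1)
```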

Figure 5: Comparison of ROC curves of our proposed method using all bits of the iris code and using only the consistent bits with different thresholds. As can be seen, the performance of the system considering consistent bits with a threshold equal to 35% is the best (tests using the CASIA-IrisV3 dataset).

To achieve rotation invariance, in this phase, as in Daugman's method [1], the enrolled iris code is compared with differently shifted versions of the test iris code to find the best match.
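The shift-and-match step can be sketched as follows; this is our illustration, and the shift range and the two bits per angular sample are assumptions, not values from the paper.

```python
import numpy as np

def best_shift_distance(enrolled, probe, mask, max_shift=8, bits_per_angle=2):
    """Rotation-tolerant comparison of two flat iris codes.

    Each in-plane rotation step corresponds to a circular shift of
    `bits_per_angle` bits; the minimum masked Hamming distance over all
    tested shifts is returned.
    """
    m = np.asarray(mask, dtype=bool)
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(probe, s * bits_per_angle)
        d = np.count_nonzero((enrolled != shifted) & m) / max(int(m.sum()), 1)
        best = min(best, d)
    return best
```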

Figure 6 shows the calculated masks for three persons using samples in CASIA-IrisV3 and Bath iris databases. In this figure, black and white points show consistent and inconsistent bits, respectively.

Figure 6: Three samples of masks used for choosing consistent bits in iris codes. The two upper masks are related to two subjects in CASIA-IrisV3, and the last one corresponds to a subject in the Bath iris database.

4. Experimental Results

In our experiments, we used all samples of three well-known iris databases: CASIA-IrisV3, Bath, and Ubiris. CASIA-IrisV3 includes three subsets, labeled CASIA-IrisV3-Interval, CASIA-IrisV3-Lamp, and CASIA-IrisV3-Twins. CASIA-IrisV3 contains a total of 22,051 iris images from more than 700 subjects. All iris images are 8-bit gray-level JPEG files, collected under near infrared illumination. Almost all subjects are Chinese, except a few in CASIA-IrisV3-Interval. Since these three subsets were collected at different times, CASIA-IrisV3-Interval and CASIA-IrisV3-Lamp have a small overlap in subjects. Some samples from this database are shown in Figure 7(a). The Bath iris database includes 20 samples from each eye of 25 subjects. The images are of very high quality, taken with a professional machine vision camera under infrared illumination. Some of these images are shown in Figure 7(b).

Figure 7: Some samples taken from (a) the CASIA-IrisV3 database, (b) the Bath database, and (c) the Ubiris Version 1 database.

The Ubiris iris database version 1 is composed of 1877 images collected from 241 subjects in two sessions (Figure 7(c)). Unlike the CASIA-IrisV3 database, it includes images in different noisy situations, which makes it possible to evaluate the robustness of iris recognition methods in the presence of noise [19].

To evaluate the performance of our algorithm, we used the K-fold cross-validation technique. For the CASIA-IrisV3 database, three iris samples per subject were used to extract the user-dependent iris code, and the rest of the samples were used to test the algorithm. For the Bath database, the number of samples used to extract the code is five. We repeated this procedure so that all of the iris images were used in the K-fold cross-validation strategy.

In this work, the precise location of the iris is determined using the pointwise level set approach with area-preserving capability. Generally speaking, active contour models have been used previously in iris recognition systems [6]. Although "active contour" refers to a family of moving contour methods, in some papers it corresponds specifically to the Snake technique [5]. In the previous sections, we described the drawbacks of the Snake model. Geodesic active contours with point correspondence were used for iris segmentation in [4]. In this paper, we propose a method based on the pointwise level set approach with area-preserving capability.

We calculate the approximate center of the inner boundary of the iris using vertical and horizontal histograms (Figure 8). Using this technique, the initial point of the contour is determined, and the starting point for tracing the contour is selected (for the coordinate mapping to the dimensionless polar space).

Figure 8: (a) Horizontal histogram, (b) vertical histogram, (c) overall histogram of the image, and (d) estimated center.

The vertical histogram is calculated as follows: the size of the vertical histogram is equal to the image height, and the value of each histogram bin is equal to the sum of the gray levels of the corresponding row of the image. The minimum of this histogram corresponds approximately to the vertical location of the center of the (almost circular) inner boundary. Indeed, pixels located in the pupil region are always dark; therefore, their values are close to 0. Thus, the minimum of the histogram indicates the row containing the largest number of dark pixels, that is, the row passing through the diameter of the inner boundary circle. The intersection of this row with the corresponding output of the horizontal histogram gives the approximate location of the center point (Figure 8). Our experimental results show that we can locate the pupil center at a point inside the pupil, even for difficult samples containing other dark areas in the eye image. For the image samples of the datasets used in this paper, all pupils are placed almost in the center of the image.
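A minimal sketch of this center estimation, reflecting our reading of Figure 8 rather than the authors' code, is given below.

```python
import numpy as np

def estimate_pupil_center(img):
    """Rough pupil center from row/column gray-level sums (cf. Figure 8).

    img : 2-D gray-level eye image with a dark pupil. The row and column
    with the smallest sums contain the most dark pixels, so their
    intersection is expected to fall inside the pupil.
    """
    row_sums = img.sum(axis=1, dtype=np.int64)   # "vertical histogram"
    col_sums = img.sum(axis=0, dtype=np.int64)   # "horizontal histogram"
    return int(np.argmin(row_sums)), int(np.argmin(col_sums))   # (row, col)
```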

In order to obtain a correct contour initialization, the initial contour around the estimated pupil center is determined using (5) (Figure 9). The contour starts to evolve from this initialization and is expected to capture the whole iris region.

Figure 9: Inner and outer boundary detection using the pointwise level set approach, performed in one step, and the related iris codes.

For calculating d from the approximate center, a one-dimensional derivative along the right horizontal axis is computed. The distance d is equal to the length of the line between the approximate center and a few pixels beyond the detected edge (in our experiments, d is an integer between 10 and 30):

(5)
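Since the exact form of (5) is not reproduced here, the following sketch only illustrates our reading of the text: a 1-D derivative to the right of the estimated center locates the strongest gray-level jump, and the initial contour is a circle of radius d around that center, encoded as a signed distance function for the level set evolution. The `extra_px` offset is our assumption.

```python
import numpy as np

def initial_phi(img, center, extra_px=10):
    """Signed distance function for a circular initial contour of radius d.

    A 1-D derivative along the right horizontal axis from the approximate
    center locates the first strong edge; d is the distance to a few pixels
    beyond that edge (the paper reports d values between 10 and 30).
    """
    cy, cx = center
    profile = img[cy, cx:].astype(np.float64)
    edge = int(np.argmax(np.abs(np.diff(profile))))   # strongest gray-level jump
    d = edge + extra_px
    rows, cols = np.indices(img.shape)
    return np.sqrt((rows - cy) ** 2 + (cols - cx) ** 2) - d   # phi < 0 inside
```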

The proposed one-step segmentation approach improves the speed of the whole process in comparison with regular two-step boundary detection methods.

This method is robust in noisy situations. A noisy pixel causes a sudden variation in gray levels and can stop the moving front. However, in this situation, the other contour points continue to move and prevent the curve from stopping. Figure 10 shows the results of applying our method to an iris image corrupted with Gaussian white noise (even though encoding the iris texture is almost impossible in such an image). During the detection process, some parts of the iris boundaries may have low gray level contrast, which may lead the algorithm to inaccurate edge detection results. To solve this problem, we have used the area-preserving algorithm [23], which guarantees correct iris segmentation. Figure 11 shows the result of applying our algorithm to iris images with 10 and 15 percent salt and pepper noise.
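For reference, noisy test copies of an eye image like those used for Figures 10 and 11 can be generated as follows; this is our own sketch of the test setup, not the authors' code.

```python
import numpy as np

def make_noisy_copies(img, rng=np.random.default_rng(0)):
    """Noisy test copies as described for Figures 10 and 11:
    Gaussian white noise (mean 0, variance 0.007) and 10%/15% salt-and-pepper."""
    x = img.astype(np.float64) / 255.0
    out = {"gaussian_0.007":
           np.clip(x + rng.normal(0.0, np.sqrt(0.007), x.shape), 0.0, 1.0)}
    for amount in (0.10, 0.15):
        sp = x.copy()
        m = rng.random(x.shape)
        sp[m < amount / 2] = 0.0                      # pepper pixels
        sp[(m >= amount / 2) & (m < amount)] = 1.0    # salt pixels
        out[f"salt_pepper_{int(amount * 100)}"] = sp
    return out
```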

Figure 10: Performance of the proposed algorithm in the presence of Gaussian noise. For both images, Gaussian white noise with mean = 0 and variance = 0.007 has been added.

Figure 11: Performance of the proposed algorithm on iris images with (a) 10 percent and (b) 15 percent salt and pepper noise.

In general, noncooperative iris images cause serious performance degradation. We used the Ubiris iris database version 1 [28] to test the ability of our localization method to deal with noncooperative iris images. Our experimental results show that our method is able to handle blurred and occluded images, localizing the iris boundaries properly (Figure 12 and Table 1).

Table 1: Comparison of the localization accuracy of different methods on the Ubiris database. All table entries are taken from reference [19], except the last row, which contains the results obtained with our approach.
Figure 12: Localization of two samples from the Ubiris database with the proposed method.

We tested our localization algorithm on the Ubiris dataset and compared the results with those published in [19]. The results in [19] were obtained by visual inspection of each segmented image. Although this is not ideal for a meaningful comparison, we did the same for the localization evaluation of our system. Table 1 shows these results, which demonstrate the performance of our algorithm even on poor quality images. Indeed, in terms of degradation, the lowest accuracy loss in the presence of noise belongs to our method, showing the low sensitivity of our approach to the image conditions.

4.1. Error Definition

In order to measure the error of our method, we compared the points of the detected boundaries with those of the real boundaries. First, the exact boundary contours for the inner and outer parts of the irises are determined point by point manually. Then, the sum of the distances between the interface points and their nearest points on the correct boundary is calculated. The total localization error is estimated using

$$E = \frac{1}{N}\sum_{i=1}^{N} \min\big(d(x_i, B)\big), \qquad\qquad (6)$$

where $B$ is the correct boundary, $d(x_i, B)$ denotes the set of distances between the $i$th interface point $x_i$ and all of the points of the correct curve, and $N$ is the total number of interface points. Although a global system performance measure such as the ROC curve could be a better measure of performance, by introducing this error measure we intend to evaluate the performance of our segmentation module exclusively. Figure 13 shows the localization errors, according to (6), for the proposed method and the traditional circle-based method, using samples of the CASIA-IrisV3 and Bath iris databases in noisy situations.
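A direct implementation of this error measure, assuming both boundaries are given as point lists, might look as follows.

```python
import numpy as np

def localization_error(interface_pts, true_pts):
    """Mean distance from each detected interface point to its nearest
    point on the manually marked correct boundary, as in (6).

    interface_pts : (N, 2) array of detected boundary points (row, col)
    true_pts      : (M, 2) array of ground-truth boundary points (row, col)
    """
    diff = interface_pts[:, None, :] - true_pts[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))      # (N, M) pairwise distances
    return float(dists.min(axis=1).mean())
```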

Figure 13: Error comparison between the circle-based method and the proposed method in a noisy situation (salt and pepper noise).

4.2. Response Time

Figures 14 and 15 show the response times of the proposed method on the CASIA-IrisV3 and Bath iris databases. We implemented the methods of Daugman [5], Ma et al. [7], and Monro et al. [15] to compare their results with those of our proposed method. The average response time of our method under the same conditions is lower than that of the others. In addition, the small standard deviation of our method's response time indicates its suitability for real-time applications.

Figure 14: Response times of the (a) proposed, (b) Daugman [5], (c) Monro et al. [15], and (d) Ma et al. [7] methods using the CASIA-IrisV3 database.

Figure 15: Response times of the (a) proposed, (b) Daugman [5], (c) Monro et al. [15], and (d) Ma et al. [7] methods using the Bath iris database.

4.3. Hamming Distance

After generating the iris code, the result is compared with the iris codes in the database using the Hamming distance operator. Depending on the user-dependent consistent bits, only the important bits of each iris code are involved in the matching process. Figures 16 and 17 show the calculated Hamming distances for Daugman's [5], Ma et al.'s [7], Monro et al.'s [15], and the proposed methods, for the CASIA-IrisV3-Interval and Bath iris datasets, respectively.

Figure 16: Hamming distances of the match (blue, bottom), nearest nonmatch (red, middle), and average nonmatch (black, top) for (a) Daugman [5], (b) Monro et al. [15], (c) Ma et al. [7], and (d) the proposed method using the CASIA-IrisV3-Interval database.

Figure 17: Hamming distances of the match (blue, bottom), nearest nonmatch (red, middle), and average nonmatch (black, top) for (a) Daugman [5], (b) Monro et al. [15], (c) Ma et al. [7], and (d) the proposed method using the Bath iris database.

4.4. ROC Curves

The ROC curves of the proposed method are compared with those of five different methods, tested on the CASIA-IrisV3 and Bath iris databases, in Figures 18 and 19, respectively. The results show the superiority of our method compared to the other methods. Figure 20 shows the performance of our method on iris samples with 5, 15, and 25 degrees of rotation, compared to the Boles and Boashash [10], Daugman [5], Ma et al. [7], Monro et al. [15], and Sun et al. [29] methods, tested on the CASIA-IrisV3 and Bath iris databases. One of the curves belongs to the proposed method; on each of the other curves, each point corresponds to the best result obtained among the other methods, for 5, 15, and 25 degrees of rotation, respectively. Indeed, we show only one curve for the different rotations applied to our proposed method, which is a proof of the robustness of this method against rotation. Concerning the other three curves in Figure 20, as mentioned, each curve is a pointwise combination of the best results of the other methods.

Figure 18: ROC curves of the proposed method in comparison with the (a) Boles and Boashash [10], Daugman [5], Ma et al. [7], and (b) Monro et al. [15], Hollingsworth et al. [27] methods using the CASIA-IrisV3 iris database.

Figure 19: ROC curves of the proposed method in comparison with the (a) Boles and Boashash [10], Daugman [5], Ma et al. [7], and (b) Monro et al. [15], Hollingsworth et al. [27] methods using the Bath iris database.

Figure 20: ROC curves of the proposed method in comparison with the best results of the Boles and Boashash [10], Daugman [5], Ma et al. [7], Monro et al. [15], and Sun et al. [29] methods under iris rotations (5, 15, and 25 degrees clockwise) using the (a) CASIA-IrisV3 and (b) Bath iris databases.

As can be seen, our method is robust against rotation, while rotation degrades the performance of the other methods considerably, due to their circular edge detection nature. In general, the circular edge detection process is based on determining the location of the circle with the maximum difference of pixel gray levels between two adjacent circular curves. In practice, these differences are calculated using two arcs instead of a whole circle. The performance of the iris localization depends on the location and angle of these arcs with respect to the iris axis; as a consequence, rotating the image degrades the results of circular edge detection, mainly because of the wrong arcs used in the process and the presence of eyelids and eyelashes. In contrast with these conventional methods, iris localization in the proposed method is based on an active contour model, which finds the iris boundaries independently of any geometric shape, including circles and arcs; therefore, it is robust to image rotation.

5. Conclusions

We have proposed a new user-dependent iris recognition method. Using a specific mask for each user, the inconsistent bits of the iris code are omitted during the Hamming distance comparison phase. As the experimental results show, using this approach, the performance of the whole system is improved considerably. Another contribution of this paper is the use of the pointwise level set approach with area-preserving capability for iris segmentation and localization. In this algorithm, the exact location of the iris is detected using an iterative algorithm based on the active contour model. Comparing our algorithm with other methods, we showed that the new approach solves some of the drawbacks of previous methods. For instance, using our method, the iris location can be detected regardless of its angular position and shape, and this is done in only one step. Previous methods usually detect the iris boundaries using circular edge detection; one of the disadvantages of this approximation is its sensitivity to the rotation of the iris images. In recent years, active contour models have been used for iris detection purposes; however, our method has some advantages over these approaches. Indeed, an area-preserving algorithm is used to compensate for incorrect iris boundary detection in the presence of noise. Furthermore, even when eyelids occlude part of the iris, our algorithm localizes the iris area properly [4]. The experimental results show that our method outperforms the current methods both in terms of accuracy and response time.

References

  1. Daugman JG: High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence 1993, 15(11):1148-1161.

  2. Bowyer KW, Hollingsworth K, Flynn PJ: Image understanding for iris biometrics: a survey. Computer Vision and Image Understanding 2008, 110(2):281-307.

  3. Camus TA, Wildes R: Reliable and fast eye finding in close-up images. Proceedings of the 16th International Conference on Pattern Recognition (ICPR '02), August 2002, Quebec, Canada, 1:389-394.

  4. Barzegar N, Moin MS: A new approach for iris localization in iris recognition systems. Proceedings of the 6th IEEE/ACS International Conference on Computer Systems and Applications (AICCSA '08), March-April 2008, Doha, Qatar, 516-523.

  5. Daugman J: New methods in iris recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B 2007, 37(5):1167-1175.

  6. Ross A, Shah S: Segmenting non-ideal irises using geodesic active contours. Proceedings of the Biometric Consortium Conference (BCC '06), September 2006, Baltimore, Md, USA, 1-6.

  7. Ma L, Tan T, Wang Y, Zhang D: Personal identification based on iris texture analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 2003, 25(12):1519-1533.

  8. Sun Z, Wang Y, Tan T, Cui J: Improving iris recognition accuracy via cascaded classifiers. IEEE Transactions on Systems, Man and Cybernetics, Part C 2005, 35(3):435-441.

  9. Huang J, Ma L, Tan T, Wang Y: Learning based enhancement model of iris. Proceedings of the 14th British Machine Vision Conference (BMVC '03), September 2003, Norwich, UK, 153-162.

  10. Boles WW, Boashash B: A human identification technique using images of the iris and wavelet transform. IEEE Transactions on Signal Processing 1998, 46(4):1185-1188.

  11. Lim S, Lee K, Byeon O, Kim T: Efficient iris recognition through improvement of feature vector and classifier. ETRI Journal 2001, 23(2):61-70.

  12. Tisse C, Martin L, Torres L, Robert M: Person identification technique using human iris recognition. Proceedings of the 15th International Conference on Vision Interface (VI '02), May 2002, Calgary, Canada, 294-299.

  13. Kong W-K, Zhang D: Detecting eyelash and reflection for accurate iris segmentation. International Journal of Pattern Recognition and Artificial Intelligence 2003, 17(6):1025-1034.

  14. Thornton J, Savvides M, Kumar V: A Bayesian approach to deformed pattern matching of iris images. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007, 29(4):596-606.

  15. Monro DM, Rakshit S, Zhang D: DCT-based iris recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007, 29(4):586-595.

  16. Zhu Y, Tan T, Wang Y: Biometric personal identification based on iris patterns. Proceedings of the 15th International Conference on Pattern Recognition (ICPR '00), September 2000, Barcelona, Spain, 2:801-804.

  17. CASIA-IrisV3 database: http://www.cbsr.ia.ac.cn/IrisDatabase.htm

  18. Bath iris database: http://www.bath.ac.uk/elec-eng/research/sipg/irisweb

  19. Proença H, Alexandre LA: Iris segmentation methodology for non-cooperative recognition. IEE Proceedings: Vision, Image and Signal Processing 2006, 153(2):199-205.

  20. Sethian JA: Level Set Methods and Fast Marching Methods. 2nd edition. Cambridge University Press, Cambridge, Mass, USA; 1999.

  21. Kass M, Witkin A, Terzopoulos D: Snakes: active contour models. Proceedings of the 1st International Conference on Computer Vision (ICCV '87), June 1987, London, UK, 259-268.

  22. Pons J-P, Hermosillo G, Keriven R, Faugeras O: Maintaining the point correspondence in the level set framework. Journal of Computational Physics 2006, 220(1):339-354.

  23. Pons J-P, Keriven R, Faugeras O: Area preserving cortex unfolding. Proceedings of the 7th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '04), September 2004, Saint-Malo, France, Lecture Notes in Computer Science 3216:376-383.

  24. Hollingsworth K, Bowyer KW, Flynn PJ: All iris code bits are not created equal. Proceedings of the 1st IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS '07), September 2007, Crystal City, Va, USA, 1-6.

  25. Field DJ: Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A 1987, 4(12):2379-2394.

  26. Yao P, Li J, Ye X, Zhuang Z, Li B: Iris recognition algorithm using modified Log-Gabor filters. Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), August 2006, Hong Kong, 4:461-464.

  27. Hollingsworth KP, Bowyer KW, Flynn PJ: The best bits in an iris code. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009, 31(6):964-973.

  28. Proença H, Alexandre LA: UBIRIS: a noisy iris image database. Proceedings of the 13th International Conference on Image Analysis and Processing (ICIAP '05), September 2005, Cagliari, Italy, Lecture Notes in Computer Science 3617:970-977.

  29. Sun Z, Wang Y, Tan T, Cui J: Improving iris recognition accuracy via cascaded classifiers. IEEE Transactions on Systems, Man and Cybernetics, Part C 2005, 35(3):435-441.


Author information


Correspondence to Nakissa Barzegar.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Barzegar, N., Moin, M.S. A New User Dependent Iris Recognition System Based on an Area Preserving Pointwise Level Set Segmentation Approach. EURASIP J. Adv. Signal Process. 2009, 980159 (2009). https://doi.org/10.1155/2009/980159

