
Small-Sample Error Estimation for Bagged Classification Rules

Abstract

Application of ensemble classification rules in genomics and proteomics has become increasingly common. However, the problem of error estimation for these classification rules, particularly for bagging under the small-sample conditions prevalent in genomics and proteomics, is not well understood. Breiman proposed the "out-of-bag" method for estimating statistics of bagged classifiers, and other authors subsequently applied it to estimate the classification error. In this paper, we give an explicit definition of the out-of-bag estimator that is intended to remove estimator bias, by formulating carefully how the error count is normalized. We also report the results of an extensive simulation study of bagging of common classification rules, including LDA, 3NN, and CART, applied to both synthetic and real patient data, in conjunction with common error estimators: resubstitution, leave-one-out, cross-validation, the basic bootstrap, bootstrap 632, bootstrap 632 plus, bolstering, and semi-bolstering, in addition to the out-of-bag estimator. The results of the numerical experiments indicate that the performance of the out-of-bag estimator is very similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically biased. The performance of the other estimators is consistent with their performance with the corresponding single classifiers, as reported in other studies.

1. Introduction

Ensemble classification methods combine the decisions of multiple classifiers designed on randomly perturbed versions of the available data [1–5]. The most popular version of this scheme is known as bootstrap aggregating, or "bagging" [4, 5], where the ensemble classifier corresponds to a majority vote among classifiers designed on bootstrap samples [6] drawn from the available training data.

There has been considerable recent interest in the application of bagging to the classification of both gene expression data [7–10] and protein-abundance mass spectrometry data [11–16]. The popularity of bagging rests on the expectation that combining the decisions of several classifiers will regularize and improve the performance of unstable, overfitting classification rules (so-called "weak learners"). In a related study [17], we investigated this claim in the context of small-sample genomics and proteomics data. A separate issue is the performance of error estimators for bagged classifiers. Accurate error estimation is a critical issue in genomics, as it decisively impacts the scientific validity of hypotheses derived from the application of pattern recognition methods to biomedical data [18–20]. On this topic, Breiman proposed a general method, which he called "out-of-bag," for estimating statistics of bagged classifiers [21], and other authors subsequently applied it to the estimation of the classification error [22, 23]. In this paper, we give an explicit definition of the out-of-bag estimator that is intended to remove estimator bias, by formulating carefully how the error count is normalized. The performance of out-of-bag estimators for general bagged classification rules is in fact not well understood, especially in connection with ensembles derived from classification rules other than decision trees (which were Breiman's primary interest). In addition, to our knowledge, no studies have attempted to assess the performance of error estimators for bagged classifiers in the context of genomics data, particularly in the small-sample setting usually found in these applications.

To investigate these issues, we conducted an extensive simulation study of bagging of common classification rules, including LDA, 3NN, and CART, applied to both synthetic and real patient data, in conjunction with common error estimators: resubstitution, leave-one-out, cross-validation, the basic bootstrap, bootstrap 632, bootstrap 632 plus, bolstering, and semi-bolstering, in addition to the out-of-bag estimator itself. We present here selected representative results; the full set of results can be found on the companion website, at http://gsp.tamu.edu/Publications/supplementary/oob. The results of the numerical experiments indicate that the performance of the out-of-bag error estimator is very similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically biased. The performance of the other estimators is for the most part consistent with their performance with the corresponding single classification rules assessed in other studies, with the best performance, in terms of root mean square error, being provided by the bolstered error estimators.

This paper is organized as follows. In Section 2, we review briefly the definition of bagged ensemble classification rules. In Section 3, we describe the error estimators considered in this study. In Section 4, we present the results of a large simulation study on the performance of error estimators with bagged classification rules. Finally, Section 5 provides concluding remarks.

2. Bagged Classification Rules

In pattern recognition, classification is the process of assigning a group label to an object, based on information available about it in the form of a data vector called a feature vector. Suppose we have a binary classification problem with feature vector $X \in \mathbb{R}^p$ and label $Y \in \{0, 1\}$. A classifier is a function $\psi : \mathbb{R}^p \to \{0, 1\}$. The stochastic properties of the classification problem are completely determined by the joint feature-label distribution $F_{XY}$ of the pair $(X, Y)$. $F_{XY}$ is, in practice, rarely known. Classification is implemented empirically, by means of the design of a classifier based on a finite set $S_n$ of $n$ i.i.d. sample points drawn from $F_{XY}$:

$$S_n = \{(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\}. \qquad (1)$$

For a fixed $n$, a classification rule is a function $\Psi_n$ that maps the sample data to a classifier:

$$\Psi_n : \left(\mathbb{R}^p \times \{0, 1\}\right)^n \to \left\{\psi \mid \psi : \mathbb{R}^p \to \{0, 1\}\right\}. \qquad (2)$$

For a given training set $S_n$, we have a designed classifier $\psi_n = \Psi_n(S_n)$. The classification error $\varepsilon_n$ is the probability of incorrectly classifying a future sample $(X, Y)$, drawn from $F_{XY}$ independently of $S_n$, given the training sample set $S_n$:

$$\varepsilon_n = P\left(\psi_n(X) \neq Y \mid S_n\right). \qquad (3)$$

It is clear that $\varepsilon_n$ is random, as it depends on $S_n$. The expectation $E[\varepsilon_n]$, taken over the randomness of $S_n$, is called the expected classification error; it is a deterministic quantity that is a function of the classification rule and the joint feature-label distribution.

The number $n$ of training samples is, in practice, always limited, and much effort is spent on exploiting and reusing the available samples as much as possible. Bootstrap resampling is one such technique, in which multiple bootstrap sets $S_n^*$ are created by randomly drawing $m$ points from $S_n$, either with or without replacement, according to a resampling distribution on the training data. The cardinality $m$ of $S_n^*$ can be smaller than, equal to, or larger than $n$, depending on the application of interest. In a bootstrap set, a given sample point can appear multiple times or not at all. In bagging, different choices of resampling distribution and $m$ lead to different variants, but the most common one is uniform resampling with $m = n$.

An ensemble classifier is obtained by majority voting among component classifiers. Each component of the ensemble is designed on a bootstrap set $S_n^*$ using the original classification rule. The bagged classifier is defined as

$$\psi_n^{\mathrm{bag}}(x) = I\left[E^*\left[\Psi_m(S_n^*)(x) \mid S_n\right] > \frac{1}{2}\right], \qquad (4)$$

where $I[\cdot]$ equals 1 if its argument is true and 0 otherwise, and the expectation is taken with respect to the random resampling mechanism that generates $S_n^*$, with $S_n$ fixed at its observed value. Bagging is a practical version of this ensemble classifier, in which the expectation in (4) is approximated by Monte-Carlo sampling:

$$\psi_{n,B}^{\mathrm{bag}}(x) = I\left[\frac{1}{B} \sum_{b=1}^{B} \psi_n^{*b}(x) > \frac{1}{2}\right], \qquad (5)$$

where the classifiers $\psi_n^{*b} = \Psi_m(S_n^{*b})$ are designed by the original classification rule on the bootstrap samples $S_n^{*b}$, for $b = 1, \ldots, B$, with $B$ large enough. How large $B$ should be is an important issue in bagging: $B$ must be large enough for the Monte-Carlo approximation to be accurate, yet small enough for the procedure to be computationally efficient. In this paper, we chose $B$ according to the recommendation of Breiman [21] and to our observations on the convergence of the mean error of bagged classifiers in our previous study [17]. It is important to select an odd $B$ to avoid ties in the majority vote. Experimental results in our previous study [17] showed that increasing $B$ beyond the chosen value leads to negligible differences in performance.
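
To make the Monte-Carlo construction in (5) concrete, the sketch below implements a bagged majority-vote classifier in Python. It is an illustrative sketch only: the 3NN base rule, the value B = 51, and the function name bagged_predict are assumptions made for this example, not prescriptions from the definition above.

```python
# Minimal sketch of the bagged rule in (5): B component classifiers are designed
# on uniform bootstrap samples (m = n) and combined by majority vote.
# Assumes labels in {0, 1}; 3NN and B = 51 are illustrative choices.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def bagged_predict(X_train, y_train, X_test, B=51, rng=None):
    rng = np.random.default_rng(rng)
    n = len(y_train)
    votes = np.zeros(len(X_test))
    for _ in range(B):
        idx = rng.integers(0, n, size=n)  # uniform resampling with replacement
        clf = KNeighborsClassifier(n_neighbors=3).fit(X_train[idx], y_train[idx])
        votes += clf.predict(X_test)      # accumulate votes for class 1
    return (votes > B / 2).astype(int)    # majority vote; an odd B avoids ties
```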

3. Error Estimation

3.1. Classical Methods

Data in practice are often limited, and the training sample $S_n$ has to be used both for designing the classifier and for estimating its true error $\varepsilon_n$. An obvious method to estimate $\varepsilon_n$ is to use $S_n$ itself as the test set, which often, but not always, leads to optimistic bias. This is the resubstitution estimator:

$$\hat{\varepsilon}_{\mathrm{resub}} = \frac{1}{n} \sum_{i=1}^{n} I\left[\psi_n(X_i) \neq Y_i\right]. \qquad (6)$$

In $k$-fold cross-validation, $S_n$ is partitioned into $k$ folds $S^{(j)}$, for $j = 1, \ldots, k$ (for simplicity, we assume that $k$ divides $n$); each fold is left out of the design process in turn and used as a test set, and the estimate is the overall proportion of errors committed on all folds [24]:

$$\hat{\varepsilon}_{\mathrm{cv}(k)} = \frac{1}{n} \sum_{j=1}^{k} \sum_{i=1}^{n/k} I\left[\Psi\left(S_n \setminus S^{(j)}\right)\left(X_i^{(j)}\right) \neq Y_i^{(j)}\right], \qquad (7)$$

where $(X_i^{(j)}, Y_i^{(j)})$ is the $i$th sample in the $j$th fold. The process may be repeated: several cross-validated estimates are computed using different partitions of the data into folds, and the results are averaged. In leave-one-out estimation, a single observation is left out each time, which corresponds to $n$-fold cross-validation. The leave-one-out estimator is nearly unbiased as an estimator of $E[\varepsilon_n]$.
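
As an illustration, the following hedged sketch computes the resubstitution estimate (6) and the k-fold cross-validation estimate (7). The convention that fit maps a training set (X, y) to a prediction function is an assumption made for these sketches; for example, fit = lambda X, y: KNeighborsClassifier(n_neighbors=3).fit(X, y).predict would supply a 3NN rule.

```python
# Sketch of resubstitution (6) and k-fold cross-validation (7); setting k = n
# yields the leave-one-out estimator.
import numpy as np

def resub_error(fit, X, y):
    predict = fit(X, y)                    # design the classifier on all of S_n
    return np.mean(predict(X) != y)        # test it on S_n itself

def cv_error(fit, X, y, k=5, rng=None):
    rng = np.random.default_rng(rng)
    folds = np.array_split(rng.permutation(len(y)), k)  # random partition into k folds
    n_errors = 0
    for fold in folds:
        train = np.setdiff1d(np.arange(len(y)), fold)   # design on all but this fold
        predict = fit(X[train], y[train])
        n_errors += np.sum(predict(X[fold]) != y[fold]) # count errors on the held-out fold
    return n_errors / len(y)               # overall proportion of errors on all folds
```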

3.2. Bootstrap Error Estimators

Resampling methodology, as used above in generating ensemble classifiers, can also be used for error estimation. In fact, bootstrap error estimation was proposed by Efron [25] before its use in bagging. The proportion of times a data point $(X_i, Y_i)$ appears in the bootstrap sample $S_n^{*b}$ can be written as $P_i^b = \frac{1}{n} \sum_{j=1}^{n} I\left[(X_j^{*b}, Y_j^{*b}) = (X_i, Y_i)\right]$, where, as before, $I[A] = 1$ if the statement $A$ is true, and zero otherwise. The basic bootstrap (or "zero bootstrap") estimator is given by

$$\hat{\varepsilon}_{\mathrm{boot0}} = \frac{\sum_{b=1}^{B} \sum_{i=1}^{n} I\left[\psi_n^{*b}(X_i) \neq Y_i\right] I\left[P_i^b = 0\right]}{\sum_{b=1}^{B} \sum_{i=1}^{n} I\left[P_i^b = 0\right]}, \qquad (8)$$

with the number $B$ of bootstrap samples between 25 and 200, as recommended in [25]. Bootstrap 632 is a variant that tries to correct the bias of the basic bootstrap estimator by averaging it with the resubstitution estimator [25]:

$$\hat{\varepsilon}_{632} = 0.368\, \hat{\varepsilon}_{\mathrm{resub}} + 0.632\, \hat{\varepsilon}_{\mathrm{boot0}}. \qquad (9)$$

Bootstrap 632 plus is another modified version of the bootstrap, proposed in [26], which is intended for highly overfitting classification rules. Bootstrap 632 plus attempts to adaptively find the weights in (9) that offset the effects of overfitting. The weights depend on the relative overfitting rate $R$ and the no-information error rate $\gamma$. In dichotomous classification, $\gamma$ and $R$ are estimated from $\hat{p}_1$, the proportion of observed samples belonging to class 1, and $\hat{q}_1$, the proportion of classifier outputs belonging to class 1. The relations are as follows:

$$\hat{\gamma} = \hat{p}_1 (1 - \hat{q}_1) + (1 - \hat{p}_1)\, \hat{q}_1, \qquad \hat{R} = \frac{\hat{\varepsilon}_{\mathrm{boot0}} - \hat{\varepsilon}_{\mathrm{resub}}}{\hat{\gamma} - \hat{\varepsilon}_{\mathrm{resub}}}, \qquad \hat{w} = \frac{0.632}{1 - 0.368\, \hat{R}}, \qquad \hat{\varepsilon}_{632+} = (1 - \hat{w})\, \hat{\varepsilon}_{\mathrm{resub}} + \hat{w}\, \hat{\varepsilon}_{\mathrm{boot0}}. \qquad (10)$$
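
The sketch below computes the zero bootstrap (8), bootstrap 632 (9), and bootstrap 632 plus (10) in one pass. It is a simplified reading of [25, 26]: in particular, clipping the overfitting rate to [0, 1] stands in for the full set of safeguards in [26], and the fit convention is the same illustrative one used above.

```python
# Sketch of the bootstrap estimators (8)-(10); B between 25 and 200 is typical [25].
import numpy as np

def bootstrap_estimators(fit, X, y, B=100, rng=None):
    rng = np.random.default_rng(rng)
    n = len(y)
    err_count, test_count = 0, 0
    for _ in range(B):
        idx = rng.integers(0, n, size=n)            # uniform bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)       # points with P_i^b = 0
        pred = fit(X[idx], y[idx])(X[oob])
        err_count += np.sum(pred != y[oob])
        test_count += len(oob)
    boot0 = err_count / test_count                  # zero bootstrap (8)
    resub = np.mean(fit(X, y)(X) != y)
    b632 = 0.368 * resub + 0.632 * boot0            # bootstrap 632 (9)
    p1 = np.mean(y == 1)                            # proportion of class-1 samples
    q1 = np.mean(fit(X, y)(X) == 1)                 # proportion of class-1 outputs
    gamma = p1 * (1 - q1) + (1 - p1) * q1           # no-information error rate
    R = (boot0 - resub) / (gamma - resub) if gamma != resub else 0.0
    R = min(max(R, 0.0), 1.0)                       # simplified safeguard; see [26]
    w = 0.632 / (1 - 0.368 * R)
    b632p = (1 - w) * resub + w * boot0             # bootstrap 632 plus (10)
    return boot0, b632, b632p
```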

3.3. Bolstered Error Estimators

Bolstered estimation was proposed in [27]. It has shown promising performance for small sample sizes in terms of root mean square error: while it is comparable to bootstrap methods in many cases, bolstered estimators are typically much more computationally efficient than the bootstrap. The main idea of bolstering is to place a kernel, called the "bolstering kernel," at each sample point, spreading the error count over a neighborhood of the point and thereby reducing the variance of counting-based estimation methods (in this paper, we adopt Gaussian bolstering kernels). When the classifier overfits, and hence resubstitution estimates are optimistically biased, bolstering at a misclassified point will increase this bias. Semi-bolstering corrects this by performing no bolstering at misclassified points. We refer the reader to [27] for full details (in this paper, we employ the bolstered and semi-bolstered resubstitution estimators of [27]).
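
A Monte-Carlo sketch of bolstered and semi-bolstered resubstitution follows. Two hedges apply: the per-point kernel width used here (distance to the nearest same-class neighbor) is an illustrative stand-in for the width that [27] derives from mean minimum distances and chi-distribution quantiles, and the Monte-Carlo integration replaces the analytic expressions available in [27] for linear classifiers.

```python
# Sketch of bolstered resubstitution with spherical Gaussian kernels; with
# semi=True, misclassified points are not bolstered (semi-bolstering).
import numpy as np

def bolstered_resub(fit, X, y, n_mc=200, semi=False, rng=None):
    rng = np.random.default_rng(rng)
    predict = fit(X, y)
    n, p = X.shape
    sigma = np.empty(n)                  # illustrative kernel widths
    for i in range(n):
        d = np.linalg.norm(X[y == y[i]] - X[i], axis=1)
        sigma[i] = np.sort(d)[1] if len(d) > 1 else 1.0  # skip zero self-distance
    wrong = predict(X) != y              # resubstitution misclassifications
    err = 0.0
    for i in range(n):
        if semi and wrong[i]:
            err += 1.0                   # no kernel at misclassified points
        else:
            draws = X[i] + sigma[i] * rng.standard_normal((n_mc, p))
            err += np.mean(predict(draws) != y[i])  # kernel mass on the wrong side
    return err / n
```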

3.4. Out-of-Bag Error Estimators

Breiman [21] originally proposed the out-of-bag method to estimate statistics of bagged CART predictors, such as the generalization error and node probabilities. Bylander [22] later performed a simulation study comparing out-of-bag and cross-validation estimates for the C4.5 decision-tree classification rule and concluded that both are biased. Banfield et al. [23] used out-of-bag estimation in a large simulation study investigating the performance of a variety of ensemble methods. Martínez-Muñoz and Suárez [28], in an attempt to find the optimal number of components of an ensemble, employed the out-of-bag estimate as the optimization criterion. Despite this, the properties of the out-of-bag estimator remain largely unclear, in particular the issue of bias. We propose in the sequel a modification to the standard out-of-bag estimator that removes nearly all of its bias (as evidenced by the numerical experiments in Section 4).

In bagging, the component classifiers are designed on bootstrap sets, each of which contains on average about 63.2% of the points of the original sample set. Hence, approximately 36.8% of the data are not used to build a given component classifier and are therefore uncorrelated with it. Out-of-bag estimates are obtained by testing each sample point with the majority vote of those individual classifiers in the ensemble that are uncorrelated with it, that is, those classifiers whose training sets do not contain the point. Suppose we resample the original sample set $B$ times, leading to bootstrap sample sets $S_n^{*1}, \ldots, S_n^{*B}$. Let $J_{bi} = 0$ if sample $(X_i, Y_i)$ appears in the bootstrap sample $S_n^{*b}$, and $J_{bi} = 1$ otherwise, for $b = 1, \ldots, B$ and $i = 1, \ldots, n$. Denote

$$A_i = \sum_{b=1}^{B} J_{bi}, \qquad C_i = \sum_{b=1}^{B} J_{bi}\, I\left[\psi_n^{*b}(X_i) \neq Y_i\right], \qquad (11)$$

for $i = 1, \ldots, n$, where $\psi_n^{*b} = \Psi_n(S_n^{*b})$. Notice that $A_i$ is equal to the number of bootstrap sample sets in which sample $(X_i, Y_i)$ does not appear, while $C_i$ is equal to the number of those sets whose classifiers misclassify $(X_i, Y_i)$. Then the out-of-bag error estimator, as proposed by Breiman in [21], can be written as

$$\hat{\varepsilon}_{\mathrm{oob}} = \frac{1}{n} \sum_{i=1}^{n} I\left[C_i > \frac{A_i}{2}\right]. \qquad (12)$$

The estimator, as formulated above, will in general be optimistically biased, according to the following rationale. Clearly, when $A_i = 0$ (and hence $C_i = 0$), the $i$th sample point belongs to all of the bootstrap samples, so there are no individual classifiers to test on the $i$th point; in other words, the "out-of-bag ensemble" of classifiers for that point is empty. Since $I[C_i > A_i/2] = 0$ in this case, such a point is always counted as correctly classified. That means that, with a training sample of size $n$, we often have fewer than $n$ points on which to perform out-of-bag estimation. In computing the proportion of incorrect classifications by the ensemble, one should therefore divide not by $n$, as in (12), but rather by $n$ minus the number of points with empty out-of-bag ensembles, which leads to the following modified out-of-bag estimator:

$$\hat{\varepsilon}_{\mathrm{oob}}' = \frac{\sum_{i=1}^{n} I\left[C_i > A_i/2\right]}{n - \sum_{i=1}^{n} I\left[A_i = 0\right]}. \qquad (13)$$

As shown by the numerical results in Section 4, this estimator has approximately the bias of leave-one-out; that is, it is only slightly pessimistically biased. As far as we know, this formulation of the out-of-bag estimator has not been explicitly given in the literature.
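
Both out-of-bag formulations can be computed in a single pass over the bootstrap samples, as in the sketch below; only the denominator differs between (12) and (13). The fit convention is the same illustrative one used earlier.

```python
# Sketch of the out-of-bag estimators: (12) divides by n, while the modified
# estimator (13) divides by the number of points with a nonempty out-of-bag
# ensemble (A_i > 0).
import numpy as np

def oob_errors(fit, X, y, B=51, rng=None):
    rng = np.random.default_rng(rng)
    n = len(y)
    A = np.zeros(n)   # A_i: bootstrap sets in which point i does not appear
    C = np.zeros(n)   # C_i: those sets whose classifiers misclassify point i
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        out = np.setdiff1d(np.arange(n), idx)    # out-of-bag points for this set
        pred = fit(X[idx], y[idx])(X[out])
        A[out] += 1
        C[out] += (pred != y[out])
    wrong = C > A / 2                            # out-of-bag majority vote errs
    oob_breiman = np.mean(wrong)                 # estimator (12)
    oob_modified = np.sum(wrong) / np.sum(A > 0) # estimator (13)
    return oob_breiman, oob_modified
```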

4. Simulation Study

This section reports the results of an extensive simulation study, conducted on synthetic data as well as on publicly available microarray and protein-abundance mass spectrometry data. We present here selected representative results; the full set of results can be found on the companion website, at http://gsp.tamu.edu/Publications/supplementary/oob. We simulated bagged ensembles of linear discriminant analysis (LDA), 3-nearest neighbors (3NN), and decision trees (CART) [24], and computed actual and estimated errors according to the different estimation methods. The estimators were evaluated based on the distribution of their deviation from the true error, summarized in terms of bias, variance, and root mean square (RMS) error.

4.1. Methods

We compared the performance of the estimators for a varying number of training samples and different dimensions of the feature space. The dimensionality and number of samples were selected to be compatible with a small-sample scenario (in this paper, the dimensionality is kept fixed at $p = 2$). For the patient data, a small number of features (once again, $p = 2$ in this paper) is first selected by the t-test. We then randomly draw a number of samples to be used as the training set and employ the rest as the test set. The number of training points is chosen to be small, both to maintain the small-sample setting and to leave a large enough test set. This process was repeated a large number of times to obtain the empirical deviation distribution [18], that is, the distribution of the estimated minus the actual error, for each error estimator. The results are presented in the form of beta-fit curves, box plots, and bias, variance, and RMS curves, in order to provide as detailed a picture as possible of the empirical deviation distributions of the error estimators.
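
For clarity, the summary statistics used throughout Section 4 are simple functionals of the deviation distribution; the short sketch below computes them from paired arrays of estimated and true errors (hypothetical inputs produced by repeating the design/estimation experiment).

```python
# Bias, variance (standard deviation), and RMS of the deviation distribution,
# i.e., of the estimated error minus the actual error.
import numpy as np

def deviation_summary(estimated, true):
    dev = np.asarray(estimated) - np.asarray(true)
    return {"bias": dev.mean(),
            "std": dev.std(ddof=1),
            "rms": np.sqrt(np.mean(dev ** 2))}
```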

4.2. Simulation Based on Synthetic Data

We employ here the spherical Gaussian model, in which the covariance matrix of each class is the identity and the two mean vectors are symmetric about the origin. Under this assumption, we varied the Bayes error of the model by changing the distance between the two means. Models with different Bayes errors were compared over a varying number of samples, with the dimensionality fixed at $p = 2$. The feature-label distribution is known in this case, which allows us to compute exactly the true error of the designed classifier; this is then used to derive the empirical deviation distribution for the different estimators.
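
For this model, with identity covariances and equal priors, the Bayes error equals $\Phi(-\delta/2)$, where $\delta$ is the distance between the two means and $\Phi$ is the standard normal CDF. The sketch below inverts this relation to place the means for a target Bayes error and to draw samples; placing the separation along the first coordinate axis is an arbitrary illustrative choice.

```python
# Spherical Gaussian model: classes N(+mu, I) and N(-mu, I) with equal priors.
# Bayes error = Phi(-delta/2), so delta = -2 * Phi^{-1}(target error).
import numpy as np
from scipy.stats import norm

def mean_separation(bayes_error):
    return -2.0 * norm.ppf(bayes_error)   # distance between the two class means

def sample_model(n, p, bayes_error, rng=None):
    rng = np.random.default_rng(rng)
    mu = np.zeros(p)
    mu[0] = mean_separation(bayes_error) / 2.0       # means at +/- mu
    y = rng.integers(0, 2, size=n)                   # equal-prior labels
    X = rng.standard_normal((n, p)) + np.where(y[:, None] == 1, mu, -mu)
    return X, y

# e.g., mean_separation(0.05) ~ 3.29 and mean_separation(0.15) ~ 2.07
```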

4.3. Simulation Based on Patient Data

We utilized the following publicly available data sets from published studies in order to study the performance of error estimation for bagged classifiers in the context of genomics and proteomics applications.

4.3.1. Breast Cancer Gene Expression Data

These data come from the breast cancer classification study in [29], which analyzed gene-expression microarrays containing a total of 25760 transcripts each. Filter-based feature selection was performed over the 70-gene prognosis profile previously published by the same authors in [30]. Classification is between the good-prognosis class (115 samples) and the poor-prognosis class (180 samples), where prognosis was determined retrospectively in terms of survival [29].

4.3.2. Lung Cancer Gene Expression Data

We employed here the data set "A" from the study in [31] on non-small cell lung carcinomas (NSCLC), which analyzed gene expression microarrays containing a total of 12600 transcripts each. NSCLC is subclassified into adenocarcinomas, squamous cell carcinomas, and large-cell carcinomas, of which adenocarcinoma is the most common subtype and the one of interest to discriminate from the other subtypes of NSCLC. Classification is thus between adenocarcinomas (139 samples) and nonadenocarcinomas (47 samples).

4.3.3. Prostate Cancer Mass Spectrometry Data

Given the recent keen interest in deriving serum-based proteomic biomarkers for the diagnosis of cancer [32], we also included in this study data from a proteomic study of prostate cancer reported in [33]. It consists of SELDI-TOF mass spectrometry of serum samples, yielding mass spectra over 45000 m/z (mass-to-charge) values. Filter-based feature selection is employed to find the top discriminatory m/z values to be used in the experiment. Classification is between prostate cancer patients (167 samples) and non-prostate patients, including benign prostatic hyperplasia and healthy patients (159 samples). We use the raw spectral values, without baseline subtraction, as we found that this leads to better classification rates.

4.4. Results and Discussion

4.4.1. Synthetic Data

The various error estimators can be grouped into four groups according to performance. The first group corresponds to resubstitution, which proved to be optimistically biased for the bagged LDA, 3NN, and CART classifiers, with a root mean square error that increases substantially with increasing Bayes error; resubstitution was previously known to behave in this way for single LDA, 3NN, and CART classifiers. The second group contains leave-one-out, fivefold cross-validation, and out-of-bag. As can be seen in Figure 1, the out-of-bag estimator, with the formulation given in (13), is almost identical to leave-one-out. This group shows much smaller bias than resubstitution, consistently so as the Bayes error increases; however, it displays larger variability than resubstitution and the bootstrap group, as was already known for single classification rules [19]. The resemblance of out-of-bag to cross-validation, which had already been pointed out in [22], is explained by the similar way in which both partition the sample set. The third group includes the basic bootstrap, bootstrap 632, and bootstrap 632 plus; this group displays very competitive performance in terms of root mean square error, but even though these estimators often perform better than those of the two previous groups, they took the longest time to compute across all experiments. The last group consists of the bolstered and semi-bolstered error estimators, which exhibit performance superior to that of the other groups in terms of RMS error, despite being far less computationally expensive than the cross-validation and bootstrap estimators.

Figure 1: Comparison of out-of-bag and leave-one-out for different Gaussian models as a function of the number of samples, for dimensionality p = 2. (a) Sample mean. (b) Sample standard deviation.

Generally, for a fixed model, almost all the estimators perform better as the sample size increases, and this holds for all three bagged classifiers. In Figure 2, we see a consistent trend: as the Bayes error increases, or, equivalently, the classification problem becomes harder, error estimation performance decreases steadily in terms of root mean square error; this is true for all error estimation methods. The bolstered error estimators showed performance consistently superior to that of the others, in terms of both accuracy (RMS) and computational cost. These conclusions are also supported by Figures 3 and 4.

Figure 2: Bias, variance (standard deviation), and RMS of the error estimators as a function of the Bayes error, for the synthetic data, with sample size n = 20 and dimensionality p = 2, for different base classification rules.

Figure 3: Empirical deviation distribution (a), box plots (b), and RMS as a function of sample size (c), for the synthetic Gaussian model with Bayes error 0.05, sample size n = 20, and dimensionality p = 2, for different base classification rules.

Figure 4: Empirical deviation distribution (a), box plots (b), and RMS as a function of sample size (c), for the synthetic Gaussian model with Bayes error 0.15, sample size n = 20, and dimensionality p = 2, for different base classification rules.

We observed that the performance of the error estimators other than out-of-bag (which can only be applied to ensemble rules) was consistent with their performance with the corresponding single classifiers, as reported in other studies [18, 27].

4.4.2. Patient Data

The results for the real patient data sets were entirely consistent with those for the synthetic data, as can be seen in Figures 5, 6, and 7 and in Tables 1, 2, and 3. We again observed the division of the error estimators into the same four performance groups, and the bolstered error estimator group again displayed the best performance, as measured by RMS.

Table 1: Bias, variance (standard deviation), and RMS for different error estimators, with different base classification rules, for the breast cancer gene expression data, with dimensionality p = 2.
Table 2: Bias, variance (standard deviation), and RMS for different error estimators, with different base classification rules, for the lung cancer gene expression data, with dimensionality p = 2.
Table 3: Bias, variance (standard deviation), and RMS for different error estimators, with different base classification rules, for the prostate cancer mass-spectrometry data, with dimensionality p = 2.
Figure 5: Empirical deviation distribution (a) and box plots (b), for the breast cancer gene expression data, with sample size n = 20 and dimensionality p = 2, for different base classification rules.

Figure 6: Empirical deviation distribution (a) and box plots (b), for the lung cancer gene expression data, with sample size n = 20 and dimensionality p = 2, for different base classification rules.

Figure 7: Empirical deviation distribution (a) and box plots (b), for the prostate cancer mass-spectrometry data, with sample size n = 20 and dimensionality p = 2, for different base classification rules.

5. Conclusion

We presented an extensive study of several error estimation methods for bagged ensembles of typical classifiers. We provided an explicit formulation of the out-of-bag error estimator, which is intended to remove estimator bias. We observed that this out-of-bag error estimator is almost identical to leave-one-out under spherical Gaussian models, and we conjecture a very close relationship between the two. The results of our simulation study were consistent between synthetic and real patient data, and the performance of the error estimators that can be applied to single classifiers (i.e., all of them save the out-of-bag estimator) with bagged classifiers was comparable to their performance with the corresponding single classifiers, as reported elsewhere. The bolstered error estimators exhibited the best performance in our simulation study, in terms of RMS error, despite being far less computationally expensive than the cross-validation and bootstrap estimators. We hope this work will provide useful guidance to practitioners working with bagged ensemble classifiers designed on small-sample data.

References

1. Schapire RE: The strength of weak learnability. Machine Learning 1990, 5(2):197-227.
2. Freund Y: Boosting a weak learning algorithm by majority. Proceedings of the 3rd Annual Workshop on Computational Learning Theory, 1990, 202-216.
3. Xu L, Krzyzak A, Suen C: Methods of combining multiple classifiers and their applications to handwriting recognition. IEEE Transactions on Systems, Man and Cybernetics 1992, 22(3):418-435.
4. Breiman L: Bagging predictors. Machine Learning 1996, 24(2):123-140.
5. Breiman L: Random forests. Machine Learning 2001, 45(1):5-32.
6. Efron B: Bootstrap methods: another look at the jackknife. Annals of Statistics 1979, 7:1-26.
7. Alvarez S, Diaz-Uriarte R, Osorio A, Barroso A, Melchor L, Paz MF, Honrado E, Rodríguez R, Urioste M, Valle L, Díez O, Cigudosa JC, Dopazo J, Esteller M, Benitez J: A predictor based on the somatic genomic changes of the BRCA1/BRCA2 breast cancer tumors identifies the non-BRCA1/BRCA2 tumors with BRCA1 promoter hypermethylation. Clinical Cancer Research 2005, 11(3):1146-1153.
8. Gunther EC, Stone DJ, Gerwien RW, Bento P, Heyes MP: Prediction of clinical drug efficacy by classification of drug-induced genomic expression profiles in vitro. Proceedings of the National Academy of Sciences of the United States of America 2003, 100(16):9608-9613.
9. Díaz-Uriarte R, Alvarez de Andrés S: Gene selection and classification of microarray data using random forest. BMC Bioinformatics 2006, 7, article 3.
10. Statnikov A, Wang L, Aliferis CF: A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification. BMC Bioinformatics 2008, 9, article 319.
11. Izmirlian G: Application of the random forest classification algorithm to a SELDI-TOF proteomics study in the setting of a cancer prevention trial. Annals of the New York Academy of Sciences 2004, 1020:154-174.
12. Wu B, Abbott T, Fishman D, McMurray W, Mor G, Stone K, Ward D, Williams K, Zhao H: Comparison of statistical methods for classification of ovarian cancer using mass spectrometry data. Bioinformatics 2003, 19(13):1636-1643.
13. Geurts P, Fillet M, de Seny D, Meuwis M-A, Malaise M, Merville M-P, Wehenkel L: Proteomic mass spectra classification using decision tree based ensemble methods. Bioinformatics 2005, 21(14):3138-3145.
14. Zhang B, Pham TD, Zhang Y: Bagging support vector machine for classification of SELDI-ToF mass spectra of ovarian cancer serum samples. Proceedings of the 20th Australian Joint Conference on Artificial Intelligence (AI '07), December 2007, Gold Coast, Australia, Lecture Notes in Computer Science 4830:820-826.
15. Assareh A, Moradi MH, Esmaeili V: A novel ensemble strategy for classification of prostate cancer protein mass spectra. Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '07), August 2007, 5987-5990.
16. Tong W, Xie Q, Hong H, Fang H, Shi L, Perkins R, Petricoin EF: Using decision forest to classify prostate cancer samples on the basis of SELDI-TOF MS data: assessing chance correlation and prediction confidence. Environmental Health Perspectives 2004, 112(16):1622-1627.
17. Vu TT, Braga-Neto UM: Is bagging effective in the classification of small-sample genomic and proteomic data? EURASIP Journal on Bioinformatics and Systems Biology 2009, 2009.
18. Braga-Neto UM, Dougherty ER: Is cross-validation valid for small-sample microarray classification? Bioinformatics 2004, 20(3):374-380.
19. Braga-Neto U, Hashimoto R, Dougherty ER, Nguyen DV, Carroll RJ: Is cross-validation better than resubstitution for ranking genes? Bioinformatics 2004, 20(2):253-258.
20. Braga-Neto U, Dougherty E: Exact performance of error estimators for discrete classifiers. Pattern Recognition 2005, 38(11):1799-1814.
21. Breiman L: Out-of-bag estimation. Technical report, Department of Statistics, University of California, Berkeley; ftp://ftp.stat.berkeley.edu/pub/users/breiman/OOBestimation.ps.Z
22. Bylander T: Estimating generalization error on two-class datasets using out-of-bag estimates. Machine Learning 2002, 48(1-3):287-297.
23. Banfield RE, Hall LO, Bowyer KW, Kegelmeyer WP: A comparison of decision tree ensemble creation techniques. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007, 29(1):173-180.
24. Duda RO, Hart PE, Stork DG: Pattern Classification. 2nd edition. John Wiley & Sons, New York, NY, USA; 2001.
25. Efron B: Estimating the error rate of a prediction rule: improvement on cross-validation. Journal of the American Statistical Association 1983, 78(382):316-331.
26. Efron B, Tibshirani R: Improvements on cross-validation: the .632+ bootstrap method. Journal of the American Statistical Association 1997, 92(438):548-560.
27. Braga-Neto U, Dougherty E: Bolstered error estimation. Pattern Recognition 2004, 37(6):1267-1281.
28. Martínez-Muñoz G, Suárez A: Out-of-bag estimation of the optimal sample size in bagging. Pattern Recognition 2010, 43(1):143-152.
29. van de Vijver MJ, He YD, van 't Veer LJ, Dai H, Hart AAM, Voskuil DW, Schreiber GJ, Peterse JL, Roberts C, Marton MJ, Parrish M, Atsma D, Witteveen A, Glas A, Delahaye L, van der Velde T, Bartelink H, Rodenhuis S, Rutgers ET, Friend SH, Bernards R: A gene-expression signature as a predictor of survival in breast cancer. New England Journal of Medicine 2002, 347(25):1999-2009.
30. van 't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH: Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415(6871):530-536.
31. Bhattacharjee A, Richards WG, Staunton J, Li C, Monti S, Vasa P, Ladd C, Beheshti J, Bueno R, Gillette M, Loda M, Weber G, Mark EJ, Lander ES, Wong W, Johnson BE, Golub TR, Sugarbaker DJ, Meyerson M: Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proceedings of the National Academy of Sciences of the United States of America 2001, 98(24):13790-13795.
32. Issaq HJ, Veenstra TD, Conrads TP, Felschow D: The SELDI-TOF MS approach to proteomics: protein profiling and biomarker identification. Biochemical and Biophysical Research Communications 2002, 292(3):587-592.
33. Adam B-L, Qu Y, Davis JW, Ward MD, Clements MA, Cazares LH, Semmes OJ, Schellhammer PF, Yasui Y, Feng Z, Wright GL Jr.: Serum protein fingerprinting coupled with a pattern-matching algorithm distinguishes prostate cancer from benign prostate hyperplasia and healthy men. Cancer Research 2002, 62(13):3609-3614.


Acknowledgment

This work was supported by the National Science Foundation, through NSF Award CCF-0845407.

Author information

Correspondence to U. M. Braga-Neto.

Rights and permissions

Open Access. This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Vu, T.T., Braga-Neto, U.M. Small-Sample Error Estimation for Bagged Classification Rules. EURASIP J. Adv. Signal Process. 2010, 548906 (2010). https://doi.org/10.1155/2010/548906