Target detection performance bounds in compressive imaging
Kalyani Krishnamurthy^{1},
 Rebecca Willett^{1} and
 Maxim Raginsky^{2}
DOI: 10.1186/1687-6180-2012-205
© Krishnamurthy et al.; licensee Springer. 2012
Received: 2 December 2011
Accepted: 28 August 2012
Published: 25 September 2012
Abstract
This article describes computationally efficient approaches and associated theoretical performance guarantees for the detection of known targets and anomalies from few projection measurements of the underlying signals. The proposed approaches accommodate signals of different strengths contaminated by a colored Gaussian background, and perform detection without reconstructing the underlying signals from the observations. The theoretical performance bounds of the target detector highlight fundamental tradeoffs among the number of measurements collected, amount of background signal present, signal-to-noise ratio, and similarity among potential targets coming from a known dictionary. The anomaly detector is designed to control the number of false discoveries. The proposed approach does not depend on a known sparse representation of targets; rather, the theoretical performance bounds exploit the structure of a known dictionary of targets and the distance preservation property of the measurement matrix. Simulation experiments illustrate the practicality and effectiveness of the proposed approaches.
Keywords
Target detection; Anomaly detection; False discovery rate; p-value; Incoherent projections; Compressive sensing

Introduction
The theory of compressive sensing (CS) has shown that it is possible to accurately reconstruct a sparse signal from few (relative to the signal dimension) projection measurements [1, 2]. Though such a reconstruction is crucial when one wants to visually inspect the signal, there are many instances where one is solely interested in identifying whether the underlying signal is one of several possible signals of interest. In such situations, a complete reconstruction is computationally expensive and does not optimize the correct performance metric. Recently, CS ideas have been exploited in [3–5] to perform target detection and classification from projection measurements, without reconstructing the underlying signal of interest. In [3, 5], the authors propose nearest-neighbor based methods to classify a signal $\mathit{f}\in {\mathbb{R}}^{N}$ as one of m known signals given projection measurements of the form $\mathit{y}=\mathit{A}\mathit{f}+\mathit{n}\in {\mathbb{R}}^{K}$ for K≤N, where $\mathit{A}\in {\mathbb{R}}^{K\times N}$ is a known projection operator and $\mathit{n}\sim \mathcal{N}\left(0,{\sigma}^{2}\mathit{I}\right)$ is additive Gaussian noise. This model is simple to analyze but impractical, since in reality a signal is always corrupted by some kind of interference or background noise. Extending the methods in [3, 5] to handle background noise is nontrivial. Though Duarte et al. [4] provide a way to account for background contamination, they make the strong assumption that the signal of interest and the background are sparse in bases that are incoherent, which need not hold in many applications. Recent works on CS [6, 7] allow the input signal f to be corrupted by pre-measurement noise $\mathit{b}\sim \mathcal{N}\left(0,{\sigma}_{b}^{2}\mathit{I}\right)$, so that one observes y=A(f + b) + n, and study reconstruction performance as a function of the number of measurements, the pre- and post-measurement noise statistics, and the dimension of the input signal.
In this work, however, we are interested in performing target detection without an intermediate reconstruction step. Furthermore, the increased utility of high-dimensional imaging techniques such as spectral imaging or videography in applications like remote sensing, biomedical imaging and astronomical imaging [8–15] necessitates the extension of compressive target detection ideas to such imaging modalities, to achieve reliable target detection from few measurements relative to the ambient signal dimension.
For example, recent advances in CS have led to the development of new spectral imaging platforms which attempt to address challenges in conventional imaging platforms related to system size, resolution, and noise by acquiring fewer compressive measurements than spatio-spectral voxels [16–21]. However, these system designs have a number of degrees of freedom which influence subsequent data analysis. For instance, the single-shot compressive spectral imager discussed in [18] collects one coded projection of each spectrum in the scene. One projection per spectrum is sufficient for reconstructing spatially homogeneous spectral images, since projections of neighboring locations can be combined to infer each spectrum. Significantly more projections are required for detecting targets of unknown strengths without the benefit of spatial homogeneity. We are interested in investigating how several such systems can be used in parallel to reliably detect spectral targets and anomalies from different coded projections.
In general, we consider a broadly applicable framework that allows us to account for background and sensor noise, and perform target detection directly from projection measurements of signals obtained at different spatial or temporal locations. The precise problem formulation is provided below.
Problem formulation
We observe data of the form
$${\mathit{z}}_{i}={\alpha}_{i}\mathit{\Phi}\left({\mathit{f}}_{i}^{\ast}+{\mathit{b}}_{i}\right)+{\mathit{w}}_{i},\phantom{\rule{2em}{0ex}}(1)$$
where

i∈{1,…,M} indexes the spatial or temporal locations at which data are collected;

${\mathit{f}}_{i}^{\ast}\in {\mathbb{R}}^{N}$ is the unknown signal of interest at location i;

α_{i}≥0 is a measure of the signal-to-noise ratio at location i, which is either known or estimated from observations;

$\mathit{\Phi}\in {\mathbb{R}}^{K\times N}$ for K < N, is a measurement matrix to be specified in Section “Whitening compressive observations”;
${\mathit{b}}_{i}\in {\mathbb{R}}^{N}\sim \mathcal{N}({\mathit{\mu}}_{b},{\mathit{\Sigma}}_{b})$ is the background noise vector, and ${\mathit{w}}_{i}\in {\mathbb{R}}^{K}\sim \mathcal{N}(0,{\sigma}^{2}\mathit{I})$ is the i.i.d. sensor noise.
(1) Dictionary signal detection (DSD): Here we assume that each ${\mathit{f}}_{i}^{\ast}\in \mathcal{D}$ for i∈{1,…,M}, where $\mathcal{D}=\left\{{\mathit{f}}^{\left(1\right)},\dots ,{\mathit{f}}^{\left(m\right)}\right\}$ is a known dictionary of m signals, and our task is to detect all instances of one target signal ${\mathit{f}}^{\left(j\right)}\in \mathcal{D}$ for some unknown j∈{1,…,m}, i.e., to locate $S=\left\{i:{\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\right\}$. DSD is useful in contexts in which we know the makeup of a scene and wish to focus our attention on the locations of a particular signal. For instance, in spectral imaging, DSD is used to study a scene of interest by classifying every spectrum in the scene into different known classes [11, 22]. In a video setup, DSD could be used to classify video segments into one of several categories (such as news, weather, sports, etc.) by projecting the video sequence onto an appropriate feature space and comparing the feature vectors to the ones in a known dictionary [23].
(2) Anomalous signal detection (ASD): Here, our task is to detect all signals which are not members of our dictionary, i.e., to detect $S=\left\{i:{\mathit{f}}_{i}^{\ast}\notin \mathcal{D}\right\}$ (this is akin to anomaly detection methods in the literature that are based on nominal, non-anomalous training samples [24, 25]). For instance, ASD may be used when we know most components of a spectral image and wish to identify all spectra which deviate from this model [26].
Our goal is to accurately perform DSD or ASD without reconstructing the spectral input ${\mathit{f}}_{i}^{\ast}$ from z_{i} for i∈{1,…,M}. Accounting for background is a crucial issue. Typically, the background corresponding to the scene of interest and the sensor noise are modeled together by a colored multivariate Gaussian distribution [27]. However, in our case, it is important to distinguish the two because of the presence of the projection operator Φ: the projection operator acts upon the background spectrum in the same way as on the target spectrum, but it does not affect the sensor noise. We assume that b_{i} and w_{i} are independent of each other, and that the prior probabilities of the different targets in the dictionary, ${p}^{\left(j\right)}=\mathbb{P}\left({\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\right)$ for j∈{1,⋯,m}, are known in advance. If these probabilities are unknown, the targets can be considered equally likely. Given this setup, our goal is to develop suitable target and anomaly detection approaches, and to provide theoretical guarantees on their performance.
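To make the setup concrete, the following sketch simulates observations of the form z_{i}=α_{i}Φ(f_{i}^{∗}+b_{i})+w_{i}. All numerical choices here (a random stand-in dictionary, a diagonal background covariance, the values of K, M and the α range) are illustrative assumptions, not the paper's data; N=106 and σ²=5 follow the HyMap experiment described later.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: N and sigma^2 from the HyMap experiments, the rest chosen small.
N, K, M, m = 106, 40, 500, 9
sigma = np.sqrt(5.0)                 # sensor-noise standard deviation (sigma^2 = 5)

# Stand-in dictionary of m unit-norm targets (real labeled spectra in the paper)
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=1, keepdims=True)

Phi = rng.standard_normal((K, N)) / np.sqrt(K)   # measurement matrix
mu_b = np.zeros(N)                               # background mean
B_sqrt = 0.05 * np.eye(N)                        # background covariance square root (illustrative)

labels = rng.integers(0, m, size=M)              # true dictionary index at each location
alpha = np.sqrt(K) * rng.uniform(10, 20, size=M) # per-location signal strengths

Z = np.empty((M, K))
for i in range(M):
    b = mu_b + B_sqrt @ rng.standard_normal(N)   # background noise b_i
    w = sigma * rng.standard_normal(K)           # sensor noise w_i
    Z[i] = alpha[i] * Phi @ (D[labels[i]] + b) + w
```

The scaling of α_{i} with √K mirrors the experimental section, where the total observed signal energy grows with the number of detectors.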
In this article, we develop detection performance bounds which show how performance scales with the number of detectors in a compressive setting as a function of SNR, the similarity between potential targets in a known dictionary, and their prior probabilities. Our bounds are based on a detection strategy which operates directly on the collected data, as opposed to first reconstructing each ${\mathit{f}}_{i}^{\ast}$ and then performing detection on the estimated signals. Reconstruction as an intermediate step in detection may be appealing to end users who wish to visually inspect spectral images instead of relying entirely on an automatic detection algorithm. However, this intermediate step has two potential pitfalls. First, the Rao–Blackwell theorem [28] tells us that an optimal detection algorithm operating on processed data (i.e., not sufficient statistics) cannot perform better than an optimal detection algorithm operating on the raw data. In other words, optimal performance is possible on the raw data, but we have no such performance guarantee for the reconstructed signals. Second, the relationship between reconstruction errors and detection performance is not well understood in many settings. Although we do not reconstruct the underlying signals, our performance bounds are intimately related to the signal resolution needed to achieve the signal diversity present in our dictionary. Since we have many fewer observations than signal samples at this resolution, we adopt the “compressive” terminology.
Performance metric
We measure detection performance in terms of the false discovery rate (FDR),
$$\text{FDR}=\mathbb{E}\left[\frac{V}{\text{max}\left(R,1\right)}\right],$$
where V is the number of falsely rejected null hypotheses, and R is the total number of rejected null hypotheses. Controlling the FDR in a multiple hypothesis testing framework is akin to designing a constant false alarm rate (CFAR) detector in spectral target detection applications, which keeps the false alarm rate at a desired level irrespective of the background interference and sensor noise statistics [22].
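In code, the false discovery proportion V/max(R,1), whose expectation is the FDR, can be computed from rejection decisions and ground truth. This is a minimal illustrative sketch, not the authors' implementation:

```python
def empirical_fdr(rejected, is_null):
    """False discovery proportion V / max(R, 1); its expectation is the FDR.

    rejected : booleans, True where the null hypothesis was rejected
    is_null  : booleans, True where the null hypothesis actually holds
    """
    R = sum(rejected)                                     # total rejections
    V = sum(r and n for r, n in zip(rejected, is_null))   # false rejections
    return V / max(R, 1)

print(empirical_fdr([True, True, False, True], [True, False, False, False]))  # → 0.3333333333333333
```

The max(R,1) in the denominator makes the ratio well defined even when no null is rejected.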
Previous investigations
Much of the classical target detection literature [30–34] assumes that each target lies in a P-dimensional subspace of ${\mathbb{R}}^{N}$ for P < N. The subspace in which the target lies is often assumed to be known or specified by the user, and the variability of the background is modeled using a probability distribution. Given knowledge of the target subspace, background statistics and sensor noise statistics, detection methods based on likelihood ratio tests (LRTs) and generalized likelihood ratio tests (GLRTs) have been proposed in [30–35]. A subspace model is optimal if the subspace in which targets lie is known in advance; in many applications, however, such subspaces might be hard to characterize. An alternative, more flexible option is to assume that the high-dimensional target exhibits some low-dimensional structure that can be exploited to perform efficient target detection. This approach is utilized in this work and in [5], where the target signal in ${\mathbb{R}}^{N}$ is assumed to come from a dictionary of m known signals with m≪N, and in [3], where the targets are assumed to lie on a low-dimensional manifold embedded in the high-dimensional target space.
Recently, several methods for target or anomaly detection that rely on recovering the full spatio-spectral data from projection measurements [36, 37] have been proposed. However, they are computationally intensive, and the detection performance associated with these reconstructions is unknown. Other researchers have exploited CS to perform target detection and classification without reconstructing the underlying signal [3–5]. Duarte et al. [4] propose a matching pursuit based algorithm, called the incoherent detection and estimation algorithm (IDEA), to detect the presence of a signal of interest against a strong interfering signal from noisy projection measurements. The algorithm is shown to perform well on experimental data sets under some strong assumptions on the sparsity of the signal of interest and the interfering signal. Davenport et al. [3] develop a classification algorithm called the smashed filter to classify an image in ${\mathbb{R}}^{N}$ into one of m known classes from K projections of the signal, where K < N. The underlying image is assumed to lie on a low-dimensional manifold, and the algorithm finds the closest match among the m known classes by performing a nearest-neighbor search over the m different manifolds. The projection measurements are chosen to preserve the distances among the manifolds. Though Davenport et al. [3] offer theoretical bounds on the number of measurements necessary to preserve distances among different manifolds, it is not clear how the performance scales with K or how to incorporate background models into this setup. Moreover, this approach may be computationally intensive, since it involves learning and searching over different manifolds. Haupt et al. [5] use a nearest-neighbor classifier to classify an N-dimensional signal into one of m equally likely target classes based on K < N random projections, and provide theoretical guarantees on the detector performance.
While the method discussed in [5] is computationally efficient, it is nontrivial to extend to the case of target detection with colored background noise and non-equiprobable targets. Furthermore, their performance guarantees cannot be directly extended to our problem, since we focus on error measures that let us analyze the performance of multiple hypothesis tests simultaneously, whereas the above methods consider compressive classification performance for a single hypothesis test.
The authors of a more recent work [38] extend the classical RX anomaly detector [39] to directly detect anomalies from random, orthonormal projection measurements without an intermediate reconstruction step. They show numerically how the detection probability improves as a function of the signal-to-noise ratio as the number of measurements changes. Though probability of detection is a good performance measure, in many applications controlling the false discoveries below a desired level is more crucial. As a result, we propose an anomaly detection method that controls the FDR below a desired level.
Contributions
This article makes the following contributions to the above literature:

A compressive target detection approach, which (a) is computationally efficient, (b) allows the signal strengths of the targets to vary with spatial location, (c) allows for backgrounds mixed with potential targets, (d) considers targets with different a priori probabilities, and (e) yields theoretical guarantees on detector performance. This article unifies preliminary work by the authors [40, 41], presents previously unpublished aspects of the proofs, and contains updated experimental results.

A computationally efficient anomaly detection method that detects anomalies of different strengths from projection measurements and also controls the FDR at a desired level.

A whitening filter approach to compressive measurements of signals with background contamination, and associated analysis leading to bounds on the amount of background to which our detection procedure is robust.
The above theoretical results, which are the main focus of this article, are supported with simulation studies in Section “Experimental results”. Classical detection methods described in [22, 26, 27, 30–35, 39, 42–45] do not establish performance bounds as a function of signal resolution or target dictionary properties, and rely on relatively direct observation models which we show to be suboptimal when the detector size is limited. The methods in [3, 4] do not contain performance analysis, and our analysis builds upon that in [5] to account for several specific aspects of the compressive target detection problem.
Whitening compressive observations
Before we present our detection methods for the DSD and ASD problems, we briefly discuss a whitening step that is common to both problems of interest.
We can now choose Φ so that the corresponding A has certain desirable properties as detailed in Sections “Dictionary signal detection” and “Anomalous signal detection”.
For a given A, the following theorem provides a construction of Φ that satisfies (3) and a bound on the maximum tolerable background contamination:
Theorem 1
where ∥A∥ is the spectral norm of A, then B is positive definite and Φ=σB^{−1/2}A is a sensing matrix, which can be used in conjunction with a whitening filter to produce observations modeled in (2).
The proof of this theorem is provided in Appendix 1. This theorem draws an interesting relationship between the maximum background perturbation that the system can tolerate and the spectral norm of the measurement matrix, which in turn varies with K and N. Hardware designs such as those in [17, 19] use spatial light modulators and digital micromirrors, which allow the measurement matrix Φ to be adjusted easily in response to changing background statistics and other operating conditions.
In the sections that follow, we consider collecting measurements of the form ${\mathit{y}}_{i}={\alpha}_{i}\mathit{A}{\mathit{f}}_{i}^{\ast}+{\mathit{n}}_{i}$ given in (2), where ${\mathit{f}}_{i}^{\ast}$ is the target of interest for i=1,…,M, and $\mathit{A}\in {\mathbb{R}}^{K\times N}$ is a sensing matrix that satisfies (3). It is assumed that any background contamination has been eliminated with the whitening procedure described in this section.
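A generic whitening step of this kind can be sketched as follows, under the assumption that the per-location noise covariance α_{i}²ΦΣ_{b}Φᵀ+σ²I is invertible. Note that this sketch whitens each location separately, whereas the paper instead designs Φ=σB^{−1/2}A (Theorem 1) so that a single effective sensing matrix A is shared by all locations; the code below is illustrative, not the paper's construction.

```python
import numpy as np

def whiten(Z, Phi, alpha, mu_b, Sigma_b, sigma):
    """Whiten z_i = alpha_i * Phi (f_i + b_i) + w_i so the combined
    background-plus-sensor noise becomes N(0, I)."""
    K = Phi.shape[0]
    Y = np.empty_like(Z)
    A_eff = []
    for i, z in enumerate(Z):
        # Total noise covariance at location i: background pushed through
        # alpha_i * Phi, plus white sensor noise.
        C = alpha[i] ** 2 * Phi @ Sigma_b @ Phi.T + sigma ** 2 * np.eye(K)
        evals, evecs = np.linalg.eigh(C)
        C_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
        Y[i] = C_inv_sqrt @ (z - alpha[i] * Phi @ mu_b)  # remove mean, whiten
        A_eff.append(alpha[i] * C_inv_sqrt @ Phi)        # y_i = A_eff[i] @ f_i + n_i, n_i ~ N(0, I)
    return Y, A_eff
```

After this step, the residual noise at each location has identity covariance, which is the model assumed by the detectors in the following sections.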
Dictionary signal detection
We cast DSD as a multiple hypothesis testing problem: at each location, test
$${\mathcal{H}}_{0i}:{\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\text{versus}\phantom{\rule{1em}{0ex}}{\mathcal{H}}_{1i}:{\mathit{f}}_{i}^{\ast}\ne {\mathit{f}}^{\left(j\right)},\phantom{\rule{2em}{0ex}}(5)$$
where ${\mathit{f}}^{\left(j\right)}\in \mathcal{D}$ is the target of interest and i=1,…,M.
Decision rule
The MAP decision rule assigns to location i the most likely dictionary element,
$${D}_{i}={\text{arg}\text{max}}_{j\in \left\{1,\dots ,m\right\}}\phantom{\rule{0.3em}{0ex}}\text{log}\phantom{\rule{0.3em}{0ex}}\mathbb{P}\left({\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\mid {\mathit{y}}_{i},{\alpha}_{i},\mathit{A}\right),$$
where $\text{log}\phantom{\rule{0.3em}{0ex}}\mathbb{P}\left({\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\mid {\mathit{y}}_{i},{\alpha}_{i},\mathit{A}\right)=\frac{K}{2}\text{log}\left(\frac{1}{2\pi}\right)-\frac{{\parallel {\mathit{y}}_{i}-{\alpha}_{i}\mathit{A}{\mathit{f}}^{\left(j\right)}\parallel}^{2}}{2}+\text{log}\phantom{\rule{0.3em}{0ex}}{p}^{\left(j\right)}$ is the logarithm of the a posteriori probability density of the target f^{(j)} at the ith spatial location given the observations y_{i}, the signal-to-noise ratio α_{i} and the sensing matrix A, and p^{(j)} is the a priori probability of target class j. Note that determining these decision regions involves a sequence of nearest-neighbor calculations, so the computational complexity scales with the number of classes m. In this work, we operate under the assumption that m is much smaller than the dimensionality of the datasets we consider. For example, in spectral images, the number of objects (signal classes) that make up a scene of interest is often smaller than the number of voxels in the image. This assumption is not unrealistic and has been exploited in earlier work such as [22] and the references therein. In most of the prior work we have surveyed [46, 47], the number of signal classes is less than 35, so our approach remains tractable.
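A minimal sketch of this decision rule follows. The function name and data layout are hypothetical; the class-independent (K/2)log(1/(2π)) term is dropped since it does not affect the arg max.

```python
import numpy as np

def map_detect(y, alpha_i, A, dictionary, priors):
    """MAP decision rule sketch: return the index j maximizing
    -||y - alpha_i * A f^(j)||^2 / 2 + log p^(j)."""
    scores = [
        -0.5 * np.sum((y - alpha_i * A @ f) ** 2) + np.log(p)
        for f, p in zip(dictionary, priors)
    ]
    return int(np.argmax(scores))
```

The cost per location is O(mK), consistent with the observation that complexity scales with the number of classes m.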
We analyze this detector by extending the positive FDR (pFDR) error measure introduced by Storey to characterize the errors encountered in performing multiple, independent and nonidentical hypothesis tests simultaneously[48]. The pFDR, discussed formally below, is the fraction of falsely rejected null hypotheses among the total number of rejected null hypotheses, subject to the positivity condition that one rejects at least one null hypothesis. The pFDR is similar to the FDR except that the positivity condition is enforced here. In our context, the positivity condition means that we declare at least one signal to be a nontarget, which in turn implies that the scene of interest is composed of more than one object in the case of spectral imaging, or that the scene is not static in the case of video imaging.
is the total number of rejected null hypotheses, and ${\mathbb{1}}_{\left\{E\right\}}=1$ if event E is true and 0 otherwise. In our setup, the pFDR corresponds to the expected ratio of the number of missed targets to the number of signals declared to be nontargets, subject to the condition that at least one signal is declared to be a nontarget (note that this ratio is traditionally referred to as the positive false nondiscovery rate (pFNR), but is technically the pFDR in this context because of our definitions of the null and alternate hypotheses). The theorem below presents our main result:
Theorem 2
The proof of this theorem is detailed in Appendix 2. A key element of our proof is the adaptation of the techniques from[48] to nonidentical independent hypothesis tests.
An achievable bound on the worst-case pFDR
Theorem 2 in the preceding section shows that, for a given A, the worst-case pFDR is bounded from above by a function of the worst-case misclassification probability. In this section, we use this theorem to establish an achievable bound on the worst-case pFDR that explicitly depends on the number of observations K, signal strengths ${\left\{{\alpha}_{i}\right\}}_{i=1}^{M}$, similarity among different targets of interest, and a priori target probabilities.
Then we have the following theorem, whose proof is given in Appendix 3:
Theorem 3
(1) For a given N, the upper bound (13b) on λ_{max} increases as K increases, which implies that the system can tolerate more background perturbation if we collect more measurements.
(2) The pFDR bound (14) decays as K, d_{min} and α_{min} increase, and grows as p_{min} decreases. For fixed p_{max}, p_{min}, α_{min} and d_{min}, the bound in (14) enables one to choose a value of K that guarantees a desired pFDR value.
(3) The dominant part of the bound (14) is independent of N, and is only a function of K, p_{max}, p_{min}, α_{min}, and d_{min}. The lack of dependence on N is not unexpected. Indeed, when we are interested in preserving pairwise distances among the members of a fixed dictionary of size m, the Johnson–Lindenstrauss lemma [49] says that, with high probability, $K=\mathcal{O}\left(\text{log}\phantom{\rule{0.3em}{0ex}}m\right)$ random Gaussian projections suffice, regardless of the ambient dimension N. This is precisely the regime we are working with here.
(4) The bound on K given in (13c) increases logarithmically with the difference between p_{max} and p_{min}. This is to be expected, since one would need more measurements to detect a less probable target, as our decision rule weights each target by its a priori probability. If all targets are equally likely, then p_{max}=p_{min}=1/m, and $K=\mathcal{O}\left(\text{log}\phantom{\rule{0.3em}{0ex}}m\right)$ is sufficient provided ${\alpha}_{\mathrm{\text{min}}}^{2}{d}_{\mathrm{\text{min}}}^{2}$ is large enough that
$$\text{log}\left(1+\frac{{\alpha}_{\mathrm{\text{min}}}^{2}{d}_{\mathrm{\text{min}}}^{2}}{4K}\right)>\text{log}\left(1+\frac{{\alpha}_{\mathrm{\text{min}}}^{2}{d}_{\mathrm{\text{min}}}^{2}}{4N}\right)>1$$
(where the first inequality holds since K < N). In addition, the lower bound on K also illustrates the interplay between the signal strength of the targets, the similarity among different targets in $\mathcal{D}$, and the number of measurements collected. A small value of d_{min} suggests that the targets in $\mathcal{D}$ are very similar to each other, and thus α_{min} and K need to be large enough that similar targets can still be distinguished. The experimental results discussed in Section “Experimental results” illustrate the tightness of the theoretical results discussed here.
Inspection of the proof shows that if A is generated according to a Gaussian distribution, then the conditions of Theorem 3 will be met with high probability.
Extension to a manifoldbased target detection framework
The DSD problem formulation in Section “Problem formulation” is accurate if the signals in the dictionary are faithful representations of the target signals that we observe. In reality, however, the target signals will differ from the dictionary signals owing to differences in the experimental conditions under which they are collected. For instance, in spectral imaging applications, the observed spectrum of any material will not match the reference spectrum of the same material observed in a laboratory, because of differences in atmospheric and illumination conditions. To overcome this problem, one could form a large dictionary to account for such uncertainties in the target signals and perform target detection according to the approaches discussed in Sections “Whitening compressive observations” and “Dictionary signal detection”. A potential drawback of this approach is that our theoretical performance bound grows with the size of $\mathcal{D}$ through p_{min} and d_{min}. Instead, one can reasonably model the target signals observed under different experimental conditions as lying in a low-dimensional submanifold of the high-dimensional ambient signal space, as shown to be true for spectral images in [50]. We exploit this result to extend our analysis to a much broader framework that accounts for uncertainties in our dictionary.
(1) Given {y_{i}}, form a data-dependent dictionary ${\mathcal{D}}_{{\mathit{y}}_{i}}=\left\{{\stackrel{~}{\mathit{f}}}_{i}^{\left(1\right)},\dots ,{\stackrel{~}{\mathit{f}}}_{i}^{\left(m\right)}\right\}$ corresponding to each y_{i} by finding its nearest neighbor in each manifold:
$${\stackrel{~}{\mathit{f}}}_{i}^{\left(\ell \right)}={\text{arg}\text{max}}_{\mathit{f}\in {\mathcal{M}}^{\left(\ell \right)}}\mathbb{P}\left({\mathit{y}}_{i}\mid {\mathit{f}}_{i}^{\ast}=\mathit{f},{\alpha}_{i},\mathit{A}\right)$$
for ℓ∈{1,…,m} and i=1,…,M.
(2) Given $\left\{{\stackrel{~}{\mathit{y}}}_{i}\right\}$ and the corresponding $\left\{{\mathcal{D}}_{{\mathit{y}}_{i}}\right\}$, find
$${\hat{\mathit{f}}}_{i}={\text{arg}\text{max}}_{\stackrel{~}{\mathit{f}}\in {\mathcal{D}}_{{\mathit{y}}_{i}}}\mathbb{P}\left({\stackrel{~}{\mathit{y}}}_{i}\mid {\mathit{f}}_{i}^{\ast}=\stackrel{~}{\mathit{f}},{\alpha}_{i},\mathit{A}\right)$$
and declare that the ith observed spectrum corresponds to class j if ${\hat{\mathit{f}}}_{i}={\stackrel{~}{\mathit{f}}}_{i}^{\left(j\right)}$.
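The two steps above can be sketched as follows. For illustration only, each manifold $\mathcal{M}^{(\ell)}$ is approximated by a finite sample of signals (the paper does not prescribe this representation), and the function name is hypothetical.

```python
import numpy as np

def manifold_detect(y, y_fresh, alpha_i, A, manifolds):
    """Two-step sketch: (1) use y to pick the most likely representative from
    each manifold (maximum likelihood = minimum residual under Gaussian noise);
    (2) use the fresh observation y_fresh to pick the best class among those
    representatives."""
    # Step 1: nearest neighbor within each (sampled) manifold
    D_y = []
    for samples in manifolds:
        residuals = [np.sum((y - alpha_i * A @ f) ** 2) for f in samples]
        D_y.append(samples[int(np.argmin(residuals))])
    # Step 2: maximum-likelihood choice over the data-dependent dictionary
    scores = [-0.5 * np.sum((y_fresh - alpha_i * A @ f) ** 2) for f in D_y]
    return int(np.argmax(scores))
```

With dense manifold samples, step 1 approximates the continuous arg max over each $\mathcal{M}^{(\ell)}$, at a cost proportional to the total number of samples.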
Anomalous signal detection
The target detection approach discussed above assumes that the target signal of interest resides in a dictionary that is available to the user. However, in some applications (such as military applications and surveillance), one might be interested in detecting objects not in the dictionary. In other words, the target signals of interest are anomalous and are not available to the user. In this section, we show how the target detection methods discussed above can be extended to anomaly detection. In particular, we exploit the distance preservation property of the sensing matrix A to detect anomalous targets from projection measurements.
ASD problem formulation
where $\tau \in \left[0,\sqrt{2}\right)$ is a user-defined threshold that encapsulates our uncertainty about the accuracy with which we know the dictionary.^{a} In particular, τ controls how different a signal needs to be from every dictionary element to truly be considered anomalous. In the absence of any prior knowledge on the targets of interest, τ can simply be set to zero. The null hypothesis in this setting models the normal behavior, while the alternative hypothesis models the abnormal or anomalous behavior. This formulation is consistent with the literature [26, 38].
Note that the definition of the hypotheses given in (15a) and (15b) matches the definition in (5) for the special case where the dictionary contains just one signal. In this special case, the signal input f^{∗} is in the dictionary under the null hypothesis in both DSD and ASD problem formulations.^{b}
Anomaly detection approach
where ${\stackrel{~}{d}}_{i}={\text{min}}_{\mathit{f}\in \mathcal{D}}\parallel {\alpha}_{i}\mathit{A}\left({\mathit{f}}_{i}^{\ast}-\mathit{f}\right)+\mathit{n}\parallel$ and $\mathit{n}\sim \mathcal{N}\left(0,\mathit{I}\right)$ is independent of n_{i}. This is the probability, under the null hypothesis, of acquiring a test statistic at least as extreme as the one observed. Let us denote the ordered set of p-values by p_{(1)}≤p_{(2)}≤⋯≤p_{(M)}, and let ${\mathcal{H}}_{\left(0i\right)}$ be the null hypothesis corresponding to the (i)th p-value. The BH procedure says that if we reject all ${\mathcal{H}}_{\left(0i\right)}$ for i=1,…,t, where t is the largest i for which p_{(i)}≤iδ/M, then the FDR is controlled at level δ.
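The BH step-up procedure takes only a few lines; this is a sketch of the standard procedure, not code from the paper:

```python
def benjamini_hochberg(pvalues, delta):
    """BH step-up sketch: reject the nulls with the t smallest p-values,
    where t is the largest i such that p_(i) <= i * delta / M."""
    M = len(pvalues)
    order = sorted(range(M), key=lambda i: pvalues[i])  # indices by increasing p-value
    t = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank * delta / M:
            t = rank                                     # step-up: keep the largest passing rank
    rejected = [False] * M
    for idx in order[:t]:
        rejected[idx] = True
    return rejected

print(benjamini_hochberg([0.001, 0.8, 0.03, 0.2], 0.1))  # → [True, False, True, False]
```

Note the step-up character: a p-value that fails its own threshold is still rejected if some larger p-value passes its threshold.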
The existence of such projection operators is guaranteed by the celebrated Johnson–Lindenstrauss (JL) lemma [49], which says that there exist random constructions of A for which (18) holds with probability at least $1-2{\left|V\right|}^{2}{e}^{-Kc\left(\epsilon \right)}$ provided $K=\mathcal{O}\left(\text{log}\phantom{\rule{0.3em}{0ex}}\left|V\right|\right)\le N$, where c(ε)=ε^{2}/16−ε^{3}/48 [51, 52]. Examples of such constructions are: (a) Gaussian matrices whose entries are drawn from $\mathcal{N}(0,1/K)$, (b) Bernoulli matrices whose entries are $\pm 1/\sqrt{N}$ with probability 1/2, (c) random matrices whose entries are $\pm \sqrt{3/N}$ with probability 1/6 and zero with probability 2/3 [51, 52], and (d) matrices that satisfy the restricted isometry property (RIP) with the signs of the entries in each column randomized [53].
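The distance-preservation behavior is easy to check empirically. The sketch below uses the Gaussian construction (a) with illustrative dimensions (assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, n_pts = 500, 60, 20                        # illustrative dimensions

X = rng.standard_normal((n_pts, N))              # a finite set V of points
A = rng.standard_normal((K, N)) / np.sqrt(K)     # construction (a): entries N(0, 1/K)

# Ratio of projected to original distance for every pair; the JL lemma
# says these concentrate around 1.
ratios = [
    np.linalg.norm(A @ (X[i] - X[j])) / np.linalg.norm(X[i] - X[j])
    for i in range(n_pts) for j in range(i + 1, n_pts)
]
print(min(ratios), max(ratios))                  # both close to 1
```

Increasing K tightens the spread of the ratios around 1, at the cost of more measurements.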
for i=1,…,M, where ζ∈[0,1] is a measure of the accuracy of the estimation procedure.
Theorem 4
holds for all i=1,…,M, where $\mathcal{F}\left(\cdot ;K,\nu \right)$ is the CDF of a noncentral χ^{2} random variable with K degrees of freedom and noncentrality parameter ν [54].
The proof of this theorem is given in Appendix 4. We find the p-value upper bounds at every location and use the BH procedure to perform anomaly detection. The performance of this procedure depends on the values of K, {α_{i}}, τ and ε. The parameter ε is a measure of the accuracy with which the projection matrix A preserves the distances between any two vectors in ${\mathbb{R}}^{N}$. A value of ε close to zero implies that the distances are preserved fairly accurately. When {α_{i}} are unknown and estimated from the observations, the performance depends on the accuracy of the estimation procedure, which is reflected in our bounds in (20) through ζ.
for some absolute constants C,c > 0. This result shows that, with high probability, ${\parallel {\mathit{y}}_{i}\parallel}_{2}^{2}-K$ is nonnegative.
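The noncentral χ² CDF $\mathcal{F}(\cdot;K,\nu)$ used in the p-value bounds can be approximated by simple Monte Carlo when a statistics library is unavailable; this sketch is illustrative only and not part of the paper's method:

```python
import math
import random

def ncx2_cdf_mc(x, K, nu, n_samples=200_000, seed=0):
    """Monte Carlo sketch of the noncentral chi-square CDF F(x; K, nu):
    a draw is sum_{j=1}^{K} (z_j + mu_j)^2 with z_j ~ N(0,1) and
    sum_j mu_j^2 = nu (all noncentrality placed on one coordinate)."""
    rng = random.Random(seed)
    mu = math.sqrt(nu)
    hits = 0
    for _ in range(n_samples):
        s = (rng.gauss(0.0, 1.0) + mu) ** 2       # the one noncentral coordinate
        for _ in range(K - 1):
            s += rng.gauss(0.0, 1.0) ** 2         # remaining central coordinates
        hits += s <= x
    return hits / n_samples
```

Setting ν=0 recovers the central χ² CDF, and increasing ν shifts mass to the right, which lowers the CDF at any fixed point.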
The experimental results discussed in Section “Experimental results” demonstrate the performance of this detector as a function of K, {α_{i}} and τ when {α_{i}} are known and as a function of K, τ and ζ when {α_{i}} are estimated.
Experimental results
In the experiments that follow, the entries of A are drawn from $\mathcal{N}(0,1/K)$.
Dictionary signal detection
To test the effectiveness of our approach, we formed a dictionary $\mathcal{D}$ of nine spectra (corresponding to different kinds of trees, grass, water bodies and roads) obtained from a labeled HyMap (Hyperspectral Mapper) remote sensing data set [57], and simulated a realistic dataset using the spectra from this dictionary. Each HyMap spectrum is of length N=106. We generated projection measurements of these data such that ${\mathit{z}}_{i}={\alpha}_{i}\mathit{\Phi}({\mathit{f}}_{i}^{\ast}+{\mathit{b}}_{i})+{\mathit{w}}_{i}$ according to (1), where ${\mathit{w}}_{i}\sim \mathcal{N}(0,{\sigma}^{2}\mathit{I})$, ${\mathit{f}}_{i}^{\ast}\in \mathcal{D}$ for i=1,…,8100, ${\mathit{b}}_{i}\sim \mathcal{N}\left({\mathit{\mu}}_{\mathit{b}},{\mathit{\Sigma}}_{\mathit{b}}\right)$ such that Σ_{b} satisfies the condition in (4), and ${\alpha}_{i}={\alpha}_{i}^{\ast}\sqrt{K}$, where ${\alpha}_{i}^{\ast}\sim \mathcal{U}[21,25]$ and $\mathcal{U}$ denotes the uniform distribution. We let σ^{2}=5 and model {α_{i}} as proportional to $\sqrt{K}$ to account for the fact that the total observed signal energy increases as the number of detectors increases. We transform the z_{i} by a series of operations to arrive at a model of the form discussed in (2), namely ${\mathit{y}}_{i}={\alpha}_{i}\mathit{A}{\mathit{f}}_{i}^{\ast}+{\mathit{n}}_{i}$. For this dataset, p_{min}=0.04938, p_{max}=0.1481, and d_{min}=0.04341.
In the experiment that follows, we let ${\alpha}_{i}^{\ast}\sim \mathcal{U}[10,20]$, where $\mathcal{U}$ denotes the uniform distribution, set ${\alpha}_{i}=\sqrt{K}{\alpha}_{i}^{\ast}$, and evaluate the performance of our detector for different values of K that are not necessarily chosen to satisfy (13c). We also compare the performance of our detection method to that of a MAP-based target detector operating on downsampled versions of our simulated spectral input image. The purpose of this comparison is to show which kinds of measurements yield better results given a fixed number of detectors.
where $G={\stackrel{~}{\mathit{\Sigma}}}_{b}+{\sigma}^{2}\mathit{I}$, ${\stackrel{~}{\mathit{\Sigma}}}_{b}$ is the covariance matrix obtained from the downsampled versions of the background training data, and ${\stackrel{~}{\mathit{f}}}^{\left(\ell \right)}$ is the downsampled version of ${\mathit{f}}^{\left(\ell \right)}\in \mathcal{D}$. The algorithm declares that target spectrum ${\mathit{f}}^{\left(j\right)}\in \mathcal{D}$ is present at the i th location if ${D}_{i}^{\text{MAP}}=j$. To illustrate the advantages of using a Φ designed according to (24), we compare the performance of the proposed anomaly detector when Φ is a random Gaussian matrix whose entries are drawn from $\mathcal{N}\left(0,1/K\right)$ and when Φ is designed according to (24). Figure 1b compares the results obtained using projection measurements with Φ designed according to (24), with Φ chosen at random, and using the downsampled measurements under the AK case. These results show that the detection algorithm operating on projection measurements with Φ designed using background and sensor noise statistics yields significantly better results than the one operating on the downsampled data, and that the empirical pFDR values in our method decay with K. The improvement obtained from projection measurements comes from the distance-preservation property of the projection operator A. While a Gaussian sensing matrix A preserves the distances between any pair of vectors from a finite collection with high probability [51, 52], downsampling loses some of the fine differences between similar-looking spectra in the dictionary. Furthermore, when Φ is chosen at random, the resulting whitened transformation matrix is not necessarily distance-preserving, which can worsen performance, as illustrated in Figure 1b.
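The discriminant in (23) is not reproduced here, but the rule it implements is a standard Gaussian MAP classification. The following sketch shows one plausible form of such a rule; the helper name `map_detect` and the exact constants are our own assumptions, not the paper's implementation:

```python
import numpy as np

def map_detect(z, D_tilde, G, priors, alpha=1.0, mu_b=None):
    """Gaussian MAP classification sketch: pick the dictionary element
    whose Mahalanobis distance to the observation, penalized by the
    log-prior, is smallest."""
    if mu_b is None:
        mu_b = np.zeros(z.shape[0])
    Ginv = np.linalg.inv(G)              # G = Sigma_b_tilde + sigma^2 I
    scores = []
    for f, p in zip(D_tilde, priors):
        r = z - alpha * f - mu_b         # residual under hypothesis f
        scores.append(r @ Ginv @ r - 2.0 * np.log(p))
    return int(np.argmin(scores))

# Tiny example with two "downsampled spectra"
D_tilde = np.array([[1.0, 0.0], [0.0, 1.0]])
G = np.eye(2)
print(map_detect(np.array([0.9, 0.1]), D_tilde, G, priors=[0.5, 0.5]))
```

Note how the prior term matters: with a highly skewed prior, the rule can prefer a worse-fitting spectrum, which is why the bounds in the paper involve p_{min} and p_{max}.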
Anomaly detection
In this section, we evaluate the performance of our anomaly detection method on (a) a simulated dataset, comparing the results obtained using the proposed projection measurements with those obtained using downsampled measurements, and (b) a real AVIRIS (Airborne Visible/InfraRed Imaging Spectrometer) dataset.
Experiments on simulated data
We simulate a spectral image f^{∗} composed of 8100 spectra, each of which either is drawn from a dictionary $\mathcal{D}=\{{\mathit{f}}^{\left(1\right)},\cdots ,{\mathit{f}}^{\left(5\right)}\}$ consisting of five labeled spectra from the HyMap data that correspond to a natural landscape (trees, grass and lakes), or is anomalous. The anomalous spectrum is extracted from unlabeled AVIRIS data, and the minimum distance between the anomalous spectrum f^{(a)} and any of the spectra in $\mathcal{D}$ is ${d}_{\mathrm{\text{min}}}={\mathrm{\text{min}}}_{\mathit{f}\in \mathcal{D}}\parallel \mathit{f}-{\mathit{f}}^{\left(\mathrm{a}\right)}\parallel =0.5308$. The simulated data have 625 locations that contain the anomalous spectrum. Our goal is to find the spatial locations that contain the anomalous AVIRIS spectrum given noisy measurements of the form ${\mathit{z}}_{i}=\mathit{\Phi}\left({\alpha}_{i}{\mathit{f}}_{i}^{\ast}+{\mathit{b}}_{i}\right)+{\mathit{w}}_{i}$, where ${\mathit{b}}_{i}\sim \mathcal{N}\left({\mathit{\mu}}_{\mathit{b}},{\mathit{\Sigma}}_{\mathit{b}}\right)$, Φ is designed according to (24), ${\mathit{w}}_{i}\sim \mathcal{N}(0,{\sigma}^{2}\mathit{I})$, and ${\mathit{f}}_{i}^{\ast}\in \mathcal{D}$ under ${\mathcal{\mathscr{H}}}_{0i}$. As discussed in Section “Anomalous signal detection”, ${\mathit{f}}_{i}^{\ast}$ is anomalous under ${\mathcal{\mathscr{H}}}_{1i}$, and our goal is to control the FDR below a user-specified false discovery level δ. We simulate ${\alpha}_{i}=\sqrt{K}{\alpha}_{i}^{\ast}$ where ${\alpha}_{i}^{\ast}\sim \mathcal{U}[2,3]$. In this experiment we assume the availability of background training data to estimate the background statistics and the sensor noise variance σ^{2}. Given the knowledge of the background statistics, we perform the whitening transformation discussed in Section “Whitening compressive observations” and evaluate the detection performance on the preprocessed observations given by (2).
where p_{ t } is the p-value threshold such that the BH procedure rejects all null hypotheses for which p_{ i }≤p_{ t }, and the ground truth label ${L}_{i}^{\mathrm{\text{GT}}}=0$ if the i th spectrum is not anomalous, and 1 otherwise. In this experiment, we consider three values of K approximately given by K∈{N/6,N/3,N/2}, where N=106, and evaluate the performance of our detector for each K. Furthermore, in our experiments with simulated data, we declare a spectrum to be anomalous if d_{ i }≥η, where η is a user-specified threshold and d_{ i } is defined in (16). We use the p-value upper bound in (20) in our experiments with real data, where the ground truth is unknown.
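The threshold test d_{ i }≥η can be sketched as follows. Since (16) is not reproduced in this excerpt, the statistic below — the minimum distance between the strength-normalized observation and the projected dictionary elements — is an assumed plausible form, not necessarily the paper's exact definition:

```python
import numpy as np

def anomaly_stat(y, A, D, alpha):
    """Assumed form of the distance statistic d_i: the smallest distance
    between the strength-normalized observation and the projections of
    the dictionary elements."""
    return min(np.linalg.norm(y / alpha - A @ f) for f in D)

def detect_anomalies(Y, A, D, alphas, eta):
    """Declare location i anomalous when d_i >= eta."""
    return np.array([anomaly_stat(y, A, D, a) >= eta
                     for y, a in zip(Y, alphas)])

# Toy example: the second observation comes from an off-dictionary spectrum
A = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])       # K=2, N=3 "projection"
D = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
Y = np.array([2.0 * A @ D[0],                           # on-dictionary, d = 0
              2.0 * A @ np.array([0.0, 0.0, 1.0])])     # anomalous,     d = 1
print(detect_anomalies(Y, A, D, alphas=[2.0, 2.0], eta=0.5))
```

In the noisy setting, η trades off missed anomalies against false discoveries, which is why the paper replaces the hard threshold with p-value bounds and BH when ground truth is unavailable.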
Experiments on real AVIRIS data
We generate measurements of the form ${\mathit{y}}_{i}=\sqrt{K}{\mathit{g}}_{i}^{v}+{\mathit{n}}_{i}$ for i=1,…,128×128, where ${\mathit{n}}_{i}\sim \mathcal{N}(0,\mathit{I})$. The $\sqrt{K}$ factor indicates that the observed signal strength increases with K. For a fixed FDR control value of 0.01, Figure 3c,d shows the results obtained for K≈N/5 and K≈N/2, respectively. Figure 3e shows how the probability of error decays as a function of the number of measurements K. The results presented here are obtained by averaging over 1,000 different noise and sensing matrix realizations. From these results, we can see that the number of detected anomalies increases with K and the number of misclassifications decreases with K.
Conclusion
This work presents computationally efficient approaches for detecting known targets and anomalies of different strengths from projection measurements without performing a complete reconstruction of the underlying signals, and offers theoretical bounds on the worst-case target detector performance. This article treats each signal as independent of its spatial or temporal neighbors. This assumption is reasonable in many contexts, especially when the spatial or temporal resolution is low relative to the spatial homogeneity of the environment or the pace with which a scene changes. However, emerging technologies in computational optical systems continue to improve the resolution of spectral imagers. In future work we will build upon the methods discussed here to exploit the spatial or temporal correlations in the data.
Appendix 1: Proof of Theorem 1
where the third-to-last equation follows from the definition of B, and (25) follows from the fact that B is symmetric and positive definite. If B is positive definite, then B^{−1} is positive definite as well and can be decomposed as B^{−1}=(B^{−1/2})^{ T }B^{−1/2}, where the matrix square root B^{−1/2} is symmetric and positive definite. By substituting (25) and (24) in (3), we have C_{ Φ }Φ=σ^{−1}B^{1/2}σ B^{−1/2}A=A. A sufficient condition for B to be positive definite can be derived as follows.
since ∥A∥=∥A^{ T }∥ and ∥Σ_{ b }∥=λ_{max}, where λ_{max} is the largest eigenvalue of Σ_{ b }. To ensure ∥A Σ_{ b }A^{ T }∥ < 1, ∥A∥^{2}λ_{max} has to be < 1, which leads to the result of Theorem 1.
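Theorem 1's sufficient condition can be illustrated numerically. Assuming B = I − A Σ_{ b }A^{ T }, consistent with the requirement ∥A Σ_{ b }A^{ T }∥ < 1 above, the sketch below (with an arbitrary isotropic Σ_{ b }) checks that ∥A∥^{2}λ_{max} < 1 forces B to be positive definite, and that violating it can break positive definiteness:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 30, 10
A = rng.normal(0.0, np.sqrt(1.0 / K), size=(K, N))

# Scale the background covariance so that ||A||^2 * lambda_max = 0.9 < 1,
# satisfying the sufficient condition of Theorem 1.
op_norm = np.linalg.norm(A, 2)          # spectral norm of A
lam_max = 0.9 / op_norm**2              # largest eigenvalue of Sigma_b
Sigma_b = lam_max * np.eye(N)           # simple isotropic background (assumed)

B = np.eye(K) - A @ Sigma_b @ A.T
eigs = np.linalg.eigvalsh(B)
print(eigs.min() > 0)                   # True: B is positive definite

# Violating the condition (||A||^2 * lambda_max = 2) breaks definiteness:
Sigma_b_bad = (2.0 / op_norm**2) * np.eye(N)
B_bad = np.eye(K) - A @ Sigma_b_bad @ A.T
print(np.linalg.eigvalsh(B_bad).min() < 0)   # True: smallest eigenvalue is negative
```

Since A Σ_{ b }A^{ T } is positive semidefinite, B is positive definite exactly when ∥A Σ_{ b }A^{ T }∥ < 1, and ∥A∥^{2}λ_{max} < 1 is the convenient sufficient bound.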
Appendix 2: Proof of Theorem 2
where p_{max}=max_{ℓ∈{1,…,m}}p^{(ℓ)}.
Appendix 3: Proof of Theorem 3
The proof is via a random selection technique, similar to random coding arguments common in information theory. Specifically, we will draw a K×N sensing matrix A at random from a particular distribution and then show that, for ε, N, and K satisfying the conditions of the theorem, the probability that the conclusions of the theorem will fail to hold for this randomly chosen A is strictly smaller than unity. This will imply that the conclusions of the theorem must be true for at least one (deterministic) realization of A.
We begin by specifying all the relevant random variables:

- ${\mathit{f}}_{1}^{\ast},\dots ,{\mathit{f}}_{M}^{\ast}$ are i.i.d. random variables taking values in the dictionary $\mathcal{D}=\{{\mathit{f}}^{\left(1\right)},\dots ,{\mathit{f}}^{\left(m\right)}\}$ with probabilities ${p}^{\left(j\right)}=\text{Pr}\{{\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\},j\in \{1,\dots ,m\}$;
- ${\mathit{n}}_{1},\dots ,{\mathit{n}}_{M}\stackrel{\text{i.i.d.}}{\sim}\mathcal{N}(0,\mathit{I})$;
- G is a random K×N matrix with i.i.d. $\mathcal{N}(0,1)$ entries,
where α_{1},…,α_{ M } > 0 are the given signal strengths.
Next, we bound $\mathsf{P}\left({\mathcal{E}}_{2}\right)$. To that end, we use the following result, which is a straightforward extension of ([5], Theorem 1) to non-equiprobable dictionary elements:
Lemma 1 (Compressive classification error)
where the probability is taken with respect to the distributions underlying f^{∗}, A, and n.
Combining (36) and (37), we get (35).
where, for a given choice of A, (P_{e})_{max}(A) denotes the maximum probability of error defined in Theorem 2.
This proves the theorem for the case α_{1}=⋯=α_{ M }=α.
where ${\stackrel{~}{\mathit{n}}}_{i}=\frac{1}{{\alpha}_{i}}{\mathit{n}}_{i}\sim \mathcal{N}(0,\frac{1}{{\alpha}_{i}^{2}}\mathit{I})$. Secondly, from the fact that ${\alpha}_{i}\ge {\alpha}_{{i}^{\ast}}\equiv {\alpha}_{\mathrm{min}}$ for any i≠i^{∗}, it follows that ${\stackrel{~}{\mathit{n}}}_{{i}^{\ast}}$ is equal in distribution to ${\stackrel{~}{\mathit{n}}}_{i}+{\stackrel{~}{\mathit{n}}}_{i}^{\prime}$, where ${\stackrel{~}{\mathit{n}}}_{i}^{\prime}\sim \mathcal{N}\left(0,\left(\frac{1}{{\alpha}_{\mathrm{min}}^{2}}-\frac{1}{{\alpha}_{i}^{2}}\right)\mathit{I}\right)$ is independent of ${\stackrel{~}{\mathit{n}}}_{i}$. This implies that the i^{∗}th observation is the noisiest, and the corresponding MAP estimate ${\hat{\mathit{f}}}_{{i}^{\ast}}$ has the largest probability of error.
Appendix 4: Proof of Theorem 4
since $\parallel {\mathit{f}}_{i}^{\ast}-\mathit{f}\parallel \le \tau$ for all $\mathit{f}\in \mathcal{D}$ under ${\mathcal{\mathscr{H}}}_{0i}$.
Endnotes
^{a} Note that τ cannot exceed $\sqrt{2}$ because we assume that all targets of interest, including those in $\mathcal{D}$ and the actual target f^{∗}, are unit-norm.
^{b} The anomaly detection problem discussed here is more accurately described as target detection in the classical detection theory vocabulary. However, in recent works [24, 25], the authors assume that the nominal distribution is obtained from training data, and a test sample is declared to be anomalous if it falls outside of the nominal distribution learned from the training data. Our work is in a similar spirit, in that we learn our dictionary from training data and label any test spectrum that does not correspond to our dictionary as anomalous.
^{c} The authors would like to thank Prof. Roummel Marcia for fruitful discussions related to this point.
Declarations
Acknowledgements
This work was supported by NSF Award No. DMS-08-11062, DARPA Grant No. HR0011-09-1-0036, and AFRL Grant No. FA8650-07-D-1221.
Authors’ Affiliations
References
[1] Candès EJ, Tao T: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 2006, 52(12):5406-5425.
[2] Donoho D: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289-1306.
[3] Davenport M, Duarte M, Wakin M, Laska J, Takhar D, Kelly K, Baraniuk R: The smashed filter for compressive classification and target recognition. Proceedings of SPIE, vol. 6498 (San Jose, CA, 2007), pp. 142-153.
[4] Duarte MF, Davenport MA, Wakin MB, Baraniuk RG: Sparse signal detection from incoherent projections. IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3 (Toulouse, France, 2006), pp. 305-308.
[5] Haupt J, Castro R, Nowak R, Fudge G, Yeh A: Compressive sampling for signal classification. Fortieth Asilomar Conference on Signals, Systems and Computers, 2006, pp. 1430-1434.
[6] Aeron S, Saligrama V, Zhao M: Information theoretic bounds for compressed sensing. IEEE Trans. Inf. Theory 2010, 56(10):5111-5130.
[7] Arias-Castro E, Eldar Y: Noise folding in compressed sensing. IEEE Signal Process. Lett. 2011, 18:478-481.
[8] Han J, Bhanu B: Fusion of color and infrared video for moving human detection. Pattern Recogn. 2007, 40(6):1771-1784.
[9] Johnson W, Wilson D, Fink W, Humayun M, Bearman G: Snapshot hyperspectral imaging in ophthalmology. J. Biomed. Opt. 2007, 12(1):014036.
[10] Lin R, Dennis B, Benz A: The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI) - Mission Description and Early Results. (Kluwer Academic Publishers, Dordrecht, 2003).
[11] Martin M, Newman S, Aber J, Congalton R: Determining forest species composition using high spectral resolution remote sensing data. Remote Sens. Environ. 1998, 65(3):249-254.
[12] Martin M, Wabuyele M, Chen K, Kasili P, Panjehpour M, Phan M, Overholt B, Cunningham G, Wilson D, DeNovo R, Vo-Dinh T: Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection. Ann. Biomed. Eng. 2006, 34(6):1061-1068.
[13] Miller J, Elvidge C, Rock B, Freemantle J: An airborne perspective on vegetation phenology from the analysis of AVIRIS data sets over the Jasper Ridge biological preserve. Geoscience and Remote Sensing Symposium (IGARSS'90): Remote Sensing for the Nineties (College Park, MD, 20-24 May 1990), pp. 565-568.
[14] Stellman C, Hazel G, Bucholtz F, Michalowicz J, Stocker A, Schaaf W: Real-time hyperspectral detection and cuing. Opt. Eng. 2000, 39:1928-1935.
[15] Zuzak K, Naik S, Alexandrakis G, Hawkins D, Behbehani K, Livingston E: Intraoperative bile duct visualization using near-infrared hyperspectral video imaging. Am. J. Surg. 2008, 195(4):491-497.
[16] Brady D, Gehm M: Compressive imaging spectrometers using coded apertures. Proceedings of SPIE, vol. 6246 (Kissimmee, FL, 2006), pp. 62460A-1-62460A-9.
[17] DeVerse RA, Coifman RR, Coppi AC, Fateley WG, Geshwind F, Hammaker RM, Valenti S, Warner FJ, Davis GL: Application of spatial light modulators for new modalities in spectrometry and imaging. Spectral Imaging: Instrumentation, Applications, and Analysis II, vol. 4959, ed. by RM Levenson, GH Bearman, A Mahadevan-Jansen (2003), pp. 12-22.
[18] Gehm M, John R, Brady D, Willett R, Schulz T: Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 2007, 15(21):14013-14027.
[19] Takhar D, Laska J, Wakin MB, Duarte MF, Baron D, Sarvotham S, Kelly K, Baraniuk RG: A new compressive imaging camera architecture using optical-domain compression. Proc. IS&T/SPIE Symposium on Electronic Imaging (San Jose, CA, 2006), pp. 43-52.
[20] Wagadarikar A, John R, Willett R, Brady D: Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt. 2008, 47(10):B44-B51.
[21] Woolfe F, Maggioni M, Davis G, Warner F, Coifman R, Zucker S: Hyperspectral microscopic discrimination between normal and cancerous colon biopsies. Manuscript (2006).
[22] Manolakis D, Marden D, Shaw G: Hyperspectral image processing for automatic target detection applications. Lincoln Laboratory J. 2003, 14(1):79-116.
[23] Wei G, Agnihotri L, Dimitrova N: TV program classification based on face and text processing. 2000 IEEE International Conference on Multimedia and Expo (ICME 2000), vol. 3 (2000), pp. 1345-1348.
[24] Hero AO: Geometric entropy minimization (GEM) for anomaly detection and localization. Proc. Advances in Neural Information Processing Systems (NIPS) (MIT Press, Vancouver, Canada, 2006), pp. 585-592.
[25] Steinwart I, Hush D, Scovel C: A classification framework for anomaly detection. J. Mach. Learn. Res. 2005, 6:211-232.
[26] Stein D, Beaven S, Hoff L, Winter E, Schaum A, Stocker A: Anomaly detection from hyperspectral imagery. IEEE Signal Process. Mag. 2002, 19(1):58-69.
[27] Manolakis D, Shaw G: Detection algorithms for hyperspectral imaging applications. IEEE Signal Process. Mag. 2002, 19(1):29-43.
[28] Berger JO: Statistical Decision Theory and Bayesian Analysis. (Springer, New York, 1985).
[29] Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B (Methodological) 1995, 57(1):289-300.
[30] Jin X, Paswaters S, Cline H: A comparative study of target detection algorithms for hyperspectral imagery. Proceedings of SPIE, vol. 7334 (2009), p. 73341W.
[31] Kelly E: An adaptive detection algorithm. IEEE Trans. Aerosp. Electron. Syst. 1986, AES-22(2):115-127.
[32] Kraut S, Scharf L, McWhorter L: Adaptive subspace detectors. IEEE Trans. Signal Process. 2001, 49(1):1-16.
[33] Scharf L, Friedlander B: Matched subspace detectors. IEEE Trans. Signal Process. 1994, 42(8):2146-2157.
[34] Kwon H, Nasrabadi N: Kernel matched subspace detectors for hyperspectral target detection. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28(2):178-194.
[35] Scharf LL, McWhorter LT: Adaptive matched subspace detectors and adaptive coherence estimators. Conference Record of the Thirtieth Asilomar Conference on Signals, Systems and Computers (Pacific Grove, CA, 1996), pp. 1114-1117.
[36] Parmar M, Lansel S, Wandell B: Spatio-spectral reconstruction of the multispectral datacube using sparse recovery. 15th IEEE International Conference on Image Processing (San Diego, CA, 2008), pp. 473-476.
[37] Willett R, Gehm M, Brady D: Multiscale reconstruction for computational spectral imaging. Computational Imaging V, Proceedings of SPIE, vol. 6498 (2007), pp. 64980L-1-64980L-15.
[38] Fowler J, Du Q: Anomaly detection and reconstruction from random projections. IEEE Trans. Image Process. 2012, 21(1):184-195.
[39] Reed I, Yu X: Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Acoust. Speech Signal Process. 1990, 38(10):1760-1770.
[40] Krishnamurthy K, Raginsky M, Willett R: Hyperspectral target detection from incoherent projections. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (Dallas, TX, 2010), pp. 3550-3553.
[41] Krishnamurthy K, Raginsky M, Willett R: Hyperspectral target detection from incoherent projections: nonequiprobable targets and inhomogeneous SNR. 17th IEEE International Conference on Image Processing (ICIP) (Hong Kong, 2010), pp. 1357-1360.
[42] Boardman JW: Spectral Angle Mapping: A Rapid Measure of Spectral Similarity. (AVIRIS, 1993).
[43] Guo Z, Osher S: Template matching via L1 minimization and its application to hyperspectral data. Accepted to Inverse Problems and Imaging (IPI), 2009.
[44] Kwon H, Nasrabadi N: Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43(2):388-397.
[45] Szlam A, Guo Z, Osher S: A split Bregman method for non-negative sparsity penalized least squares with applications to hyperspectral demixing. 17th IEEE International Conference on Image Processing (ICIP) (Hong Kong, 2010), pp. 1917-1920.
[46] Chang C: Virtual dimensionality for hyperspectral imagery. SPIE Newsroom 2009, doi:10.1117/2.1200909.1749.
[47] Chang C, Du Q: Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2004, 42(3):608-619.
[48] Storey J: The positive false discovery rate: a Bayesian interpretation and the q-value. Ann. Stat. 2003, 31(6):2013-2035.
[49] Johnson W, Lindenstrauss J: Extensions of Lipschitz maps into a Hilbert space. Contemp. Math. 1984, 26:189-206.
[50] Healey G, Slater D: Models and methods for automated material identification in hyperspectral imagery acquired under unknown illumination and atmospheric conditions. IEEE Trans. Geosci. Remote Sens. 1999, 37(6):2706-2717.
[51] Achlioptas D: Database-friendly random projections. Proc. 20th ACM Symposium on Principles of Database Systems (ACM Press, New York, NY, 2001), pp. 274-281.
[52] Baraniuk R, Davenport M, DeVore R, Wakin M: A simple proof of the restricted isometry property for random matrices. Constructive Approx. 2008, 28(3):253-263.
[53] Krahmer F, Ward R: New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property. SIAM J. Math. Anal. 2011, 43(3):1269-1281.
[54] Wasserman L: All of Statistics: A Concise Course in Statistical Inference. (Springer, New York, NY, 2004).
[55] Tao T, Vu V: On random ±1 matrices: singularity and determinant. Random Struct. Algor. 2006, 28(1):1-23.
[56] Tao T: Talagrand's concentration inequality. http://terrytao.wordpress.com/2009/06/09/talagrands-concentration-inequality/. Accessed 08/03/2012.
[57] Kruse FA, Boardman JW, Lefkoff AB, Young JM, Kierein-Young KS, Cocks TD, Jensen R, Cocks PA: HyMap: an Australian hyperspectral sensor solving global problems - results from USA HyMap data acquisitions. Proc. 10th Australasian Remote Sensing and Photogrammetry Conference (Adelaide, Australia, 2000), pp. 18-23.
[58] Davidson KR, Szarek SJ: Local operator theory, random matrices and Banach spaces. (North-Holland, Amsterdam, 2001), pp. 317-366.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License(http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.