
Near optimal bound of orthogonal matching pursuit using restricted isometric constant

Abstract

As a paradigm for reconstructing sparse signals from a set of undersampled measurements, compressed sensing has received much attention in recent years. In identifying the sufficient condition under which perfect recovery of sparse signals is ensured, a property of the sensing matrix referred to as the restricted isometry property (RIP) is popularly employed. In this article, we propose an RIP-based bound for the orthogonal matching pursuit (OMP) algorithm guaranteeing the exact reconstruction of sparse signals. Our proof is built on the observation that each general step of the OMP process is in essence the same as the initial step, in the sense that the residual can be considered as a new measurement preserving the sparsity level of the input vector. Our main conclusion is that if the restricted isometry constant $\delta_K$ of the sensing matrix satisfies

$$\delta_K < \frac{\sqrt{K-1}}{\sqrt{K-1} + K},$$

then the OMP algorithm can perfectly recover $K$-sparse signals ($K > 1$) from the measurements. We show that our bound is sharp and indeed close to the limit conjectured by Dai and Milenkovic.

1 Introduction

As a paradigm for acquiring sparse signals at a rate significantly below the Nyquist rate, compressive sensing has received much attention in recent years [1-17]. The goal of compressive sensing is to recover a sparse vector from a small number of linearly transformed measurements. The process of acquiring compressed measurements is referred to as sensing, while that of recovering the original sparse signals from the compressed measurements is called reconstruction.

In the sensing operation, a $K$-sparse signal vector $x$, i.e., an $n$-dimensional vector having at most $K$ nonzero elements, is transformed into an $m$-dimensional vector of measurements $y$ via multiplication with a matrix $\Phi$. This process is expressed as

$$y = \Phi x. \tag{1}$$

Since $n > m$ in most compressive sensing scenarios, the system in (1) is an underdetermined system having more unknowns than observations, and hence one cannot accurately solve this inverse problem in general. However, owing to the prior knowledge of sparsity, one can reconstruct $x$ perfectly via properly designed reconstruction algorithms. Commonly used reconstruction strategies in the literature fall into two categories. The first class comprises linear programming (LP) techniques, including $\ell_1$-minimization and its variants. Donoho [10] and Candès [13] showed that accurate recovery of the sparse vector $x$ from the measurements $y$ is possible using the $\ell_1$-minimization technique if the sensing matrix $\Phi$ satisfies the restricted isometry property (RIP) with an appropriate restricted isometry constant. For each positive integer $K$, the restricted isometry constant $\delta_K$ of a matrix $\Phi$ is defined as the smallest number satisfying

$$(1 - \delta_K) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_K) \|x\|_2^2 \tag{2}$$

for all $K$-sparse vectors $x$. It has been shown that if $\delta_{2K} < \sqrt{2} - 1$ [13], $\ell_1$-minimization is guaranteed to recover $K$-sparse signals exactly.

The second class comprises greedy search algorithms that identify the support (the positions of the nonzero elements) of the sparse signal sequentially. In each iteration of these algorithms, the correlations between the columns of $\Phi$ and the modified measurement (residual) are compared, and the index (or indices) of the one or multiple columns most strongly correlated with the residual is added to the support estimate. In general, the computational complexity of greedy algorithms is much smaller than that of the LP-based techniques, in particular for highly sparse signals (signals with small $K$). Algorithms in this category include orthogonal matching pursuit (OMP) [1], regularized OMP (ROMP) [18], stagewise OMP (DL Donoho, I Drori, Y Tsaig, JL Starck: Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit, submitted), and compressive sampling matching pursuit (CoSaMP) [16].

As a canonical method in this family, the OMP algorithm has received special attention due to its simplicity and competitive reconstruction performance. As shown in Table 1, the OMP algorithm performs support identification followed by a residual update in each iteration, and these operations are usually repeated $K$ times. It has been shown that the OMP algorithm is robust in recovering both sparse and near-sparse signals [13] with $O(nmK)$ complexity [1]. Over the years, many efforts have been made to find the condition (an upper bound on the restricted isometry constant) guaranteeing the exact recovery of sparse signals. For example, $\delta_{3K} < 0.165$ suffices for the subspace pursuit [19], $\delta_{4K} < 0.1$ for the CoSaMP [16], and $\delta_{4K} < 0.01/\sqrt{\log K}$ for the ROMP [18]. The condition for the OMP is given by [20]

$$\delta_{K+1} < \frac{1}{3\sqrt{K}}. \tag{3}$$

Recently, improved conditions for the OMP have been reported, including $\delta_{K+1} < 1/\sqrt{2K}$ [21] and $\delta_{K+1} < 1/(\sqrt{K} + 1)$ (J Wang, B Shim: On the recovery limit of orthogonal matching pursuit using restricted isometric property, submitted).
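To make the procedure concrete, here is a minimal sketch of the OMP iteration in Python with NumPy. It is our own illustration rather than code from the paper; the function name `omp` and the fixed budget of exactly $K$ iterations follow the description of Table 1.

```python
import numpy as np

def omp(y, Phi, K):
    """Sketch of orthogonal matching pursuit: recover a K-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    support = []                  # estimated support T^k
    r = y.copy()                  # residual r^0 = y
    for _ in range(K):
        # Identify: column most strongly correlated with the current residual.
        t = int(np.argmax(np.abs(Phi.T @ r)))
        support.append(t)
        # Estimate: least-squares fit of y on the enlarged support.
        x_hat, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Update: residual is y minus its projection onto span(Phi_support).
        r = y - Phi[:, support] @ x_hat
    x = np.zeros(n)
    x[support] = x_hat
    return x, sorted(support)
```

Each iteration costs one correlation $\Phi^T r$ plus a small least-squares solve, which is the source of the $O(nmK)$ complexity quoted above.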

The primary goal of this article is to provide an improved condition ensuring the exact recovery of $K$-sparse signals by the OMP algorithm. While previously proposed recovery conditions are expressed in terms of $\delta_{K+1}$ [20, 21], our result, formally described in Theorem 1.1, is expressed in terms of the restricted isometry constant $\delta_K$ of order $K$, which is perhaps the most natural and simple to interpret. For instance, our result together with the Johnson-Lindenstrauss lemma [22] can be used to estimate the compression ratio (i.e., the minimal number of measurements $m$ ensuring perfect recovery) when the elements of $\Phi$ are chosen randomly [17]. Besides, we show that our result is sharp in the sense that the condition is close to the limit of the OMP algorithm conjectured by Dai and Milenkovic [19], in particular when $K$ is large. Our result is formally described in the following theorem.

Theorem 1.1 (Bound of restricted isometry constant). Suppose $x \in \mathbb{R}^n$ is a $K$-sparse signal ($K > 1$); then the OMP algorithm recovers $x$ from $y = \Phi x \in \mathbb{R}^m$ if

$$\delta_K < \frac{\sqrt{K-1}}{\sqrt{K-1} + K}. \tag{4}$$

Loosely speaking, since $K/\sqrt{K-1} \approx \sqrt{K}$ for $K \gg 1$, the proposed condition approximates to $\delta_K < 1/(1 + \sqrt{K})$ for large $K$. To get an idea of how close the proposed bound is to the limit conjectured by Dai and Milenkovic, $\delta_{K+1} = 1/\sqrt{K}$, we plot the bound as a function of the sparsity level $K$ in Figure 1. Although the proposed bound is expressed in terms of $\delta_K$ while (3) and the limit of Dai and Milenkovic are expressed in terms of $\delta_{K+1}$, so that the comparison is slightly unfavorable for the proposed bound, we still see that the proposed bound is fairly close to the limit for large $K$ (see endnote a).

Figure 1. Bounds of the restricted isometry constant. Note that the proposed bound is expressed in terms of $\delta_K$ while (3) and the limit of Dai and Milenkovic are expressed in terms of $\delta_{K+1}$.
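As a quick numerical companion to Figure 1, the sketch below tabulates the proposed bound (4), the bound (3) of [20], and the conjectured limit of Dai and Milenkovic for a few sparsity levels; as in the figure, the comparison mixes bounds on $\delta_K$ and $\delta_{K+1}$.

```python
import numpy as np

for K in [2, 4, 8, 16, 32, 64]:
    proposed = np.sqrt(K - 1) / (np.sqrt(K - 1) + K)   # bound (4), on delta_K
    davenport_wakin = 1 / (3 * np.sqrt(K))             # bound (3), on delta_{K+1}
    dai_milenkovic = 1 / np.sqrt(K)                    # conjectured limit, on delta_{K+1}
    print(f"K={K:3d}  (4)={proposed:.4f}  (3)={davenport_wakin:.4f}  "
          f"limit={dai_milenkovic:.4f}")
```

For large $K$ the ratio of the proposed bound to the conjectured limit approaches one, which is the sharpness claim made above.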

As mentioned, another interesting result we can deduce from Theorem 1.1 is an estimate of the maximal compression ratio when a Gaussian random matrix is employed in the sensing process. Note that directly verifying the condition $\delta_K < \epsilon$ for a given sensing matrix $\Phi$ is impractical, especially when $n$ is large and $K$ is nontrivial, since the extremal singular values of $\binom{n}{K}$ sub-matrices need to be tested.
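To see why, consider the following brute-force computation of $\delta_K$ (an illustrative sketch of ours): it scans all $\binom{n}{K}$ column sub-matrices and records the worst deviation of the squared extremal singular values from one, which is feasible only for tiny $n$ and $K$.

```python
import numpy as np
from itertools import combinations

def delta_K(Phi, K):
    """Restricted isometry constant of order K by exhaustive search (tiny cases only)."""
    n = Phi.shape[1]
    delta = 0.0
    for idx in combinations(range(n), K):
        s = np.linalg.svd(Phi[:, idx], compute_uv=False)
        # (2) holds iff every squared singular value lies in [1 - delta, 1 + delta].
        delta = max(delta, abs(s[0]**2 - 1), abs(s[-1]**2 - 1))
    return delta

rng = np.random.default_rng(0)
m, n, K = 20, 30, 3
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # i.i.d. N(0, 1/m) entries
print(delta_K(Phi, K))                        # already 4060 sub-matrices for these sizes
```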

As an alternative, a condition derived from the Johnson-Lindenstrauss lemma has been popularly considered. In fact, it is now well known that $m \times n$ random matrices with i.i.d. entries from the Gaussian distribution $\mathcal{N}(0, 1/m)$ obey the RIP with $\delta_K \le \epsilon$ with overwhelming probability if the dimension of the measurements satisfies [17]

$$m \ge C \, \frac{K \log(n/K)}{\epsilon^2}, \tag{5}$$

where $C$ is a constant. By applying the result in Theorem 1.1, we can obtain the minimum dimension $m$ ensuring exact reconstruction of $K$-sparse signals using the OMP algorithm. Specifically, plugging $\epsilon = \sqrt{K-1}/(\sqrt{K-1} + K)$ into (5), we get

$$m \ge C \left( \frac{K^3}{K-1} + \frac{2K^2}{\sqrt{K-1}} + K \right) \log \frac{n}{K}. \tag{6}$$

This result, in which $m$ is on the order of $O(K^2 \log(n/K))$, is desirable, since the number of measurements $m$ grows moderately with the sparsity level $K$.
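A small helper of ours evaluates the right-hand side of (6); since (5) only fixes the universal constant up to an unknown value, the default C = 1 below is an arbitrary placeholder.

```python
import numpy as np

def min_measurements(n, K, C=1.0):
    """Right-hand side of (6): measurements sufficient for OMP recovery via Theorem 1.1."""
    return C * (K**3 / (K - 1) + 2 * K**2 / np.sqrt(K - 1) + K) * np.log(n / K)

# m grows roughly like K^2 log(n/K):
for K in [2, 4, 8, 16]:
    print(K, round(min_measurements(n=10_000, K=K)))
```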

2 Proof of Theorem 1.1

2.1 Notations

We now provide a brief summary of the notations used throughout the article.

  • $T = \operatorname{supp}(x) = \{ i \mid x_i \ne 0 \}$ is the set of non-zero positions in $x$.

  • $|D|$ is the cardinality of $D$.

  • $T \setminus D$ is the set of all elements contained in $T$ but not in $D$.

  • $\Phi_D \in \mathbb{R}^{m \times |D|}$ is the submatrix of $\Phi$ that contains only the columns indexed by $D$.

  • $x_D \in \mathbb{R}^{|D|}$ is the restriction of the vector $x$ to the elements indexed by $D$.

  • $\operatorname{span}(\Phi_D)$ is the span (range) of the columns of $\Phi_D$.

  • $\Phi_D^T$ denotes the transpose of the matrix $\Phi_D$.

  • $\Phi_D^\dagger = (\Phi_D^T \Phi_D)^{-1} \Phi_D^T$ is the pseudoinverse of $\Phi_D$.

  • $P_D = \Phi_D \Phi_D^\dagger$ denotes the orthogonal projection onto $\operatorname{span}(\Phi_D)$.

  • $P_D^\perp = I - P_D$ is the orthogonal projection onto the orthogonal complement of $\operatorname{span}(\Phi_D)$.

2.2 Preliminaries: definitions and lemmas

In this subsection, we provide a definition and several lemmas that are used in the proof of Theorem 1.1.

Definition 1 (Restricted orthogonality constant [23]). For two positive integers $K$ and $K'$ with $K + K' \le n$, the $(K, K')$-restricted orthogonality constant $\theta_{K,K'}$ is the smallest number that satisfies

$$|\langle \Phi x, \Phi x' \rangle| \le \theta_{K,K'} \|x\|_2 \|x'\|_2 \tag{7}$$

for all $x$ and $x'$ such that $x$ is $K$-sparse, $x'$ is $K'$-sparse, and their supports are disjoint.

Lemma 2.1. In the OMP algorithm, the residual $r^k$ is orthogonal to the columns selected in previous iterations. That is,

$$\langle \varphi_i, r^k \rangle = 0 \tag{8}$$

for $i \in T^k$.
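Lemma 2.1 is the standard orthogonality property of the least-squares residual; a short numerical sanity check (a sketch of ours with randomly drawn data and an arbitrary index set) confirms it:

```python
import numpy as np

rng = np.random.default_rng(1)
Phi, y = rng.standard_normal((50, 100)), rng.standard_normal(50)
Tk = [3, 17, 42]                             # indices selected so far
x_hat, *_ = np.linalg.lstsq(Phi[:, Tk], y, rcond=None)
r = y - Phi[:, Tk] @ x_hat                   # residual after the estimate step
print(np.abs(Phi[:, Tk].T @ r).max())        # ~1e-14: <phi_i, r> = 0 for i in Tk
```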

Lemma 2.2 (Monotonicity of $\delta_K$ [19]). If the sensing matrix satisfies the RIP of orders $K_1$ and $K_2$ with $K_1 \le K_2$, then

$$\delta_{K_1} \le \delta_{K_2}.$$

This property is referred to as the monotonicity of the restricted isometry constant.

Lemma 2.3 (A direct consequence of the RIP [19]). Let $I \subset \{1, 2, \ldots, n\}$ and let $\Phi_I$ be the sub-matrix of $\Phi$ that contains the columns of $\Phi$ indexed by $I$. If $\delta_{|I|} < 1$, then for any $u \in \mathbb{R}^{|I|}$,

$$(1 - \delta_{|I|}) \|u\|_2 \le \|\Phi_I^T \Phi_I u\|_2 \le (1 + \delta_{|I|}) \|u\|_2.$$

Lemma 2.4 (Square root lifting inequality [23]). For $\alpha \ge 1$ and positive integers $K$, $K'$ such that $\alpha K'$ is also an integer, we have

$$\theta_{K, \alpha K'} \le \sqrt{\alpha} \, \theta_{K, K'}. \tag{9}$$

Lemma 2.5 (Lemma 2.1 in [13]). For all $x, x' \in \mathbb{R}^n$ supported on disjoint subsets $I_1, I_2 \subset \{1, 2, \ldots, n\}$, we have

$$|\langle \Phi x, \Phi x' \rangle| \le \delta_{|I_1| + |I_2|} \|x\|_2 \|x'\|_2.$$

Lemma 2.6. For two disjoint sets $I_1, I_2 \subset \{1, 2, \ldots, n\}$, let $\theta_{|I_1|,|I_2|}$ be the $(|I_1|, |I_2|)$-restricted orthogonality constant of $\Phi$. If $|I_1| + |I_2| \le n$, then

$$\|\Phi_{I_1}^T \Phi_{I_2} x_{I_2}\|_2 \le \theta_{|I_1|,|I_2|} \|x\|_2. \tag{10}$$

Proof. Let $u \in \mathbb{R}^{|I_1|}$ be a unit vector. Then we have

$$\max_{u : \|u\|_2 = 1} \langle u, \Phi_{I_1}^T \Phi_{I_2} x_{I_2} \rangle = \|\Phi_{I_1}^T \Phi_{I_2} x_{I_2}\|_2, \tag{11}$$

where the maximum of the inner product is achieved when $u$ points in the direction of $\Phi_{I_1}^T \Phi_{I_2} x_{I_2}$, i.e., $u = \Phi_{I_1}^T \Phi_{I_2} x_{I_2} / \|\Phi_{I_1}^T \Phi_{I_2} x_{I_2}\|_2$. Moreover, from Definition 1, we have

$$\langle u, \Phi_{I_1}^T \Phi_{I_2} x_{I_2} \rangle = \langle \Phi_{I_1} u, \Phi_{I_2} x_{I_2} \rangle \le \theta_{|I_1|,|I_2|} \|u\|_2 \|x\|_2 = \theta_{|I_1|,|I_2|} \|x\|_2, \tag{12}$$

and thus

$$\|\Phi_{I_1}^T \Phi_{I_2} x_{I_2}\|_2 \le \theta_{|I_1|,|I_2|} \|x\|_2.$$

Lemma 2.7. For two disjoint sets $I_1, I_2$ with $|I_1| + |I_2| \le n$, we have

$$\theta_{|I_1|,|I_2|} \le \delta_{|I_1| + |I_2|}. \tag{13}$$

Proof. From Lemma 2.5 we directly have

$$|\langle \Phi_{I_1} x_{I_1}, \Phi_{I_2} x_{I_2} \rangle| \le \delta_{|I_1| + |I_2|} \|x_{I_1}\|_2 \|x_{I_2}\|_2. \tag{14}$$

By Definition 1, $\theta_{|I_1|,|I_2|}$ is the minimal value satisfying

$$|\langle \Phi_{I_1} x_{I_1}, \Phi_{I_2} x_{I_2} \rangle| \le \theta_{|I_1|,|I_2|} \|x_{I_1}\|_2 \|x_{I_2}\|_2, \tag{15}$$

and this completes the proof of the lemma.

2.3 Proof of Theorem 1.1

Now we turn to the proof of our main theorem. Our proof is in essence based on mathematical induction: first, we show that the index $t^1$ found in the first iteration is correct ($t^1 \in T$) under (4), and then we show that $t^{k+1}$ is also correct (more precisely, if $T^k = \{t^1, t^2, \ldots, t^k\} \subseteq T$ then $t^{k+1} \in T \setminus T^k$) under (4).

Proof. Let $t^k$ be the index of the column maximally correlated with the residual $r^{k-1}$ in the $k$-th iteration of the OMP algorithm. Since $r^0 = y$, for $k = 1$ the index $t^1$ can be expressed as

$$t^1 = \arg\max_i |\langle \varphi_i, y \rangle| \tag{16}$$

and also

$$|\langle \varphi_{t^1}, y \rangle| = \max_i |\langle \varphi_i, y \rangle| \tag{17}$$
$$\ge \frac{1}{\sqrt{|T|}} \Big( \sum_{j \in T} |\langle \varphi_j, y \rangle|^2 \Big)^{1/2} \tag{18}$$
$$= \frac{1}{\sqrt{K}} \|\Phi_T^T y\|_2, \tag{19}$$

where (18) holds because the maximum over all columns is no smaller than the root mean square over the columns indexed by $T$, and (19) uses the fact that $|T| = K$ ($x$ is $K$-sparse and supported on $T$). Since $y = \Phi_T x_T$, we have

$$|\langle \varphi_{t^1}, y \rangle| \ge \frac{1}{\sqrt{K}} \|\Phi_T^T \Phi_T x_T\|_2 \tag{20}$$
$$\ge \frac{1}{\sqrt{K}} (1 - \delta_K) \|x_T\|_2, \tag{21}$$

where (21) follows from Lemma 2.3.

Now, suppose that $t^1$ does not belong to the support of $x$ (i.e., $t^1 \notin T$); then

$$|\langle \varphi_{t^1}, y \rangle| = \|\varphi_{t^1}^T \Phi_T x_T\|_2 \tag{22}$$
$$\le \theta_{1,K} \|x_T\|_2, \tag{23}$$

where (23) follows from Lemma 2.6. This case, however, can never occur if

$$\frac{1}{\sqrt{K}} (1 - \delta_K) \|x_T\|_2 > \theta_{1,K} \|x_T\|_2 \tag{24}$$

or, equivalently,

$$\sqrt{K} \, \theta_{1,K} + \delta_K < 1. \tag{25}$$

Let $\alpha = K/(K-1)$; then $\alpha(K-1) = K$ is an integer and

$$\theta_{1,K} = \theta_{1,\alpha(K-1)} \tag{26}$$
$$\le \sqrt{\alpha} \, \theta_{1,K-1} \tag{27}$$
$$\le \sqrt{\frac{K}{K-1}} \, \delta_K, \tag{28}$$

where (27) and (28) follow from Lemmas 2.4 and 2.7, respectively (by Lemma 2.7, $\theta_{1,K-1} \le \delta_{1+(K-1)} = \delta_K$). Thus, (25) holds true when

$$\sqrt{K} \sqrt{\frac{K}{K-1}} \, \delta_K + \delta_K < 1,$$

which yields

$$\delta_K < \frac{\sqrt{K-1}}{\sqrt{K-1} + K}. \tag{29}$$

In summary, if $\delta_K < \sqrt{K-1}/(\sqrt{K-1} + K)$, then $t^1 \in T$ in the first iteration of the OMP algorithm. Now assume that the first $k$ iterations are successful ($T^k = \{t^1, t^2, \ldots, t^k\} \subseteq T$) for $1 \le k \le K - 1$. Then it suffices to show that $t^{k+1}$ is in $T$ but not in $T^k$ (i.e., $t^{k+1} \in T \setminus T^k$). Recall from Table 1 that the residual at the $k$-th iteration of the OMP is expressed as

$$r^k = y - \Phi_{T^k} \hat{x}_{T^k}. \tag{30}$$

Table 1. The OMP algorithm.

Input: measurements $y$, sensing matrix $\Phi$, sparsity level $K$
Initialize: iteration counter $k = 0$, residual $r^0 = y$, support estimate $T^0 = \emptyset$
While $k < K$:
  $k = k + 1$
  (Identify) $t^k = \arg\max_i |\langle \varphi_i, r^{k-1} \rangle|$
  (Augment) $T^k = T^{k-1} \cup \{t^k\}$
  (Estimate) $\hat{x}_{T^k} = \arg\min_x \|y - \Phi_{T^k} x\|_2$
  (Update) $r^k = y - \Phi_{T^k} \hat{x}_{T^k}$
Output: $\hat{x}$ with $\hat{x}_{T^K}$ on $T^K$ and zeros elsewhere

Since $y = \Phi_T x_T$ and $\Phi_{T^k}$ is a submatrix of $\Phi_T$, we have $r^k \in \operatorname{span}(\Phi_T)$, and hence $r^k$ can be expressed as a linear combination of the $|T| (= K)$ columns of $\Phi_T$. Accordingly, we can write $r^k = \Phi x^k$, where the support (the set of indices of nonzero elements) of $x^k$ is contained in the support of $x$. That is, $r^k$ is a measurement of the $K$-sparse signal $x^k$ taken with the same sensing matrix $\Phi$.
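This observation is easy to verify numerically. In the sketch below (our own construction, with an assumed Gaussian $\Phi$ and a planted support), the residual after $k$ correct iterations equals $\Phi x^k$ for a vector $x^k$ supported within $T$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, K = 40, 80, 5
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))
T = [2, 11, 30, 55, 70]                      # true support of x
x = np.zeros(n); x[T] = rng.standard_normal(K)
y = Phi @ x

Tk = T[:3]                                   # pretend 3 correct iterations were run
x_hat, *_ = np.linalg.lstsq(Phi[:, Tk], y, rcond=None)
r = y - Phi[:, Tk] @ x_hat

# x^k lives on T: it equals x on T \ Tk, and x minus the LS estimate on Tk.
xk = np.zeros(n); xk[T] = x[T]; xk[Tk] -= x_hat
print(np.linalg.norm(r - Phi @ xk))          # ~1e-14: r^k = Phi @ x^k, supp(x^k) in T
```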

Therefore, if $T^k \subseteq T$, then the argument for the first iteration applies with $y$ replaced by $r^k$, so $t^{k+1} \in T$ under (29). Recalling from Lemma 2.1 that the residual $r^k$ is orthogonal to the columns already selected ($\langle \varphi_i, r^k \rangle = 0$ for $i \in T^k$), the indices of these columns cannot be selected again (see the identify step in Table 1), and hence $t^{k+1} \in T \setminus T^k$. This indicates that under the condition in (4) all the indices in the support $T$ are identified within $K$ iterations (i.e., $T^K = T$), and therefore

$$\hat{x}_{T^K} = \arg\min_x \|y - \Phi_{T^K} x\|_2 \tag{31}$$
$$= \Phi_{T^K}^\dagger y \tag{32}$$
$$= \Phi_T^\dagger y \tag{33}$$
$$= (\Phi_T^T \Phi_T)^{-1} \Phi_T^T \Phi_T x_T \tag{34}$$
$$= x_T, \tag{35}$$

which completes the proof.

3 Discussions

In [19], Dai and Milenkovic conjectured that the sufficient condition of the OMP algorithm guaranteeing exact recovery of $K$-sparse vectors cannot be relaxed beyond $\delta_{K+1} = 1/\sqrt{K}$. This conjecture says that if the RIP condition is given by $\delta_{K+1} < \epsilon$, then $\epsilon$ must be strictly smaller than $1/\sqrt{K}$. In [20], this conjecture was confirmed via experiments for $K = 2$.

We now show that our result in Theorem 1.1 agrees with the conjecture, leaving only a marginal gap from the limit. Since we cannot directly compare Dai and Milenkovic's conjecture (expressed in terms of $\delta_{K+1}$) with our condition (expressed in terms of $\delta_K$), we need to modify our result. The following proposition provides a slightly looser bound (sufficient condition) derived from our result and expressed in the form $\delta_{K+1} < 1/(\sqrt{K} + \theta)$.

Proposition 3.1. If $\delta_{K+1} < \dfrac{1}{\sqrt{K} + 3 - \sqrt{2}}$, then $\delta_K < \dfrac{\sqrt{K-1}}{\sqrt{K-1} + K}$.

Proof. Since the inequality

$$\frac{1}{\sqrt{K} + 3 - \sqrt{2}} \le \frac{\sqrt{K-1}}{\sqrt{K-1} + K} \tag{36}$$

holds true for any integer $K > 1$ (see Appendix), if $\delta_{K+1} < \frac{1}{\sqrt{K} + 3 - \sqrt{2}}$ then $\delta_{K+1} < \frac{\sqrt{K-1}}{\sqrt{K-1} + K}$. Also, from the monotonicity of the restricted isometry constant ($\delta_K \le \delta_{K+1}$), if $\delta_{K+1} < \frac{\sqrt{K-1}}{\sqrt{K-1} + K}$ then $\delta_K < \frac{\sqrt{K-1}}{\sqrt{K-1} + K}$. Chaining these two implications yields the desired result.

One can clearly observe that $\delta_{K+1} < \frac{1}{\sqrt{K} + 3 - \sqrt{2}} \approx \frac{1}{\sqrt{K} + 1.5858}$ is better than the condition $\delta_{K+1} < \frac{1}{3\sqrt{K}}$ of [20], similar to the result of Wang and Shim, and also close to the conjectured limit $\delta_{K+1} < 1/\sqrt{K}$, in particular for large $K$. Considering that the derived condition $\delta_{K+1} < \frac{1}{\sqrt{K} + 3 - \sqrt{2}}$ is slightly more restrictive than our original condition $\delta_K < \frac{\sqrt{K-1}}{\sqrt{K-1} + K}$, we may conclude that our result is fairly close to optimal.
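Inequality (36) underlying Proposition 3.1 can also be sanity-checked numerically; the sketch below sweeps $K$ over a finite range (a spot check, not a proof):

```python
import numpy as np

for K in range(2, 10_000):
    lhs = 1 / (np.sqrt(K) + 3 - np.sqrt(2))
    rhs = np.sqrt(K - 1) / (np.sqrt(K - 1) + K)
    # At K = 2 the two sides are both 1/3, hence the tiny tolerance.
    assert lhs <= rhs + 1e-15, K
print("(36) holds for all K in [2, 10^4)")
```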

4 Conclusion

In this article, we have investigated a sufficient condition ensuring exact reconstruction of sparse signals by the OMP algorithm. We showed that if the restricted isometry constant $\delta_K$ of the sensing matrix satisfies

$$\delta_K < \frac{\sqrt{K-1}}{\sqrt{K-1} + K},$$

then the OMP algorithm perfectly recovers $K$-sparse signals from the measurements. Our result directly indicates that the set of sensing matrices for which exact recovery of sparse signals via the OMP algorithm is guaranteed is wider than what had been proved thus far. Another interesting point we can draw from our result is that the number of measurements (the size of the compressed signal) required for the reconstruction of a sparse signal grows moderately with the sparsity level.

Appendix: Proof of (36)

After some algebra, one can show that (36) can be rewritten as

$$1 + \frac{K}{\sqrt{K-1}} - \sqrt{K} \le 3 - \sqrt{2}. \tag{37}$$

Let $f(K) = 1 + \frac{K}{\sqrt{K-1}} - \sqrt{K}$; then $f(2) = 3 - \sqrt{2}$. Hence, it suffices to show that $f(K)$ is a decreasing function for $K \ge 2$ (i.e., $f(2)$ is the maximum for $K \ge 2$). In fact, since

$$f'(K) = \frac{(K-2)\sqrt{K} - (K-1)\sqrt{K-1}}{2\sqrt{K}\,(K-1)\sqrt{K-1}}, \tag{38}$$

with

$$\sqrt{K}\,(K-1)\sqrt{K-1} > 0 \tag{39}$$

and

$$(K-2)\sqrt{K} - (K-1)\sqrt{K-1} < 0 \tag{40}$$

for $K \ge 2$, we have $f'(K) < 0$ for $K \ge 2$, which completes the proof of (36).

Endnote

a: In Section 3, we provide a more rigorous discussion of this issue.

References

  1. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 2007, 53(12):4655-4666.

  2. Tropp JA: Greed is good: algorithmic results for sparse approximation. IEEE Trans Inf Theory 2004, 50(10):2231-2242.

  3. Donoho DL, Elad M: Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc Natl Acad Sci 2003, 100(5):2197-2202.

  4. Donoho DL, Stark PB: Uncertainty principles and signal recovery. SIAM J Appl Math 1989, 49(3):906-931.

  5. Giryes R, Elad M: RIP-based near-oracle performance guarantees for subspace-pursuit, CoSaMP, and iterative hard-thresholding. 2010.

  6. Qian S, Chen D: Signal representation using adaptive normalized Gaussian functions. Signal Process 1994, 36:1-11.

  7. Cevher V, Indyk P, Hegde C, Baraniuk RG: Recovery of clustered sparse signals from compressive measurements. In Sampling Theory and Applications (SAMPTA), Marseilles, France; 2009:18-22.

  8. Malioutov D, Cetin M, Willsky AS: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans Signal Process 2005, 53(8):3010-3022.

  9. Elad M, Bruckstein AM: A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans Inf Theory 2002, 48(9):2558-2567.

  10. Donoho DL: Compressed sensing. IEEE Trans Inf Theory 2006, 52(4):1289-1306.

  11. Candès EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 2006, 52(2):489-509.

  12. Friedman JH, Stuetzle W: Projection pursuit regression. J Am Stat Assoc 1981, 76(376):817-823.

  13. Candès EJ: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 2008, 346(9-10):589-592.

  14. Cai TT, Xu G, Zhang J: On recovery of sparse signals via ℓ1 minimization. IEEE Trans Inf Theory 2009, 55(7):3388-3397.

  15. Cai TT, Wang L, Xu G: New bounds for restricted isometry constants. IEEE Trans Inf Theory 2010, 56(9):4388-4394.

  16. Needell D, Tropp JA: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harmon Anal 2009, 26(3):301-321.

  17. Baraniuk RG, Davenport MA, DeVore R, Wakin MB: A simple proof of the restricted isometry property for random matrices. Constr Approx 2008, 28(3):253-263.

  18. Needell D, Vershynin R: Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J Sel Top Signal Process 2010, 4(2):310-316.

  19. Dai W, Milenkovic O: Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans Inf Theory 2009, 55(5):2230-2249.

  20. Davenport MA, Wakin MB: Analysis of orthogonal matching pursuit using the restricted isometry property. IEEE Trans Inf Theory 2010, 56(9):4395-4401.

  21. Liu E, Temlyakov VN: Orthogonal super greedy algorithm and applications in compressed sensing. IEEE Trans Inf Theory 2011, PP(99):1-8.

  22. Johnson WB, Lindenstrauss J: Extensions of Lipschitz mappings into a Hilbert space. Contemp Math 1984, 26:189-206.

  23. Cai TT, Wang L, Xu G: Shifting inequality and recovery of sparse signals. IEEE Trans Signal Process 2010, 58(3):1300-1308.


Acknowledgements

This study was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2010-0012525) and the research grant from the second BK21 project.

Author information

Corresponding author: Byonghyo Shim.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article: Wang, J., Kwon, S. & Shim, B. Near optimal bound of orthogonal matching pursuit using restricted isometric constant. EURASIP J. Adv. Signal Process. 2012, 8 (2012). https://doi.org/10.1186/1687-6180-2012-8