
Subspace weighted ℓ2,1 minimization for sparse signal recovery

Abstract

In this article, we propose a weighted ℓ2,1 minimization algorithm for the jointly-sparse signal recovery problem. The proposed algorithm exploits the relationship between the noise subspace and the overcomplete basis matrix to design the weights: large weights are assigned to entries whose indices are more likely to lie outside the row support of the jointly sparse signals, so that these indices are expelled from the row support of the solution, while small weights are assigned to entries whose indices belong to the row support, so that the solution tends to retain them. Compared with the regular ℓ2,1 minimization, the proposed algorithm not only further enhances the sparseness of the solution but also reduces the requirements on both the number of snapshots and the signal-to-noise ratio (SNR) for stable recovery. Both simulations and experiments on real data demonstrate that the proposed algorithm outperforms the ℓ1-SVD algorithm, which applies ℓ2,1 minimization directly, for both deterministic and random basis matrices.

1 Introduction

In recent years, sparse signal recovery has attracted a great deal of attention from the signal processing community [1–16]. Using an overcomplete basis A ∈ C^{M×K} (M ≪ K) and a sparsity prior on the signal x, the sparse representation problem for noiseless measurements with a single measurement vector (SMV), y = Ax, can be solved by the combinatorial ℓ0 problem

min ||x||_0   s.t.   y = Ax,
(1)

where the sparse signal x has only P nonzero components and ||x||_0 = P denotes the number of nonzero components of x. Unfortunately, the minimization problem (1) is NP-hard. A practicable way of solving the sparse representation problem is to employ the following convex optimization

min ||x||_1   s.t.   y = Ax,
(2)

where ||x||_1 = Σ_{i=1}^{K} |x_i|. As a surrogate of the ℓ0 norm, the regular ℓ1 norm is tractable, but it depends on the coefficient magnitudes of the signal and departs from the literal ℓ0 sparsity count, which may degrade performance in some situations. To avoid this dependence on magnitude of the regular ℓ1 minimization, Candès et al. designed an iterative reweighted formulation of ℓ1 minimization to penalize nonzero coefficients more democratically: large weights discourage retaining entries that are more likely to be zero in the recovered signal, whereas small weights encourage retaining the larger entries [12]. In other words, the essence of the iterative reweighted ℓ1 minimization is that large weights are assigned to those elements of x whose indices are more likely to lie outside the support [14, 16], which expels their indices from the support of the sparse solution and further consolidates the sparsity-encouraging nature of regular ℓ1 minimization [12–16]. The support of x is defined as Supp(x) = {k | x_k ≠ 0}. Incidentally, it has been proved that the iterative reweighted ℓ1 minimization can indeed improve both the recoverable sparsity thresholds and the recovery accuracy over the regular ℓ1 minimization [13–15].
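As a concrete illustration, the following is a minimal sketch of this reweighting scheme for the noiseless SMV problem (2), written with the CVXPY package as a stand-in for the CVX package cited later in this article; the weight update w_i = 1/(|x_i| + ε) follows the spirit of [12], while the values of ε and the iteration count are illustrative choices, not values prescribed here.

```python
import numpy as np
import cvxpy as cp

def reweighted_l1(A, y, n_iter=4, eps=0.1):
    """Iterative reweighted l1 minimization in the spirit of [12] (sketch)."""
    K = A.shape[1]
    w = np.ones(K)                  # first pass is the regular l1 problem (2)
    x = cp.Variable(K)
    for _ in range(n_iter):
        prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, x), 1)),
                          [A @ x == y])
        prob.solve()
        # large weights on currently-small entries push them toward zero on
        # the next pass, de-emphasizing magnitude as [12] intends
        w = 1.0 / (np.abs(x.value) + eps)
    return x.value
```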

It is worth noting that the iterative reweighted ℓ1 minimization was designed for the SMV problem [12]. In fact, the multiple measurement vectors (MMV) problem arises in many applications of sparse signal representation, such as array processing [1, 6–11], magnetoencephalography [1], nonparametric spectrum analysis of time series [17], equalization of sparse communication channels [18], and so on. In the MMV case, the ℓ1-SVD method [7–9] replaces the ℓ1 norm minimization with the mixed ℓ2,1 norm minimization. Like the regular ℓ1 minimization, the ℓ2,1 minimization also suffers from the dependence on magnitude. Therefore, in the MMV case, how to design an appropriate weighting vector to overcome this dependence on magnitude of the regular ℓ2,1 minimization is an interesting issue. In this article, we focus on the noisy MMV case and propose a jointly-sparse signal recovery algorithm based on the relationship between the noise subspace and the overcomplete basis for weighting the jointly sparse signals, which extends the essence of the iterative reweighted ℓ1 minimization in [12] from SMV to MMV.

The measurements with MMV can be written as

y(t) = A x(t) + n(t),   t = 1, ..., T,
(3)

where the vector n(t) denotes an additive noise vector with zero mean and variance σ², and the vector x(t) contains the jointly-sparse signals, whose support is independent of the snapshot t [1]. Without loss of generality, the additive noise n(t) is assumed to be uncorrelated with the jointly-sparse signals x(t). The row support of the jointly sparse signals X (X denotes the matrix form of x(t)) plays a key role in sparse signal recovery with MMV, and it is defined as Supp_row(X) = {k | x_k^(ℓ2) ≠ 0} ≜ Λ [6], where x_k^(ℓ2) denotes the k-th entry of x^(ℓ2), and x^(ℓ2) is a column vector whose k-th element is the ℓ2 norm of the k-th row of X. Clearly, the index set Λ ⊂ {1, ..., K} and its cardinality is |Λ| = P. Considering the relationship between the indices of the columns of A and the row support Λ, the overcomplete basis A can be divided into two submatrices, i.e., A = [A_Λ, A_{Λ^c}], where the indices of the columns of A_Λ constitute the row support Λ, and the indices of the columns of A_{Λ^c} constitute the complement of Λ, i.e., Λ ∪ Λ^c = {1, ..., K} and Λ ∩ Λ^c = ∅. On the other hand, the subspace decomposition of {y(t), t = 1, ..., T} provides the signal subspace and the noise subspace. It is noted that the noise subspace is orthogonal to the column space of A_Λ [19–21] but not to that of A_{Λ^c}.
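For readers who prefer code, the few lines below simply restate the definitions of x^(ℓ2) and Supp_row(X), assuming X is available as a numerical K × T array; the tolerance is an illustrative choice for finite-precision data.

```python
import numpy as np

def row_support(X, tol=1e-10):
    """Supp_row(X) = {k | x_k^(l2) != 0}, with a numerical tolerance."""
    x_l2 = np.linalg.norm(X, axis=1)   # k-th entry: l2 norm of the k-th row of X
    return np.nonzero(x_l2 > tol)[0]
```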

Based on this observation, this article designs a subspace weighted (SW) ℓ2,1 minimization algorithm for the MMV case, in which small and large weights are generated by using the orthogonality between the noise subspace and A_Λ and the non-orthogonality between the noise subspace and A_{Λ^c}. We will show that the designed weights force the entries whose indices are more likely to lie outside the row support to be close to zero in the solution, and therefore further promote the sparseness of the solution and improve the recovery accuracy.

Although our proposed algorithm also exploits the singular value decomposition (SVD), the key difference from the ℓ1-SVD algorithm [7–9] is that we not only use the SVD to reduce the computational complexity but also employ it to obtain the signal subspace and the noise subspace and, accordingly, to design the weights. Thus, we call the proposed algorithm the SW ℓ2,1-SVD algorithm. The experiments show that the SW ℓ2,1-SVD algorithm achieves better estimation performance than the ℓ1-SVD algorithm, which applies ℓ2,1 minimization directly. In addition, simulations and experiments on real data also demonstrate that the SW ℓ2,1-SVD algorithm can be applied to DOA estimation, high resolution radar imaging, and other sparse recovery problems with a random basis matrix.

The remainder of this article is organized as follows. In the following section, we describe the sparse signal representation framework in the MMV case. In Section 3, we formulate the SW ℓ2,1-SVD algorithm. In Section 4, the performance of the proposed method is explored with some examples. Section 5 concludes the article.

2 The ℓ1-SVD algorithm

To recover the jointly-sparse signals X, a feasible approach is to first determine the row support of the jointly sparse signals and then recover the signals by solving a least squares (LS) problem [1]. Moreover, in some applications the problem of interest is to determine the row support rather than to recover X itself. Therefore, in this article we consider the problem of determining the row support of the jointly sparse signals.

Equation (3) can be expressed in matrix form:

Y = AX + N.
(4)

The truncated SVD can be exploited to retain the principal components of the measurements Y [7–9]:

Y_SV = U Σ D_P = Y V D_P,
(5)

where Y = U Σ V^H, the superscript H denotes the conjugate transpose, the nonzero entries of Σ are the singular values of Y, sorted in descending order on the diagonal; the columns of U and V are, respectively, the left and right singular vectors for the corresponding singular values; and D_P = [I_P; 0], where I_P is a P × P identity matrix and 0 is a (T − P) × P matrix of zeros. Moreover, let

X_SV = X V D_P,
(6)

and

N_SV = N V D_P.
(7)

Obviously, X_SV and X have the same row support.
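A minimal sketch of this reduction, assuming Y is an M × T numpy array and P ≤ min(M, T):

```python
import numpy as np

def svd_reduce(Y, P):
    """Truncated-SVD reduction of Eq. (5): returns Y_SV = Y V D_P (M x P)."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh.conj().T
    return Y @ V[:, :P]      # equivalently U[:, :P] @ np.diag(s[:P])
```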

The ℓ1-SVD algorithm can be described as [7–9]:

min ||X_SV||_{2,1}   s.t.   ||Y_SV − A X_SV||_F² ≤ β²,
(8)

where β² ≥ ||N_SV||_F² is a regularization parameter, the mixed ℓ2,1 norm is defined as ||X||_{2,1} ≜ Σ_i (Σ_j |[X]_{i,j}|²)^{1/2} [22], and ||·||_F denotes the Frobenius norm. In practice, we select the set of indices of the P largest peaks in the solution as the estimate Λ̂ of the row support.
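For concreteness, the sketch below states problem (8) with CVXPY (again as a stand-in for CVX, and assuming a CVXPY version with complex-variable support); the peak-picking step at the end is the practical rule just described.

```python
import numpy as np
import cvxpy as cp

def l1_svd(Y_sv, A, P, beta):
    """l1-SVD, Eq. (8): l2,1 minimization on the SVD-reduced data (sketch)."""
    K = A.shape[1]
    X = cp.Variable((K, P), complex=True)
    l21 = cp.sum(cp.norm(X, 2, axis=1))              # ||X_SV||_{2,1}
    prob = cp.Problem(cp.Minimize(l21),
                      [cp.norm(Y_sv - A @ X, 'fro') <= beta])
    prob.solve()
    row_l2 = np.linalg.norm(X.value, axis=1)         # x^(l2) of the solution
    return np.sort(np.argsort(row_l2)[-P:])          # indices of the P peaks
```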

3 Subspace weighted ℓ2,1-SVD algorithm

Here, for the MMV case, we exploit the relationship between the noise subspace and the overcomplete basis to construct a weighting vector that improves the performance of the ℓ1-SVD method. Incidentally, we already presented the SW method in [23], where we used an extra eigendecomposition of the sample correlation matrix to obtain the noise subspace. However, the extra eigendecomposition is not necessary, because the SVD of the measurements Y already provides the subspace decomposition [17]. In addition, we address some interesting issues and extend the application of the SW method.

Returning to the SVD in (5), we can partition U as

U = [U_s, U_n],
(9)

where U_s = [u_1, ..., u_P] and U_n = [u_{P+1}, ..., u_M] correspond to the signal subspace and the noise subspace, respectively [17].

In [19, 20], it is proved that

A_Λ^H U_n = B,
(10)

where B = [b_{i,j}] and b_{i,j} → 0 as the number of snapshots T → ∞. As a result, we have

A^H U_n = [A_Λ^H U_n; A_{Λ^c}^H U_n] = [B; C] = [W_Λ; W_{Λ^c}] = W,
(11)

where C = A_{Λ^c}^H U_n and [·; ·] denotes vertical (row-wise) stacking. We can express the weighting vector as

W^(ℓ2) = [W_Λ^(ℓ2); W_{Λ^c}^(ℓ2)],
(12)

where the superscript (ℓ2) applied to a matrix denotes, as before, the column vector of the ℓ2 norms of its rows; W_{Λ,i}^(ℓ2) → 0 and W_{Λ^c,i}^(ℓ2) → C_i^(ℓ2) > 0 as T → ∞ [19], and W_{Λ,i}^(ℓ2), W_{Λ^c,i}^(ℓ2), and C_i^(ℓ2) denote the i-th entries of W_Λ^(ℓ2), W_{Λ^c}^(ℓ2), and C^(ℓ2), respectively. This is consistent with the methodology of the iterative reweighted ℓ1 minimization, i.e., large weights are assigned to the entries whose indices are more likely to lie outside the row support, whereas small weights are assigned to the entries whose indices are inside the row support [12, 14, 16]. When only a limited number of snapshots is available in practice, it is still guaranteed that the entries of W_Λ^(ℓ2) are much smaller than those of W_{Λ^c}^(ℓ2) [19–21].

We define

w = W^(ℓ2).
(13)

The sparse solution can be found by^a

min ||X_SV||_{w;2,1}   s.t.   ||Y_SV − A X_SV||_F² ≤ β²,
(14)

where ||X||_{w;2,1} ≜ Σ_i w_i (Σ_j |[X]_{i,j}|²)^{1/2} and w_i denotes the i-th entry of w.
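Putting the pieces together, the sketch below computes the weights from the noise subspace and then solves (14), again with CVXPY in place of CVX; the peak-picking step mirrors the ℓ1-SVD sketch above.

```python
import numpy as np
import cvxpy as cp

def sw_l21_svd(Y, A, P, beta):
    """SW l2,1-SVD (sketch): subspace weights, Eqs. (9)-(13), then Eq. (14)."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=True)
    Un = U[:, P:]                                   # noise subspace U_n, Eq. (9)
    w = np.linalg.norm(A.conj().T @ Un, axis=1)     # w_i = ||a_i^H U_n||_2
    Y_sv = Y @ Vh.conj().T[:, :P]                   # Y_SV = Y V D_P, Eq. (5)
    K = A.shape[1]
    X = cp.Variable((K, P), complex=True)
    obj = cp.sum(cp.multiply(w, cp.norm(X, 2, axis=1)))   # ||X||_{w;2,1}
    prob = cp.Problem(cp.Minimize(obj),
                      [cp.norm(Y_sv - A @ X, 'fro') <= beta])
    prob.solve()
    row_l2 = np.linalg.norm(X.value, axis=1)
    return np.sort(np.argsort(row_l2)[-P:])         # estimated row support
```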

Some related issues are discussed as follows.

Discussion 1: An interesting issue raised by Equation (12) is how many snapshots are needed for the SW method to work. The weighting process can be seen as a preprocessing step that obtains rough information about the row support and the weight values. The SW method employs the methodology of the MUSIC method [19] for this preprocessing. Therefore, the SW method matches the MUSIC method in its requirement on the number of snapshots. The theoretical lower limit on the number of snapshots, T ≥ P, is shown in [24, 25] for the MUSIC method. In other words, the SW method is able to work with a very small number of snapshots.

Discussion 2: Prior information about the number of sources plays a key role in partitioning the noise subspace and the signal subspace. A correct partition of the noise subspace and the signal subspace is needed to obtain the optimal weights for the SW method. In practice, the number of sources can be determined by exploiting information theoretic criteria such as the Akaike information criterion (AIC) [26] and the minimum description length (MDL) criterion [27]. These methods require the eigenvalues of the sample correlation matrix R̂ = (1/T) Y Y^H, which is a Hermitian matrix. We can use the SVD of Y to obtain the eigenvalues, because the eigenvalue decomposition (EVD) of a Hermitian matrix is a special case of the SVD of a general matrix [17]. As a result, we have Σ_e = (1/T) Σ Σ^H, where the diagonal elements of Σ_e are the eigenvalues of R̂. Therefore, the number of sources can be determined by combining the SVD of Y with an information theoretic criterion. However, in some situations, for example when the signal-to-noise ratio (SNR) is very low or the number of snapshots is very small, the classical AIC and MDL rules are likely to overestimate or underestimate the number of sources. Thus, another interesting issue is the robustness of the proposed SW ℓ2,1-SVD algorithm to the estimate of the number of sources. Here we give a brief explanation of this robustness and leave a detailed discussion to future work. For one thing, both the proposed SW ℓ2,1-SVD algorithm and the original ℓ1-SVD algorithm [8] use information about the number of sources to reduce the computational complexity, in which an incorrect determination of the number of sources does not incur catastrophic consequences [8]. For another, the weighted ℓ2,1 minimization processing is not very sensitive to the determination of the number of sources. Consider two cases, i.e., the estimate of the number of sources P̂ = 0 (the extreme underestimation case) and P < P̂ ≤ M − 1 (the overestimation case), where the true number of sources is assumed to satisfy 0 < P < M − 1. For the former, the estimate of the noise subspace Û_n is equal to U, and then all weights are identical because U is a unitary matrix, i.e., ||a_i^H Û_n||_2 = ||a_j^H Û_n||_2 for i ≠ j. As a result, SW ℓ2,1-SVD reduces to ℓ1-SVD in the extreme underestimation case. For the latter, Û_n = [u_{P̂+1}, ..., u_M], and the subspace spanned by its columns is a proper subspace of the noise subspace. Therefore, the orthogonality between Û_n and A_Λ still holds, and the overestimation only incurs a gradual performance degradation, because the shrunken subspace dimension weakens the multiple averaging effect [28]. Thus, SW ℓ2,1-SVD can cope with the overestimation case. We illustrate this conclusion in Section 4.
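As a small sketch of the eigenvalue computation described above (assuming T ≥ M so that all eigenvalues of R̂ are positive), together with one standard textbook form of the MDL rule, given here only for illustration; see [26, 27] for the authoritative statements.

```python
import numpy as np

def eigs_from_svd(Y):
    """Eigenvalues of R_hat = (1/T) Y Y^H via Sigma_e = (1/T) Sigma Sigma^H."""
    T = Y.shape[1]
    s = np.linalg.svd(Y, compute_uv=False)    # singular values, descending
    return s ** 2 / T

def mdl_order(lam, T):
    """A textbook MDL rule on the descending eigenvalues lam (sketch)."""
    M = len(lam)
    costs = []
    for k in range(M):
        tail = lam[k:]                        # smallest M - k eigenvalues
        g = np.exp(np.mean(np.log(tail)))     # geometric mean
        a = np.mean(tail)                     # arithmetic mean
        costs.append(-T * (M - k) * np.log(g / a)
                     + 0.5 * k * (2 * M - k) * np.log(T))
    return int(np.argmin(costs))              # estimated number of sources
```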

4 Examples

In this section, we present some examples to demonstrate the performance of the proposed SW ℓ2,1-SVD algorithm. We first address source localization with a uniform linear array (ULA) and a nonuniform linear array (NULA). Then we consider the sparse signal recovery problem with a random basis matrix in the presence of noise. Lastly, we employ real data to illustrate the performance of the proposed method. Here we use the CVX package for solving the convex optimization problems [29].

4.1 Source localization with ULA

We consider a ULA composed of M = 10 sensors separated by half a wavelength for the source localization problem. The grid is uniformly sampled with a 0.1° step from −90° to 90° (unless specifically stated). The overcomplete basis matrix A = [a(ϕ_1), ..., a(ϕ_K)] is a deterministic basis matrix under this condition, where the vector a(ϕ_k) denotes the array steering vector and ϕ_k is the k-th grid point.
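A sketch of this deterministic dictionary is given below; the phase-sign convention of the steering vector is a common choice assumed here, not stated in this article.

```python
import numpy as np

def ula_dictionary(M=10, spacing=0.5, step_deg=0.1):
    """Steering matrix A = [a(phi_1), ..., a(phi_K)] for a half-wavelength ULA
    (spacing in wavelengths), on the grid -90:step:90 degrees."""
    grid = np.arange(-90.0, 90.0 + step_deg, step_deg)   # sampling grid (deg)
    phi = np.deg2rad(grid)
    m = np.arange(M)[:, None]                            # sensor index
    A = np.exp(-2j * np.pi * spacing * m * np.sin(phi)[None, :])
    return A, grid                                       # A is M x K
```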

4.1.1 Localization accuracy

In the first experiment we suppose that three signals impinge on the array from θ_1 = 12°, θ_2 = 43°, and θ_3 = 67°. The number of snapshots is T = 200. We compare the RMSE of the DOA estimates yielded by SW ℓ2,1-SVD with those of ℓ1-SVD, Root-MUSIC [30], and the CRB [20]. In Figure 1a, the three sources are assumed to be uncorrelated; in Figure 1b, the sources at θ_1 = 12° and θ_2 = 43° are coherent, whereas the source at θ_3 = 67° is uncorrelated with the first two sources. The spatial smoothing technique with a 4-element smoothing subarray is employed for Root-MUSIC to decorrelate the coherent signals. As can be seen from Figure 1, the Root-MUSIC algorithm, a typical representative of the subspace-based algorithms, provides good accuracy in the uncorrelated-sources case, whereas it needs the spatial smoothing technique to obtain competitive performance in the coherent-sources case. As for the ℓ1-SVD algorithm, which employs the regular ℓ2,1 minimization, it yields acceptable DOA estimates in the coherent-sources case but does not compete with the subspace-based algorithms in the uncorrelated-sources case. Since the weighted ℓ2,1 minimization further consolidates the sparsity-encouraging nature of the regular ℓ2,1 minimization, SW ℓ2,1-SVD improves the recovery accuracy. As a result, the presented SW ℓ2,1-SVD algorithm gives competitive DOA estimates that are closer to the CRB for both uncorrelated and coherent sources.

Figure 1. RMSE of angle estimation. (a) Uncorrelated sources. (b) Coherent sources. The number of snapshots is T = 200. Each point is the average of 500 Monte-Carlo trials. Asterisk-solid curve: ℓ1-SVD; square-dash curve: SW ℓ2,1-SVD; circle-solid curve: Root-MUSIC; dash curve: CRB.

4.1.2 DOA tracking for mobile sources

An advantage of the sparse signal representation methodology is its robustness to a limited number of snapshots for DOA estimation [8]. We therefore design a scenario to validate this ability of the presented SW ℓ2,1-SVD, in which two uncorrelated moving sources are in the array's viewing field and their DOAs must be estimated using a few snapshots.^b We consider two moving sources in this simulation. One source moves linearly from 30° to 21°; the other moves first from 0° to 3°, then from 3° to 0°, and finally from 0° to 3°. The moving step is assumed to be 0.03° per snapshot over a course of 300 data snapshots. We use the most recent three snapshots to estimate the DOAs for SNR = 12 dB. As can be seen from Figure 2, the Root-MUSIC algorithm has some strong outliers; in particular, some strong outliers confuse the two sources over some periods. Although the ℓ1-SVD algorithm is also demonstrated to work with a few snapshots, it produces some outliers, which degrade the estimated trajectories. The presented SW ℓ2,1-SVD has only a few slight outliers, which do not affect the DOA tracking of the mobile sources. This shows that the weighted ℓ2,1 minimization outperforms the regular ℓ2,1 minimization when only a few snapshots are available for exact sparse recovery.

Figure 2. DOA tracking performance for two uncorrelated mobile sources. (a) Root-MUSIC; (b) ℓ1-SVD; (c) SW ℓ2,1-SVD. SNR is 12 dB; the number of snapshots is T = 3. The solid lines and "·-" denote the trajectories of the true DOAs and the estimates, respectively.

4.2 Source localization with NULA

In this section, we consider a NULA composed of M = 10 sensors randomly selected from a ULA with 20 sensors for the source localization problem. Here we only consider the coherent-sources case. Again, the sources at θ_1 = 12° and θ_2 = 43° are coherent, and the source at θ_3 = 67° is uncorrelated with the other sources. In Figure 3, we show the spatial spectra obtained with MUSIC, ℓ1-SVD, and SW ℓ2,1-SVD over 100 Monte-Carlo runs. In this experiment the SNR is 10 dB and the number of snapshots is 200. The spatial smoothing technique is valid for a ULA but not for a NULA [17]. Therefore, as shown in Figure 3a, MUSIC has only one significant peak, because spatial smoothing does not work for the NULA. However, both ℓ1-SVD and SW ℓ2,1-SVD still provide good estimates. In addition, the proposed SW ℓ2,1-SVD algorithm has a smaller variance than the ℓ1-SVD algorithm, which is consistent with the conclusion of Section 4.1.1, i.e., the SW ℓ2,1-SVD algorithm has better localization accuracy than the ℓ1-SVD algorithm.

Figure 3. Spatial spectra for coherent sources. (a) MUSIC; (b) ℓ1-SVD; (c) SW ℓ2,1-SVD. Three sources at 12°, 43°, and 67°, each with a power of 10 dB. Green circles denote the DOAs and powers of the sources in each plot. 100 Monte-Carlo trials are shown.

4.3 Sparse recovery for random basis matrix

A related point is that the presented SW ℓ2,1-SVD can be extended to other instances with a random basis matrix that accord with the model (3) in the sparse representation framework. Here, we give some examples to demonstrate the validity of this extension. We assume an MMV sparse matrix X ∈ R^{K×T} with P non-zero rows, whose indices are chosen randomly and whose amplitudes are drawn from a standard normal distribution. The overcomplete basis matrix A ∈ R^{M×K} is a random matrix with i.i.d. Gaussian entries or i.i.d. symmetric Bernoulli ±1 entries, and its columns are normalized. Our objective is to estimate the row support of the sparse signal X from the measurements Y = AX + N, where N is an additive white Gaussian noise matrix whose variance σ² is determined from a specified SNR level as σ² = ||X||_F² / (T × P) × 10^(−SNR/10). In the experiment the parameter settings are as follows: M = 12, K = 30, and P = 6. The set of indices corresponding to the P largest peaks in the estimate X̂_SV^(ℓ2) is regarded as the estimate of the row support of the jointly sparse signals. The estimate Λ̂ of the row support is considered correct if and only if it coincides exactly with the true row support Λ. As demonstrated in Figures 4 and 5, the SW ℓ2,1-SVD algorithm improves the recovery performance, especially by reducing both the SNR and the number of snapshots required for stable recovery. In addition, to explore the robustness to the number of sources, we perturb the SW ℓ2,1-SVD algorithm with an assumed number of sources (ANS). Figures 4 and 5 show that the SW ℓ2,1-SVD algorithm achieves the optimal performance when the ANS equals the number of sources P (i.e., ANS = P = 6). Furthermore, they also demonstrate that the SW ℓ2,1-SVD algorithm can cope with both the overestimation case and the underestimation case, and that it still outperforms the ℓ1-SVD algorithm in these cases.
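A sketch of this experimental setup follows, under the assumptions just stated; the random seed and the choice of the Gaussian (rather than Bernoulli) basis are illustrative.

```python
import numpy as np

def make_mmv_instance(M=12, K=30, P=6, T=80, snr_db=20, seed=0):
    """Random-basis MMV instance as in Section 4.3 (sketch)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((M, K))
    A /= np.linalg.norm(A, axis=0)                       # normalized columns
    supp = np.sort(rng.choice(K, size=P, replace=False)) # random row support
    X = np.zeros((K, T))
    X[supp] = rng.standard_normal((P, T))                # jointly sparse rows
    sigma2 = np.linalg.norm(X, 'fro')**2 / (T * P) * 10 ** (-snr_db / 10)
    Y = A @ X + np.sqrt(sigma2) * rng.standard_normal((M, T))
    return A, X, supp, Y

# a trial counts as a "success" iff the estimated support equals supp exactly
```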

Figure 4. Probability of success versus SNR. (a) Gaussian basis; (b) Bernoulli basis. Each point is the average of 1000 Monte-Carlo trials, and the number of snapshots is T = 80. Asterisk-solid curve: ℓ1-SVD; pentagram-dash curve: SW ℓ2,1-SVD (ANS = 1); triangle-down-dash curve: SW ℓ2,1-SVD (ANS = 3); square-dash curve: SW ℓ2,1-SVD (ANS = 6); diamond-dash curve: SW ℓ2,1-SVD (ANS = 7); X-dash curve: SW ℓ2,1-SVD (ANS = 7).

Figure 5. Probability of success versus number of snapshots. (a) Gaussian basis; (b) Bernoulli basis. Each point is the average of 1000 Monte-Carlo trials, and the SNR is 20 dB. Asterisk-solid curve: ℓ1-SVD; pentagram-dash curve: SW ℓ2,1-SVD (ANS = 1); triangle-down-dash curve: SW ℓ2,1-SVD (ANS = 3); square-dash curve: SW ℓ2,1-SVD (ANS = 6); diamond-dash curve: SW ℓ2,1-SVD (ANS = 7); X-dash curve: SW ℓ2,1-SVD (ANS = 7).

4.4 High resolution radar imaging via sparse recovery

Here we attempt to obtain high range resolution from data collected by a real stepped-frequency radar. The radar operates in the Ka band, the frequency step size Δf is 8 MHz, and the pulse repetition interval (PRI) is 0.15 ms. In the observed scene, two corner reflectors separated by 0.4 m are fixed on a straight road, collinear with the radar. The width of the transmitted pulses is 1 μs, and 64 pulses of data are collected. The data model can be written as y(n) = Σ_{i=1}^{P} β_i r_i(n) + w(n), where β_i is the complex scattering intensity of the i-th target, r_i(n) = exp[j (4π/v_c)(f_0 + nΔf) R_i], v_c is the speed of light, R_i is the distance between the i-th target and the radar, f_0 is the initial frequency, and w(n) denotes the noise. Without loss of generality, in the recovery the term (2/v_c)(f_0 + nΔf) R_i is normalized (it is called the normalized frequency in this context), and the interval [0, 1] is uniformly sampled with 1024 grid points.
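A sketch of the resulting overcomplete basis is shown below; here the model is reduced to sampling complex exponentials on the normalized-frequency grid, which is our reading of the normalization described above, so the grid mapping should be treated as an assumption.

```python
import numpy as np

def stepped_freq_dictionary(n_pulses=64, n_grid=1024):
    """Overcomplete basis for the stepped-frequency model: column i samples
    exp(j 2 pi n f_i) at pulse index n, for normalized frequency f_i in [0, 1)."""
    n = np.arange(n_pulses)[:, None]            # pulse index
    f = np.arange(n_grid)[None, :] / n_grid     # normalized-frequency grid
    return np.exp(2j * np.pi * n * f)           # n_pulses x n_grid matrix
```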

In Figure 6, the normalized frequency spectra obtained from MUSIC, ℓ1-SVD, and SW ℓ2,1-SVD are displayed, where the data window size is 32, i.e., 33 snapshots are obtained by sliding the window. In this case all the mentioned algorithms clearly discern the two targets. Compared with the sparse recovery algorithms, however, the peaks obtained from MUSIC are blunt. In addition, ℓ1-SVD has a stronger spurious peak that may be confused with a true target. The confidence interval influences how many spurious peaks appear in the solution. Although increasing the confidence interval can suppress spurious peaks to some extent, as shown in Figure 7, we may still take a pessimistic view of ℓ1-SVD. It is worth noting that SW ℓ2,1-SVD has only a very slight spurious peak, which can be ignored, especially for a higher confidence interval. Furthermore, the confidence interval determines the size of the regularization parameter: the higher the confidence interval, the larger the regularization parameter. Figures 6 and 7 clearly show that SW ℓ2,1-SVD performs well even when the size of the regularization parameter varies widely.

Figure 6. The normalized frequency spectrum for real measurement data. The data window size is 32. Asterisk-solid curve: ℓ1-SVD; square-dash curve: SW ℓ2,1-SVD; solid curve: MUSIC; triangle-up: true spacing.

Figure 7. Robustness of SW ℓ2,1-SVD to the confidence interval. (a) 90% confidence interval; (b) 99.9% confidence interval; (c) 99.99% confidence interval. The data window size is 32. Asterisk-solid curve: ℓ1-SVD; square-dash curve: SW ℓ2,1-SVD; triangle-up: true spacing.

Next, we illustrate the robustness of the proposed SW ℓ2,1-SVD algorithm to the estimate of the number of sources. In Figure 8, we artificially adjust the number of sources so that we can explore to what extent underestimating or overestimating the number of sources affects the performance of the SW ℓ2,1-SVD algorithm. The data window size is fixed at 32, and the assumed number of sources (ANS) is used to determine the dimension of the noise subspace and to reduce the computational complexity (unless specially stated). Note that in Figure 8a, ANS = 0 is used only for determining the dimension of the noise subspace, and D_P = [I_2; 0] is utilized to reduce the computational complexity. Figure 8 illustrates that, in the extreme underestimation case, i.e., ANS = 0, the results of SW ℓ2,1-SVD and ℓ1-SVD are identical, which corroborates that ℓ1-SVD is a special case of SW ℓ2,1-SVD. In addition, we can observe that SW ℓ2,1-SVD clearly discerns the two targets in both the underestimation and overestimation cases, whereas MUSIC fails in the underestimation case. Therefore, this illustration shows that underestimation and overestimation of the number of sources do not incur catastrophic consequences for SW ℓ2,1-SVD.

Figure 8. Robustness of SW ℓ2,1-SVD to the assumed number of sources. (a) ANS = 0; (b) ANS = 1; (c) ANS = 6; (d) ANS = 8. The data window size is 32. Asterisk-solid curve: ℓ1-SVD; square-dash curve: SW ℓ2,1-SVD; solid curve: MUSIC; triangle-up: true spacing.

In Figure 9, we compare the estimation accuracy of the distance between the two corner reflectors obtained from MUSIC, ℓ1-SVD, and SW ℓ2,1-SVD as a function of the data window size M (the number of snapshots is T = 64 − M + 1). When the data window size is too small or too large, MUSIC does not provide reliable results. Therefore, MUSIC needs a carefully selected data window size to obtain reliable estimates. Although the performance of the ℓ1-SVD algorithm is better than that of MUSIC for large data window sizes (i.e., M ≥ 61), it does not give reliable results for small data window sizes (i.e., M ≤ 7). In the same experimental context, as expected, SW ℓ2,1-SVD yields competitive performance and is robust to the data window size.

Figure 9. The estimation accuracy of the distance versus the data window size. Asterisk-solid curve: ℓ1-SVD; square-dash curve: SW ℓ2,1-SVD; solid-circles curve: MUSIC; dash curve: true distance.

5 Conclusion

In this article, we proposed an effective weighted ℓ2,1 minimization algorithm for the jointly-sparse signal recovery problem that exploits the relationship between the noise subspace and the overcomplete basis matrix to obtain the weights. The proposed SW ℓ2,1-SVD algorithm assigns large weights to the entries whose indices are more likely to lie outside the row support, so that these indices are expelled from the support of the solution. This further promotes the sparseness of the solution at the right positions. We provided experimental results verifying that, for both deterministic and random basis matrices, the proposed SW ℓ2,1-SVD algorithm obtains better performance than the ℓ1-SVD algorithm with a smaller number of snapshots and a lower SNR.

Endnotes

The material in this article was presented in part at the 2011 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2011), May 2011, Prague, Czech Republic.

^a In this article, we use the method introduced in [8] to determine the regularization parameter β², and the confidence interval that controls the size of β² is set to 99% (unless specially stated).

^b Although the indices of the non-zero rows of x(t) depend on the snapshot t for the moving sources, x(t) can still be regarded as jointly-sparse signals provided the number of snapshots T is very small.

References

1. Cotter S, Rao B, Engan K, Kreutz-Delgado K: Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans Signal Process 2005, 53(7):2477-2488.
2. Chen J, Huo X: Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans Signal Process 2006, 54(12):4634-4643.
3. Fletcher A, Rangan S, Goyal V, Ramchandran K: Denoising by sparse approximation: error bounds based on rate-distortion theory. EURASIP J Appl Signal Process 2006, 2006:1-19.
4. Mishali M, Eldar Y: Reduce and boost: recovering arbitrary sets of jointly sparse vectors. IEEE Trans Signal Process 2008, 56(10):4692-4702.
5. Lv J, Fan Y: A unified approach to model selection and sparse recovery using regularized least squares. Ann Stat 2009, 37(6A):3498-3528.
6. Berg E, Friedlander M: Joint-sparse recovery from multiple measurements. Computing Research Repository (CoRR) 2009, abs/0904.2:1-19.
7. Malioutov D, Cetin M, Willsky A: Source localization by enforcing sparsity through a Laplacian prior: an SVD-based approach. In IEEE Workshop on Statistical Signal Processing, St. Louis, MO, USA; 2003:573-576.
8. Malioutov D, Cetin M, Willsky A: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans Signal Process 2005, 53(8):3010-3022.
9. Malioutov D: A sparse signal reconstruction perspective for source localization with sensor arrays. Master's thesis, Mass. Inst. Technol., Cambridge, MA; 2003. [http://ssg.mit.edu/~dmm/publications/malioutov_MS_thesis.pdf]
10. Zheng J, Kaveh M, Tsuji H: Sparse spectral fitting for direction of arrival and power estimation. In IEEE/SP 15th Workshop on Statistical Signal Processing, Cardiff, UK; 2009:429-432.
11. Tang Z, Blacquiere G, Leus G: Aliasing-free wideband beamforming using sparse signal representation. IEEE Trans Signal Process 2011, 59(7):3464-3469.
12. Candes E, Wakin M, Boyd S: Enhancing sparsity by reweighted ℓ1 minimization. J Fourier Anal Appl 2008, 14(5):877-905.
13. Wipf D, Nagarajan S: Iterative reweighted ℓ1 and ℓ2 methods for finding sparse solutions. IEEE J Sel Top Signal Process 2010, 4(2):317-329.
14. Xu W, Khajehnejad M, Avestimehr A, Hassibi B: Breaking through the thresholds: an analysis for iterative reweighted ℓ1 minimization via the Grassmann angle framework. In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas, TX, USA; 2010:5498-5501.
15. Needell D: Noisy signal recovery via iterative reweighted ℓ1-minimization. In Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA; 2009:113-117.
16. Khajehnejad M, Xu W, Avestimehr A, Hassibi B: Analyzing weighted ℓ1 minimization for sparse recovery with nonuniform sparse models. IEEE Trans Signal Process 2011, 59(5):1985-2001.
17. Stoica P, Moses R: Spectral Analysis of Signals. Pearson/Prentice Hall, Upper Saddle River, NJ; 2005.
18. Fevrier I, Gelfand S, Fitz M: Reduced complexity decision feedback equalization for multipath channels with large delay spreads. IEEE Trans Commun 1999, 47(6):927-937.
19. Schmidt R: Multiple emitter location and signal parameter estimation. IEEE Trans Antennas Propag 1986, 34(3):276-280.
20. Stoica P, Arye N: MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans Acoust Speech Signal Process 1989, 37(5):720-741.
21. Kaveh M, Barabell A: The statistical performance of the MUSIC and the minimum-norm algorithms in resolving plane waves in noise. IEEE Trans Acoust Speech Signal Process 1986, 34(2):331-341.
22. Kowalski M: Sparse regression using mixed norms. Appl Comput Harmon Anal 2009, 27(3):303-324.
23. Zheng C, Li G, Zhang H, Wang X: An approach of DOA estimation using noise subspace weighted ℓ1 minimization. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic; 2011:2856-2859.
24. Paulraj A, Reddy V, Shan T, Kailath T: Performance analysis of the MUSIC algorithm with spatial smoothing in the presence of coherent sources. In IEEE Military Communications Conference (MILCOM 1986), vol. 3, Monterey, CA, USA; 1986:41.5.1-41.5.5.
25. Sason E: Source localization based on sparse signal representation. [http://sipl.technion.ac.il/new/Archive/Annual_Proj_Pres/sipl2010/Posters/Source%20localization.pdf]
26. Akaike H: A new look at the statistical model identification. IEEE Trans Autom Control 1974, 19(6):716-723.
27. Rissanen J: Modeling by shortest data description. Automatica 1978, 14(5):465-471.
28. Orfanidis SJ: Optimum Signal Processing. 2nd edition; 2007. [http://eceweb1.rutgers.edu/~orfanidi/osp2e/]
29. Grant M, Boyd S: CVX: Matlab software for disciplined convex programming, version 1.21; 2010. [http://cvxr.com/cvx/]
30. Barabell A: Improving the resolution performance of eigenstructure-based direction-finding algorithms. In IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 8, Boston, MA, USA; 1983:336-339.


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 40901157, and in part by the National Basic Research Program of China (973 Program) under Grant 2010CB731901, and in part by the Doctoral Fund of Ministry of Education of China under Grant 200800031050, and in part by Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation.

Author information

Corresponding author: Correspondence to Gang Li.

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access. This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article: Zheng, C., Li, G., Liu, Y. et al. Subspace weighted ℓ2,1 minimization for sparse signal recovery. EURASIP J. Adv. Signal Process. 2012, 98 (2012). https://doi.org/10.1186/1687-6180-2012-98