RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

Abstract

Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike NSS, in this paper, we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighted factor, the regularization parameter, and the initial step size. First, based on independence assumptions, the Cramer-Rao lower bound (CRLB) is derived as a benchmark for performance comparison. In addition, a reweighted factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that ASS achieves much better mean square error (MSE) performance than NSS.

1 Introduction

Compressive sensing (CS) [1, 2] has attracted considerable attention in compressive radar/sonar sensing [3, 4] due to its many civilian, military, and biomedical applications. The main task of CS can be divided into three aspects: (1) Sparse signal learning: the basic model suggests that natural signals can be compactly expressed, or efficiently approximated, as a linear combination of prespecified atom signals, where the linear coefficients are sparse, as shown in Figure 1 (i.e., most of them are zero). (2) Random measurement matrix design: it is important to construct a sensing matrix which allows recovery of as many entries of the unknown signal as possible using as few measurements as possible; hence, the sensing matrix should satisfy the conditions of incoherence and the restricted isometry property (RIP) [5]. Fortunately, some special matrices (e.g., the Gaussian matrix and the Fourier matrix) have been shown to satisfy the RIP with high probability. (3) Sparse reconstruction algorithms: based on the previous two steps, many sparse reconstruction algorithms have been proposed to find a suboptimal sparse solution.

Figure 1. A typical example of a sparse structured signal.
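To make step (1) concrete, here is a minimal Python sketch (our illustration, not code from the paper; the dimensions are arbitrary) that builds a K-sparse coefficient vector h and synthesizes s = Dh:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 8  # signal length and sparsity level (illustrative values)

# K-sparse coefficient vector h: K nonzero Gaussian entries at random positions
h = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
h[support] = rng.standard_normal(K)

# Orthonormal basis D (the identity here, as in the paper's simulations;
# any orthonormal matrix such as a DCT basis would serve equally well)
D = np.eye(N)

s = D @ h  # the natural signal as a sparse linear combination of atoms
```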

It is well known that CS provides a robust framework that can reduce the number of measurements required to estimate a sparse signal. Many nonlinear sparse sensing (NSS) algorithms and their variants have been proposed to deal with CS problems. They mainly fall into two basic categories: convex relaxation (basis pursuit de-noise (BPDN) [6]) and greedy pursuit (orthogonal matching pursuit (OMP) [7]). These NSS-based CS methods suffer either high complexity or low performance, especially in the low signal-to-noise ratio (SNR) regime. Indeed, it is very hard for them to trade off complexity against performance.

In this paper, we propose an adaptive sparse sensing (ASS) method using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm [8] to solve CS problems. Different from NSS methods, each observation and its corresponding sensing signal vector are processed by the RZA-NLMF algorithm, which reconstructs the sparse signal during the process of adaptive filtering. Depending on the concrete requirements, the complexity of the proposed ASS method can be adaptively reduced without sacrificing much recovery performance. The effectiveness of our proposed method is confirmed via computer simulations in comparison with NSS.

The remainder of the paper is organized as follows. Section 2 introduces the basic CS problem and presents the typical NSS method. In Section 3, ASS using the RZA-NLMF algorithm is proposed for solving CS problems and its derivation is highlighted. Computer simulations are given in Section 4 to evaluate and compare the performance of the proposed ASS method. Finally, our contributions are summarized in Section 5.

2 Nonlinear sparse sensing

Assume that a finite-length discrete signal vector $\mathbf{s} = [s_1, s_2, \ldots, s_N]^T$ can be sparsely represented in a signal domain $\mathbf{D}$, that is,

\[
\mathbf{s} = \sum_{i=1}^{N} \mathbf{d}_i h_i = \mathbf{D}\mathbf{h},
\tag{1}
\]

where $\mathbf{h} = [h_1, h_2, \ldots, h_N]^T$ is the unknown $K$-sparse coefficient vector ($K \ll N$) and $\mathbf{D}$ is an $N \times N$ orthogonal basis matrix with $\{\mathbf{d}_i,\, i = 1, 2, \ldots, N\}$ as its columns. Take a random measurement matrix $\mathbf{W}$; then the received signal vector $\mathbf{y} = [y_1, \ldots, y_m, \ldots, y_M]^T$ can be written as

\[
\mathbf{y} = \mathbf{W}\mathbf{s} + \mathbf{z} = \mathbf{W}\mathbf{D}\mathbf{h} + \mathbf{z} = \mathbf{X}\mathbf{h} + \mathbf{z},
\tag{2}
\]

where $\mathbf{X} = \mathbf{W}\mathbf{D}$ denotes an $M \times N$ random sensing matrix

\[
\mathbf{X} =
\begin{bmatrix}
\mathbf{x}_1^T \\ \vdots \\ \mathbf{x}_m^T \\ \vdots \\ \mathbf{x}_M^T
\end{bmatrix}
=
\begin{bmatrix}
x_{11} & \cdots & x_{1n} & \cdots & x_{1N} \\
\vdots & & \vdots & & \vdots \\
x_{m1} & \cdots & x_{mn} & \cdots & x_{mN} \\
\vdots & & \vdots & & \vdots \\
x_{M1} & \cdots & x_{Mn} & \cdots & x_{MN}
\end{bmatrix}
\tag{3}
\]

and $\mathbf{z} = [z_1, \ldots, z_m, \ldots, z_M]^T$ is an additive white Gaussian noise (AWGN) vector with distribution $\mathcal{CN}(\mathbf{0}, \sigma_n^2\mathbf{I}_M)$, where $\mathbf{I}_M$ denotes an $M \times M$ identity matrix. From the perspective of CS, the sensing matrix $\mathbf{X}$ satisfies the restricted isometry property (RIP) with overwhelming probability [5], so the sparse signal $\mathbf{h}$ can be reconstructed correctly by NSS methods, e.g., BPDN [6] and OMP [7]. Take BPDN as an example to illustrate the NSS realization approach. Since the sensing matrix $\mathbf{X}$ satisfies the RIP of order $K$ with parameter $\delta_K \in (0, 1)$, i.e., $\mathbf{X} \in \mathrm{RIP}(K, \delta_K)$, if

\[
\left(1 - \delta_K\right)\|\mathbf{h}\|_2^2 \le \|\mathbf{X}\mathbf{h}\|_2^2 \le \left(1 + \delta_K\right)\|\mathbf{h}\|_2^2
\tag{4}
\]

holds for all $\mathbf{h}$ having no more than $K$ nonzero coefficients, then the unknown sparse vector $\mathbf{h}$ can be reconstructed by BPDN as

\[
\tilde{\mathbf{h}}_{\mathrm{nss}} = \arg\min_{\tilde{\mathbf{h}}} \left\{ \frac{1}{2}\left\|\mathbf{y} - \mathbf{X}\tilde{\mathbf{h}}\right\|_2^2 + \lambda \|\tilde{\mathbf{h}}\|_1 \right\},
\tag{5}
\]

where $\lambda$ denotes a regularization parameter which balances the mean square error (MSE) term and the sparsity of $\mathbf{h}$. If the mutual interference of the sensing matrix $\mathbf{X}$ can be completely removed, then the theoretical Cramer-Rao lower bound (CRLB) of the NSS can be derived as [9, 10]

\[
\mathrm{CRLB}\left(\tilde{\mathbf{h}}_{\mathrm{nss}}\right) = E\left[\left\|\tilde{\mathbf{h}}_{\mathrm{nss}} - \mathbf{h}\right\|_2^2\right] = \frac{K\sigma_n^2}{N}.
\tag{6}
\]
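To illustrate the NSS pipeline of (2) and (5) end to end, the sketch below generates Gaussian measurements and solves the BPDN problem with plain iterative soft thresholding (ISTA). ISTA is our own choice of solver for (5), not necessarily the one used in [6], and the dimensions, noise level, and $\lambda$ are illustrative:

```python
import numpy as np

def nss_bpdn_ista(X, y, lam, n_iter=500):
    """Solve min 0.5*||y - X h||_2^2 + lam*||h||_1 by ISTA (illustrative solver)."""
    M, N = X.shape
    h = np.zeros(N)
    L = np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the quadratic term
    for _ in range(n_iter):
        grad = X.T @ (X @ h - y)        # gradient of 0.5*||y - X h||_2^2
        z = h - grad / L                # gradient step
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return h

# Gaussian sensing matrix: satisfies the RIP with high probability [5]
rng = np.random.default_rng(1)
M, N, K = 64, 256, 8
X = rng.standard_normal((M, N)) / np.sqrt(M)
h_true = np.zeros(N)
h_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = X @ h_true + 0.01 * rng.standard_normal(M)  # measurements with AWGN

h_hat = nss_bpdn_ista(X, y, lam=0.01)
```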

3 Adaptive sparse sensing

We reconsider the above system model (2) for the adaptive sensing case. At the observation side, the $m$ th observed signal $y_m$ can be written as

\[
y_m = \mathbf{h}^T\mathbf{x}_m + z_m,
\tag{7}
\]

for $m = 1, 2, \ldots, M$. The objective of ASS is to adaptively estimate the unknown sparse vector $\mathbf{h}$ using the sensing signal vectors $\mathbf{x}_m$ and the observed signals $y_m$. Different from NSS approaches, we propose an alternative ASS method using the RZA-NLMF algorithm, as shown in Figure 2. Assume that $\tilde{y}_m(n) = \mathbf{x}_m^T\tilde{\mathbf{h}}(n)$ is the estimated observed signal, which depends on the signal estimate $\tilde{\mathbf{h}}(n)$; the $n$ th observed signal error is then $e_m(n) = y_m - \tilde{y}_m(n)$. Notice that $e_m(n)$ corresponds to the $n$ th iterative error when using the $m$ th sensing signal vector $\mathbf{x}_m$, where $m = \mathrm{mod}(n, M)$ and $\mathrm{mod}(\cdot)$ denotes the modulo function, for example, $\mathrm{mod}(5,3) = 2$ and $\mathrm{mod}(5,2) = 1$. First of all, the cost function of the RZA-NLMF algorithm is constructed as

Figure 2. RZA-NLMF algorithm for ASS.

\[
G(n) = \frac{1}{4}e_m^4(n) + \lambda_{\mathrm{ass}}\sum_{i=1}^{N}\log\left(1 + \epsilon|h_i|\right),
\tag{8}
\]

where $\lambda_{\mathrm{ass}} > 0$ is a regularization parameter which trades off the sensing error against the sparsity of the coefficient vector, and $\epsilon > 0$ denotes a reweighted factor which strengthens the exploitation of signal sparsity at each iteration. Figure 3 illustrates the relationship between the reweighted factor and the strength of the sparse constraint. According to the cost function (8), the corresponding update equation can be derived as

Figure 3. Sparse constraint strength comparison using different reweighted factors.

\[
\begin{aligned}
\tilde{\mathbf{h}}(n+1) &= \tilde{\mathbf{h}}(n) - \mu_{\mathrm{iss}}\frac{\partial G(n)}{\partial \tilde{\mathbf{h}}(n)} \\
&= \tilde{\mathbf{h}}(n) + \frac{\mu_{\mathrm{iss}}\, e_m^3(n)\,\mathbf{x}_m}{\|\mathbf{x}_m\|_2^2\left(\|\mathbf{x}_m\|_2^2 + e_m^2(n)\right)} - \rho\,\frac{\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \epsilon\left|\tilde{\mathbf{h}}(n)\right|} \\
&= \tilde{\mathbf{h}}(n) + \mu_{\mathrm{ass}}(n)\,\frac{e_m(n)\,\mathbf{x}_m}{\|\mathbf{x}_m\|_2^2} - \rho\,\frac{\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \epsilon\left|\tilde{\mathbf{h}}(n)\right|},
\end{aligned}
\tag{9}
\]

where $\rho = \mu_{\mathrm{iss}}\lambda_{\mathrm{ass}}/\epsilon$ is a parameter which depends on the initial step size $\mu_{\mathrm{iss}}$, the regularization parameter $\lambda_{\mathrm{ass}}$, and the reweighted factor $\epsilon$. In the second term of (9), coefficients of $\tilde{\mathbf{h}}(n)$ whose magnitudes are smaller than $1/\epsilon$ are attracted to zero with high probability [11]. Here, it is worth noting that $\mu_{\mathrm{ass}}(n)$ is a variable step size:

\[
\mu_{\mathrm{ass}}(n) = \frac{\mu_{\mathrm{iss}}\, e_m^2(n)}{\|\mathbf{x}_m\|_2^2 + e_m^2(n)},
\tag{10}
\]

which depends on three factors: the initial step size $\mu_{\mathrm{iss}}$, the input signal $\mathbf{x}_m$, and the iterative update error $e_m(n)$. Since $\mu_{\mathrm{iss}}$ is given and $\mathbf{x}_m$ is a random scaling input signal, $\mu_{\mathrm{ass}}(n)$ in Equation 10 can also be rewritten as

\[
\mu_{\mathrm{ass}}(n) = \frac{\mu_{\mathrm{iss}}}{\|\mathbf{x}_m\|_2^2 / e_m^2(n) + 1},
\tag{11}
\]

which is a variable step size (VSS) that adapts to the squared sensing error $e_m^2(n)$: a smaller error incurs a smaller step size to ensure the stability of the gradient descent, while a larger error yields a larger step size to accelerate the convergence of the algorithm [12]. According to the update equation in (9), our proposed ASS method is summarized in Algorithm 1, sketched below.
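Since the pseudocode of Algorithm 1 did not survive extraction here, we give a minimal Python sketch of the loop implied by (9) and (10); the function name, the zero initialization of the estimate, and the fixed iteration count are our assumptions:

```python
import numpy as np

def ass_rza_nlmf(X, y, mu_iss, rho, eps, n_iter):
    """ASS via RZA-NLMF: sketch of update (9) with the variable step size (10)."""
    M, N = X.shape
    h = np.zeros(N)                          # initial estimate h(0) = 0 (assumption)
    for n in range(n_iter):
        m = n % M                            # m = mod(n, M): cycle through observations
        x_m, y_m = X[m], y[m]
        e = y_m - x_m @ h                    # sensing error e_m(n)
        x_norm2 = x_m @ x_m                  # ||x_m||_2^2
        mu_ass = mu_iss * e**2 / (x_norm2 + e**2)        # variable step size, Eq. (10)
        h += mu_ass * e * x_m / x_norm2                  # normalized gradient step
        h -= rho * np.sign(h) / (1.0 + eps * np.abs(h))  # reweighted zero attractor
    return h
```

Note how the two update terms separate cleanly: the normalized fourth-order gradient step with the VSS of (10), and the reweighted zero attractor, which shrinks coefficients with magnitude below roughly $1/\epsilon$ while leaving large ones nearly untouched.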

As a benchmark for the performance comparisons, the CRLB of the proposed ASS method is derived in the following. The signal error is defined as $\mathbf{v}(n) := \tilde{\mathbf{h}}(n) - \mathbf{h}$, so the error can be written as $e_m(n) = z_m - \mathbf{v}^T(n)\mathbf{x}_m$. To simplify the derivation of the CRLB, four assumptions are adopted in the subsequent analysis: (1) the input signal $\mathbf{x}_m$ and the noise $z_m$ are mutually independent; (2) each row $\mathbf{x}_m^T$ of the signal matrix $\mathbf{X}$ is independent, with zero mean and covariance $\sigma^2\mathbf{I}_N$; (3) the noise $z_m$ is independent, with zero mean and variance $\sigma_n^2$; (4) $\tilde{\mathbf{h}}(n)$ is independent of $\mathbf{X}$. Assume further that the $n$ th adaptive receive error is sufficiently small that $e_m^2(n) \ll \|\mathbf{x}_m\|_2^2$; hence, $\mu_{\mathrm{ass}}(n) \approx \mu_{\mathrm{iss}} e_m^2(n)/\|\mathbf{x}_m\|_2^2$. According to (9), the $n$ th update signal error $\mathbf{v}(n+1)$ can be written as

\[
\mathbf{v}(n+1) = \mathbf{v}(n) + \frac{\mu_{\mathrm{iss}}\, e_m^3(n)\,\mathbf{x}_m}{\|\mathbf{x}_m\|_2^2} - \rho\,\frac{\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \epsilon\left|\tilde{\mathbf{h}}(n)\right|},
\tag{12}
\]

where $e_m^3(n)$ can be expanded as

\[
e_m^3(n) = \left(z_m - \mathbf{v}^T(n)\mathbf{x}_m\right)^3 = z_m^3 - 3z_m^2\,\mathbf{v}^T(n)\mathbf{x}_m + 3z_m\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^2 - \left(\mathbf{v}^T(n)\mathbf{x}_m\right)^3.
\tag{13}
\]
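This is just the binomial expansion of the cubed error; a quick symbolic check (using SymPy, our own verification aid) confirms it:

```python
import sympy as sp

z, u = sp.symbols('z u')          # u stands for the scalar v^T(n) x_m
lhs = sp.expand((z - u) ** 3)     # e_m^3(n) = (z_m - v^T(n) x_m)^3
rhs = z**3 - 3*z**2*u + 3*z*u**2 - u**3
assert sp.simplify(lhs - rhs) == 0
```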

Substituting (13) into (12), v(n + 1) can be further represented as

\[
\begin{aligned}
\mathbf{v}(n+1) ={}& \mathbf{v}(n) + \frac{\mu_{\mathrm{iss}}\, z_m^3\,\mathbf{x}_m}{\|\mathbf{x}_m\|_2^2} - \frac{3\mu_{\mathrm{iss}}\, z_m^2\left(\mathbf{v}^T(n)\mathbf{x}_m\right)\mathbf{x}_m}{\|\mathbf{x}_m\|_2^2} + \frac{3\mu_{\mathrm{iss}}\, z_m\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^2\mathbf{x}_m}{\|\mathbf{x}_m\|_2^2} \\
&- \frac{\mu_{\mathrm{iss}}\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^3\mathbf{x}_m}{\|\mathbf{x}_m\|_2^2} - \rho\,\frac{\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \epsilon\left|\tilde{\mathbf{h}}(n)\right|}.
\end{aligned}
\tag{14}
\]

Hence, the steady-state mean square error (MSE) can be derived as

\[
\begin{aligned}
E\left[\mathbf{v}^T(n+1)\mathbf{v}(n+1)\right] ={}& E\left[\mathbf{v}^T(n)\mathbf{v}(n)\right] + \mu_{\mathrm{iss}}^2 E\left[z_m^6/\|\mathbf{x}_m\|_2^2\right] + 9\mu_{\mathrm{iss}}^2 E\left[z_m^4\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^2/\|\mathbf{x}_m\|_2^2\right] \\
&+ 9\mu_{\mathrm{iss}}^2 E\left[z_m^2\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^4/\|\mathbf{x}_m\|_2^2\right] + \mu_{\mathrm{iss}}^2 E\left[z_m^2\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^6/\|\mathbf{x}_m\|_2^2\right] \\
&+ \rho^2 E\!\left[\frac{\operatorname{sgn}\left(\tilde{\mathbf{h}}^T(n)\right)\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{\left(1 + \epsilon|\tilde{\mathbf{h}}(n)|\right)^2}\right] + 2\mu_{\mathrm{iss}} E\left[z_m^3\,\mathbf{v}^T(n)\mathbf{x}_m/\|\mathbf{x}_m\|_2^2\right] \\
&- 6\mu_{\mathrm{iss}} E\left[z_m^2\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^2/\|\mathbf{x}_m\|_2^2\right] + 6\mu_{\mathrm{iss}} E\left[z_m\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^3/\|\mathbf{x}_m\|_2^2\right] \\
&- 2\mu_{\mathrm{iss}} E\left[\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^4/\|\mathbf{x}_m\|_2^2\right] - 2\rho E\!\left[\frac{\mathbf{v}^T(n)\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \epsilon|\tilde{\mathbf{h}}(n)|}\right] \\
&- 6\mu_{\mathrm{iss}}^2 E\left[z_m^5\,\mathbf{v}^T(n)\mathbf{x}_m/\|\mathbf{x}_m\|_2^2\right] + 6\mu_{\mathrm{iss}}^2 E\left[z_m^4\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^2/\|\mathbf{x}_m\|_2^2\right] \\
&- 2\mu_{\mathrm{iss}}^2 E\left[z_m^3\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^3/\|\mathbf{x}_m\|_2^2\right] - 2\rho\mu_{\mathrm{iss}} E\!\left[\frac{z_m^3\,\mathbf{x}_m^T\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{\|\mathbf{x}_m\|_2^2\left(1 + \epsilon|\tilde{\mathbf{h}}(n)|\right)}\right] \\
&- 18\mu_{\mathrm{iss}}^2 E\left[z_m^3\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^3/\|\mathbf{x}_m\|_2^2\right] + 6\mu_{\mathrm{iss}}^2 E\left[\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^4/\|\mathbf{x}_m\|_2^2\right] \\
&+ 6\rho\mu_{\mathrm{iss}} E\!\left[\frac{z_m^2\left(\mathbf{v}^T(n)\mathbf{x}_m\right)\mathbf{x}_m^T\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{\|\mathbf{x}_m\|_2^2\left(1 + \epsilon|\tilde{\mathbf{h}}(n)|\right)}\right].
\end{aligned}
\tag{15}
\]

Based on the abovementioned independence assumptions and the ideal Gaussian noise assumption [13], we can get the following approximations:

\[
E[z_m] = E\left[z_m^3\right] = E\left[z_m^5\right] = 0,
\tag{16}
\]
\[
E\left[z_m^4\right] = 3\sigma_n^4,
\tag{17}
\]
\[
E\left[z_m^6\right] = 15\sigma_n^6,
\tag{18}
\]
\[
E\left[\mathbf{x}_m^T\mathbf{x}_m\right] = N\sigma^2.
\tag{19}
\]

Due to the independence between $\mathbf{x}_m$ and $\mathbf{v}(n)$, $\mathbf{v}^T(n)\mathbf{x}_m$ follows a zero-mean Gaussian distribution, that is, $E\left[\mathbf{v}^T(n)\mathbf{x}_m\right] = 0$ [13]. Hence, we can also get the following approximations:

\[
E\left[\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^2 \,\middle|\, \mathbf{v}(n)\right] = \sigma^2 E\left[\mathbf{v}^T(n)\mathbf{v}(n)\right],
\tag{20}
\]
\[
E\left[\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^4 \,\middle|\, \mathbf{v}(n)\right] = 3\sigma^4 \left(E\left[\mathbf{v}^T(n)\mathbf{v}(n)\right]\right)^2,
\tag{21}
\]
\[
E\left[\left(\mathbf{v}^T(n)\mathbf{x}_m\right)^6 \,\middle|\, \mathbf{v}(n)\right] = 15\sigma^6 \left(E\left[\mathbf{v}^T(n)\mathbf{v}(n)\right]\right)^3.
\tag{22}
\]

By neglecting the random fluctuations in $\mathbf{v}^T(n)\mathbf{v}(n)$ and using the approximation $\mathbf{v}^T(n)\mathbf{v}(n) \approx E\left[\mathbf{v}^T(n)\mathbf{v}(n)\right] =: b(n)$, substituting (16) to (22) into (15) simplifies it to

\[
\begin{aligned}
b(n+1) ={}& \left(1 + \frac{27\mu_{\mathrm{iss}}^2\sigma_n^4 - 6\mu_{\mathrm{iss}}\sigma_n^2}{N}\right) b(n) + \left(\frac{27\mu_{\mathrm{iss}}^2\sigma_n^2\sigma^2 - 6\mu_{\mathrm{iss}}\sigma^2 + 18\mu_{\mathrm{iss}}^2\sigma^4}{N}\right) b^2(n) \\
&+ \frac{15\mu_{\mathrm{iss}}^2\sigma_n^2\sigma^4}{N}\, b^3(n) - \frac{15\mu_{\mathrm{iss}}^2\sigma_n^6}{N\sigma^2} + \phi(n),
\end{aligned}
\tag{23}
\]

where $\phi(n)$ is incurred by the last term of (12) and is expressed as

\[
\phi(n) = \frac{6\rho\mu_{\mathrm{iss}}\sigma_n^2}{N}\, E\!\left[\frac{\mathbf{v}^T(n)\mathbf{x}_m\mathbf{x}_m^T\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \epsilon|\tilde{\mathbf{h}}(n)|}\right] + \rho^2 E\!\left[\frac{\operatorname{sgn}\left(\tilde{\mathbf{h}}^T(n)\right)\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{\left(1 + \epsilon|\tilde{\mathbf{h}}(n)|\right)^2}\right] - 2\rho E\!\left[\frac{\mathbf{v}^T(n)\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right)}{1 + \epsilon|\tilde{\mathbf{h}}(n)|}\right].
\tag{24}
\]

Since the adaptive update square error $b(n)$ is very small (i.e., $b(n) \ll 1$), terms of order higher than one in $b(n)$ are treated as zero, i.e., $b^2(n) \approx 0$ and $b^3(n) \approx 0$. The steady-state MSE can then be derived from (23) as

\[
b(\infty) = \frac{5\mu_{\mathrm{iss}}\sigma_n^4}{9\mu_{\mathrm{iss}}\sigma_n^2\sigma^2 - 2\sigma^2} - \frac{N\phi(\infty)}{27\mu_{\mathrm{iss}}^2\sigma_n^4 - 6\mu_{\mathrm{iss}}\sigma_n^2}.
\tag{25}
\]

Assume that the ideal reconstruction vector $\tilde{\mathbf{h}}(n)$ can be obtained; then one gets $\lim_{n\to\infty}\|\tilde{\mathbf{h}}(n)\|_1 = \|\mathbf{h}\|_1$ and $\lim_{n\to\infty}\operatorname{sgn}\left(\tilde{\mathbf{h}}^T(n)\right)\operatorname{sgn}\left(\tilde{\mathbf{h}}(n)\right) = K$, where $K$ denotes the number of nonzero coefficients in $\mathbf{h}$. Hence, $\phi(\infty)$ in (25) can be derived as

ϕ = lim n ϕ n = lim n 6 ρ μ iss σ n 2 σ 2 N - 2 ρ E h ˜ n - h T sgn h ˜ n 1 + ϵ h ˜ n + ρ 2 E sgn h ˜ T n sgn h ˜ n 1 + ϵ h ˜ n 2 = lim n 6 ρ μ iss σ n 2 σ 2 N - 2 ρ E h ˜ n 1 + ϵ h ˜ n 1 - h 1 + ϵ h ˜ n 1 + ρ 2 E sgn h ˜ T n sgn h ˜ n 1 + ϵ h ˜ n 2 ρ 2 K .
(26)

Finally, the CRLB of the proposed ASS can be obtained as

\[
\mathrm{CRLB}\left(\tilde{\mathbf{h}}_{\mathrm{ass}}\right) = b(\infty) = \frac{5\mu_{\mathrm{iss}}\sigma_n^4}{9\mu_{\mathrm{iss}}\sigma_n^2\sigma^2 - 2\sigma^2} - \frac{\rho^2 NK}{27\mu_{\mathrm{iss}}^2\sigma_n^4 - 6\mu_{\mathrm{iss}}\sigma_n^2}.
\tag{27}
\]
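For numerical comparison of the two bounds, a direct transcription of (6) and (27) (our own helper functions, with parameter names matching the text) is:

```python
def crlb_nss(K, N, sigma_n2):
    """NSS bound, Eq. (6): K * sigma_n^2 / N."""
    return K * sigma_n2 / N

def crlb_ass(K, N, mu_iss, rho, sigma_n2, sigma2):
    """ASS bound, Eq. (27), transcribed term by term.

    sigma_n2 is the noise variance sigma_n^2; sigma2 is the input variance sigma^2.
    """
    term1 = 5 * mu_iss * sigma_n2**2 / (9 * mu_iss * sigma_n2 * sigma2 - 2 * sigma2)
    term2 = rho**2 * N * K / (27 * mu_iss**2 * sigma_n2**2 - 6 * mu_iss * sigma_n2)
    return term1 - term2
```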

4 Computer simulations

In this section, the proposed ASS approach using the RZA-NLMF algorithm is evaluated. To obtain average performance, 1,000 independent Monte Carlo runs are adopted. For ease of evaluation, the signal representation domain $\mathbf{D}$ is assumed to be the identity matrix $\mathbf{I}_{N\times N}$, and the unknown signal $\mathbf{s}$ is set to be sparse directly. The sensing matrix is then equivalent to the random measurement matrix, i.e., $\mathbf{X} = \mathbf{W}$. To ensure that $\mathbf{X}$ satisfies the RIP, $\mathbf{W}$ is set as a random Gaussian matrix [5], and the sparse coefficient vector $\mathbf{h}$ equals $\mathbf{s}$. The details of the simulation parameters are listed in Table 1. Notice that each nonzero coefficient of $\mathbf{h}$ follows a Gaussian distribution $\mathcal{CN}(0, \sigma^2)$, and the nonzero positions are randomly allocated within the length of $\mathbf{h}$, which is subject to $E\{\|\mathbf{h}\|_2^2\} = 1$, where $E\{\cdot\}$ denotes the expectation operator. The output signal-to-noise ratio (SNR) is defined as $20\log(E_s/\sigma_n^2)$, where $E_s = 1$ is the unit transmission power. All of the step sizes and regularization parameters are listed in Table 1. The estimation performance is evaluated by the average mean square error (MSE), defined as

Table 1 Simulation parameters
\[
\mathrm{Average\ MSE}\left\{\tilde{\mathbf{h}}(n)\right\} := E\left[\left\|\mathbf{h} - \tilde{\mathbf{h}}(n)\right\|_2^2\right],
\tag{28}
\]

where $\mathbf{h}$ and $\tilde{\mathbf{h}}(n)$ are the actual sparse vector and its $n$ th iterative adaptive estimate, respectively. According to our previous work [8], the regularization parameter for RZA-NLMF is set as $\lambda_{\mathrm{ass}} = 5 \times 10^{-8}$ so that it can exploit signal sparsity robustly. Since the RZA-NLMF-based ASS method depends highly on the reweighted factor $\epsilon$, we first select a reasonable factor $\epsilon$ by means of the Monte Carlo method. We then compare the proposed method with two typical NSS methods, i.e., BPDN [6] and OMP [7].
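As a reproducibility aid, here is a skeleton of the Monte Carlo loop described above (our reconstruction, reusing the `ass_rza_nlmf` sketch from Section 3; since the body of Table 1 is not reproduced here, the dimensions, step size, and iteration count below are placeholders rather than the authors' settings):

```python
import numpy as np

def average_mse(K=2, N=256, M=64, snr_db=10, runs=1000,
                mu_iss=1.0, lam=5e-8, eps=2000, n_iter=5000):
    """Average MSE of Eq. (28) over independent Monte Carlo runs (sketch)."""
    rng = np.random.default_rng(2)
    sigma_n = 10.0 ** (-snr_db / 20.0)   # noise scale for unit signal power E_s = 1
    rho = mu_iss * lam / eps             # zero-attractor strength, as below Eq. (9)
    mse = 0.0
    for _ in range(runs):
        h = np.zeros(N)
        h[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
        h /= np.linalg.norm(h)           # enforce E{||h||_2^2} = 1
        X = rng.standard_normal((M, N))  # X = W: random Gaussian sensing matrix [5]
        y = X @ h + sigma_n * rng.standard_normal(M)
        h_hat = ass_rza_nlmf(X, y, mu_iss, rho, eps, n_iter)
        mse += np.sum((h - h_hat) ** 2)
    return mse / runs
```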

4.1 Reweighted factor selection

Since the RZA-NLMF algorithm depends highly on the reweighted factor, selecting a robust reweighted factor for different noise environments and different signal sparsity levels is an important step for the RZA-NLMF algorithm. It is well known that the $\ell_0$-norm normalized least mean fourth (L0-NLMF) approach to CS can achieve the optimal solution, but it is NP-hard in practical applications such as noisy environments [2]. One can find that RZA-NLMF reduces to L0-NLMF as the reweighted factor approaches infinity. Due to the noise interference, we should select a suitable reweighted factor which not only exploits signal sparsity but also mitigates noise interference effectively. Hence, the reweighted factor of RZA-NLMF is selected empirically, via a grid search (sketched after this paragraph). By means of the Monte Carlo method, the performance curves of the proposed ASS method with different reweighted factors $\epsilon \in \{2, 20, 200, 2{,}000, 20{,}000\}$, different numbers of nonzero coefficients $K \in \{2, 6, 10\}$, and different SNR regimes (5 and 10 dB) are depicted in Figures 4, 5, 6, and 7. Under the simulation setup considered, RZA-NLMF using $\epsilon = 2{,}000$ achieves robust performance in all of these cases. From the four figures, one can also find that a sparser signal requires a larger reweighted factor, though no more than 20,000 in this system. This is consistent with the fact that a stronger sparse penalty not only exploits more sparse information but also mitigates more noise interference.
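The empirical selection then amounts to rerunning the harness above for each candidate factor, e.g.:

```python
# Grid search over the candidate reweighted factors considered in the paper
for eps in (2, 20, 200, 2000, 20000):
    print(f"eps = {eps:6d}: average MSE = {average_mse(K=2, snr_db=10, eps=eps):.4e}")
```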

Figure 4. RZA-NLMF performance versus reweighted factors (K = 2 and SNR = 5 dB).

Figure 5. RZA-NLMF performance versus reweighted factors (K = 2 and SNR = 10 dB).

Figure 6. RZA-NLMF performance versus reweighted factors (K = 6 and SNR = 10 dB).

Figure 7. RZA-NLMF performance versus reweighted factors (K = 10 and SNR = 10 dB).

4.2 Performance comparisons with NSS

Two experiments verify the ASS performance in comparison with conventional NSS methods (e.g., BPDN [6] and OMP [7]). In the first experiment, the ASS method is evaluated at SNR = 10 dB, as shown in Figure 8. On the one hand, according to this figure, we can find that the proposed ASS method using the RZA-NLMF algorithm achieves much lower MSE than the NSS methods, and even than their CRLB. The large performance gap between ASS and NSS arises because ASS using RZA-NLMF not only exploits the signal sparsity but also mitigates the noise interference by using higher-order error statistics in the adaptive error update. On the other hand, we can also find that ASS depends on the signal sparseness; that is to say, for a sparser signal, ASS can exploit more signal structure information as prior information, and vice versa. In the second experiment, the number of nonzero coefficients is fixed as K = 2, as shown in Figure 9. It is easy to find that our proposed ASS performs much better than conventional NSS as the SNR increases.

Figure 8. Performance comparisons versus signal sparsity.

Figure 9. Performance comparisons versus SNR.

5 Conclusions

In this paper, we proposed an ASS method using the RZA-NLMF algorithm for dealing with CS problems. First, we selected the reweighted factor and the regularization parameter for the proposed algorithm by means of the Monte Carlo method. Then, based on the update equation of RZA-NLMF, the CRLB of ASS was derived under random independence assumptions. Finally, several representative simulations were given to show that the proposed method achieves much better MSE performance than NSS for different signal sparsity levels, especially in the low SNR regime.

Since an empirical reweighted factor was selected for RZA-NLMF in the noisy environment, in future work we will develop a learned reweighted factor for RZA-NLMF in the noiseless case. It is expected that RZA-NLMF using a learned reweighted factor can achieve much better recovery performance without sacrificing much computational complexity.

References

  1. Candes EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52(2):489-509.

  2. Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289-1306.

  3. Baraniuk R: Compressive radar imaging. IEEE Radar Conference, Boston, 17-20 Apr 2007, pp. 128-133.

  4. Herman M, Strohmer T: Compressed sensing radar. IEEE Radar Conference, Rome, 26-30 May 2008, pp. 1-6.

  5. Candes EJ: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 2008, 346:589-592.

  6. Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1998, 20(1):33-61. 10.1137/S1064827596304010

  7. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53(12):4655-4666.

  8. Gui G, Mehbodniya A, Adachi F: Adaptive sparse channel estimation using re-weighted zero-attracting normalized least mean fourth. 2nd IEEE/CIC International Conference on Communications in China (ICCC), Xi'an, 12 Aug 2013, pp. 368-373.

  9. Dai L, Wang Z, Yang Z: Compressive sensing based time domain synchronous OFDM transmission for vehicular communications. IEEE J. Sel. Areas Commun. 2013, 31(9):460-469.

  10. Dai L, Wang Z, Yang Z: Spectrally efficient time-frequency training OFDM for mobile large-scale MIMO systems. IEEE J. Sel. Areas Commun. 2013, 31(2):251-263.

  11. Chen Y, Gu Y, Hero AO III: Sparse LMS for system identification. IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, 19–24 Apr 2009 ᅟ, 3125-3128.

  12. Gui G, Dai L, Kumagai S, Adachi F: Variable earns profit: improved adaptive channel estimation using sparse VSS-NLMS algorithms. IEEE International Conference on Communications (ICC), Sydney, 10-14 June 2014, pp. 1-5.

  13. Eweda E, Bershad NJ: Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference. IEEE Trans. Signal Process. 2012, 60(12):6235-6244.

Acknowledgements

The authors would like to thank the editor and the anonymous reviewers for their helpful comments and suggestions, which improved the quality of this paper.

Author information

Correspondence to Guan Gui.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Gui, G., Xu, L. & Adachi, F. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing. EURASIP J. Adv. Signal Process. 2014, 125 (2014). https://doi.org/10.1186/1687-6180-2014-125
