
Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

Abstract

Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots must be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, inverting the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with fewer received snapshots. In this work, it is shown that the spatial formulation is best suited to large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both the spatial and temporal formulations when the number of targets is unknown. The simulation results show the advantage of the SABMP algorithm: it uses a low number of snapshots and provides better parameter estimation for both small and large numbers of antenna elements. Moreover, simulations show that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

1 Introduction

Colocated multiple-input multiple-output (MIMO) radars have been extensively studied in the literature for surveillance applications. In phased array radars, each antenna transmits a phase-shifted version of the same waveform to steer the transmit beam; the transmitted waveforms at the antenna elements are therefore fully correlated, resulting in a single beamformed waveform. In contrast, MIMO radar can be seen as an extension of phased array radar in which the transmitted waveforms can be independent or partially correlated. Such waveforms yield extra degrees of freedom that can be exploited for better detection performance and resolution, and to achieve desired beam patterns with uniform transmit energy in the desired directions. For MIMO radar, many parameter estimation algorithms have been studied, e.g., Capon, amplitude-and-phase estimation (APES), Capon and APES (CAPES), and Capon and approximate maximum likelihood (CAML) [1, 2]. These algorithms require the inverse of the covariance matrix of the received samples. This covariance matrix is full rank only if the number of snapshots is greater than or equal to the number of receive antenna elements; therefore, conventional algorithms like Capon and APES require a large number of snapshots for parameter estimation. Moreover, for large arrays, inverting the covariance matrix built from a large number of received snapshots becomes computationally expensive.

Compressive sensing (CS) [3, 4] is a useful tool for data recovery in sparse environments. Several efficient algorithms have been proposed that fall in the category of greedy algorithms; these include orthogonal matching pursuit (OMP) [5], regularized orthogonal matching pursuit (ROMP) [6], stagewise orthogonal matching pursuit (StOMP) [7], and compressive sampling matching pursuit (CoSaMP) [8]. Another category of CS algorithms, called Bayesian algorithms, assumes that the a priori statistics are known. These algorithms include sparse Bayesian learning [9], Bayesian compressive sensing (BCS) [10], and fast Bayesian matching pursuit (FBMP) [11]. Another reduced-complexity algorithm based on the structure of the sensing matrix is proposed in [12]. In addition to these algorithms, support agnostic Bayesian matching pursuit (SABMP) is proposed in [13], which assumes that the support distribution is unknown and finds a Bayesian estimate of the sparse signal by utilizing the noise statistics and the sparsity rate.

The target parameters to be estimated are the reflection coefficients (path gains) and the locations of the targets. To estimate the reflection coefficient and location angle of a target, existing CS algorithms can be utilized by formulating the MIMO radar parameter estimation problem as a sparse estimation problem. It is shown in [14–16] that the MIMO radar problem can be cast as an ℓ1-norm minimization problem. In direction-of-arrival (DOA) estimation, a discretized grid is selected to search over all possible DOA estimates; the grid corresponds to the search points in the angle domain of the MIMO radar. The complexity of the CS method developed in [15] grows with the size of the discretized grid. In [16], the minimization problem is solved using a covariance-matrix estimation approach which requires a large number of snapshots. The work in [17] does not provide a fast parameter estimation algorithm and assumes that the number of targets, the sparsity rate, and the noise variance are known. The authors in [18] used CVX (a package for solving convex problems) to solve the minimization problem obtained from the CS formulation of MIMO radar; solving CS problems with CVX is computationally expensive for a large angle grid. In [19], off-grid directions of arrival are estimated using sparse Bayesian inference, where the number of sources or targets is assumed to be known. An off-grid CS algorithm called adaptive matching pursuit with constrained total least squares is proposed in [20] with application to DOA estimation. Another algorithm based on iterative recovery of off-grid targets is proposed in [21, 22]. For recent developments that are useful in off-grid recovery, see [23] and the references therein.

In this work, our contribution is twofold. First, we solve the spatial formulation for parameter estimation by SABMP for on-grid targets, assuming that the number of targets and the noise variance are unknown. Second, we solve an alternative temporal formulation to find estimates of the unknown parameters. We also compare the MSE and complexity of our approach with those of the existing conventional algorithms. Specifically, the advantages of using a CS-based algorithm are as follows:

  1. The spatial formulation can recover the unknown parameters when the number of snapshots is less than the number of receive antennas.

  2. The proposed approach for parameter estimation is capable of estimating the unknown parameters even away from the broadside of the beam pattern.

  3. The recovery of the reflection coefficient in the CS temporal formulation using SABMP is better than with the Capon, APES, and CoSaMP algorithms.

  4. The complexity of the SABMP algorithm is not much affected by the number of receive antenna elements in the spatial formulation.

1.1 Organization of the paper

The rest of the paper is organized as follows: In Section 2, the signal model for the MIMO radar DOA problem is formulated. In Section 3, the system model is reformulated in a CS framework for on-grid parameter estimation, along with the spatial and temporal formulations for large and small arrays (Sections 3.1 and 3.2, respectively). In Section 4, we show the derivation of the Cramér-Rao lower bound (CRLB). The simulation results are discussed in Section 5, and the paper is concluded in Section 6.

1.2 Notation

We assume complex-valued data, which is more general. Bold lowercase letters, e.g., x, denote vectors, and bold uppercase letters, e.g., X, denote matrices. The notations x^T and X^T denote the transpose of a vector x and of a matrix X, respectively. The notation x^H denotes the complex conjugate transpose of a vector x. The notation diag{a,b} denotes a diagonal matrix with diagonal entries a and b.

1.3 Support agnostic Bayesian matching pursuit

The CS technique is used to recover information from signals that are sparse in some domain, using fewer measurements than required by Nyquist theory. Let \(\mathbf {x} \in \mathcal {C}^{N}\) be a sparse signal consisting of K non-zero coefficients in an N-dimensional space, where K≪N. If \(\mathbf {y} \in \mathcal {C}^{M}\) is the observation vector with M≪N, then the CS problem can be formulated as

$$\begin{array}{*{20}l} \mathbf{y} = \boldsymbol{\Phi} \mathbf{x} + \mathbf{z} \end{array} $$
(1)

where \(\boldsymbol{\Phi} \in \mathcal {C}^{M \times N}\) is referred to as the sensing matrix and \(\mathbf {z} \in \mathcal {C}^{M}\) is complex additive white Gaussian noise, \(\mathcal {CN} (\mathbf {0},\sigma _{\mathbf {z}}^{2} \mathbf {I}_{M})\). The theoretical way to reconstruct x is to solve an ℓ0-norm minimization problem when it is known a priori that the signal x is sparse and the measurements are noise-free, i.e.,

$$\begin{array}{*{20}l} \min \|\mathbf{x} \|_{0}, ~~~~~ \text{subject to} ~~~~~ \mathbf{y}=\boldsymbol{\Phi} \mathbf{x}. \end{array} $$
(2)

Solving the ℓ0-norm minimization problem is NP-hard and requires an exhaustive search. Therefore, a more tractable approach [24] is to minimize the ℓ1-norm with a relaxed constraint, i.e.,

$$\begin{array}{*{20}l} \min \|\mathbf{x}\|_{1}, ~~~~~ \text{subject to} ~~~~~ \|\mathbf{y} - \boldsymbol{\Phi} \mathbf{x}\|_{2} \leq \delta, \end{array} $$
(3)

where \(\delta = \sqrt {\sigma _{\mathbf {z}}^{2} (M+\sqrt {2M})}\). The ℓ1-norm minimization problem reduces to a linear program known as basis pursuit.
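As a concrete illustration of the relaxed problem (3), the following minimal sketch (not taken from the paper) sets up the model (1) with a random complex sensing matrix, computes the constraint level δ given above, and approximately solves the ℓ1-relaxed recovery through its Lagrangian (LASSO) form using iterative soft thresholding. The dimensions, the regularization weight lam, and the iteration count are illustrative assumptions, not values used by the authors.

```python
# Minimal sketch (assumed setup, not the paper's implementation): complex CS model (1)
# and an ISTA solver for the Lagrangian form of the l1-relaxed recovery in (3).
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 40, 128, 4                     # measurements, ambient dimension, sparsity
sigma_z = 0.05

# K-sparse complex signal and a random complex sensing matrix Phi
x = np.zeros(N, dtype=complex)
supp = rng.choice(N, K, replace=False)
x[supp] = np.exp(1j * 2 * np.pi * rng.random(K))
Phi = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
y = Phi @ x + sigma_z / np.sqrt(2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Constraint level delta of (3)
delta = np.sqrt(sigma_z**2 * (M + np.sqrt(2 * M)))

def ista(Phi, y, lam=0.01, n_iter=500):
    """Iterative soft thresholding for min 0.5*||y - Phi x||_2^2 + lam*||x||_1 (complex case)."""
    c = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the quadratic term's gradient
    x_hat = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_iter):
        r = x_hat + Phi.conj().T @ (y - Phi @ x_hat) / c   # gradient step
        mag = np.maximum(np.abs(r) - lam / c, 0.0)         # soft threshold on the magnitude
        x_hat = mag * np.exp(1j * np.angle(r))
    return x_hat

x_hat = ista(Phi, y)
print("constraint level delta:", delta)
print("residual norm:", np.linalg.norm(y - Phi @ x_hat))
print("K largest recovered entries at:", np.sort(np.argsort(np.abs(x_hat))[-K:]))
print("true support:", np.sort(supp))
```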

The SABMP algorithm [13] is a Bayesian algorithm that provides robust sparse reconstruction. As discussed in [13], Bayesian estimation finds the estimate of x through the conditional expectation

$$ \hat{\mathbf{x}} = \mathsf{E}~ \left[\mathbf{x}|\mathbf{y} \right] = \sum\limits_{\mathcal{S}} p (\mathcal{S}|\mathbf{y}) \mathsf{E} \left[\mathbf{x}|\mathbf{y},\mathcal{S} \right] $$
(4)

where \(\mathcal {S}\) denotes the support set, which contains the locations of the non-zero entries, and \(p(\mathcal {S}|\mathbf {y})\) is the probability of \(\mathcal {S}\) given y, found by evaluating Bayes' rule. In the SABMP algorithm, the support set \(\mathcal {S}\) is found by a greedy approach. Once the support set \(\mathcal {S}\) is known, the best linear unbiased estimator based on y is used to estimate x. The SABMP algorithm, like other Bayesian algorithms, utilizes the noise statistics and the sparsity rate; it assumes Gaussian statistics for the additive noise and a prior on the sparsity rate. Estimates of the noise variance and sparsity rate need not be known; rather, the SABMP algorithm estimates them in a robust manner. The statistics of the locations of the non-zero coefficients, i.e., the signal support, are assumed to be either non-Gaussian or unknown; hence, the algorithm is agnostic to the support distribution. SABMP is a low-complexity algorithm because it searches for the solution in a greedy manner, and the matrix inversions involved in the calculations are done in an order-recursive manner, which leads to a further reduction in complexity.
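The following sketch illustrates the Bayesian combination in (4) only in spirit: a greedy pass collects a chain of candidate supports, each support receives a least-squares (BLUE-style) amplitude estimate, and the estimates are averaged with posterior-like weights built from a Gaussian noise likelihood and a Bernoulli support prior. It is a deliberately simplified stand-in for intuition, not the actual SABMP algorithm of [13]; in particular, the sparsity rate p and the noise level sigma_z are assumed known here, whereas SABMP estimates them internally.

```python
# Simplified illustration of (4); this is NOT the SABMP algorithm of [13].
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 32, 96, 3
sigma_z, p = 0.05, K / N                  # noise std and sparsity rate (assumed known here)

x = np.zeros(N, dtype=complex)
x[rng.choice(N, K, replace=False)] = np.exp(1j * 2 * np.pi * rng.random(K))
Phi = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
y = Phi @ x + sigma_z / np.sqrt(2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def ls_on_support(S):
    # Least-squares (BLUE-style) amplitude estimate restricted to support S
    xs, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    est = np.zeros(N, dtype=complex)
    est[S] = xs
    return est, np.linalg.norm(y - Phi[:, S] @ xs) ** 2

# Greedily grow one support chain; every prefix is kept as a candidate support
supports, S, residual = [], [], y.copy()
for _ in range(2 * K):
    S.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
    est, _ = ls_on_support(S)
    residual = y - Phi @ est
    supports.append(list(S))

# Posterior-like weight per candidate: Gaussian likelihood of the residual plus a
# Bernoulli prior on the support size (both assumed known in this simplified sketch)
log_post, ests = [], []
for S in supports:
    est, err = ls_on_support(S)
    log_post.append(-err / sigma_z**2 + len(S) * np.log(p / (1 - p)))
    ests.append(est)
w = np.exp(np.array(log_post) - max(log_post))
w /= w.sum()
x_mmse = sum(wi * ei for wi, ei in zip(w, ests))   # approximate E[x | y], cf. (4)
print("largest entries found at:", np.sort(np.argsort(np.abs(x_mmse))[-K:]))
print("true support:", np.sort(np.nonzero(x)[0]))
```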

2 Signal model

We focus on the colocated MIMO radar setup illustrated in Fig. 1. In colocated MIMO radar, the antenna elements of the transmitter and those of the receiver are closely spaced; in the monostatic configuration, the transmitter and the receiver are also close to each other, so they see the same aspects of a target. In other words, the distance between the target and the transmitter/receiver is large enough that the distance between the transmitter and the receiver becomes insignificant. Consider a MIMO radar system with n_T transmit and n_R receive antenna elements. The antenna arrays at the transmitter and receiver are uniform and linear, the inter-element spacing between any two adjacent antennas is half the transmitted signal wavelength, and there are K possible targets located at angles θ_k ∈ {θ_1, θ_2, …, θ_K}. Let s(n) denote the vector of transmitted symbols, which are uncorrelated quadrature phase-shift keying (QPSK) sequences. If z(n) denotes the vector of circularly symmetric white Gaussian noise samples at the n_R receive antennas at time index n, the vector of baseband samples at all n_R receive antennas can be written as [25]

$$ \mathbf{y}(n) = \sum\limits_{k=1}^{K} \beta_{k}(\theta_{k}) \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}} (\theta_{k}) \mathbf{s}(n) + \mathbf{z}(n), $$
(5)
Fig. 1 Colocated MIMO radar setup

where (·)^T denotes the transpose, β_k denotes the reflection coefficient of the k-th target at location angle θ_k, while \(\mathbf {a}_{T} (\theta _{k}) = [\!1, e^{i \pi \sin (\theta _{k})}, \ldots, e^{i \pi (n_{T} - 1) \sin (\theta _{k})}]^{\mathsf {T}}\) and \(\mathbf {a}_{R} (\theta _{k}) = [\!1, e^{i \pi \sin (\theta _{k})}, \ldots, e^{i \pi (n_{R} - 1) \sin (\theta _{k})}]^{\mathsf {T}}\) denote the transmit and receive steering vectors, respectively. We assume that z(n) is uncorrelated noise; a correlated noise model can be found in [26]. We are interested in estimating two parameters: the DOA, represented by θ_k, and the reflection coefficient β_k, which is proportional to the radar cross section (RCS) of the target. It is assumed that the targets lie in the same range bin.
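A minimal sketch of the received-signal model (5) for one snapshot is given below, under the stated assumptions of half-wavelength uniform linear arrays and uncorrelated QPSK waveforms. The array sizes, target angles, and noise level are illustrative choices, not values prescribed by the paper.

```python
# Sketch of one snapshot of the model (5); all numeric values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_T, n_R, K = 10, 10, 2
theta = np.deg2rad(np.array([-20.0, 35.0]))     # target DOAs theta_k
beta = np.exp(1j * 2 * np.pi * rng.random(K))   # unit-magnitude reflection coefficients
sigma_z = 0.1

def steer(n_elem, th):
    # a(theta) = [1, e^{j pi sin(theta)}, ..., e^{j pi (n-1) sin(theta)}]^T
    return np.exp(1j * np.pi * np.arange(n_elem) * np.sin(th))

# One snapshot of uncorrelated QPSK symbols from the n_T transmitters
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_T)))

y = np.zeros(n_R, dtype=complex)
for k in range(K):
    # beta_k * a_R(theta_k) * (a_T(theta_k)^T s(n)), cf. (5)
    y += beta[k] * steer(n_R, theta[k]) * (steer(n_T, theta[k]) @ s)
y += sigma_z / np.sqrt(2) * (rng.standard_normal(n_R) + 1j * rng.standard_normal(n_R))
print(y.shape)   # (n_R,) baseband samples across the receive array
```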

3 CS for target parameter estimation

The CS formulation for target parameter estimation can be done in two different ways: first, via a spatial formulation in which the samples at all antennas constitute a measurement vector; second, via a temporal formulation in which all snapshots in time at one antenna represent a measurement vector. These two methods are discussed next.

3.1 Spatial formulation

Suppose each antenna transmits L uncorrelated symbols; the matrix of all received samples can then be written as [18, 27]

$$ \mathbf{Y} = \sum\limits_{k=1}^{K} \beta_{k}(\theta_{k}) \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}} (\theta_{k}) \mathbf{S} + \mathbf{Z}, $$
(6)

where

$$ \mathbf{Y} = [\!\mathbf{y}(0), \mathbf{y}(1), \ldots, \mathbf{y}(L-1)] \in \mathcal{C}^{n_{R} \times L} $$
(7)

and

$$ \mathbf{S} = [\!\mathbf{s}(0), \mathbf{s}(1), \ldots, \mathbf{s}(L-1)] \in \mathcal{C}^{n_{T} \times L} $$
(8)

is the matrix of all transmitted symbols from all antennas. For independent transmitted waveforms, the rows of S are uncorrelated. It should be noted that (6) holds if and only if the targets fall in the same range bin, which is a special case; the model in (6) can be generalized by adding a delay parameter to the transmitted waveform S. If the targets are in different range bins, there is an additional delay (time-of-arrival) parameter associated with each target, making the problem more complex. Since the targets are located at only a finite number of discretized locations in the angle range [−π/2, π/2], by dividing the region of interest into N grid points \(\{\hat \theta _{1},\hat \theta _{2},\ldots,\hat \theta _{N}\}\) and defining \(\mathbf {A}_{R} = [\!\mathbf {a}_{R}(\hat \theta _{1}), \mathbf {a}_{R}(\hat \theta _{2}), \ldots, \mathbf {a}_{R}(\hat \theta _{N})], \mathbf {A}_{T} = [\!\mathbf {a}_{T}(\hat \theta _{1}), \mathbf {a}_{T}(\hat \theta _{2}), \ldots, \mathbf {a}_{T}(\hat \theta _{N})]\), and B = diag{β_1, β_2, …, β_N}, we have

$$ \mathbf{Y} = \mathbf{A}_{R} \mathbf{B} \mathbf{A}_{T}^{\mathsf{T}} \mathbf{S} + \mathbf{Z} $$
(9)

It should be noted here that a diagonal element of B is non-zero if and only if a target is present at the corresponding grid location. If N≫K, the columns of the matrix \(\mathbf {B} \mathbf {A}_{T}^{\mathsf {T}} \mathbf {S}\) are sparse. Therefore, (9) can be written as

$$\begin{array}{*{20}l} [\!\mathbf{y}(0), \mathbf{y}(1), \ldots, \mathbf{y}(L-1)] &= \mathbf{A}_{R} [\!\tilde{\mathbf{x}}(0), \tilde{\mathbf{x}}(1), \ldots,\\ & \tilde{\mathbf{x}}(L-1)] + \mathbf{Z}, \end{array} $$
(10)

where \(\tilde {\mathbf {x}}(l) = \mathbf {B} \mathbf {A}_{T}^{\mathsf {T}} \mathbf {s}(l)\) for l=0,1,…,L−1 is a sparse vector. For a single snapshot, we can solve

$$ \mathbf{y}(l) = \mathbf{A}_{R} \tilde{\mathbf{x}}(l) + \mathbf{z}(l) $$
(11)

by optimizing the cost function

$$ \min_{\tilde{\mathbf{x}}(l)} \|\tilde{\mathbf{x}} (l) \|_{1} ~~~~~ \text{subject to} ~~~~~ \|\mathbf{y}(l) - \mathbf{A}_{R} \tilde{\mathbf{x}}(l) \|_{2} \leq \eta $$
(12)

with A_R as the sensing matrix, using convex optimization tools. The sensing matrix A_R is a structured matrix similar to the Fourier matrix. For guaranteed sparse recovery, there are conditions on the sensing matrix. One such condition is the restricted isometry property (RIP) [28], which states that a matrix Φ satisfies the RIP with constant δ_k if

$$ (1-\delta_{k}) \|\mathbf{x}\|_{2}^{2} \leq \|\boldsymbol{\Phi} \mathbf{x} \|^{2} \leq (1+\delta_{k}) \|\mathbf{x} \|^{2}_{2} $$
(13)

for every vector x with sparsity k. For guaranteed stable sparse recovery in the presence of noise, δ_2k should be less than \(\sqrt {2}-1\). Finding the exact value of δ_k is a combinatorial problem which requires an exhaustive search. For noiseless recovery of sparse vectors, the coherence criterion is more tractable. The coherence of a sensing matrix with unit-norm columns is given by

$$ \mu(\boldsymbol{\Phi}) = \max_{i\neq j} |\langle \phi_{i}, \phi_{j} \rangle | $$
(14)

where i, j = 1, 2, …, N and ϕ_i is the i-th column of Φ. In general, for any matrix Φ, 0 < μ ≤ 1, but for guaranteed sparse recovery μ should be as small as possible and must be less than one. The sensing matrix A_R can be used for sparse reconstruction because it satisfies the coherence criterion with μ(A_R) < 1. Convex optimization methods generally rely on randomness in the sensing matrix; structure in the sensing matrix deteriorates their performance due to the high μ(Φ). However, the properties of a structured sensing matrix can be exploited for reduced-complexity sparse reconstruction. It is shown in [12] that for a structured Toeplitz matrix with μ(Φ)≃0.9, Bayesian reconstruction is more efficient than convex optimization methods. Furthermore, the matrix A_R has a Vandermonde structure, and sparse recovery with a matrix similar to A_R is also discussed in [17]. Fourier-based structured matrices for compressed sensing are analyzed in [29].
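The coherence in (14) for the grid dictionary A_R can be checked numerically as in the following sketch; the grid and array sizes are illustrative. For a fine angle grid the measured μ(A_R) is close to, but strictly below, one, consistent with the discussion above.

```python
# Sketch: coherence mu in (14) for the receive-steering dictionary A_R on an N-point grid.
# Grid size and array size are illustrative assumptions.
import numpy as np

n_R, N = 16, 512
grid = np.linspace(-np.pi / 2, np.pi / 2, N)
A_R = np.exp(1j * np.pi * np.outer(np.arange(n_R), np.sin(grid)))   # n_R x N dictionary
A_R /= np.linalg.norm(A_R, axis=0)                                  # unit-norm columns

G = np.abs(A_R.conj().T @ A_R)          # magnitudes of pairwise column inner products
np.fill_diagonal(G, 0.0)                # ignore the i = j terms
mu = G.max()
print("coherence mu(A_R) =", mu)        # below 1, though close to 1 for a fine grid
```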

In [30], group-sparsity algorithms were used to solve (10) for multiple snapshots; it was shown that the complexity grows with the number of measurement vectors and that handling the sensing matrix becomes difficult due to the Kronecker product involved in the construction of the group sensing matrix. Since the column vectors \(\tilde {\mathbf {x}} (l)\), l = 0, 1, …, L−1, in (12) are sparse, CS algorithms with A_R as the sensing matrix can be used to estimate the locations and corresponding values of the non-zero elements in \(\tilde {\mathbf {x}} (l)\). Once these are known, the reflection coefficients and location angles of the targets can easily be found, as illustrated in the sketch below.
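The sketch walks through the per-snapshot spatial formulation (11)-(12) for a single on-grid target: build the grid dictionary A_R, recover the sparse vector x̃(l) with a simple greedy solver (plain OMP is used here only as a compact stand-in for the CS solver; the paper uses SABMP), then read the location angle off the support index and the reflection coefficient from the recovered amplitude divided by a_T^T(θ̂)s(l). All sizes and values are illustrative assumptions.

```python
# Sketch of (11)-(12): per-snapshot sparse recovery and parameter read-out.
# OMP is a stand-in solver for illustration; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_T, n_R, N = 16, 16, 256
grid = np.linspace(-np.pi / 2, np.pi / 2, N)
steer = lambda n, th: np.exp(1j * np.pi * np.arange(n) * np.sin(th))
A_R = np.array([steer(n_R, th) for th in grid]).T        # n_R x N dictionary

# One on-grid target and one QPSK snapshot
k_true = 90
beta_true = np.exp(1j * 0.7)
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_T)))
y = beta_true * A_R[:, k_true] * (steer(n_T, grid[k_true]) @ s)
y += 0.01 * (rng.standard_normal(n_R) + 1j * rng.standard_normal(n_R))

def omp(Phi, y, K):
    # Orthogonal matching pursuit: greedy support selection + least-squares refit
    S, r = [], y.copy()
    for _ in range(K):
        S.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        xs, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        r = y - Phi[:, S] @ xs
    x_hat = np.zeros(Phi.shape[1], dtype=complex)
    x_hat[S] = xs
    return x_hat

x_tilde = omp(A_R, y, K=1)
k_hat = int(np.argmax(np.abs(x_tilde)))
theta_hat = np.rad2deg(grid[k_hat])
beta_hat = x_tilde[k_hat] / (steer(n_T, grid[k_hat]) @ s)   # since x~(l) = B A_T^T s(l)
print("theta_hat [deg]:", theta_hat, " beta_hat:", beta_hat)
```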

The formulation developed in (9) can be considered block-sparse and can be solved by SABMP for block-sparse signals [31]. SABMP is a low-complexity algorithm and provides an approximate MMSE estimate of the sparse vector with an unknown support distribution. We emphasize that SABMP does not require estimates of the sparsity rate and noise variance; rather, it refines initial estimates of these parameters in an iterative fashion. Therefore, we assume that the noise variance and the number of targets are unknown. Moreover, SABMP keeps its complexity low by computing the required matrix inverses through order-recursive updates. The undersampling ratio in the CS setting is defined as the length of the sparse vector divided by the number of measurements, i.e., N/M. As the undersampling ratio increases, the performance of CS algorithms deteriorates (see [13] and the references therein). The results in [13] show that the best performance of the SABMP algorithm is achieved when the undersampling ratio satisfies 1 < N/M < 7. Since the number of measurements here is n_R, it follows that the number of receive antennas should satisfy N/7 < n_R < N. For a given grid size, to maintain a low undersampling ratio, the spatial formulation is therefore best suited to large arrays.

3.2 Temporal formulation

For smaller antenna arrays, where n_R ≪ N, the formulation above can have a very high undersampling ratio, which leads to poor sparse recovery. To overcome this problem, an alternative formulation can be obtained by taking the transpose of (9):

$$ \mathbf{Y}^{\mathsf{T}} = \mathbf{S}^{\mathsf{T}} \mathbf{A}_{T} \mathbf{B} \mathbf{A}_{R}^{\mathsf{T}} + \mathbf{Z}^{\mathsf{T}} $$
(15)

Since B is sparse, \(\bar {\mathbf {X}} = \mathbf {B} \mathbf {A}_{R}^{\mathsf {T}}\) consists of sparse column vectors, and the new sensing matrix is

$$ \boldsymbol{\Psi} = \mathbf{S}^{\mathsf{T}} \mathbf{A}_{T} ~~ \in \mathcal{C}^{L \times N}. $$
(16)

Similar to the argument on target range bins for (6), the model in (15) holds if and only if the targets fall in the same range bin. Moreover, any delay in the waveform S will affect the RIP of Ψ. Although the sensing matrix Ψ exhibits structure, its coherence is less than 1. Here, we assume that the transmitted waveform matrix S is known at the receiver and that A_T can be reconstructed at the receiver in the absence of any calibration error. Therefore, the second CS formulation becomes

$$ \bar{\mathbf{Y}} = \boldsymbol{\Psi} \bar{\mathbf{X}} + \bar{\mathbf{Z}}, $$
(17)

where \(\bar {\mathbf {Y}} = \mathbf {Y}^{\mathsf {T}}\) and \(\bar {\mathbf {Z}} = \mathbf {Z}^{\mathsf {T}}\). As long as μ(Ψ) < 1, the solution obtained for \(\bar {\mathbf {X}}\) is the sparsest solution. More specifically, if every column vector \(\bar {\mathbf {x}}\) of \(\bar {\mathbf {X}}\) satisfies the inequality

$$ \|\bar{\mathbf{x}}\|_{0} < \frac{1}{2} \left(1+{\mu(\boldsymbol{\Psi})}^{-1} \right) $$
(18)

then ℓ1-minimization recovers \(\bar {\mathbf {x}}\) [32, 33].

With this new formulation, the advantage is that the undersampling ratio becomes N/L. Using a similar argument for the undersampling ratio as in the spatial formulation, it can be shown that N/7 < L < N because the number of measurements is now L. Since the undersampling ratio is determined by the number of snapshots for a given grid size, this formulation is more suitable for small arrays. It also has the additional advantage that the number of grid points N can be increased for finer resolution while keeping the undersampling ratio N/L low by increasing the number of snapshots L at the same time.
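A short sketch of the temporal formulation follows: it forms Ψ = S^T A_T, checks the undersampling ratio N/L, computes the coherence μ(Ψ), and evaluates the sparsity bound (18). The waveforms are random QPSK sequences and all sizes are illustrative assumptions.

```python
# Sketch of the temporal sensing matrix Psi = S^T A_T of (16) and the checks discussed above.
import numpy as np

rng = np.random.default_rng(4)
n_T, L, N, K = 10, 256, 512, 1
grid = np.linspace(-np.pi / 2, np.pi / 2, N)
A_T = np.exp(1j * np.pi * np.outer(np.arange(n_T), np.sin(grid)))       # n_T x N
S = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, (n_T, L)))) # QPSK waveforms

Psi = S.T @ A_T                                   # L x N temporal sensing matrix, cf. (16)
Psi_n = Psi / np.linalg.norm(Psi, axis=0)         # unit-norm columns for the coherence
G = np.abs(Psi_n.conj().T @ Psi_n)
np.fill_diagonal(G, 0.0)
mu = G.max()

print("undersampling ratio N/L =", N / L)         # controlled by the number of snapshots
print("coherence mu(Psi) =", mu)                  # below 1 for uncorrelated waveforms
print("sparsity bound of (18):", 0.5 * (1 + 1 / mu), "must exceed", K)
```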

4 Cramér-Rao lower bound

In the following subsections, we discuss the CRLB for two cases, i.e., for known θ_k and for unknown θ_k. Although both θ_k and β_k are unknown, we need to differentiate between the two cases of the CRLB depending on whether the target lies on-grid or off-grid. For the CRLB comparison, the error definition has to be consistent; to keep it consistent, we use the CRLB for known θ_k when the target is on-grid and the CRLB for unknown θ_k when the target is off-grid.

4.1 CRLB for known θ k

Let us define:

$$\begin{array}{*{20}l} \boldsymbol{\eta} = \left[\begin{array}{cccc} \Re(\beta_{k}) & \Im (\beta_{k}) \end{array} \right] \end{array} $$
(19)

The Fisher information matrix (FIM) for the unknown parameters is given by the Slepian-Bangs formula, assuming that the noise samples are uncorrelated:

$$\begin{array}{*{20}l} \mathbf{F} (\boldsymbol{\eta}) = \frac{2}{\sigma_{\mathbf{z}}^{2}} \Re \left[\sum_{n=0}^{N-1} \left(\frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \boldsymbol{\eta}} \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\eta}^{\mathsf{T}}} \right) \right] \end{array} $$
(20)

where

$$\begin{array}{*{20}l} \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \boldsymbol{\eta}} = \left[ \begin{array}{c} \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Re(\beta_{k})} \\ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Im(\beta_{k})} \end{array} \right]_{2 \times n_{R}}, \end{array} $$
(21)
$$\begin{array}{*{20}l} \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\eta}^{\mathsf{T}}} = \left[ \begin{array}{cc} \frac{\partial \mathbf{u}(n)}{\partial \Re(\beta_{k})} & \frac{\partial \mathbf{u}(n)}{\partial \Im(\beta_{k})} \end{array} \right]_{n_{R} \times 2} \end{array} $$
(22)

and

$$\begin{array}{*{20}l} \mathbf{u}(n) = \beta_{k}(\theta_{k}) \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}} (\theta_{k}) \mathbf{s}(n) \end{array} $$
(23)

The two terms with partial derivatives in (22) are found to be:

$$\begin{array}{*{20}l} \frac{\partial \mathbf{u}(n)}{\partial \Re(\beta_{k})} &= \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{s}(n) \end{array} $$
(24)

and

$$\begin{array}{*{20}l} \frac{\partial \mathbf{u}(n)}{\partial \Im(\beta_{k})} &= j \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{s}(n) \end{array} $$
(25)

The other two partial derivatives in (21) can be found by using the identity ∂x^H = (∂x)^H. Thus, (20) can be evaluated using (24) and (25), and the CRLB is obtained by inverting F(η).
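The 2×2 FIM of (20) and the resulting CRLB can be evaluated numerically as in the sketch below, which sums the contributions of the derivatives (24)-(25) over the snapshots for a single target with known θ_k. The array sizes, snapshot count, angle, and noise variance are illustrative assumptions.

```python
# Sketch: numerical FIM (20) and CRLB for Re(beta_k), Im(beta_k) with known theta_k.
import numpy as np

rng = np.random.default_rng(5)
n_T, n_R, L = 10, 10, 256
sigma_z2 = 0.1
theta_k = np.deg2rad(5.0)
steer = lambda n, th: np.exp(1j * np.pi * np.arange(n) * np.sin(th))
a_T, a_R = steer(n_T, theta_k), steer(n_R, theta_k)
S = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, (n_T, L))))  # QPSK waveforms

F = np.zeros((2, 2))
for n in range(L):
    du_re = a_R * (a_T @ S[:, n])          # du/d Re(beta_k), cf. (24)
    du_im = 1j * du_re                     # du/d Im(beta_k), cf. (25)
    D = np.column_stack([du_re, du_im])    # n_R x 2 Jacobian of u(n)
    F += (2.0 / sigma_z2) * np.real(D.conj().T @ D)

crlb = np.linalg.inv(F)
print("CRLB for Re(beta_k), Im(beta_k):", np.diag(crlb))
```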

4.2 CRLB for unknown θ k

Next, we derive CRLB for unknown θ k . Let us define:

$$\begin{array}{*{20}l} \boldsymbol{\alpha} = \left[\begin{array}{cccc} \Re(\beta_{k}) & \Im (\beta_{k}) & \theta_{k} \end{array} \right] \end{array} $$
(26)

The Fisher information matrix for the unknown parameters is again given by the Slepian-Bangs formula, assuming that the noise samples are uncorrelated:

$$\begin{array}{*{20}l} \mathbf{F} (\boldsymbol{\alpha}) = \frac{2}{\sigma_{\mathbf{z}}^{2}} \Re \left[\sum_{n=0}^{N-1} \left(\frac{\partial \mathbf{u}^{\mathsf H}(n)}{\partial \boldsymbol{\alpha}} \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\alpha}^{\mathsf{T}}} \right) \right] \end{array} $$
(27)

where

$$\begin{array}{*{20}l} \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \boldsymbol{\alpha}} = \left[ \begin{array}{c} \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Re(\beta_{k})} \\ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Im(\beta_{k})} \\ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \theta_{k}} \end{array} \right]_{3 \times n_{R}} \end{array} $$
(28)

and

$$\begin{array}{*{20}l} \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\alpha}^{\mathsf{T}}} = \left[ \begin{array}{ccc} \frac{\partial \mathbf{u}(n)}{\partial \Re(\beta_{k})} & \frac{\partial \mathbf{u}(n)}{\partial \Im(\beta_{k})} & \frac{\partial \mathbf{u}(n)}{\partial \theta_{k}} \end{array} \right]_{n_{R} \times 3} \end{array} $$
(29)

The partial derivatives with respect to ℜ(β_k) and ℑ(β_k) are given in (24) and (25), respectively. The third partial derivative is found by differentiating (23) with respect to θ_k, which gives

$$\begin{array}{*{20}l} \frac{\partial \mathbf{u}(n)}{\partial \theta_{k}} &= \beta_{k} \left(j\pi\cos(\theta_{k})\right) \left(\left(\mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{D}_{T} \mathbf{s}(n)\right) \mathbf{a}_{R}(\theta_{k}) \right.\\ & \quad + \left. \left(\mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{s}(n)\right) \mathbf{D}_{R} \mathbf{a}_{R}(\theta_{k}) \right) \end{array} $$
(30)

where

$$\begin{array}{*{20}l} \mathbf{D}_{T} = {\mathsf{diag}}\{0, 1, \ldots, n_{T} - 1 \}, \qquad \mathbf{D}_{R} = {\mathsf{diag}}\{0, 1, \ldots, n_{R} - 1 \} \end{array} $$

The FIM can be formed from (30) along with (24) and (25), and the inversion of F(α) yields the CRLB.
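The derivative (30) can be sanity-checked against a central finite difference, as in the following sketch; D_T and D_R are the diagonal index matrices defined after (30), and all parameter values are illustrative assumptions.

```python
# Sketch: finite-difference check of the derivative (30) used in the FIM of Section 4.2.
import numpy as np

rng = np.random.default_rng(6)
n_T, n_R = 10, 12
theta = np.deg2rad(20.0)
beta = np.exp(1j * 0.3)
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_T)))   # one QPSK snapshot
steer = lambda n, th: np.exp(1j * np.pi * np.arange(n) * np.sin(th))

def u(th):
    # u(n) = beta * a_R(theta) * (a_T(theta)^T s), cf. (23)
    return beta * steer(n_R, th) * (steer(n_T, th) @ s)

# Analytic derivative, cf. (30)
a_T, a_R = steer(n_T, theta), steer(n_R, theta)
D_T, D_R = np.diag(np.arange(n_T)), np.diag(np.arange(n_R))
du = beta * 1j * np.pi * np.cos(theta) * ((a_T @ D_T @ s) * a_R + (a_T @ s) * (D_R @ a_R))

# Numerical central difference for comparison
eps = 1e-6
du_num = (u(theta + eps) - u(theta - eps)) / (2 * eps)
print("max abs error:", np.max(np.abs(du - du_num)))   # should be very small (~1e-6 or less)
```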

5 Simulation results

We present here some simulation results to validate the methods discussed in this work. We assume a single target located at θ_k. The parameters to be estimated are the reflection coefficient β_k and the DOA θ_k of the target. To assess the performance of the algorithms, the unknown parameters are generated randomly according to \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) and \(\beta _{k} = e^{j \varphi _{k}}\) of unit amplitude, where \( \varphi _{k} \sim \mathcal {U} (0,1)\). The grid is uniformly discretized between −90° and +90° with N grid points; N is 512 in all simulations. All algorithms are run for 10^4 Monte-Carlo iterations. The noise is assumed to be uncorrelated Gaussian with zero mean and variance σ². The algorithms included for comparison are the Capon, APES, and CoSaMP algorithms. In the simulation results, SABMP refers to the SABMP algorithm for block-sparse signals, and for CoSaMP its block-sparse version [34] is used.
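For reference, the sketch below mirrors the Monte-Carlo setup just described: it draws θ_k uniformly in (−60°, 60°), snaps it to the nearest of the N grid points for the on-grid experiments, draws β_k on the unit circle, and accumulates the squared error over iterations. The estimator call is left as a placeholder to be replaced by any of the compared algorithms, so the printed MSE is trivially zero here; everything else is taken from the setup above.

```python
# Sketch of the Monte-Carlo MSE evaluation scaffold; the estimator is a placeholder.
import numpy as np

rng = np.random.default_rng(7)
N, n_iter = 512, 10_000
grid_deg = np.linspace(-90.0, 90.0, N)

se_beta = 0.0
for _ in range(n_iter):
    theta_k = rng.uniform(-60.0, 60.0)
    theta_k = grid_deg[np.argmin(np.abs(grid_deg - theta_k))]   # snap to grid (on-grid case)
    beta_k = np.exp(1j * rng.uniform(0.0, 1.0))                 # unit-magnitude coefficient
    beta_hat = beta_k      # placeholder: plug in Capon / APES / CoSaMP / SABMP output here
    se_beta += np.abs(beta_hat - beta_k) ** 2
print("MSE of beta_k:", se_beta / n_iter)
```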

5.1 CS spatial formulation

We now discuss the simulation results for the spatial formulation. Figures 2 and 3 show the mean square error (MSE) performance for β_k and θ_k, respectively. The number of antenna elements n_T and n_R is 16, and the number of snapshots L is 20; this is the case where L > n_R. Both the APES and Capon algorithms require L > n_R to evaluate the correlation matrix of the received signal. The estimation performance of β_k for Capon reaches an error floor because Capon estimates are always biased [1]. The APES algorithm shows the best estimation of β_k for SNR greater than −8 dB. Both the SABMP and CoSaMP algorithms do not perform well due to the high undersampling ratio, although SABMP performs better than CoSaMP for β_k estimation. For θ_k estimation, the results in Fig. 3 show that the Capon algorithm has the best performance at SNR greater than 3 dB. In the Capon algorithm, at high SNR, the covariance matrix of the received signals becomes close to singular, causing poor estimation of θ_k; for this reason, the Capon results are not plotted beyond 22 dB. Nevertheless, the results available in Fig. 3 serve the purpose of comparison. SABMP performs worse in this scenario because it requires more measurements for better sparse recovery. All four algorithms reach an error floor because the grid is finite; in [35], this phenomenon is referred to as the off-grid effect.

Fig. 2 MSE performance for β_k estimation. Simulation parameters: \(L = 20, n_{T} = 16, n_{R} = 16, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\)

Fig. 3 MSE performance for θ_k estimation. Simulation parameters: \(L = 20, n_{T} = 16, n_{R} = 16, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but falling off-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\)

In Figs. 4 and 5, we discuss the case when L < n_R. To simulate this case, we choose n_T and n_R equal to 128, and L is kept at only 10. In this case, both Capon and APES fail to recover the estimates due to the rank deficiency of the received-signal covariance matrix. However, the CoSaMP and SABMP algorithms are still able to estimate both β_k and θ_k. For β_k estimation, SABMP is better than CoSaMP up to an SNR of 22 dB; at high SNR, both algorithms have almost the same performance. Neither CoSaMP nor SABMP is able to achieve the CRLB due to the high undersampling ratio. The results in Fig. 5 show that SABMP has slightly better performance than CoSaMP for θ_k estimation.

Fig. 4 MSE performance for β_k estimation. Simulation parameters: \(L = 10, n_{T} = 128, n_{R} = 128, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\). No recovery for the Capon and APES methods

Fig. 5 MSE performance for θ_k estimation. Simulation parameters: \(L = 10, n_{T} = 128, n_{R} = 128, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but falling off-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\). No recovery for the Capon and APES methods

We show the complexity comparison in Fig. 6, where the processing time is plotted against n_R. For all cases of n_R, the number of snapshots L is 10 for the CS algorithms. If L were kept at 10 for the Capon and APES algorithms, they would not recover the unknown parameters; however, the comparison remains fair if we set L at least equal to n_R for them, because their computational burden is dominated by the inversion of the covariance matrix. It can be seen that as n_R increases, the processing times of the Capon and APES algorithms increase significantly. Since the covariance matrix is of size n_R × n_R, its size grows with n_R, and both Capon and APES need to invert the covariance matrix obtained from the received samples, which increases the processing time with increasing n_R. For SABMP, the computation in the spatial formulation depends mainly on L and only weakly on n_R, which is why the SABMP complexity does not change drastically with n_R. From Fig. 6, we note that for n_R greater than or equal to 32, the complexity of the SABMP algorithm is lower than that of APES but higher than that of Capon. The CoSaMP algorithm has lower complexity than SABMP, but its complexity increases significantly with n_R because it depends on both the number of measurements n_R and the number of blocks L. Since CoSaMP has lower complexity, there is a trade-off between performance and complexity between SABMP and CoSaMP in the spatial formulation.
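A simple timing experiment in the spirit of Fig. 6 is sketched below: it measures how the cost of forming and inverting an n_R × n_R sample covariance matrix, the dominant step of Capon and APES, grows with n_R. The sizes are illustrative and the timings are machine dependent; this is not the benchmarking code used for the figure.

```python
# Sketch: covariance formation and inversion time versus n_R (illustrative, machine dependent).
import time
import numpy as np

rng = np.random.default_rng(9)
for n_R in (16, 32, 64, 128, 256):
    L = n_R                                   # at least n_R snapshots for a full-rank covariance
    Y = (rng.standard_normal((n_R, L)) + 1j * rng.standard_normal((n_R, L))) / np.sqrt(2)
    t0 = time.perf_counter()
    R = Y @ Y.conj().T / L                    # sample covariance of the received snapshots
    R_inv = np.linalg.inv(R)
    print(f"n_R = {n_R:4d}: covariance inversion took {time.perf_counter() - t0:.4f} s")
```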

Fig. 6 Complexity comparison. Simulation parameters: \(n_{T} = n_{R}\), SNR = 20 dB, \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\)

5.2 CS temporal formulation

In this subsection, we present simulation results for the temporal formulation as an alternative to the spatial one. First, we compare resolution. Figure 7 shows a comparison of the resolution of the three algorithms. APES has coarser resolution than both the Capon and SABMP algorithms. Capon has finer resolution, but its amplitude is biased downwards. The SABMP algorithm gives the best resolution because on-grid CS algorithms are based on the recovery of non-zero entries; that is why SABMP provides a single sample at the target location. Similar behavior can be anticipated for the CoSaMP algorithm because it is also an on-grid CS algorithm.

Fig. 7 Resolution comparison. Simulation parameters: \(L = 256, n_{T} = 10, n_{R} = 10\), SNR = 0 dB (left), SNR = 25 dB (right)

The MSE of the β_k and θ_k estimates is shown in Figs. 8 and 9, respectively. The number of snapshots is L = 256, and the array size is kept small, i.e., n_T = 10 and n_R = 10. We plot the MSE obtained by the existing Capon, APES, and CoSaMP algorithms along with SABMP for comparison; the CRLB is also plotted. In Fig. 8, we assume that the target lies on the grid to plot the MSE of β_k and to compare it with the CRLB for known θ_k; otherwise, infinitely many grid points would be needed to compare the performance of the algorithms with the CRLB. The simulation results show that SABMP estimates β_k better than all of Capon, APES, and CoSaMP at high SNR. This better performance of SABMP is due to its Bayesian approach and its robustness to noise; moreover, the coherence of the sensing matrix is less than 1, which guarantees sparse recovery at low noise. In Fig. 9, we simulate the algorithms by generating θ_k randomly anywhere, not necessarily on the grid. For this reason, the MSE of θ_k reaches an error floor, which is due to the discretized grid and depends on the spacing between two consecutive grid points. For θ_k estimation, SABMP performs better than the APES algorithm beyond 10 dB but worse than the Capon algorithm. The CoSaMP algorithm has the worst performance because it does not work well with structured sensing matrices.

Fig. 8 MSE performance for β_k estimation. Simulation parameters: \(L = 256, n_{T} = 10, n_{R} = 10, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\)

Fig. 9 MSE performance for θ_k estimation. Simulation parameters: \(L = 256, n_{T} = 10, n_{R} = 10, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but falling off-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\)

The simulation results above are obtained for L > n_R. We now discuss the case when L < n_R and the number of snapshots is low. In the simulation results shown in Figs. 10 and 11, the number of snapshots L is only 8. In this case, there is no recovery by either the Capon or APES method due to the rank deficiency of the covariance matrix, but both CS algorithms still work. SABMP performs better than CoSaMP for both β_k and θ_k estimation. SABMP cannot achieve the CRLB because of the very low number of measurements in this case.

Fig. 10 MSE performance for β_k estimation. Simulation parameters: \(L = 8, n_{T} = 10, n_{R} = 10, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\). No recovery for the Capon and APES methods

Fig. 11 MSE performance for θ_k estimation. Simulation parameters: \(L = 8, n_{T} = 10, n_{R} = 10, N = 512\), \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but falling off-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\). No recovery for the Capon and APES methods

Next, we compare the performance of the algorithms at two different target locations: one at 5° and another at 70°. The simulation results in Figs. 12 and 13 show the estimation performance for β_k and θ_k, respectively. The performance of all algorithms degrades for the θ_k = 70° case because this angle falls in the low-power region of the beam pattern. For β_k estimation, the results show that for θ_k = 5°, the APES and SABMP algorithms achieve the bound earlier than for θ_k = 70°.

Fig. 12 MSE performance for β_k estimation. Simulation parameters: \(L = 256, n_{T} = 10, n_{R} = 10, N = 512\), \(\theta _{k} = 5^{\circ}\) (solid lines) and \(\theta _{k} = 70^{\circ}\) (dashed lines), on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\) and is the same for all iterations

Fig. 13 MSE performance for θ_k estimation. Simulation parameters: \(L = 256, n_{T} = 10, n_{R} = 10, N = 512\), \(\theta _{k} = 5^{\circ}\) (solid lines) and \(\theta _{k} = 70^{\circ}\) (dashed lines), falling off-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\) and is the same for all iterations

We now compare the complexity of the discussed algorithms. Figure 14 plots the processing time against the number of grid points N. The results show that the SABMP algorithm has higher complexity than the Capon and APES algorithms but lower complexity than the CoSaMP algorithm. CoSaMP has the highest complexity due to the Kronecker product involved in the construction of its sensing matrix. The complexity of SABMP depends on the number of multiple measurement vectors, which in this case equals the number of receive antennas. Therefore, there exists a trade-off between the performance and complexity of the Capon, APES, CoSaMP, and SABMP algorithms.

Fig. 14 Complexity comparison. Simulation parameters: \(L = 256, n_{T} = 10, n_{R} = 10\), SNR = 20 dB, \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\)

Lastly, we show a comparison of receiver operating characteristic (ROC) curves. At high SNR, the probability of detection of all algorithms is close to 1 for almost all probabilities of false alarm; therefore, the MSE criterion is better for comparing the performance of the different algorithms at high SNR. Instead, we choose a small SNR value of −12 dB to plot the ROCs of all four algorithms. Figure 15 shows the ROC comparison. The probability of detection is close to one for both the Capon and APES algorithms over a wide range of probabilities of false alarm. The SABMP algorithm performs slightly worse than both Capon and APES because we have chosen a low SNR value of −12 dB, while the performance gains of SABMP usually appear at high SNR. The CoSaMP algorithm performs slightly better than SABMP for low values of the probability of false alarm, but its performance deteriorates afterwards.

Fig. 15 ROC comparison. Simulation parameters: \(n_{T} = 10, n_{R} = 10\), SNR = −12 dB, \(\theta _{k} \sim \mathcal {U} (-60^{\circ},60^{\circ})\) but on-grid, \(\beta _{k} = e^{j \varphi _{k}}\) where \(\varphi _{k} \sim \mathcal {U} (0,1)\). (Markers are added in this plot only for the purpose of identifying the different curves)

6 Conclusions

In this work, we solved the MIMO radar parameter estimation problem using a fast and robust CS algorithm with two formulations: a spatial formulation for large arrays and a temporal formulation for small arrays. It is shown that SABMP provides the best parameter estimates at high SNR even when the number of targets and the noise variance are unknown.

References

  1. J Li, P Stoica, MIMO Radar Signal Processing (John Wiley & Sons, New Jersey, 2009).

  2. JA Scheer, WA Holm, Principles of Modern Radar: Advanced Techniques (SciTech Publishing, Edison, NJ, USA, 2013).

  3. DL Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

  4. EJ Candes, PA Randall, Highly robust error correction by convex programming. IEEE Trans. Inf. Theory 54(7), 2829–2840 (2008).

  5. YC Pati, R Rezaiifar, PS Krishnaprasad, in Proc. 27th Asilomar Conf. Signals, Syst. Comput. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition (IEEE, 1993), pp. 40–44.

  6. D Needell, R Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found. Comput. Math. 9(3), 317–334 (2008).

  7. DL Donoho, Y Tsaig, I Drori, J-L Starck, Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 58(2), 1094–1121 (2012).

  8. D Needell, JA Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009).

  9. ME Tipping, Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 1, 211–244 (2001).

  10. S Ji, Y Xue, L Carin, Bayesian compressive sensing. IEEE Trans. Signal Process. 56(6), 2346–2356 (2008).

  11. P Schniter, LC Potter, J Ziniel, in 2008 Information Theory and Applications Workshop. Fast Bayesian matching pursuit (IEEE, 2008), pp. 326–333.

  12. AA Quadeer, TY Al-Naffouri, Structure-based Bayesian sparse reconstruction. IEEE Trans. Signal Process. 60(12), 6354–6367 (2012).

  13. M Masood, TY Al-Naffouri, Sparse reconstruction using distribution agnostic Bayesian matching pursuit. IEEE Trans. Signal Process. 61(21), 5298–5309 (2013).

  14. JHG Ender, On compressive sensing applied to radar. Signal Process. 90(5), 1402–1414 (2010).

  15. Y Yu, AP Petropulu, HV Poor, MIMO radar using compressive sampling. IEEE J. Sel. Top. Signal Process. 4(1), 146–163 (2010).

  16. P Stoica, P Babu, J Li, SPICE: a sparse covariance-based estimation method for array processing. IEEE Trans. Signal Process. 59(2), 629–638 (2011).

  17. M Rossi, AM Haimovich, YC Eldar, Spatial compressive sensing for MIMO radar. IEEE Trans. Signal Process. 62(2), 419–430 (2014).

  18. Y Yu, S Sun, RN Madan, A Petropulu, Power allocation and waveform design for the compressive sensing based MIMO radar. IEEE Trans. Aerosp. Electron. Syst. 50(2), 898–909 (2014).

  19. Z Yang, L Xie, C Zhang, Off-grid direction of arrival estimation using sparse Bayesian inference. IEEE Trans. Signal Process. 61(1), 38–43 (2013).

  20. T Huang, Y Liu, H Meng, X Wang, Adaptive matching pursuit with constrained total least squares. EURASIP J. Adv. Signal Process. 2012(1), 252 (2012).

  21. S Jardak, S Ahmed, M-S Alouini, in 2015 Sensor Signal Processing for Defence (SSPD). Low complexity parameter estimation for off-the-grid targets (IEEE, 2015).

  22. S Jardak, S Ahmed, M-S Alouini, in 2014 International Radar Conference. Low complexity joint estimation of reflection coefficient, spatial location, and Doppler shift for MIMO-radar by exploiting 2D-FFT (IEEE, 2014).

  23. KV Mishra, M Cho, A Kruger, W Xu, Spectral super-resolution with prior knowledge. IEEE Trans. Signal Process. 63(20), 5342–5357 (2015).

  24. EJ Candès, JK Romberg, T Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006).

  25. J Li, P Stoica, MIMO radar with colocated antennas. IEEE Signal Process. Mag. 24(5), 106–114 (2007).

  26. H Jiang, J-K Zhang, KM Wong, Joint DOD and DOA estimation for bistatic MIMO radar in unknown correlated noise. IEEE Trans. Veh. Technol. 64(11), 5113–5125 (2015).

  27. P Stoica, Target detection and parameter estimation for MIMO radar systems. IEEE Trans. Aerosp. Electron. Syst. 44(3), 927–939 (2008).

  28. EJ Candes, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005).

  29. N Yu, Y Li, Deterministic construction of Fourier-based compressed sensing matrices using an almost difference set. EURASIP J. Adv. Signal Process. 2013(1), 155 (2013).

  30. H Ali, S Ahmed, TY Al-Naffouri, M-S Alouini, in 2014 International Radar Conference. Reduction of snapshots for MIMO radar detection by block/group orthogonal matching pursuit (IEEE, 2014).

  31. M Masood, TY Al-Naffouri, in IEEE Int. Conf. Acoust. Speech Signal Process. Support agnostic Bayesian matching pursuit for block sparse signals (IEEE, 2013), pp. 4643–4647.

  32. DL Donoho, M Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. 100(5), 2197–2202 (2003).

  33. R Gribonval, M Nielsen, Sparse representations in unions of bases. IEEE Trans. Inf. Theory 49(12), 3320–3325 (2003).

  34. RG Baraniuk, V Cevher, MF Duarte, C Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982–2001 (2010).

  35. S Fortunati, R Grasso, F Gini, MS Greco, K LePage, Single-snapshot DOA estimation by using compressed sensing. EURASIP J. Adv. Signal Process. 2014(1), 120 (2014).


Acknowledgements

This research was funded by a grant from the Office of Competitive Research Funding (OCRF) at the King Abdullah University of Science and Technology (KAUST).

The work was also supported by the Deanship of Scientific Research (DSR) at King Fahd University of Petroleum and Minerals (KFUPM), Dhahran, Saudi Arabia, through project number KAUST-002.

The authors acknowledge the Information Technology Center at King Fahd University of Petroleum and Minerals (KFUPM) for providing high performance computing resources that have contributed to the research results reported within this paper.

Authors’ contributions

HA, SA, and TYA contributed to the formulation of the problem. HA and SA carried out the simulations. MSS and MSA reviewed and critiqued the work to improve the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Hussain Ali.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ali, H., Ahmed, S., Al-Naffouri, T.Y. et al. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing. EURASIP J. Adv. Signal Process. 2017, 6 (2017). https://doi.org/10.1186/s13634-016-0436-x

