
Partially sparse imaging of stationary indoor scenes

Abstract

In this paper, we exploit the notion of partial sparsity for scene reconstruction associated with through-the-wall radar imaging of stationary targets under reduced data volume. Partial sparsity implies that the scene being imaged consists of a sparse part and a dense part, with the support of the latter assumed to be known. For the problem at hand, sparsity is represented by a few stationary indoor targets, whereas the high scene density is defined by exterior and interior walls. Prior knowledge of wall positions and extent may be available either through building blueprints or from prior surveillance operations. The contributions of the exterior and interior walls are removed from the data through the use of projection matrices, which are determined from wall- and corner-specific dictionaries. The projected data, with enhanced sparsity, is then processed using l1 norm reconstruction techniques. Numerical electromagnetic data is used to demonstrate the effectiveness of the proposed approach for imaging stationary indoor scenes using a reduced set of measurements.

1 Introduction

Achieving actionable intelligence in an efficient and reliable manner faces a host of challenges in urban sensing and through-the-wall radar imaging (TWRI) applications [1–9]. First and foremost is the increasing demand on radar systems to deliver high-resolution images in both range and crossrange, which requires the use of wideband signals and large array apertures, respectively. Second, the backscatter from the first wall, which is the exterior wall of the building being imaged, is much stronger than the returns from the interior of the building. This is because the signal undergoes attenuation in the wall materials; the more walls the signal penetrates to reach the indoor targets, the weaker the returns. Therefore, the clutter caused by wall backscatter can significantly contaminate the radar data and hinder the main intent of providing enhanced system capabilities for imaging building interiors and detecting and localizing stationary indoor targets.

Recently, compressive sensing (CS) has been used for efficient data acquisition in radar systems in general [10–14] and in urban radar systems in particular [15–19]. For urban radar systems, removal of clutter and stationary targets via change detection or exploitation of sparsity in the Doppler domain readily enables the application of CS techniques for moving target detection inside buildings [20–23]. However, these means are not available for detection and localization of stationary targets of interest, thereby requiring significant mitigation of wall reflections.

Several approaches have been proposed for dealing with front wall returns under full data volume without the need for reference or background data [8, 24–26]. In [8], the wall parameters, such as thickness and dielectric constant, are estimated from first wave arrivals and then used to model and subtract the front wall contributions from the received data. The approach in [24] for wall clutter mitigation is based on spatial filtering. It utilizes the strong similarity between wall electromagnetic (EM) responses, as viewed by different antennas along the entire physical or synthesized array aperture. A spatial filter is applied to remove the dc component corresponding to the constant-type radar return associated with the front wall. The subspace decomposition method, presented in [25, 26], utilizes not only the approximately identical wall scattering characteristics across the array elements but also the higher strength of the wall reflections compared to that of target reflections. When singular value decomposition (SVD) is applied to the measured data matrix, the wall subspace can be captured by the singular vectors associated with the dominant singular values. As a result, the wall contributions can be removed by projecting the data measurement vector at each antenna on the wall orthogonal subspace. It is noted that as the round-trip signal traveling times from the antennas to each interior wall, which is parallel to the front wall, are constant across the array aperture, both spatial filtering and the subspace decomposition methods will also mitigate returns from interior parallel walls as long as they are not shadowed by other contents of the building [27].

Both spatial filtering and subspace projection approaches have been shown to be equally effective for synthetic aperture radar (SAR) imaging under reduced data volume, provided that the same reduced set of frequencies or time samples is used at each available antenna position [28, 29]. Requiring the same frequency observations or time samples across all antennas is very restrictive and may not always be feasible. For example, some individual frequencies or frequency subbands may be unavailable due to competing wireless services or intentional interferences. Further, antenna positions may signify radar units operating independently, each with a separate frequency band to avoid cross-interference. In this paper, we propose an alternative scheme for imaging of stationary indoor scenes which overcomes this limitation of wall clutter mitigation techniques under reduced data volume by exploiting prior knowledge of the room layout. This information may be available either through building blueprints or from prior surveillance operations specifically dedicated to determining the building layout. We consider the scene being imaged to be partially sparse. That is, the scene consists of two parts, one of which is sparse and contains the stationary indoor targets of interest, while the other, corresponding to the exterior and interior walls, is dense with known support [30, 31]. We focus on stepped-frequency SAR operation and assume that only a few frequency observations are available, which could be the same or different from one antenna position to another, with the employed antenna positions constituting the set of reduced spatial measurements. We employ projection matrices that are determined from wall- and corner-specific scattering responses to remove the exterior and interior wall contributions from the measurements. In so doing, we enable the application of conventional sparse reconstruction schemes to obtain an image of the sparse part of the scene containing the stationary indoor targets.
We demonstrate the effectiveness of the partial sparsity-based approach for reconstruction of stationary through-the-wall scenes using numerical electromagnetic data of a single-story building for both cases of having the same reduced set of frequencies at each of the available antenna locations and also when different frequency measurements are employed at different antenna locations.

It is noted that, as an alternative to the proposed approach, the wall and corner responses can be modeled and then subtracted from the received data. However, this approach has two issues. First, it is very sensitive to phase errors. Second, it may not always work since some corners and parts of interior walls may be shadowed by objects in the interior of the building. Additionally, it is important to draw a distinction between the proposed approach and the subspace projection approach for wall removal [25, 26]. Although both approaches involve data projections, they exploit fundamentally different characteristics of the data measurements to achieve the desired objective. The subspace projection approach does not require knowledge of wall positions and extent. However, it assumes nominal geometry of the walls. In this respect, it relies on a specific radar configuration, whose defining characteristics are the constant distance of each antenna from the walls and normal incidence illuminations. This ensures approximately constant wall contributions in the data received at all antennas. The partial sparsity approach, on the other hand, relies on accurate knowledge of the building geometry to create wall- and corner-specific dictionaries, which are subsequently used for mitigating the contributions of exterior and interior walls. It does not, however, demand the invariance of wall returns across the partial or entire radar aperture. The impact of these fundamental differences on the performance of the two approaches is highlighted in Section 4.

The remainder of this paper is organized as follows. Section 2 presents the signal model under the assumption of known support of the exterior and interior walls. The wall contribution removal technique and scene reconstruction are discussed in Section 3. Section 4 evaluates the performance of the proposed partially sparse through-the-wall scene reconstruction approach using numerical EM data of a single-story building. Performance comparison with the subspace projection approach is also provided. Conclusions are drawn in Section 5.

2 Signal model

In this section, we develop the forward scattering model for through-the-wall radar imaging. The model is based on physical optics and any associated nonlinearities are ignored [32].

Consider a monostatic SAR with N antenna positions located along the x-axis parallel to a homogeneous front wall. The transmit waveform is assumed to be a stepped-frequency signal of M frequencies, equispaced over the desired bandwidth $\omega_{M-1} - \omega_0$,

$$\omega_m = \omega_0 + m\,\Delta\omega, \quad m = 0, 1, \ldots, M-1, \tag{1}$$

where ω0 is the lowest frequency in the desired frequency band and Δω is the frequency step size. The scene behind the front wall is assumed to be composed of P point targets, L-1 interior walls, which are parallel to the front wall and to the radar scan direction, and K corners corresponding to the junctions of two walls perpendicular to each other. It is noted that, due to the specular nature of the wall reflections, a SAR system located parallel to the front wall will only be able to receive backscattered signals from interior walls, which are parallel to the front wall. The contributions of walls perpendicular to the front wall will be captured primarily through the backscattered signals from the corners [33, 34].

The component of the received signal corresponding to the m th frequency at the n th antenna position, with phase center at $\mathbf{x}_{tn} = (x_{tn}, 0)$, due to the P point targets is given by [35, 36]

$$z_{\mathrm{tgt}}(m,n) = \sum_{p=0}^{P-1} \sigma_p \exp\left(-j\omega_m \tau_{p,n}\right), \tag{2}$$

where $\sigma_p$ is the complex amplitude corresponding to the p th target return and $\tau_{p,n}$ is the two-way traveling time between the n th antenna and the p th target. The reflections from the L walls measured at the n th antenna location corresponding to the m th frequency can be expressed as [24]

$$z_{\mathrm{wall}}(m,n) = \sum_{l=0}^{L-1} \sigma_{w,l} \exp\left(-j\omega_m \tau_{w,l}\right), \tag{3}$$

where $\sigma_{w,l}$ is the complex amplitude associated with the l th wall and $\tau_{w,l}$ is the two-way traveling time of the signal from the n th antenna to the l th wall. Note that, since the scan direction is parallel to the walls, the delay $\tau_{w,l}$ does not depend on n and is a function only of the downrange distance between the l th wall and the antenna baseline. Finally, the reflections from the K corners measured at the n th antenna location corresponding to the m th frequency can be expressed as [33, 37]

$$z_{\mathrm{corner}}(m,n) = \sum_{k=0}^{K-1} \Gamma_{k,n}\, \bar{\sigma}_k\, \mathrm{sinc}\!\left(\frac{\omega_m \bar{L}_k}{c}\sin\left(\theta_{k,n} - \bar{\theta}_k\right)\right) \exp\left(-j\omega_m \tau_{k,n}\right), \tag{4}$$

where $\bar{\sigma}_k$ is the complex amplitude, $\bar{L}_k$ is the length, and $\bar{\theta}_k$ is the orientation angle of the k th corner; $\tau_{k,n}$ is the two-way propagation delay between the n th antenna and the k th corner; $\theta_{k,n}$ is the aspect angle associated with the k th corner and the n th antenna; and $\Gamma_{k,n}$ is an indicator function that assumes a unit value only when the n th antenna illuminates the concave side of the k th corner. We note that each of the complex amplitudes $\sigma_p$, $\sigma_{w,l}$, and $\bar{\sigma}_k$ in (2) to (4) contains contributions from free-space path loss, attenuation due to propagation through the wall(s), and the reflectivity of the corresponding scatterer. The n th received signal corresponding to the m th frequency is, thus, given by

$$z(m,n) = z_{\mathrm{wall}}(m,n) + z_{\mathrm{corner}}(m,n) + z_{\mathrm{tgt}}(m,n) \tag{5}$$
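To make the forward model concrete, the following sketch evaluates (2) to (5) for a single antenna position. The function name, argument layout, and the aspect-angle convention are illustrative assumptions, not taken from the paper; through-wall attenuation is folded into the complex amplitudes, as the text notes.

```python
import numpy as np

C = 3e8  # propagation speed; wall attenuation is absorbed into the amplitudes

def received_signal(omegas, x_ant, targets, walls, corners):
    """Sketch of the forward model (2)-(5) for one antenna at (x_ant, 0).

    omegas : array of angular frequencies w_m
    targets: list of (sigma_p, x_p, y_p) point scatterers
    walls  : list of (sigma_w, y_w) walls parallel to the array
    corners: list of (sigma_c, x_c, y_c, L_c, theta_bar, illuminated)
    """
    z = np.zeros(len(omegas), dtype=complex)
    for sig, xp, yp in targets:                   # point-target term (2)
        tau = 2.0 * np.hypot(xp - x_ant, yp) / C
        z += sig * np.exp(-1j * omegas * tau)
    for sig, yw in walls:                         # wall term (3): delay depends
        tau = 2.0 * yw / C                        # only on downrange distance
        z += sig * np.exp(-1j * omegas * tau)
    for sig, xc, yc, Lc, th_bar, ill in corners:  # corner term (4)
        tau = 2.0 * np.hypot(xc - x_ant, yc) / C
        theta = np.arctan2(xc - x_ant, yc)        # aspect angle (assumed convention)
        u = omegas * Lc / C * np.sin(theta - th_bar)
        # np.sinc(x) = sin(pi x)/(pi x), so divide by pi for sin(u)/u
        z += ill * sig * np.sinc(u / np.pi) * np.exp(-1j * omegas * tau)
    return z
```

Note that the wall term reproduces the property exploited by spatial filtering and subspace projection: it is identical at every antenna position along the aperture.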

Assume that the scene being imaged is divided into a finite number of grid-points, say Q, in crossrange and downrange. Let $\mathbf{z}_n$ represent the received signal vector corresponding to the M frequencies and the n th antenna location, and let $\mathbf{s}$ be the concatenated Q × 1 scene reflectivity vector corresponding to the spatial sampling grid. Under the assumption that the building layout is known a priori, $\mathbf{s}$ can be expressed as $\mathbf{s} = [\mathbf{s}_1^T\ \mathbf{s}_2^T]^T$, where $\mathbf{s}_1 \in \mathbb{C}^{Q_1}$ is the dense part whose support is known and $\mathbf{s}_2 \in \mathbb{C}^{Q_2}$, with $Q_2 = Q - Q_1$, is the sparse part. Note that $\mathbf{s}_1$ corresponds to the walls that are parallel to the antenna baseline. Further, since the wall junctions lie along the parallel walls, the corner locations correspond to the support of a subset of $\mathbf{s}_1$, say $\bar{\mathbf{s}}_1 \in \mathbb{C}^{R}$, with $R < Q_1$. Then, using (2) to (5), we obtain the matrix-vector form

$$\mathbf{z}_n = \mathbf{A}_n \mathbf{s}_1 + \mathbf{B}_n \bar{\mathbf{s}}_1 + \mathbf{C}_n \mathbf{s}_2, \tag{6}$$

where $\mathbf{A}_n$, $\mathbf{B}_n$, and $\mathbf{C}_n$ are the dictionary matrices corresponding to the walls, corner reflectors, and point targets, respectively. The matrix $\mathbf{C}_n$ is of dimension $M \times Q_2$ with its $(m, q_2)$th element given by

$$\left[\mathbf{C}_n\right]_{m,q_2} = \exp\left(-j\omega_m \tau_{q_2,n}\right) \tag{7}$$

where $\tau_{q_2,n}$ is the two-way traveling time between the n th antenna and the $q_2$th grid-point of the sparse part. The wall dictionary $\mathbf{A}_n$ is an $M \times Q_1$ matrix, whose $(m, q_1)$th element takes the form [38]

$$\left[\mathbf{A}_n\right]_{m,q_1} = \exp\left(-j\omega_m\, 2y_{q_1}/c\right) \mathcal{I}_{q_1,n} \tag{8}$$

In (8), $y_{q_1}$ represents the downrange coordinate of the $q_1$th grid-point in the dense part, and $\mathcal{I}_{q_1,n}$ is an indicator function, which assumes a unit value only when the $q_1$th grid-point lies in front of the n th antenna, as illustrated in Figure 1. That is, if $x_{q_1}$ represents the crossrange coordinate of the $q_1$th dense grid-point and $\delta x$ represents the crossrange sampling step, then $\mathcal{I}_{q_1,n} = 1$ provided that $x_{q_1} - \delta x/2 \le x_{tn} \le x_{q_1} + \delta x/2$. The corner dictionary $\mathbf{B}_n$ is an $M \times R$ matrix whose $(m, r)$th element is given by

Figure 1. Illustration of the indicator function for the wall returns. The indicator function assumes unit values only for the gray pixels when the antenna is at the position shown.

$$\left[\mathbf{B}_n\right]_{m,r} = \exp\left(-j\omega_m \tau_{r,n}\right) \Gamma_{r,n}\, \mathrm{sinc}\!\left(\frac{\omega_m \bar{L}_r}{c}\sin\left(\theta_{r,n} - \bar{\theta}_r\right)\right) \tag{9}$$
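A minimal sketch of how the per-antenna target and wall dictionaries in (7) and (8) could be assembled, assuming the image grid is stored as (crossrange, downrange) pairs; the helper names and coordinate convention are ours, not the paper's.

```python
import numpy as np

C0 = 3e8  # propagation speed assumed for the delay model

def target_dictionary(omegas, x_ant, grid_xy):
    """C_n in (7): M x Q2 matrix of phase delays to the sparse-part grid points.
    grid_xy is a Q2 x 2 array of (crossrange, downrange) coordinates."""
    tau = 2.0 * np.hypot(grid_xy[:, 0] - x_ant, grid_xy[:, 1]) / C0
    return np.exp(-1j * np.outer(omegas, tau))

def wall_dictionary(omegas, x_ant, grid_xy, dx):
    """A_n in (8): downrange-only phase, gated by the indicator of Figure 1
    (unit value only for grid points directly in front of the antenna)."""
    indicator = (np.abs(grid_xy[:, 0] - x_ant) <= dx / 2.0).astype(float)
    tau = 2.0 * grid_xy[:, 1] / C0
    return np.exp(-1j * np.outer(omegas, tau)) * indicator  # broadcasts over columns
```

The corner dictionary (9) would follow the same pattern, with the sinc and illumination factors of (4) applied per column.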

Equation (6) considers the contribution of only one antenna location. Stacking the measurement vectors corresponding to all N antennas to form a tall vector,

$$\mathbf{z} = \left[\mathbf{z}_0^T\ \mathbf{z}_1^T\ \cdots\ \mathbf{z}_{N-1}^T\right]^T, \tag{10}$$

we obtain the linear system of equations

$$\mathbf{z} = \mathbf{A}\mathbf{s}_1 + \mathbf{B}\bar{\mathbf{s}}_1 + \mathbf{C}\mathbf{s}_2, \tag{11}$$

where

$$\mathbf{A} = \left[\mathbf{A}_0^T\ \cdots\ \mathbf{A}_{N-1}^T\right]^T, \quad \mathbf{B} = \left[\mathbf{B}_0^T\ \cdots\ \mathbf{B}_{N-1}^T\right]^T, \quad \mathbf{C} = \left[\mathbf{C}_0^T\ \cdots\ \mathbf{C}_{N-1}^T\right]^T \tag{12}$$

The vector $\mathbf{z}$ contains the full dataset corresponding to the N antenna locations and the M frequencies. For the case of reduced data volume, consider $\boldsymbol{\xi}$, a J-dimensional vector with $J \ll MN$, consisting of elements randomly chosen from $\mathbf{z}$ as follows:

$$\boldsymbol{\xi} = \boldsymbol{\Phi}\mathbf{z} = \boldsymbol{\Phi}\mathbf{A}\mathbf{s}_1 + \boldsymbol{\Phi}\mathbf{B}\bar{\mathbf{s}}_1 + \boldsymbol{\Phi}\mathbf{C}\mathbf{s}_2 \tag{13}$$

In (13), Φ is a J × MN measurement matrix of the form

$$\boldsymbol{\Phi} = \mathrm{kron}\left(\boldsymbol{\psi}, \mathbf{I}_{J_1}\right) \cdot \mathrm{diag}\left(\boldsymbol{\varphi}_0, \boldsymbol{\varphi}_1, \ldots, \boldsymbol{\varphi}_{N-1}\right), \quad J = J_1 J_2, \tag{14}$$

where ‘kron’ denotes the Kronecker product, $\mathbf{I}_{J_1}$ is a $J_1 \times J_1$ identity matrix, $\boldsymbol{\psi}$ is a $J_2 \times N$ measurement matrix constructed by randomly selecting $J_2$ rows of an N × N identity matrix, and $\boldsymbol{\varphi}_n$, n = 0, 1, …, N − 1, is a $J_1 \times M$ measurement matrix constructed by randomly selecting $J_1$ rows of an M × M identity matrix. We note that $\boldsymbol{\psi}$ determines the reduced antenna locations, whereas $\boldsymbol{\varphi}_n$ determines the reduced set of frequencies corresponding to the n th antenna location.
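The structure of (14) can be sketched as follows, assuming each selection matrix is built by drawing rows of an identity matrix at random; the function name and argument order are illustrative.

```python
import numpy as np

def measurement_matrix(M, N, J1, J2, rng):
    """Phi in (14): kron(psi, I_J1) . diag(phi_0, ..., phi_{N-1}).
    psi selects J2 of the N antennas; each phi_n selects J1 of the M
    frequencies, possibly a different set per antenna."""
    psi = np.eye(N)[np.sort(rng.choice(N, J2, replace=False))]   # J2 x N
    left = np.kron(psi, np.eye(J1))                              # J x (N*J1)
    right = np.zeros((N * J1, N * M))                            # block diagonal
    for n in range(N):
        phi_n = np.eye(M)[np.sort(rng.choice(M, J1, replace=False))]  # J1 x M
        right[n * J1:(n + 1) * J1, n * M:(n + 1) * M] = phi_n
    return left @ right                                          # J x (M*N), J = J1*J2
```

Each row of the resulting matrix picks out exactly one (antenna, frequency) sample of the full data vector, which is what makes the per-antenna frequency sets free to differ.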

3 Wall contribution removal and scene reconstruction

Given the reduced measurement vector $\boldsymbol{\xi}$ and knowledge of the support of the walls and corners, the goal is to reconstruct the sparse part of the image where the targets of interest are located. Towards this goal, we first need to remove the contributions of the dense part of the scene from $\boldsymbol{\xi}$. Let $\mathbf{P}_A$ be the matrix of the orthogonal projection from $\mathbb{C}^J$ onto the orthogonal complement of the range space of the matrix $\boldsymbol{\Phi}\mathbf{A}$. If $\boldsymbol{\Phi}\mathbf{A}$ is a full-rank matrix, then $\mathbf{P}_A$ can be expressed as

$$\mathbf{P}_A = \mathbf{I}_J - \boldsymbol{\Phi}\mathbf{A}\left(\boldsymbol{\Phi}\mathbf{A}\right)^{\dagger} \tag{15}$$

where $\mathbf{I}_J$ is a J × J identity matrix and $(\boldsymbol{\Phi}\mathbf{A})^{\dagger}$ denotes the pseudoinverse of $\boldsymbol{\Phi}\mathbf{A}$. On the other hand, if $\boldsymbol{\Phi}\mathbf{A}$ has reduced rank, then we resort to the SVD of $\boldsymbol{\Phi}\mathbf{A}$ to obtain the matrix $\mathbf{P}_A$ as

$$\mathbf{P}_A = \mathbf{U}_A \mathbf{U}_A^H \tag{16}$$

where $\mathbf{U}_A$ is the matrix consisting of the left singular vectors corresponding to the zero singular values and the superscript ‘H’ denotes the Hermitian operation. Applying the projection matrix $\mathbf{P}_A$ to the observation vector $\boldsymbol{\xi}$, we obtain

$$\boldsymbol{\xi}_A \equiv \mathbf{P}_A \boldsymbol{\xi} = \mathbf{P}_A\left(\boldsymbol{\Phi}\mathbf{A}\mathbf{s}_1 + \boldsymbol{\Phi}\mathbf{B}\bar{\mathbf{s}}_1 + \boldsymbol{\Phi}\mathbf{C}\mathbf{s}_2\right) = \mathbf{P}_A\boldsymbol{\Phi}\mathbf{B}\bar{\mathbf{s}}_1 + \mathbf{P}_A\boldsymbol{\Phi}\mathbf{C}\mathbf{s}_2 \tag{17}$$

Next, consider the projection matrix P B given by

$$\mathbf{P}_B = \begin{cases} \mathbf{I}_J - \mathbf{P}_A\boldsymbol{\Phi}\mathbf{B}\left(\mathbf{P}_A\boldsymbol{\Phi}\mathbf{B}\right)^{\dagger} & \text{if } \mathbf{P}_A\boldsymbol{\Phi}\mathbf{B} \text{ has full rank} \\ \mathbf{U}_B\mathbf{U}_B^H & \text{otherwise} \end{cases} \tag{18}$$

where $\mathbf{U}_B$ is the matrix consisting of the left singular vectors corresponding to the zero singular values of the matrix $\mathbf{P}_A\boldsymbol{\Phi}\mathbf{B}$. Application of $\mathbf{P}_B$ to the measurement vector $\boldsymbol{\xi}_A$ leads to

$$\boldsymbol{\xi}_{BA} \equiv \mathbf{P}_B \boldsymbol{\xi}_A = \mathbf{P}_B\left(\mathbf{P}_A\boldsymbol{\Phi}\mathbf{B}\bar{\mathbf{s}}_1 + \mathbf{P}_A\boldsymbol{\Phi}\mathbf{C}\mathbf{s}_2\right) = \mathbf{P}_B\mathbf{P}_A\boldsymbol{\Phi}\mathbf{C}\mathbf{s}_2 \tag{19}$$

Thus, after the sequential application of the two projection matrices, the measurement vector $\boldsymbol{\xi}_{BA}$ contains contributions from only the sparse image part, $\mathbf{s}_2$, which can then be recovered by solving the problem

$$\hat{\mathbf{s}}_2 = \arg\min_{\mathbf{s}_2} \left\|\mathbf{s}_2\right\|_1 \quad \text{subject to} \quad \boldsymbol{\xi}_{BA} \approx \mathbf{P}_B\mathbf{P}_A\boldsymbol{\Phi}\mathbf{C}\mathbf{s}_2 \tag{20}$$

The problem in (20) belongs to the classical setting of CS and, thus, can be solved using convex relaxation, greedy pursuit, or combinatorial algorithms [39–43]. In this work, we choose orthogonal matching pursuit (OMP), which is an iterative greedy algorithm [44].
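As a point of reference, a bare-bones OMP iteration for a generic dictionary D and measurement vector y might look as follows; this is a textbook sketch of the greedy algorithm in [44], not the specific solver used in the experiments.

```python
import numpy as np

def omp(D, y, n_iters):
    """Minimal OMP sketch: greedily select the dictionary column most
    correlated with the residual, then least-squares refit over the
    accumulated support."""
    residual = y.astype(complex)
    support = []
    coef = np.zeros(0, dtype=complex)
    for _ in range(n_iters):
        idx = int(np.argmax(np.abs(D.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1], dtype=complex)
    x[support] = coef
    return x
```

For the problem in (20), D would be the projected dictionary $\mathbf{P}_B\mathbf{P}_A\boldsymbol{\Phi}\mathbf{C}$ and y the projected measurements $\boldsymbol{\xi}_{BA}$, with n_iters playing the role of the assumed scene sparsity.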

It is noted that the dimensionality of the orthogonal complements of the range spaces of the matrices $\boldsymbol{\Phi}\mathbf{A}$ and $\boldsymbol{\Phi}\mathbf{B}$ is at least $J - Q_1$ and $J - R$, respectively. Further, if access to the full data volume is available, the proposed wall removal procedure can also be applied to the full data vector $\mathbf{z}$, with the appropriate projection matrices determined from the dictionary matrices $\mathbf{A}$ and $\mathbf{B}$ instead of $\boldsymbol{\Phi}\mathbf{A}$ and $\boldsymbol{\Phi}\mathbf{B}$, respectively. The wall-free data can then be processed using conventional image formation techniques [45].

4 Simulation results

In this section, we present scene reconstruction results for the partial sparsity technique using numerical EM data and provide a performance comparison with the subspace projection approach for both full and reduced data volumes. For the reduced data volume, we consider both the case of different frequency measurements at different available antenna locations and the case of the same reduced set of frequencies at each available antenna location. Note that for the subspace projection-based wall mitigation CS approach proposed in [28], the former poses a more challenging problem than the latter, as it is not amenable to wall removal using direct implementation of the subspace projection technique. Instead, the range profile at each employed antenna location first needs to be reconstructed through l1 norm minimization using the reduced frequency set [28]. Then, the Fourier transform of each reconstructed range profile is taken to recover the full frequency data measurements at each antenna location. Direct application of the subspace projection technique can then proceed, followed by the scene reconstruction.

4.1 Electromagnetic modeling

The simulation is based on Xpatch®, developed by SAIC/DEMACO (Champaign, IL, USA), a computational EM code implementing an approximate ray tracing/physical optics approach. We created a computer model of a single-story building, with overall dimensions of 7 m × 10 m × 2.2 m, containing four humans (labeled 1 through 4) and several furniture items, as shown in Figure 2. The origin of the coordinate system was chosen to be at the center of the building, with the x-axis and the y-axis oriented as shown in Figure 2b. The exterior walls were made of 0.2-m-thick brick and had glass windows and a wooden door. The interior walls were made of 5-cm-thick Sheetrock and had a wooden door. The ceiling/roof was flat, made of a 7.5-cm-thick concrete slab. The entire building was placed on top of a dielectric ground plane. The furniture items, namely, a bed, a couch, a bookshelf, a dresser, and a table with four chairs, were made of wood, while the mattress and cushions were made of a generic foam/fabric material. Humans 1 through 4 were positioned at various locations in the interior of the building with 45°, 0°, −20°, and 10° azimuthal orientation angles, respectively. Note that an orientation angle of 0° corresponds to the human facing along the positive x direction, and positive angles correspond to a counterclockwise rotation in the horizontal plane. Human 3, positioned inside the interior room, was carrying an AK-47 rifle. The human model was made of a uniform dielectric material with properties close to those of skin and is described in [46]. The human body radar cross section (RCS) depends on the aspect angle but is generally bounded between −10 and 0 dBsm, and the average human body RCS is fairly constant over the frequency range considered in this paper. More detailed results on the human body radar signature can be found in [46]. The AK-47 model is made of metal and wood and is described in [47, 48]. The dielectric properties of the various materials employed are listed in Table 1.

Figure 2. Scene layout. (a) 3-D view. (b) Top-down view.

Table 1 Complex dielectric constant for materials used in the simulation code

A 6-m-long synthetic aperture line array, with an inter-element spacing of 2.54 cm and located parallel to the front of the building at a standoff distance of 4 m, was used for data collection. Monostatic operation was assumed. The antenna had a 3-dB beamwidth of 60° in both elevation and azimuth and was positioned 0.5 m above the ground plane. The antenna boresight was aimed perpendicular to the exterior wall. A stepped-frequency signal covering the 0.7- to 2-GHz frequency band with a step size of 8.79 MHz was employed. Thus, at each of the 239 scan positions, the radar collected 148 frequency measurements over the 1.3-GHz bandwidth.

4.2 Image reconstruction under full data volume

The region to be imaged was chosen to be 9 m × 12 m, centered at the origin, and divided into 121 × 161 pixels. Figure 3a shows the image obtained with backprojection using the full raw dataset. In this figure and all subsequent figures in this paper, we plot the image intensity with the maximum intensity value in each image normalized to 0 dB. A Hanning window was applied to the data along the frequency dimension in order to reduce the range sidelobes in the image. The humans in the image are indicated by red circles. We can clearly see the front wall, some of the corners, and humans 1 and 2. Human 3 in the interior room is barely visible due to the additional EM loss, as the transmitted signal has to penetrate through both the exterior and interior walls. Likewise, it is a challenge to detect human 4, who is the farthest away from the front wall. Figure 3b shows the backprojected image after masking out the dense regions with known support. Although all the targets are visible in the masked image, the image is cluttered due to the presence of the wall and corner sidelobes.

Figure 3. Backprojection results using raw data under the full data volume. (a) Full image. (b) Image with the support region of the exterior and interior walls masked out.

Next, we reconstructed the scene using the subspace decomposition-based wall mitigation approach. The first two dominant singular vectors of the frequency vs. antenna raw data matrix were used to reconstruct the wall subspace. The wall subspace dimension was selected using the method reported in [49]. Finally, backprojection was performed on the wall clutter mitigated data, and the corresponding image is shown in Figure 4a. We observe that, although the stationary targets are more visible and the front and interior wall reflections are successfully removed, the corners indicating the presence of doors and windows are still present. Most of the back wall also remains due to shadowing effects. The approach also removed the reflections from the edge of the couch, so that only the couch corners survive. More importantly, the presence of discontinuities in the front wall (windows and door) causes the subspace decomposition-based approach to introduce artifacts in the image, indicated by the red rectangles. Such artifacts in the interior of the building are more visible in Figure 4b, which shows the image after masking out the dense regions with known support. These artifacts are attributed to the fact that the subspace projection scheme assumes the wall response to be the same from one antenna to the other, an assumption violated by the presence of windows and doors.

Finally, Figure 5 presents the backprojected image obtained using the proposed approach. The dense part of the scene, corresponding to the building layout (exterior and interior walls parallel to the array and corners), consisted of 7,196 pixels, while the sparse part of the scene consisted of the remaining 12,285 pixels. Compared to Figures 3b and 4b, the image in Figure 5 is the least cluttered since the wall sidelobes, in particular near the back wall, are absent. All of the humans and the furniture items are clearly visible in the image. We, therefore, conclude that the proposed approach provides superior performance compared to the subspace decomposition-based wall mitigation approach under the full data volume.

Figure 4. Backprojection results after application of the subspace projection technique under the full data volume. (a) Full image. (b) Image with the support region of the exterior and interior walls masked out.

Figure 5. Backprojection results using the proposed technique under the full data volume.

In addition to visual inspection, we also assess the performance of the various methods in terms of the target-to-clutter ratio (TCR), which is defined as the ratio of the average pixel power $I_t$ in the target region to the average pixel power $I_c$ in the clutter region of the reconstructed image $\hat{\mathbf{s}}_2$:

$$\mathrm{TCR} = 10\log_{10}\frac{I_t}{I_c}, \quad I_t = \frac{1}{N_t}\sum_{q_2 \in R_t} \left|\hat{s}_2(q_2)\right|^2, \quad I_c = \frac{1}{N_c}\sum_{q_2 \in R_c} \left|\hat{s}_2(q_2)\right|^2 \tag{21}$$

where $R_t$ is the target region, $R_c$ is the clutter region, $N_t$ is the number of pixels in the target region, and $N_c$ is the number of pixels in the clutter region. The target region is composed of the regions containing the four humans, and the remaining pixels of $\hat{\mathbf{s}}_2$ constitute the clutter region. Note that we consider furniture reflections as unwanted returns and, accordingly, treat them as clutter. Table 2 shows the TCR values for the reconstructed images of Figures 3b, 4b, and 5. As expected, the TCR is improved using the proposed scheme over the subspace projection-based wall clutter mitigation scheme.

Table 2 TCR - backprojection under full data volume
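The TCR in (21) reduces to a few lines given the reconstructed image and a boolean mask of the target region; this helper is our own illustrative sketch, not code from the paper.

```python
import numpy as np

def tcr_db(image, target_mask):
    """TCR in (21): ratio of average target-region pixel power to average
    clutter-region pixel power, in dB. `target_mask` is a boolean array
    marking the target region; everything else is treated as clutter."""
    power = np.abs(image) ** 2
    return 10.0 * np.log10(power[target_mask].mean() / power[~target_mask].mean())
```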

4.3 Image reconstruction under different sets of reduced frequencies at each available antenna location

Conventional image formation techniques, such as backprojection, compromise the image quality when a reduced number of measurements is used, thereby impeding the detection of targets behind the wall in the image domain. This is illustrated in Figure 6, wherein we used 118 randomly selected frequencies (79.7% of 148) and 79 randomly chosen antenna locations (33% of 239) for backprojection, which collectively represent 26% of the total data volume. The corresponding space-frequency sampling pattern is shown in Figure 7, where the vertical axis represents the antenna location and the horizontal axis represents the frequency. The filled boxes represent the data samples constituting the reduced set of measurements.

Next, we reconstructed the scene using the partial sparsity approach with 26% data volume. The number of OMP iterations, usually associated with the sparsity level of the scene, was set to 10. In this case, and for all subsequent sparse imaging results, each imaged pixel is the result of averaging 200 runs, with a different random selection for each run. The partial sparsity-based reconstruction of the sparse part of the scene is shown in Figure 8a. We observe from Figure 8a that the partial sparsity-based scheme detected and localized humans 1 through 3 successfully, while it missed human 4. In addition, some clutter (arising from the left chair and table) and background noise are also visible in the reconstructed image.

Figure 6. Backprojection results using 26% of the raw data.

Figure 7. An illustration of the random subsampling pattern employed for data reduction. Different sets of reduced frequencies are employed at each antenna location.

Figure 8. Scene reconstruction corresponding to the subsampling in Figure 7. (a) Partial sparsity-based approach. (b) Subspace projection wall mitigation-based CS approach.

For comparison, we also performed scene reconstruction using the subspace projection-based wall mitigation CS approach of [28] with 26% data volume. Full frequency data measurements were first recovered from the l1 norm-reconstructed range profiles at each considered antenna location. The number of OMP iterations was set to 100 for each range profile reconstruction, because the presence of the wall return and clutter renders the range profile quite dense. The subspace projection approach was then applied, wherein the first two dominant singular vectors of the 148 × 79 data matrix were used to reconstruct the wall subspace. Finally, standard l1 norm image reconstruction was performed on the wall clutter mitigated full frequency recovered data to form an image of the sparse part of the scene, shown in Figure 8b. Similar to the partial sparsity approach, the number of OMP iterations in this case was chosen to be 10. We observe from Figure 8b that human 1 was detected, human 2 was barely detected, while humans 3 and 4 were both missing from the reconstruction. Moreover, significantly more clutter and noise was reconstructed compared to Figure 8a. We, therefore, conclude that the partial sparsity-based approach provides superior performance compared to the subspace projection-based wall mitigation CS approach for the same reduced data volume when different sets of frequencies are employed at the available antennas. This is also confirmed by the corresponding TCR values provided in Table 3 (first two rows).

Table 3 TCR - sparse reconstruction when different sets of reduced frequencies are used from different antenna locations

4.4 Image reconstruction under the same set of reduced frequencies at each available antenna location

We now proceed with image reconstruction when the same reduced set of frequencies is employed at each of the available antenna locations. The corresponding space-frequency sampling pattern is shown in Figure 9. We use the same set of 118 randomly selected frequencies (79.7% of 148) at each of the 79 randomly chosen antenna locations (33% of 239). Figure 10a presents the result of the partial sparsity-based approach with the number of OMP iterations set to 10. We observe from Figure 10a that humans 1 through 3 were successfully localized, while human 4 was missed. In addition, some clutter arising from the furniture and some noise were also reconstructed.

Figure 9. An illustration of the random subsampling pattern employed for data reduction. The same set of reduced frequencies is employed at each antenna location.

Figure 10. Scene reconstruction corresponding to the subsampling in Figure 9. (a) Partial sparsity-based approach. (b) Subspace projection wall mitigation-based CS approach.

We next applied the subspace projection-based wall mitigation approach directly to the reduced dimension data matrix, 118 × 79 instead of 148 × 239 [28]. The wall suppressed data was then used to obtain the l1 norm-reconstructed image of the sparse part of the scene, shown in Figure 10b, obtained using OMP with 10 iterations. We observe that the subspace projection scheme was able to detect and localize humans 1, 2, and 4 successfully, while human 3 was barely detected. Residual wall clutter and some of the furniture returns are also visible in the reconstructed image. We, therefore, conclude that the partial sparsity- and the subspace decomposition-based wall mitigation CS approaches provide comparable performance for the same reduced data volume when the same set of frequencies is employed at the available antennas. This is validated by Table 4 which shows that the two schemes have comparable TCRs.

Table 4 TCR for sparse reconstruction when the same set of reduced frequencies is used at the different antenna locations
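The TCR values reported in Tables 3 and 4 can be computed as the ratio of the average pixel power inside the (known) target regions to the average pixel power elsewhere in the image. The definition below is a common one and is assumed here for illustration rather than quoted from the paper.

```python
import numpy as np

def tcr_db(image, target_mask):
    """Target-to-clutter ratio (in dB) of a reconstructed image:
    mean pixel power over the target regions divided by mean pixel
    power over the remaining (clutter) region."""
    p = np.abs(image) ** 2
    t = p[target_mask].mean()    # average power in target regions
    c = p[~target_mask].mean()   # average power in clutter region
    return 10 * np.log10(t / c)

# Toy example: one strong target pixel against unit-amplitude clutter.
img = np.ones((10, 10))
img[2, 3] = 10.0
mask = np.zeros((10, 10), dtype=bool)
mask[2, 3] = True
print(tcr_db(img, mask))  # → 20.0
```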

4.5 A note on the number of OMP iterations

OMP, like other greedy iterative algorithms, requires knowledge of the scene sparsity for exact reconstruction [44]. In most practical situations, including through-the-wall imaging, this information is not available a priori. Therefore, a stopping criterion based on a fixed number of iterations, which is tied to the scene sparsity, is heuristic. Figure 11a,b shows the reconstruction results for the scene in Figure 2 using 26% of the data volume, with the number of OMP iterations set to 25 and 45, respectively. The space-frequency sampling pattern of Figure 7 was employed. In both cases, humans 1 through 3 are clearly visible. However, more clutter and background noise are reconstructed as the number of iterations increases. The corresponding TCR values are provided in Table 3 (rows 3 and 4).
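A minimal OMP sketch may clarify why the iteration count acts as a sparsity estimate: each iteration commits to one more dictionary atom, so stopping after K iterations amounts to assuming the scene contains at most K significant pixels. The matrix sizes, seed, and sparsity below are illustrative assumptions only.

```python
import numpy as np

def omp(A, y, n_iter):
    """Plain orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then re-fit all selected
    atoms by least squares. n_iter plays the role of the assumed sparsity."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(A.conj().T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy example: recover a 3-sparse vector from random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 200))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = omp(A, y, n_iter=3)
print(np.linalg.norm(x_hat - x_true))   # tiny: exact support recovered
```

With n_iter set larger than the true sparsity, the extra iterations fit noise and clutter, which is exactly the behavior observed in Figure 11.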

Figure 11

Scene reconstruction using partial sparsity-based approach with (a) 25 and (b) 45 OMP iterations.

The use of cross-validation has been proposed to prevent early or late termination of greedy reconstruction algorithms [50]. Cross-validation is a statistical technique that separates a dataset into a training/estimation set and a test/cross-validation set; the test set is used to prevent underfitting or overfitting on the training set. The cross-validation-based OMP reconstruction result using 26% of the data volume is depicted in Figure 12, with one fifth of the measurements used for cross-validation. We observe that the cross-validation-based approach fails for the problem at hand; only humans 1 and 2 are visible in the reconstructed image. This is because the returns from humans 3 and 4 are either comparable to or weaker than those from the sources of clutter, and the cross-validation-based approach treats humans 3 and 4 as part of the background noise [51]. Although only two of the four humans are detected, the corresponding TCR value is quite high, as shown in Table 3 (last row), because very little clutter is reconstructed. Various adaptive approaches have recently been proposed to counter the problem of low signal-to-noise-and-clutter ratio [51, 52]. The potential benefit of these schemes for the problem at hand remains to be explored.
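The cross-validation stopping rule of [50] can be sketched as follows: hold out a fraction of the measurements, run OMP on the remainder, and stop when the held-out residual stops decreasing. The split fraction, seed, and toy problem below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def omp_cv(A, y, cv_frac=0.2, max_iter=50):
    """OMP with a cross-validation stopping rule: run OMP on an
    estimation subset of the measurements and stop when the residual
    on a held-out subset no longer improves."""
    m = len(y)
    n_cv = int(cv_frac * m)
    rng = np.random.default_rng(0)
    idx = rng.permutation(m)
    cv, est = idx[:n_cv], idx[n_cv:]
    A_e, y_e = A[est], y[est]
    A_c, y_c = A[cv], y[cv]

    support = []
    best_x = np.zeros(A.shape[1])
    residual = y_e.copy()
    best_cv_err = np.linalg.norm(y_c)
    for _ in range(max_iter):
        k = int(np.argmax(np.abs(A_e.T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(A_e[:, support], y_e, rcond=None)
        residual = y_e - A_e[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        cv_err = np.linalg.norm(y_c - A_c @ x)
        if cv_err >= best_cv_err:   # held-out error stopped improving
            break
        best_cv_err, best_x = cv_err, x
    return best_x

# Toy example: 4-sparse scene, light measurement noise.
rng = np.random.default_rng(2)
A = rng.standard_normal((120, 300))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(300)
x_true[[10, 70, 150, 220]] = [2.0, -1.5, 1.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(120)
x_hat = omp_cv(A, y)
print(np.linalg.norm(x_hat - x_true))
```

Note that in this toy problem all coefficients stand well above the noise; when some targets are as weak as the clutter, as in the scene of Figure 12, the rule terminates before they are selected.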

Figure 12

Partially sparse scene reconstruction using cross-validation-based OMP.

5 Conclusions

In this paper, we applied partial sparsity to scene reconstruction associated with through-the-wall radar imaging of stationary targets. Partially sparse recovery deals with the case when it is known a priori that part of the scene being imaged is dense while the rest is sparse. For the underlying problem, the dense part of the scene corresponds to the building layout, and the support of the corresponding part of the image is assumed to be known beforehand. This knowledge may be available either through building blueprints or from prior surveillance operations. Using numerical EM data of a single-story building, we demonstrated the effectiveness of the partially sparse reconstruction in detecting and localizing stationary targets in through-the-wall scenes while achieving a sizable reduction in the data volume.

Abbreviations

CS: compressive sensing

EM: electromagnetic

OMP: orthogonal matching pursuit

RCS: radar cross section

SAR: synthetic aperture radar

SVD: singular value decomposition

TCR: target-to-clutter ratio

TWRI: through-the-wall radar imaging

References

1. Amin MG (Ed): Through the Wall Radar Imaging. CRC, Boca Raton; 2011.

2. Amin MG, Sarabandi K: Remote sensing of building interiors. IEEE Trans. Geosci. Rem. Sens. 2009, 47(5):1267-1420.

3. Lai C-P, Narayanan RM: Ultrawideband random noise radar design for through-wall surveillance. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(4):1716-1730.

4. Chang PC, Burkholder RL, Volakis JL, Marhefka RJ, Bayram Y: High-frequency EM characterization of through-wall building imaging. IEEE Trans. Geosci. Rem. Sens. 2009, 47(5):1375-1387.

5. Ahmad F, Amin MG, Zemany PD: Dual-frequency radars for target localization in urban setting. IEEE Trans. Aerosp. Electron. Syst. 2009, 45(4):1598-1609.

6. Le C, Dogaru T, Nguyen L, Ressler MA: Ultrawideband (UWB) radar imaging of building interior: measurements and predictions. IEEE Trans. Geosci. Rem. Sens. 2009, 47(5):1409-1420.

7. Thajudeen C, Hoorfar A, Ahmad F, Dogaru T: Measured complex permittivity of walls with different hydration levels and the effect on power estimation of TWRI target returns. Progress Electromagnet. Res. B 2011, 30:177-199.

8. Dehmollaian M, Sarabandi K: Refocusing through the building walls using synthetic aperture radar. IEEE Trans. Geosci. Rem. Sens. 2008, 46(6):1589-1599.

9. Soldovieri F, Solimene R: Through-wall imaging via a linear inverse scattering algorithm. IEEE Geosci. Remote Sens. Lett. 2007, 4(4):513-517.

10. Baraniuk R, Steeghs P: Compressive radar imaging. Proc. IEEE Radar Conf., Waltham, 17–20 Apr 2007, 128-133.

11. Herman M, Strohmer T: High-resolution radar via compressed sensing. IEEE Trans. Signal Process. 2009, 57(6):2275-2284.

12. Gurbuz A, McClellan J, Scott W Jr: Compressive sensing for subsurface imaging using ground penetrating radar. Signal Process. 2009, 89(10):1959-1972. doi:10.1016/j.sigpro.2009.03.030

13. Ender JHG: On compressive sensing applied to radar. Signal Process. 2010, 90(5):1402-1414. doi:10.1016/j.sigpro.2009.11.009

14. Potter LC, Ertin E, Parker JT, Cetin M: Sparsity and compressed sensing in radar imaging. Proc. IEEE 2010, 98(6):1006-1020.

15. Yoon Y-S, Amin MG: Compressed sensing technique for high-resolution radar imaging. In Proc. SPIE, vol. 6968. SPIE, Bellingham; 2008:69681A.

16. Huang Q, Qu L, Wu B, Fang G: UWB through-wall imaging based on compressive sensing. IEEE Trans. Geosci. Rem. Sens. 2010, 48(3):1408-1415.

17. Leigsnering M, Debes C, Zoubir AM: Compressive sensing in through-the-wall radar imaging. Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Process., Prague, 22–27 May 2011, 4008-4011.

18. Solimene R, Ahmad F, Soldovieri F: A novel CS-SVD strategy to perform data reduction in linear inverse scattering problems. IEEE Geosci. Remote Sens. Lett. 2012, 9(5):881-885.

19. Amin MG, Ahmad F: Compressive sensing for through-the-wall radar imaging. J. Electron. Imag. 2013, 22(3). doi:10.1117/1.JEI.22.3.030901

20. Amin M, Ahmad F, Zhang W: A compressive sensing approach to moving target indication for urban sensing. Proc. IEEE Radar Conf., Kansas City, 23–27 May 2011, 509-512.

21. Ahmad F, Amin MG: Through-the-wall human motion indication using sparsity-driven change detection. IEEE Trans. Geosci. Rem. Sens. 2013, 51(2):881-890.

22. Ahmad F, Amin MG, Qian J: Through-the-wall moving target detection and localization using sparse regularization. In Proc. SPIE, vol. 8365. SPIE, Bellingham; 2012.

23. Qian J, Ahmad F, Amin MG: Joint localization of stationary and moving targets behind walls using sparse scene recovery. J. Electron. Imag. 2013, 22(2). doi:10.1117/1.JEI.22.2.021002

24. Yoon Y-S, Amin MG: Spatial filtering for wall-clutter mitigation in through-the-wall radar imaging. IEEE Trans. Geosci. Rem. Sens. 2009, 47(9):3192-3208.

25. Chandra A, Gaikwad D, Singh D, Nigam M: An approach to remove the clutter and detect the target for ultra-wideband through-wall imaging. J. Geophys. Eng. 2008, 5(4):412-419. doi:10.1088/1742-2132/5/4/005

26. Tivive F, Bouzerdoum A, Amin M: An SVD-based approach for mitigating wall reflections in through-the-wall radar imaging. Proc. IEEE Radar Conf., Kansas City, 23–27 May 2011, 519-524.

27. Ahmad F, Amin MG: Wall clutter mitigation for MIMO radar configurations in urban sensing. Proc. 11th Int. Conf. Information Science, Signal Process., and their Applications, Montreal, 2–5 July 2012.

28. Lagunas E, Amin M, Ahmad F, Nájar M: Joint wall mitigation and compressive sensing for indoor image reconstruction. IEEE Trans. Geosci. Rem. Sens. 2013, 51(2):891-906.

29. Lagunas E, Amin M, Ahmad F, Nájar M: Wall mitigation techniques for indoor sensing within the CS framework. Proc. Seventh IEEE Workshop on Sensor Array and Multi-Channel Signal Processing, Hoboken, 17–20 June 2012.

30. Bandeira AS, Scheinberg K, Vicente LN: On partially sparse recovery. Preprint 11-13, Dept. of Mathematics, Univ. Coimbra, 2011. http://www.optimization-online.org/DB_FILE/2011/04/2990.pdf. Accessed 27 Feb 2014.

31. Vaswani N, Lu W: Modified-CS: modifying compressive sensing for problems with partially known support. IEEE Trans. Signal Process. 2010, 58(9):4595-4607.

32. Leigsnering M, Amin MG, Ahmad F, Zoubir AM: Multipath exploitation and suppression for SAR imaging of building interiors. IEEE Signal Process. Mag. 2014, 31(4). doi:10.1109/MSP.2014.2312203

33. Lagunas E, Amin MG, Ahmad F, Najar M: Determining building interior structures using compressive sensing. J. Electron. Imag. 2013, 22(2). doi:10.1117/1.JEI.22.2.02100

34. van Rossum W, de Wit J, Tan R: Radar imaging of building interiors using sparse reconstruction. Proc. 9th European Radar Conf., Amsterdam, 31 Oct–2 Nov 2012, 30-33.

35. Amin MG, Ahmad F: Wideband synthetic aperture beamforming for through-the-wall imaging. IEEE Signal Process. Mag. 2008, 25(4):110-113.

36. Ahmad F, Amin MG, Kassam SA: A beamforming approach to stepped-frequency synthetic aperture through-the-wall radar imaging. Proc. First IEEE Int. Workshop on Computational Advances in Multi-sensor Adaptive Process., Puerto Vallarta, 13–15 Dec 2005, 24-27.

37. Gerry M, Potter L, Gupta I, van der Merwe A: A parametric model for synthetic aperture radar measurements. IEEE Trans. Antenn. Propag. 1999, 47(7):1179-1188. doi:10.1109/8.785750

38. Ahmad F, Amin MG: Partially sparse reconstruction of behind-the-wall scenes. In Proc. SPIE, vol. 8365. SPIE, Bellingham; 2012:83650W.

39. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, Cambridge; 2004.

40. Candes EJ, Tao T: Near optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory 2006, 52(12):5406-5425.

41. Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1999, 20(1):33-61.

42. Mallat S, Zhang Z: Matching pursuit with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41(12):3397-3415. doi:10.1109/78.258082

43. Tropp JA: Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50(10):2231-2242. doi:10.1109/TIT.2004.834793

44. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53(12):4655-4666.

45. Ahmad F, Amin MG, Dogaru T: A beamforming approach to imaging of stationary indoor scenes under known building layout. Proc. IEEE 5th Int. Workshop on Computational Advances in Multi-Sensor Adaptive Process., St. Martin, 15–18 Dec 2013, 105-108.

46. Dogaru T, Nguyen L, Le C: Computer models of the human body signature for sensing through the wall radar applications. ARL-TR-4290, U.S. Army Research Lab, Adelphi, MD, 2007. http://www.dtic.mil/dtic/tr/fulltext/u2/a473937.pdf. Accessed 27 Feb 2014.

47. Dogaru T, Le C: Through-the-wall small weapon detection based on polarimetric radar techniques. ARL-TR-5041, U.S. Army Research Lab, Adelphi, MD, 2009. http://www.dtic.mil/dtic/tr/fulltext/u2/a510201.pdf. Accessed 27 Feb 2014.

48. Ahmad F, Amin M: Stochastic model based radar waveform design for weapon detection. IEEE Trans. Aerosp. Electron. Syst. 2012, 48(2):1815-1826.

49. Tivive FHC, Amin MG, Bouzerdoum A: Wall clutter mitigation based on eigen-analysis in through-the-wall radar imaging. Proc. 17th Int. Conf. on Digital Signal Process., Corfu, 6–8 July 2011.

50. Boufounos P, Duarte M, Baraniuk R: Sparse signal reconstruction from noisy compressive measurements using cross validation. Proc. IEEE Workshop on Statistical Signal Process., Madison, 26–29 Aug 2007, 299-303.

51. Sun H, Nallanathan A, Jiang J, Poor HV: Compressive autonomous sensing (CASe) for wideband spectrum sensing. Proc. IEEE Int. Communications Conf., 10–15 June 2012, 4442-4446.

52. Do TT, Lu G, Nguyen N, Tran TD: Sparsity adaptive matching pursuit algorithm for practical compressed sensing. Proc. 42nd Asilomar Conf. on Signals, Systems and Computers, 26–29 Oct 2008, 581-587.


Acknowledgements

This work was supported by ARO and ARL under contract W911NF-11-1-0536.

Author information


Corresponding author

Correspondence to Fauzia Ahmad.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ahmad, F., Amin, M.G. & Dogaru, T. Partially sparse imaging of stationary indoor scenes. EURASIP J. Adv. Signal Process. 2014, 100 (2014). https://doi.org/10.1186/1687-6180-2014-100


