
Multi-dimensional model order selection

Abstract

Multi-dimensional model order selection (MOS) techniques achieve improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, fundamental identifiability results for multi-dimensional decompositions show that the number of resolvable principal components can be larger than for matrix-based decompositions. In this article, we show how tensor calculus can be used to extend matrix-based MOS schemes, and we also present our proposed multi-dimensional MOS scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the probability of correct detection (PoD) of our proposed multi-dimensional MOS schemes is much better than that of matrix-based schemes.

Introduction

In the literature, matrix-based array signal processing techniques are extensively used in a variety of applications, including radar, mobile communications, sonar, and seismology. To estimate geometrical/physical parameters such as the direction of arrival, the direction of departure, the time delay of arrival, and the Doppler frequency, the first step is to estimate the model order, i.e., the number of signal components.

By taking into account only one dimension, the problem is seen from just one perspective, i.e., one projection. Consequently, the parameters cannot be estimated properly for certain scenarios. To overcome this limitation, multi-dimensional array signal processing, which considers several dimensions jointly, has been studied. These dimensions can correspond to time, frequency, or polarization, but also to spatial dimensions such as one- or two-dimensional arrays at the transmitter and the receiver. With multi-dimensional array signal processing, it is possible to estimate parameters using all dimensions jointly, even if they are not resolvable in each dimension separately. Moreover, by considering all dimensions jointly, the accuracy, reliability, and robustness can be improved. Another important advantage of using multi-dimensional data, also known as tensors, is identifiability, since the typical rank of a tensor can be much higher than that of a matrix. Here, we focus particularly on the development of techniques for the estimation of the model order.

The estimation of the model order, also known as the number of principal components, has been investigated in several science fields, and usually model order selection schemes are proposed only for specific scenarios in the literature. Therefore, as a first important contribution, we have proposed in [1, 2] the one-dimensional model order selection scheme called Modified Exponential Fitting Test (M-EFT), which outperforms all the other schemes for scenarios involving white Gaussian noise. Additionally, we have proposed in [1, 2] improved versions of the Akaike's Information Criterion (AIC) and Minimum Description Length (MDL).

As reviewed in this article, the multi-dimensional structure of the data can be taken into account to further improve the estimation of the model order. As an example of such an improvement, we show our proposed R-dimensional Exponential Fitting Test (R-D EFT) for multi-dimensional applications where the noise is additive white Gaussian. The R-D EFT outperforms the M-EFT, confirming that even the technique with the best performance can be improved by taking into account the multi-dimensional structure of the data [1, 3, 4]. In addition, we also extend our modified versions of AIC and MDL to their respective multi-dimensional versions, R-D AIC and R-D MDL. For scenarios with colored noise, we present our proposed multi-dimensional model order selection technique called the closed-form PARAFAC-based model order selection (CFP-MOS) scheme [3, 5].

The remainder of this article is organized as follows. After reviewing the tensor and matrix notation, the data model is presented. Then, the R-dimensional exponential fitting test (R-D EFT) and the closed-form PARAFAC-based model order selection (CFP-MOS) scheme are reviewed. The simulation results confirm the improved performance of the R-D EFT and CFP-MOS. Finally, conclusions are drawn.

Tensor and matrix notation

In order to facilitate the distinction between scalars, matrices, and tensors, the following notation is used: scalars are denoted by italic letters (a, b, ..., A, B, ..., α, β, ...), column vectors by lower-case bold-face letters (a, b, ...), matrices by bold-face capitals (A, B, ...), and tensors by bold-face calligraphic letters ($\mathcal{A}$, $\mathcal{B}$, ...). Lower-order parts are consistently named: the (i, j)-element of the matrix A is denoted by $a_{i,j}$ and the (i, j, k)-element of a third-order tensor $\mathcal{X}$ by $x_{i,j,k}$. The n-mode vectors of a tensor are obtained by varying the n-th index within its range (1, 2, ..., $I_n$) and keeping all the other indices fixed. We use the superscripts T, H, -1, +, and * for transposition, Hermitian transposition, matrix inversion, the Moore-Penrose pseudo inverse of matrices, and complex conjugation, respectively. Moreover, the Khatri-Rao product (column-wise Kronecker product) is denoted by $A \diamond B$.

The tensor operations we use are consistent with [6]: the r-mode product of a tensor $\mathcal{X}$ and a matrix $U$ along the r-th mode is denoted by $\mathcal{X} \times_r U$. It is obtained by multiplying all r-mode vectors of $\mathcal{X}$ from the left-hand side by the matrix $U$, where an r-mode vector of a tensor is obtained by varying the r-th index and keeping all the other indices fixed.

The higher-order SVD (HOSVD) of a tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_R}$ is given by

$\mathcal{X} = \mathcal{S} \times_1 U_1 \times_2 U_2 \cdots \times_R U_R$  (1)

where $\mathcal{S} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_R}$ is the core tensor, which satisfies the all-orthogonality conditions [6], and $U_r \in \mathbb{C}^{I_r \times I_r}$, r = 1, 2, ..., R, are the unitary matrices of r-mode singular vectors.

Finally, the r-mode unfolding of a tensor $\mathcal{X}$ is denoted by $[\mathcal{X}]_{(r)}$, i.e., it represents the matrix whose columns are the r-mode vectors of the tensor $\mathcal{X}$. The order of the columns is chosen in accordance with [6].
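To make these operations concrete, the following is a minimal NumPy sketch of the r-mode unfolding, the r-mode product, and the HOSVD. The column ordering of the unfolding used here is one possible convention, whereas [6] fixes a specific one; the sketch is an illustration, not the article's implementation.

```python
import numpy as np

def unfolding(X, r):
    """r-mode unfolding: the matrix whose columns are the r-mode vectors of X."""
    return np.moveaxis(X, r, 0).reshape(X.shape[r], -1)

def mode_product(X, U, r):
    """r-mode product X x_r U: every r-mode vector of X is multiplied by U."""
    Y = np.tensordot(U, X, axes=([1], [r]))   # the new r-mode appears in front
    return np.moveaxis(Y, 0, r)

def hosvd(X):
    """HOSVD: unitary matrices of r-mode singular vectors and the core tensor."""
    U = [np.linalg.svd(unfolding(X, r))[0] for r in range(X.ndim)]
    S = X
    for r, Ur in enumerate(U):
        S = mode_product(S, Ur.conj().T, r)   # core: X x_1 U_1^H ... x_R U_R^H
    return S, U
```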

Data model

To validate the general applicability of our proposed schemes, we adopt the PARAFAC data model below

$[\mathcal{X}_0]_{m_1, m_2, \ldots, m_{R+1}} = \sum_{n=1}^{d} \prod_{r=1}^{R+1} f^{(r)}_{m_r, n}$  (2)

where $f^{(r)}_{m_r, n}$ is the $m_r$-th element of the n-th factor of the r-th mode, for $m_r = 1, \ldots, M_r$ and $r = 1, 2, \ldots, R+1$. $M_{R+1}$ can alternatively be represented by N, which stands for the number of snapshots.

By defining the vectors $\mathbf{f}^{(r)}_n \in \mathbb{C}^{M_r}$, which collect the elements $f^{(r)}_{m_r, n}$, and using the outer product operator $\circ$, another possible representation of (2) is given by

$\mathcal{X}_0 = \sum_{n=1}^{d} \mathbf{f}^{(1)}_n \circ \mathbf{f}^{(2)}_n \circ \cdots \circ \mathbf{f}^{(R+1)}_n$  (3)

where $\mathcal{X}_0 \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_{R+1}}$ is composed of the sum of d rank-one tensors. Therefore, the tensor rank of $\mathcal{X}_0$ coincides with the model order d.

For applications where the multi-dimensional data obeys a PARAFAC decomposition, it is important to estimate the factor matrices of the tensor $\mathcal{X}_0$, which are defined as $F^{(r)} = \left[\mathbf{f}^{(r)}_1, \mathbf{f}^{(r)}_2, \ldots, \mathbf{f}^{(r)}_d\right] \in \mathbb{C}^{M_r \times d}$, and we assume that the rank of each $F^{(r)}$ is equal to $\min(M_r, d)$. This definition of the factor matrices allows us to rewrite (3) according to the notation proposed in [7]

$\mathcal{X}_0 = \mathcal{I}_{R+1,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_{R+1} F^{(R+1)}$  (4)

where $\times_r$ is the r-mode product defined in the previous section, and the tensor $\mathcal{I}_{R+1,d}$ represents the (R + 1)-dimensional identity tensor of size $d \times d \times \cdots \times d$, whose elements are equal to one when the indices $i_1 = i_2 = \cdots = i_{R+1}$ and zero otherwise.
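As a small illustration of (3) and (4), the sketch below builds a noiseless PARAFAC tensor as a sum of d rank-one terms; the sizes and the random factor matrices are arbitrary assumptions made only for this example.

```python
import numpy as np
from functools import reduce

def parafac_tensor(factors):
    """factors: list of R+1 matrices of size M_r x d; returns the rank-d tensor X0."""
    d = factors[0].shape[1]
    X0 = np.zeros(tuple(F.shape[0] for F in factors), dtype=complex)
    for n in range(d):
        # outer product f_n^(1) o f_n^(2) o ... o f_n^(R+1)
        X0 += reduce(np.multiply.outer, (F[:, n] for F in factors))
    return X0

rng = np.random.default_rng(0)
F = [rng.standard_normal((M, 3)) + 1j * rng.standard_normal((M, 3)) for M in (4, 5, 20)]
X0 = parafac_tensor(F)      # a 4 x 5 x 20 tensor whose tensor rank is d = 3
```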

In practice, the data is contaminated by noise, which we represent by the following data model

$\mathcal{X} = \mathcal{X}_0 + \mathcal{N}$  (5)

where $\mathcal{N} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_{R+1}}$ is the additive noise tensor, whose elements are i.i.d. zero-mean circularly symmetric complex Gaussian (ZMCSCG) random variables. Thereby, the tensor rank of $\mathcal{X}$ is different from d and usually assumes extremely large values, as shown in [8]. Hence, the problem we are solving can be stated in the following fashion: given a noisy measurement tensor $\mathcal{X}$, we desire to estimate the model order d. Note that, according to Comon [8], the typical rank of $\mathcal{X}$ is much bigger than any of the dimensions $M_r$ for r = 1, ..., R + 1.

The objective of the PARAFAC decomposition is to compute the estimated factor matrices $\hat{F}^{(r)}$ such that

$\min_{\hat{F}^{(1)}, \ldots, \hat{F}^{(R+1)}} \left\| \mathcal{X} - \mathcal{I}_{R+1,d} \times_1 \hat{F}^{(1)} \times_2 \hat{F}^{(2)} \cdots \times_{R+1} \hat{F}^{(R+1)} \right\|_{\mathrm{H}}^2$  (6)

where $\|\cdot\|_{\mathrm{H}}$ denotes the higher-order (Frobenius) norm of a tensor. Hence, one requirement to apply the PARAFAC decomposition is an estimate of d.

We also evaluate the performance of the model order selection schemes in the presence of colored noise, which is obtained by replacing the white Gaussian noise tensor $\mathcal{N}$ in (5) by the colored Gaussian noise tensor $\mathcal{N}^{(c)}$. Note that the data model used in this article is simply a linear superposition of rank-one components superimposed by additive noise.

Particularly, for multi-dimensional data, the colored noise with a Kronecker structure is present in several applications. For example, in EEG applications [9], the noise is correlated in both space and time dimensions, and it has been shown that a model of the noise combining these two correlation matrices using the Kronecker product can fit noise measurements. Moreover, for MIMO systems the noise covariance matrix is often assumed to be the Kronecker product of the temporal and spatial correlation matrices [10].

The multi-dimensional colored noise, which is assumed to have a Kronecker correlation structure, can be written as

$\left[\mathcal{N}^{(c)}\right]_{(R+1)} = L_{R+1} \cdot \left[\mathcal{N}\right]_{(R+1)} \cdot \left(L_1 \otimes L_2 \otimes \cdots \otimes L_R\right)^{\mathrm{T}}$  (7)

where $\otimes$ represents the Kronecker product. We can also rewrite (7) using r-mode products in the following fashion

$\mathcal{N}^{(c)} = \mathcal{N} \times_1 L_1 \times_2 L_2 \cdots \times_{R+1} L_{R+1}$  (8)

where $\mathcal{N} \in \mathbb{C}^{M_1 \times \cdots \times M_{R+1}}$ is a tensor with uncorrelated ZMCSCG elements with variance $\sigma_n^2$, and $L_i \in \mathbb{C}^{M_i \times M_i}$ is the correlation factor of the i-th dimension of the colored noise tensor. The noise covariance matrix in the i-th mode is defined as

$W_i = \alpha \cdot L_i \cdot L_i^{\mathrm{H}}$  (9)

where α is a normalization constant, chosen such that $\operatorname{tr}(W_i) = M_i$. The equivalence between (7), (8), and (9) is shown in [11].
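The following sketch illustrates (8): an i.i.d. ZMCSCG noise tensor is colored by an r-mode product with a correlation factor in every mode. The list L_factors is a placeholder; how each L_i relates to W_i is given by (9) (and, in the simulations, by (40)).

```python
import numpy as np

def colored_noise(shape, L_factors, sigma=1.0, rng=None):
    """Kronecker-structured colored noise: N^(c) = N x_1 L_1 ... x_{R+1} L_{R+1}."""
    rng = rng or np.random.default_rng()
    N = sigma / np.sqrt(2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
    for i, L in enumerate(L_factors):
        N = np.moveaxis(np.tensordot(L, N, axes=([1], [i])), 0, i)   # i-mode product
    return N
```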

To simplify the notation, let us define $M = \prod_{r=1}^{R+1} M_r$ and $N_r = M / M_r$. For the r-mode unfolding $[\mathcal{X}]_{(r)}$, we compute the sample covariance matrix as

$\hat{R}^{(r)}_{xx} = \frac{1}{N_r} \, [\mathcal{X}]_{(r)} \cdot [\mathcal{X}]_{(r)}^{\mathrm{H}}$  (10)

The eigenvalues of these r-mode sample covariance matrices play a major role in the model order estimation step. Let us denote the i-th eigenvalue of the sample covariance matrix of the r-mode unfolding by $\lambda_i^{(r)}$. Notice that $\hat{R}^{(r)}_{xx}$ possesses $M_r$ eigenvalues, which we order in such a way that $\lambda_1^{(r)} \geq \lambda_2^{(r)} \geq \cdots \geq \lambda_{M_r}^{(r)}$. The eigenvalues may also be computed from the HOSVD of the measurement tensor $\mathcal{X}$

$\mathcal{X} = \mathcal{S} \times_1 U_1 \times_2 U_2 \cdots \times_{R+1} U_{R+1}$  (11)

as

$\lambda_i^{(r)} = \frac{\left(\sigma_i^{(r)}\right)^2}{N_r}$  (12)

where $\sigma_i^{(r)}$ is the i-th r-mode singular value of $\mathcal{X}$, i.e., the Frobenius norm of the i-th r-mode slice of the core tensor $\mathcal{S}$. The r-mode singular values can also be computed via the SVD of the r-mode unfolding $[\mathcal{X}]_{(r)}$ as follows

$[\mathcal{X}]_{(r)} = U_r \cdot \Sigma_r \cdot V_r^{\mathrm{H}}$  (13)

where $U_r \in \mathbb{C}^{M_r \times M_r}$ and $V_r \in \mathbb{C}^{N_r \times N_r}$ are unitary matrices, and $\Sigma_r \in \mathbb{R}^{M_r \times N_r}$ is a diagonal matrix containing the r-mode singular values $\sigma_i^{(r)}$ on its main diagonal.
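A minimal sketch of (10)-(13): the r-mode eigenvalues are obtained from the singular values of the r-mode unfoldings, which is equivalent to the EVD of the r-mode sample covariance matrices; the 1/N_r scaling follows the sample-covariance definition assumed in (10).

```python
import numpy as np

def r_mode_eigenvalues(X):
    """Returns, for every mode r, the M_r eigenvalues of the r-mode sample covariance."""
    eigs = []
    for r in range(X.ndim):
        Xr = np.moveaxis(X, r, 0).reshape(X.shape[r], -1)   # r-mode unfolding
        s = np.linalg.svd(Xr, compute_uv=False)             # r-mode singular values
        lam = np.zeros(X.shape[r])
        lam[:s.size] = s**2 / Xr.shape[1]                   # lambda_i^(r) = (sigma_i^(r))^2 / N_r
        eigs.append(lam)                                    # already in descending order
    return eigs
```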

Multi-dimensional model order selection schemes

In this section, multi-dimensional model order selection schemes based on the global eigenvalues, on the R-D subspace, or on the tensor-based data model are proposed. First, we show the proposed definition of the global eigenvalues together with the presentation of the proposed R-D EFT. Then, we summarize our multi-dimensional extensions of AIC and MDL. Besides the global-eigenvalue-based schemes, we also propose a tensor-data-based multi-dimensional model order selection scheme: the closed-form PARAFAC-based model order selection scheme, which is applicable to white as well as colored noise scenarios. Finally, for data sampled on a grid with centro-symmetric symmetries, we show how to improve the performance of model order selection schemes by incorporating forward-backward averaging (FBA).

R-D exponential fitting test (R-D EFT)

The global eigenvalues are based on the r-mode eigenvalues $\lambda_i^{(r)}$ for r = 1, ..., R and i = 1, ..., $M_r$. There are two ways to obtain the r-mode eigenvalues: via the EVD of each r-mode sample covariance matrix as in (10), or via the HOSVD as in (12).

According to Grouffaud et al. [12] and Quinlan et al. [13], the profile of the ordered noise eigenvalues of a Wishart matrix can be approximated by an exponential curve. Therefore, by applying the exponential approximation to every r-mode, we obtain

$\hat{\lambda}_i^{(r)} = \lambda_1^{(r)} \cdot q(\alpha_r, \beta_r)^{\,i-1}$  (14)

where $\alpha_r = \min(M_r, N_r)$, $\beta_r = \max(M_r, N_r)$, $i = 1, 2, \ldots, M_r$, and $r = 1, 2, \ldots, R + 1$. The rate of the exponential profile $q(\alpha_r, \beta_r)$ is defined as

(15)

where α = min (M, N) and β = max (M, N). Note that (15) of the M-EFT is an extension of the EFT expression in [12, 13].

In order to be even more precise in the computation of q, the following polynomial can be solved

(16)

Although (16) has α + 1 possible solutions, we select only the q that belongs to the interval (0, 1). For M < N, (15) is equal to the q of the EFT [12, 13], which means that the PoD of the EFT and the PoD of the M-EFT are the same for M < N. Consequently, the M-EFT automatically inherits from the EFT the property that it outperforms the other matrix-based MOS techniques in the literature for M < N in the presence of white Gaussian noise, as shown in [2].

For the sake of simplicity, let us first assume that $M_1 = M_2 = \cdots = M_R$. Then we can define the global eigenvalues as [1]

$\lambda_i^{(G)} = \prod_{r=1}^{R+1} \lambda_i^{(r)} = \lambda_i^{(1)} \cdot \lambda_i^{(2)} \cdots \lambda_i^{(R+1)}$  (17)

Therefore, based on (14), it is straightforward that the noise global eigenvalues also follow an exponential profile, since

$\lambda_i^{(G)} = \prod_{r=1}^{R+1} \lambda_i^{(r)} \approx \lambda_1^{(G)} \cdot \left( \prod_{r=1}^{R+1} q(\alpha_r, \beta_r) \right)^{i-1}$  (18)

where $i = 1, \ldots, M_{R+1}$.
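A minimal sketch of (17), assuming the eigenvalue profiles of all modes have already been computed (for instance with the snippet above) and have equal length, as in the case $M_1 = \cdots = M_R$ discussed here:

```python
import numpy as np

def global_eigenvalues(mode_eigs):
    """Element-wise product of the r-mode eigenvalue profiles (all of equal length)."""
    lam_G = np.ones_like(np.asarray(mode_eigs[0]))
    for lam_r in mode_eigs:
        lam_G = lam_G * np.asarray(lam_r)     # lambda_i^(G) = prod_r lambda_i^(r)
    return lam_G
```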

In Figure 1, we show an example of the exponential profile property that is assumed for the noise eigenvalues. This exponential profile approximates the distribution of the noise eigenvalues as well as the distribution of the global noise eigenvalues. The data exemplified in Figure 1 has a model order equal to one, since the first eigenvalue does not fit the exponential profile. To estimate the model order, the noise eigenvalue profile is predicted based on the exponential profile assumption, starting from the smallest noise eigenvalue. When a significant gap compared to this predicted exponential profile is detected, the model order, i.e., the position of the smallest signal eigenvalue, is found.

Figure 1

Comparison between the global eigenvalues profile and the R -mode eigenvalues profile for a scenario with array size M 1 = 4, M 2 = 4, M 3 = 4, M 4 = 4, M 5 = 4, d = 1 and SNR = 0 dB.

The product across modes increases the gap between the predicted and the actual eigenvalues, as shown in Figure 1. We compare the gap between the actual and the predicted eigenvalues in the r-th mode to the gap between the actual and the predicted global eigenvalues. Here, we consider that $\mathcal{X}_0$ is a rank-one tensor and that noise is added according to (5); then, in this case, d = 1. The first gap is the ratio between the signal eigenvalue and the predicted noise eigenvalue in a single mode, while the second gap is the product of these ratios over all modes. Therefore, the break in the profile is easier to detect via the global eigenvalues than via the eigenvalues of only one mode.

Since the tensor dimensions are not necessarily all equal to each other, without loss of generality let us consider the case in which $M_1 \geq M_2 \geq \cdots \geq M_{R+1}$. In Figures 2, 3, and 4, we show the sets of eigenvalues obtained from each r-mode of a tensor with sizes $M_1 = 13$, $M_2 = 11$, $M_3 = 8$, and $M_4 = 3$. The index i indicates the position of the eigenvalue within the r-th eigenvalue set.

Figure 2

Sequential definition of the global eigenvalues-1st eigenvalue set.

Figure 3

Sequential definition of the global eigenvalues-1st and 2nd eigenvalue sets.

Figure 4

Sequential definition of the global eigenvalues-1st, 2nd, and 3rd eigenvalue sets.

We start by estimating $\hat{d}$ with a certain eigenvalue-based model order selection method considering the first unfolding only, which in the example in Figure 2 has size $M_1 = 13$. If $\hat{d} < M_2$, we can take advantage of the second mode as well. Therefore, we compute the global eigenvalues as in (17) for $1 \leq i \leq M_2$, thus discarding the $M_1 - M_2$ last eigenvalues of the first mode, and obtain a new estimate $\hat{d}$. As illustrated in Figure 3, we utilize only the first $M_2$ highest eigenvalues of the first and of the second modes to estimate the model order. If $\hat{d} < M_3$, we can continue in the same fashion by computing the global eigenvalues considering the first three modes. In the example in Figure 4, since the model order is equal to 6, which is greater than $M_4$, the sequential definition of the global eigenvalues stops after the first three modes. Clearly, the full potential of the proposed method is achieved when all modes are used to compute the global eigenvalues. This happens when $\hat{d} < M_{R+1}$, so that $\lambda_i^{(G)}$ can be computed for $1 \leq i \leq M_{R+1}$.
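The sequential procedure just described can be sketched as follows; the eigenvalue profiles are assumed to be passed in order of decreasing mode size, and estimate_d stands for any eigenvalue-based model order selection scheme (e.g., the M-EFT) and is an assumption of this sketch.

```python
import numpy as np

def sequential_global_eigenvalues(mode_eigs, estimate_d):
    """mode_eigs: r-mode eigenvalue profiles sorted by decreasing length."""
    lam_G = np.asarray(mode_eigs[0], dtype=float)
    d_hat = estimate_d(lam_G)                       # first estimate from the largest mode only
    for lam_r in mode_eigs[1:]:
        if d_hat >= len(lam_r):                     # the next mode would be degenerate: stop
            break
        lam_G = lam_G[:len(lam_r)] * np.asarray(lam_r)   # discard the trailing eigenvalues
        d_hat = estimate_d(lam_G)                   # refine the estimate with the enlarged gap
    return d_hat, lam_G
```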

Note that, using the global eigenvalues, the assumption of the M-EFT that the noise eigenvalues can be approximated by an exponential profile, as well as the assumption of AIC and MDL that the noise eigenvalues are constant, still hold. Moreover, the maximum model order that can be estimated is equal to $\max_r(M_r)$ for r = 1, ..., R.

The R-D EFT is an extended version of the M-EFT operating on the global eigenvalues. Therefore,

1) It exploits the fact that the noise global eigenvalues still exhibit an exponential profile;

2) The increased gap between the actual signal global eigenvalue and the predicted noise global eigenvalue leads to significant improvements in the performance;

3) It is applicable to arrays of arbitrary size and dimension through the sequential definition of the global eigenvalues, as long as the data is arranged on a multi-dimensional grid.

To derive the proposed multi-dimensional extension of the M-EFT algorithm, namely the R-D EFT, we start by looking at an R-dimensional noise-only case. For the R-D EFT, it is our intention to predict the noise global eigenvalues defined in (18). Each r-mode eigenvalue can be estimated via

(19)
(20)

Equations (19) and (20) are the same expressions as in the case of the M-EFT in [2], however, in contrast to the M-EFT, here they are applied to each r-mode eigenvalue.

Let us apply the definition of the global eigenvalues according to (17)

$\hat{\lambda}_i^{(G)} = \prod_{r=1}^{R+1} \hat{\lambda}_i^{(r)}$  (21)

where in (18) the approximation by an exponential profile is assumed. Therefore,

(22)

where $\alpha^{(G)}$ is the minimum $\alpha_r$ over all the r-modes considered in the sequential definition of the global eigenvalues. In (22), the predicted global eigenvalue is a function only of the last global eigenvalue $\lambda_{\alpha^{(G)}}^{(G)}$, which is the smallest global eigenvalue and is assumed to be a noise eigenvalue, and of the rates $q(\alpha_r, \beta_r)$ for all the r-modes considered in the sequential definition. Instead of using (22) directly, we use $\hat{\lambda}_i^{(r)}$ according to (19) for all the r-modes considered in the sequential definition. Therefore, the eigenvalues that were already estimated as noise eigenvalues are taken into account in the prediction step.

Similarly to the M-EFT, using the predicted global eigenvalue expression (21) considering white Gaussian noise samples, we compute the global threshold coefficients via the hypotheses for the tensor case

(23)

Once all threshold coefficients are found for a certain higher-order array of sizes $M_1, M_2, \ldots, M_R$ and for a certain probability of false alarm $P_{\mathrm{fa}}$, the model order can be estimated by applying a cost function that compares, for each candidate value, the actual global eigenvalue with the predicted one using the corresponding threshold coefficient, where $\alpha^{(G)}$ is the total number of sequentially defined global eigenvalues.

R-D AIC and R-D MDL

In AIC and MDL, it is assumed that the noise eigenvalues are all equal. Therefore, once this assumption is valid for all r-mode eigenvalues, it is straightforward that it is also valid for our global eigenvalue definition. Moreover, since we have shown in [2] that 1-D AIC and 1-D MDL are more general and superior in terms of performance than AIC and MDL, respectively, we extend 1-D AIC and 1-D MDL to the multi-dimensional form using the global eigenvalues. Note that the PoD of 1-D AIC and 1-D MDL is only greater than the PoD of AIC and MDL for cases where , which cannot be fulfilled for one-dimensional data.

The corresponding R-dimensional versions of 1-D AIC and 1-D MDL are obtained by first replacing the eigenvalues by the global eigenvalues defined in (17). Additionally, to compute the number of free parameters for the 1-D AIC and 1-D MDL methods and their R-D extensions, we propose to set the number of considered eigenvalues to $\alpha^{(G)}$, the total number of sequentially defined global eigenvalues, similarly as we propose in [1]. Therefore, the optimization problem for the R-D AIC and R-D MDL is given by

(24)

where $\hat{d}$ represents an estimate of the model order d, and $g^{(G)}(P)$ and $a^{(G)}(P)$ are the geometric and arithmetic means of the P smallest global eigenvalues, respectively. The penalty functions $p(P, N, \alpha^{(G)})$ for R-D AIC and R-D MDL are given in Table 1.

Table 1 Penalty functions for R-D information theoretic criteria

Note that the R-dimensional extension described in this section can be applied to any model order selection scheme that is based on the profile of eigenvalues, i.e., also to the 1-D MDL and the 1-D AIC methods.
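To illustrate the structure of (24), the sketch below evaluates an information theoretic cost on the global eigenvalues. The penalty terms used here are the classical AIC/MDL ones with an effective number of snapshots, which is only an assumption for illustration, and here the candidate P counts the signal components, so the means are taken over the remaining $\alpha^{(G)} - P$ smallest global eigenvalues; the actual penalty functions and parameterization of R-D AIC and R-D MDL are those given in (24) and Table 1.

```python
import numpy as np

def rd_itc(global_eigs, N_eff, criterion="MDL"):
    """Hedged sketch of an ITC evaluated on the global eigenvalues (classical penalties)."""
    lam = np.sort(np.asarray(global_eigs))[::-1]   # descending, all positive
    alpha_G = len(lam)
    costs = []
    for P in range(alpha_G):                       # candidate model orders
        tail = lam[P:]                             # the alpha_G - P smallest global eigenvalues
        g = np.exp(np.mean(np.log(tail)))          # geometric mean
        a = np.mean(tail)                          # arithmetic mean
        loglik = -N_eff * (alpha_G - P) * np.log(g / a)
        k = P * (2 * alpha_G - P)                  # classical free-parameter count (assumption)
        if criterion == "AIC":
            costs.append(2 * loglik + 2 * k)
        else:                                      # MDL
            costs.append(loglik + 0.5 * k * np.log(N_eff))
    return int(np.argmin(costs))                   # estimated model order d_hat
```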

Closed-form PARAFAC-based model order selection (CFP-MOS) scheme

In this section, we present the Closed-form PARAFAC-based model order selection (CFP-MOS) technique proposed in [5]. The major motivation of CFP-MOS is the fact that R-D AIC, R-D MDL, and R-D EFT are applicable only in the presence of white Gaussian noise. Therefore, it is very appealing to apply CFP-MOS, since it has a performance close to R-D EFT in the presence of white Gaussian noise, and at the same time it is also applicable in the presence of colored Gaussian noise.

According to Roemer and Haardt [14], the estimation of the factors $F^{(r)}$ via the PARAFAC decomposition is transformed into a set of simultaneous matrix diagonalization problems based on the relation between the truncated HOSVD [6] based low-rank approximation of $\mathcal{X}$

$\mathcal{X} \approx \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_{R+1} U_{R+1}^{[s]}$  (25)

and the PARAFAC decomposition of

$\mathcal{X} \approx \mathcal{I}_{R+1,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_{R+1} F^{(R+1)}$  (26)

where $U_r^{[s]} \in \mathbb{C}^{M_r \times p_r}$ contains the $p_r$ dominant r-mode singular vectors, $\mathcal{S}^{[s]} \in \mathbb{C}^{p_1 \times p_2 \times \cdots \times p_{R+1}}$ is the truncated core tensor, $p_r = \min(M_r, d)$, and $F^{(r)} = U_r^{[s]} \cdot T_r$ for a nonsingular transformation matrix $T_r \in \mathbb{C}^{d \times d}$ for all modes $r \in \mathcal{M}$, where $\mathcal{M}$ denotes the set of non-degenerate modes. As shown in (25) and (26), the chain of r-mode products is a compact representation of the multiplication of a tensor by R + 1 matrices, one along each mode.

The closed-form PARAFAC (CFP) [14] decomposition constructs two simultaneous matrix diagonalization problems for every tuple (k, ℓ) such that k, ℓ ∈ $\mathcal{M}$ and k < ℓ. In order to reference each simultaneous matrix diagonalization (SMD) problem, we define the enumerator function e(k, ℓ, i) that assigns the triple (k, ℓ, i) to a sequence of consecutive integer numbers in the range 1, 2, ..., T. Here i = 1, 2 refers to the two SMDs for a specific k and ℓ. Consequently, SMD(e(k, ℓ, 1), P) represents the first SMD for a given k and ℓ, which is associated with the simultaneous diagonalization of a first set of matrices by $T_k$. Initially, we consider the candidate value of the model order to be P = d, i.e., the true model order. Similarly, SMD(e(k, ℓ, 2), P) corresponds to the second SMD for a given k and ℓ, referring to the simultaneous diagonalization of a second set of matrices by $T_\ell$; both sets of matrices are defined in [14]. Note that each SMD(e(k, ℓ, i), P) yields an estimate of all factors $F^{(r)}$ [14, 15], where r = 1, ..., R. Consequently, for each factor $F^{(r)}$ there are T estimates.

For instance, consider a 4-D tensor where the third mode is degenerate, i.e., $M_3 < d$. Then, the set $\mathcal{M}$ is given by {1, 2, 4}, and the possible (k, ℓ)-tuples are (1, 2), (1, 4), and (2, 4). Consequently, the six possible SMDs are enumerated via e(k, ℓ, i) as follows: e(1, 2, 1) = 1, e(1, 2, 2) = 2, e(1, 4, 1) = 3, e(1, 4, 2) = 4, e(2, 4, 1) = 5, and e(2, 4, 2) = 6. In general, the total number of SMD problems is $T = 2 \binom{|\mathcal{M}|}{2} = |\mathcal{M}| \left( |\mathcal{M}| - 1 \right)$, where $|\mathcal{M}|$ is the number of non-degenerate modes.
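A small sketch of the enumerator function e(k, ℓ, i): every pair of non-degenerate modes contributes two SMDs, reproducing the example above.

```python
from itertools import combinations

def enumerate_smds(nondegenerate_modes):
    """Map each triple (k, l, i) to a consecutive index 1..T."""
    e, t = {}, 1
    for k, l in combinations(sorted(nondegenerate_modes), 2):
        for i in (1, 2):
            e[(k, l, i)] = t
            t += 1
    return e

# Example from the text: 4-D tensor with the third mode degenerate
print(enumerate_smds({1, 2, 4}))
# {(1, 2, 1): 1, (1, 2, 2): 2, (1, 4, 1): 3, (1, 4, 2): 4, (2, 4, 1): 5, (2, 4, 2): 6}
```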

There are different heuristics to select the best estimates of each factor F(r)as shown in [14]. We define the function to compute the residuals (RESID) of the simultaneous matrix diagonalizations (SMD) as RESID(SMD(·)). For instance, we apply it to e(k, ℓ, 1)

(27)

and for e(k, ℓ,2)

(28)

where the sets of matrices being simultaneously diagonalized are defined in [14].

Since each residual is a positive real-valued number, we can order the SMDs by the magnitude of the corresponding residual. For the sake of simplicity, we represent the ordered sequence of SMDs by a single index e(t) for t = 1, 2, ..., T, such that RESID(SMD(e(t), P)) ≤ RESID(SMD(e(t+1), P)). Since in practice d is not known, P denotes a candidate value for $\hat{d}$, which is our estimate of the model order d. Our task is to select P from the interval $P_{\min} \leq P \leq P_{\max}$, where $P_{\min}$ is a lower bound and $P_{\max}$ is an upper bound for our candidate values. For instance, $P_{\min} = 1$ is used, and $P_{\max}$ is chosen such that no dimension is degenerate [14], i.e., $d \leq M_r$ for r = 1, ..., R. We define RESID(SMD(e(t), P)) as the t-th lowest residual of the SMDs considering the number of components per factor equal to P. Based on this definition, a first direct way to estimate the model order d relies on the following properties

1) If there is no noise and P < d, then RESID(SMD(e(t), P)) > RESID(SMD(e(t), d)), since the matrices generated are composed of mixed components as shown in [16].

2) If noise is present and P > d, then RESID(SMD(e(t), P)) > RESID(SMD(e(t), d)), since the matrices generated with the noise components are not diagonalizable commuting matrices. Therefore, the simultaneous diagonalizations are not valid anymore.

Based on these properties, a first model order selection scheme can be proposed

$\hat{d} = \underset{P}{\operatorname{argmin}} \; \mathrm{RESID}\left(\mathrm{SMD}\left(e(1), P\right)\right)$  (29)
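A minimal sketch of the selection rule in (29); the helper smd_residuals(X, P) is hypothetical and stands for running the closed-form PARAFAC simultaneous matrix diagonalizations with P components per factor and returning the T residuals RESID(SMD(e(t), P)).

```python
import numpy as np

def residual_based_mos(X, smd_residuals, P_min=1, P_max=None):
    """Pick the candidate P whose best (lowest) SMD residual is smallest."""
    P_max = P_max or min(X.shape)                # keep every mode non-degenerate
    best_P, best_res = P_min, np.inf
    for P in range(P_min, P_max + 1):
        res = np.sort(smd_residuals(X, P))       # ordered residuals, e(1), ..., e(T)
        if res[0] < best_res:
            best_P, best_res = P, res[0]
    return best_P
```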

However, the model order selection scheme in (29) yields a probability of correct detection (PoD) inferior to that of some MOS techniques found in the literature. Therefore, to improve the PoD of (29), we propose to exploit the redundant information provided only by the closed-form PARAFAC (CFP) [14].

Let $\hat{F}^{(r)}_{e(t)}(P)$ denote the ordered sequence of estimates of $F^{(r)}$ assuming that the model order is P. In order to combine factors estimated in different diagonalization processes, the permutation and scaling ambiguities have to be resolved. For this task, we apply the amplitude approach according to Weis et al. [15]. For the correct model order and in the absence of noise, the subspaces of the estimates $\hat{F}^{(r)}_{e(t)}(P)$ should not depend on t. Consequently, a measure for the reliability of the estimate is obtained by comparing the angles between the vectors $\hat{\mathbf{f}}^{(r)}_{e(t),v}(P)$ for different t, where $\hat{\mathbf{f}}^{(r)}_{e(t),v}(P)$ corresponds to the estimate of the v-th column of $F^{(r)}$ obtained from SMD(e(t), P). Hence, this gives rise to an expression to estimate the model order using CFP-MOS

(30)

where the operator $\angle(\cdot, \cdot)$ gives the angle between two vectors and $T_{\lim}$ represents the total number of simultaneous matrix diagonalizations taken into account. $T_{\lim}$, a design parameter of the CFP-MOS algorithm, can be chosen between 2 and T. Similar to the Threshold Core Consistency Analysis (T-CORCONDIA) in [4], the CFP-MOS requires weights Δ(P); otherwise, the probabilities of correct detection (PoD) for different values of d exhibit a significant gap from each other. Therefore, to obtain a fair estimation for all candidates P, we introduce the weights Δ(P), which are calibrated in a scenario with white Gaussian noise, where the number of sources d varies. For the calibration of the weights, we use the probability of correct detection (PoD) of the R-D EFT [1, 4] as a reference, since the R-D EFT achieves the best PoD in the literature even in the low SNR regime. Consequently, we propose the following expression to obtain the calibrated weights Δvar

(31)

where $\overline{\mathrm{PoD}}(\cdot)$ returns the averaged probability of correct detection over a certain predefined SNR range using the R-D EFT for a given scenario assuming P as the model order, dmax is defined as the maximum candidate value of P, and Δvar is the vector with the threshold coefficients for each value of P. Note that the elements of the vector of weights Δ vary over a certain predefined range with a certain step size, and that the averaged PoD of the CFP-MOS is compared to the averaged PoD of the R-D EFT. When the cost function is minimized, we obtain the desired Δvar.

Up to this point, the CFP-MOS is applicable to scenarios without any specific structure in the factor matrices. If the columns of the factor matrices have a Vandermonde structure, another expression can be proposed. Again, let $\hat{F}^{(r)}_{e(t)}(P)$ be the estimate of the r-th factor matrix obtained from SMD(e(t), P). Using the Vandermonde structure of each factor, we can estimate the spatial frequencies corresponding to the v-th column of $\hat{F}^{(r)}_{e(t)}(P)$. As already stated above, for the correct model order and in the absence of noise, the estimated spatial frequencies should not depend on t. Consequently, a measure for the reliability of the estimate is given by comparing these estimates for different t. Hence, this gives rise to the new cost function

(32)

Similar to the cost function in (30), to have a fair estimation for all candidates P, we introduce the weights Δ(P), which are calculated in a similar fashion as for T-CORCONDIA Var in [4] by considering data contaminated by white Gaussian noise.

Applying forward-backward averaging (FBA)

In many applications, the complex-valued data obeys additional symmetry relations that can be exploited to enhance resolution and accuracy. For instance, when sampling data uniformly or on centro-symmetric grids, the corresponding r-mode subspaces are invariant under flipping and conjugation. Such scenarios are known as having centro-symmetric symmetries. In such scenarios, we can incorporate FBA [17] into all model order selection schemes, even with a multi-dimensional data model. First, let us present the modifications of the data model that should be considered to apply FBA. Comparing the data model of (4) to the data model introduced in this section, we note two main differences. The first one is the size of $\mathcal{X}_0$, which has R + 1 dimensions instead of the R dimensions as in (4). Therefore, the noiseless data tensor is given by

$\mathcal{X}_0 = \mathcal{I}_{R+1,d} \times_1 A^{(1)} \times_2 A^{(2)} \cdots \times_R A^{(R)} \times_{R+1} S^{\mathrm{T}}$  (33)

This additional (R + 1)-th dimension is due to the fact that the (R + 1)-th factor represents the source symbol matrix $F^{(R+1)} = S^{\mathrm{T}}$. The second difference is the restriction of the factor matrices $F^{(r)} = A^{(r)}$ for r = 1, ..., R of the tensor in (33) to matrices whose columns $\mathbf{a}^{(r)}(\mu_i^{(r)})$ are each a function of a certain scalar $\mu_i^{(r)}$ related to the r-th dimension and the i-th source. In many applications, these vectors have a Vandermonde structure. For the sake of notation, the factor matrices for r = 1, ..., R are represented by $A^{(r)}$, and they can be written as a function of the $\mu_i^{(r)}$ as follows

$A^{(r)} = \left[ \mathbf{a}^{(r)}\!\left(\mu_1^{(r)}\right), \; \mathbf{a}^{(r)}\!\left(\mu_2^{(r)}\right), \; \ldots, \; \mathbf{a}^{(r)}\!\left(\mu_d^{(r)}\right) \right]$  (34)

In [18, 19] it was demonstrated that in the tensor case, forward-backward averaging can be expressed in the following form

$\mathcal{Z} = \left[ \mathcal{X} \; \sqcup_{R+1} \; \left( \mathcal{X}^{*} \times_1 \Pi_{M_1} \times_2 \Pi_{M_2} \cdots \times_R \Pi_{M_R} \times_{R+1} \Pi_N \right) \right]$  (35)

where $\mathcal{A} \sqcup_n \mathcal{B}$ represents the concatenation of two tensors $\mathcal{A}$ and $\mathcal{B}$ along the n-th mode. Note that all the other modes of $\mathcal{A}$ and $\mathcal{B}$ must have exactly the same sizes. The matrix $\Pi_n$ is defined as

$\Pi_n = \begin{bmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & \iddots & & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix} \in \mathbb{R}^{n \times n}$  (36)

In multi-dimensional model order selection schemes, forward-backward averaging is incorporated by replacing the data tensor $\mathcal{X}$ in (11) by $\mathcal{Z}$. Moreover, we have to replace N by 2 · N in the subsequent formulas, since the number of snapshots is virtually doubled.
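A minimal sketch of (35): the measurement tensor is concatenated along the snapshot mode with its complex conjugate flipped in every mode; np.flip plays the role of the multiplications with the exchange matrices $\Pi_n$.

```python
import numpy as np

def forward_backward_average(X):
    """Forward-backward averaged tensor Z; the snapshot dimension is virtually doubled."""
    X_fb = np.conj(np.flip(X))                            # X* x_1 Pi_{M_1} ... x_{R+1} Pi_N
    return np.concatenate((X, X_fb), axis=X.ndim - 1)     # concatenation along mode R+1
```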

In schemes like AIC, MDL, 1-D AIC, and 1-D MDL, which require information about the number of sensors and the number of snapshots for the computation of the free parameters, the number of snapshots used in the free parameters should be updated from N to 2 · N once FBA is applied.

To reduce the computational complexity, the forward-backward averaged data matrix Z can be replaced by a real-valued data matrix $\varphi\{Z\} \in \mathbb{R}^{M \times 2N}$, which has the same singular values as Z [20]. This transformation can be extended to the tensor case, where the forward-backward averaged data tensor $\mathcal{Z}$ is replaced by a real-valued data tensor $\varphi\{\mathcal{Z}\}$ possessing the same r-mode singular values for all r = 1, 2, ..., R + 1 (see [19] for details):

$\varphi\{\mathcal{Z}\} = \mathcal{Z} \times_1 Q_{M_1}^{\mathrm{H}} \times_2 Q_{M_2}^{\mathrm{H}} \cdots \times_R Q_{M_R}^{\mathrm{H}} \times_{R+1} Q_{2N}^{\mathrm{H}}$  (37)

where $\mathcal{Z}$ is given in (35). If p is odd, then $Q_p$ is given as

$Q_p = \frac{1}{\sqrt{2}} \begin{bmatrix} I_n & \mathbf{0} & \jmath \, I_n \\ \mathbf{0}^{\mathrm{T}} & \sqrt{2} & \mathbf{0}^{\mathrm{T}} \\ \Pi_n & \mathbf{0} & -\jmath \, \Pi_n \end{bmatrix}$  (38)

and p = 2 · n + 1. On the other hand, if p is even, then Q p is given as

$Q_p = \frac{1}{\sqrt{2}} \begin{bmatrix} I_n & \jmath \, I_n \\ \Pi_n & -\jmath \, \Pi_n \end{bmatrix}$  (39)

and p = 2 · n.
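The following sketch constructs unitary left Π-real matrices of the form in (38) and (39) for odd and even p, respectively, under the standard construction; it is meant only as an illustration of the real-valued transformation in (37).

```python
import numpy as np

def Q(p):
    """Unitary left Pi-real transformation matrix Q_p for even (39) and odd (38) p."""
    n = p // 2
    I, P = np.eye(n), np.fliplr(np.eye(n))          # identity and exchange matrix Pi_n
    if p % 2 == 0:                                   # p = 2n, Eq. (39)
        return np.vstack((np.hstack((I, 1j * I)),
                          np.hstack((P, -1j * P)))) / np.sqrt(2)
    z = np.zeros((n, 1))                             # p = 2n + 1, Eq. (38)
    return np.vstack((np.hstack((I, z, 1j * I)),
                      np.hstack((z.T, [[np.sqrt(2)]], z.T)),
                      np.hstack((P, z, -1j * P)))) / np.sqrt(2)
```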

Simulation results

In this section, we evaluate the performance, in terms of the probability of correct detection (PoD), of all multi-dimensional model order selection techniques presented previously via Monte Carlo simulations considering different scenarios.

Comparing the two versions of CORCONDIA [4, 21] with the HOSVD-based approaches, we notice that the computational complexity of the R-D methods is much lower. Moreover, the HOSVD-based approaches outperform the iterative approaches, since none of the latter comes close to a probability of correct detection (PoD) of 100%. The techniques based on global eigenvalues, i.e., R-D EFT, R-D AIC, and R-D MDL, maintain a good performance even for lower SNR scenarios, and the R-D EFT shows the best performance among all the compared techniques.

In Figures 5 and 6, we observe the performance of the classical methods and the R-D EFT, R-D AIC, and R-D MDL for a scenario with the following dimensions M1 = 7, M2 = 7, M3 = 7, and M4 = 7. The methods described as M-EFT, AIC, and MDL correspond to the simplified one-dimensional cases of the R-D methods, in which we consider only one unfolding for r = 4.

Figure 5

Probability of correct Detection (PoD) versus SNR considering a system with a data model of M 1 = 7, M 2 = 7, M 3 = 7, M 4 = 7, and d = 3 sources.

Figure 6

Probability of correct Detection (PoD) versus SNR considering a system with a data model of M 1 = 7, M 2 = 7, M 3 = 7, M 4 = 7, and d = 4 sources.

In Figures 7 and 8, we compare our proposed approach to all mentioned techniques for the case that white noise is present. To compare the performance of CFP-MOS for various values of the design parameter Tlim, we select Tlim = 2 for the legend CFP 2f and Tlim = 4 for CFP 4f. In Figure 7, the model order d is equal to 2, while in Figure 8, d = 3. In these two scenarios, the proposed CFP-MOS has a performance very close to R-D EFT, which has the best performance.

Figure 7

Probability of correct Detection (PoD) versus SNR. In the simulated scenario, R = 5, M1 = 5, M2 = 5, M3 = 5, M4 = 5, M5 = 5, and N = 5, in the presence of white noise. We fixed d = 2.

Figure 8

Probability of correct Detection (PoD) versus SNR. In the simulated scenario, R = 5, M1 = 5, M2 = 5, M3 = 5, M4 = 5, M5 = 5, and N = 5, in the presence of white noise. We fixed d = 3.

In Figures 9 and 10, we assume the noise correlation structure of Equation (9), where the correlation matrix $W_i$ of the i-th factor, exemplified for $M_i = 3$, is given by (40) below.

Figure 9

Probability of correct Detection (PoD) versus SNR. In the simulated scenario, R = 5, M1 = 5, M2 = 5, M3 = 5, M4 = 5, M5 = 5, and N = 5, in the presence of colored noise, where ρ1 = 0.9, ρ2 = 0.95, ρ3 = 0.85, and ρ4 = 0.8. We fixed d = 2.

Figure 10

Probability of correct Detection (PoD) versus SNR. In the simulated scenario, R = 5, M1 = 5, M2 = 5, M3 = 5, M4 = 5, M5 = 5, and N = 5, in the presence of colored noise, where ρ1 = 0.9, ρ2 = 0.95, ρ3 = 0.85, and ρ4 = 0.8. We fixed d = 3.

$W_i = \begin{bmatrix} 1 & \rho_i & \rho_i^2 \\ \rho_i^{*} & 1 & \rho_i \\ \left(\rho_i^{2}\right)^{*} & \rho_i^{*} & 1 \end{bmatrix}$  (40)

where $\rho_i$ is the correlation coefficient. Note that other correlation models different from (40) can also be used.
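As an illustration of how the correlation factors can be obtained from (40) and (9), the sketch below builds the Toeplitz matrix $W_i$ with entries $\rho^{|m-n|}$ and takes its Cholesky factor as $L_i$ (up to the normalization constant α of (9)).

```python
import numpy as np

def correlation_factor(rho, M):
    """Exponential correlation model: W[m, n] = rho^|m - n|; returns L with W = L L^H."""
    idx = np.arange(M)
    W = rho ** np.abs(idx[:, None] - idx[None, :])
    return np.linalg.cholesky(W)       # lower-triangular correlation factor L_i

L_example = correlation_factor(0.9, 3)   # e.g., the M_i = 3 case shown in (40)
```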

In Figures 9 and 10, the noise is colored with a very high correlation, and the factors $L_i$ are computed based on (9) and (40) as a function of $\rho_i$. As expected for this scenario, the R-D EFT, R-D AIC, and R-D MDL completely fail. In the case of colored noise with high correlation, the noise power is much more concentrated in the signal components. Therefore, the smaller the value of d, the worse the PoD. The behavior of CFP-MOS, AIC, MDL, and EFT is consistent with this effect. The PoD of AIC, MDL, and EFT increases from 0.85, 0.7, and 0.7 in Figure 9 to 0.9, 0.85, and 0.85 in Figure 10. CFP-MOS 4f reaches a PoD of 0.98 at SNR = 20 dB in Figure 9, while it already reaches a PoD of 0.98 at SNR = 15 dB in Figure 10.

In contrast to CFP-MOS, AIC, MDL, and EFT, the PoD of RADOI [22] degrades from Figure 9 to Figure 10. In Figure 9, RADOI has a better performance than CFP-MOS, while in Figure 10, CFP-MOS outperforms RADOI. Note that the PoD of RADOI becomes constant for SNR ≤ 3 dB, which corresponds to a biased estimation. Therefore, for severely colored noise scenarios, the model order selection using CFP-MOS is more stable than the other approaches.

In Figure 11, FBA is not applied in any of the model order selection techniques, while in Figure 12 FBA is applied in all of them as described in the section on applying forward-backward averaging. In general, an improvement of approximately 3 dB is obtained when FBA is applied.

Figure 11

Probability of correct Detection (PoD) versus SNR for an array of size M 1 = 5, M 2 = 7, and M 3 = 9. The number of snapshots N is set to 10 and the number of sources d = 3. No FBA is applied.

Figure 12

Probability of correct Detection (PoD) versus SNR for an array of size M 1 = 5, M 2 = 7, and M 3 = 9. The number of snapshots N is set to 10 and the number of sources d = 3. FBA is applied.

In Figure 12, d = 3. Therefore, using the sequential definition of the global eigenvalues from "R-D Exponential Fitting Test (R-D EFT)", we can estimate the model order considering four modes. By increasing the number of sources to 5 in Figure 13, the sequential definition of the global eigenvalues is computed considering the second, third, and fourth modes, which are related to M2, M3, and N.

Figure 13

Probability of correct Detection (PoD) versus SNR for an array of size M 1 = 5, M 2 = 7, and M 3 = 9. The number of snapshots N is set to 10 and the number of sources d = 5. FBA is applied.

By increasing the number of sources even more such that only one mode can be applied, the curves of the R-D EFT, R-D AIC and R-D MDL are the same as the curves of M-EFT, 1-D AIC, and 1-D MDL, as shown in Figure 14.

Figure 14

Probability of correct Detection (PoD) versus SNR for an array of size M 1 = 5, M 2 = 7, and M 3 = 9. The number of snapshots N is set to 10 and the number of sources d = 9. FBA is applied.

Conclusions

In this article, we have compared different model order selection techniques for multi-dimensional high-resolution parameter estimation schemes. We have achieved the following results considering a multi-dimensional data model.

1) In case of white Gaussian noise scenarios, our R-D EFT outperforms the other techniques presented in the literature.

2) In the presence of colored noise, the CFP-MOS is the best technique, since it has a performance close to the R-D EFT in case of no correlation, and a performance more stable than RADOI, in case of severely correlated noise.

3) For researchers who prefer to use information theoretic criteria (ITC) techniques, we have also proposed multi-dimensional extensions of AIC and MDL, called R-D AIC and R-D MDL, respectively.

In Table 2, we summarize the scenarios to apply the different techniques shown in this article. Also in Table 2, wht stands for white noise and clr stands for colored noise. Note that the PoD of the CFP-MOS is close to the one of the R-D EFT for white noise, which means that it has a multi-dimensional gain. Moreover, since the CFP-MOS is suitable for white and colored noise applications, we consider it the best general-purpose scheme.

Table 2 Summarized table comparing characteristics of the multi-dimensional model order selection schemes

Abbreviations

AIC:

Akaike's Information Criterion

CFP-MOS:

closed-form PARAFAC-based model order selection

FBA:

forward-backward averaging

HOSVD:

higher-order SVD

MDL:

minimum description length

MOS:

model order selection

M-EFT:

modified exponential fitting test

PoD:

probability of correct detection

R-D EFT:

R-dimensional exponential fitting test

RESID:

residuals

SMD:

simultaneous matrix diagonalization

T-CORCONDIA:

threshold core consistency analysis

ZMCSCG:

zero-mean circularly symmetric complex Gaussian.

References

  1. da Costa JPCL, Haardt M, Roemer F, Del Galdo G: Enhanced model order estimation using higher-order arrays. Proceedings of the 40th Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, USA 2007.

    Google Scholar 

  2. da Costa JPCL, Thakre A, Roemer F, Haardt M: Comparison of model order selection techniques for high-resolution parameter estimation algorithms. Proceedings of the 54th International Scientific Colloquium(IWK'09), Ilmenau, Germany 2009.

    Google Scholar 

  3. da Costa JPCL: Parameter Estimation Techniques for Multi-dimensional Array Signal Processing. 1st edition. Shaker Publisher, Aachen, Germany; 2010.

    Google Scholar 

  4. da Costa JPCL, Haardt M, Roemer F: Robust methods based on HOSVD for estimating the model order in PARAFAC models. Proceedings of the IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM'08), Darmstadt, Germany 2008.

    Google Scholar 

  5. da Costa JPCL, Roemer F, Weis M, Haardt M: Robust R -D parameter estimation via closed-form PARAFAC. Proceedings of the ITG Workshop on Smart Antennas (WSA'10), Bremen, Germany 2010.

    Google Scholar 

  6. De Lathauwer L, De Moor B, Vandewalle J: A multilinear singular value decomposition. SIAM J Matrix Anal Appl 2000, 21(4):1253-1278.

    Article  MathSciNet  Google Scholar 

  7. Roemer F, Haardt M: A closed-form solution for parallel factor (PARAFAC) analysis. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008) Las Vegas, USA 2008, 2365-2368.

    Google Scholar 

  8. Comon P, ten Berge JMF: Generic and typical ranks of three-way arrays. Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008), Las Vegas, USA 2008, 3313-3316.

    Google Scholar 

  9. Huizenga HM, de Munck JC, Waldorp LJ, Grasman RPPP: Spatiotemporal EEG/MEG source analysis based on a parametric noise covariance model. IEEE Trans Biomed Eng 2002,49(6):533-539. 10.1109/TBME.2002.1001967

    Article  Google Scholar 

  10. Park B, Wong TF: Training sequence optimization in MIMO systems with colored noise. Military Communications Conference (MILCOM 2003), Gainesville, USA 2003.

    Google Scholar 

  11. da Costa JPCL, Roemer F, Haardt M: Sequential GSVD based prewhitening for multidimensional HOSVD based subspace estimation. Proceedings of the ITG Workshop on Smart Antennas, Berlin, Germany 2009.

    Google Scholar 

  12. Grouffaud J, Larzabal P, Clergeot H: Some properties of ordered eigenvalues of a wishart matrix: application in detection test and model order selection. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'96) 1996, 5: 2463-2466.

    Google Scholar 

  13. Quinlan A, Barbot J-P, Larzabal P, Haardt M: Model order selection for short data: an exponential fitting test (EFT). EURASIP J Appl Signal Process 2007, 2007: 54-64.

    Article  Google Scholar 

  14. Roemer F, Haardt M: A closed-form solution for multilinear PARAFAC decompositions. Proceedings of the 5th IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM 2008), Darmstadt, Germany 2008, 487-491.

    Chapter  Google Scholar 

  15. Weis M, Roemer F, Haardt M, Jannek D, Husar P: Multi-dimensional Space-Time-Frequency component analysis of event-related EEG data using closed-form PARAFAC. Proceedings of the IEEE International Conference Acoustics, Speech, and Signal Processing (ICASSP 2009), Taipei, Taiwan 2009.

    Google Scholar 

  16. Badeau R, David B, Richard G: Selecting the modeling order for the ESPRIT high resolution method: an alternative approach. Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP2004), Montreal, Canada 2004.

    Google Scholar 

  17. Xu G, Roy RH, Kailath T: Detection of number of sources via exploitation of centro-symmetry property. IEEE Trans Signal Process 1994, 42: 102-112. 10.1109/78.258125

    Article  Google Scholar 

  18. Haardt M, Roemer F, Del Galdo G: Higher-order SVD based subspace estimation to improve the parameter estimation accuracy in multi-dimensional harmonic retrieval problems. IEEE Trans Signal Process 2008,56(7):3198-3213.

    Article  MathSciNet  Google Scholar 

  19. Roemer F, Haardt M, Del Galdo G: Higher order SVD based subspace estimation to improve multi-dimensional parameter estimation algorithms. Proceedings of the 40th Asilomar Conference on Signals, Systems, and Computers 2006, 961-965.

    Google Scholar 

  20. Lee A: Centrohermitian and skew-centrohermitian matrices. Linear Algebra Appl 1980, 29: 205-210. 10.1016/0024-3795(80)90241-4

    Article  MathSciNet  Google Scholar 

  21. Bro R, Kiers HAL: A new efficient method for determining the number of components in PARAFAC models. J Chemom 2003, 17: 274-286. 10.1002/cem.801

    Article  Google Scholar 

  22. Radoi E, Quinquis A: A new method for estimating the number of harmonic components in noise with application in high resolution radar. EURASIP J Appl Signal Process 2004,2004(8):1177-1188. 10.1155/S1110865704401097

    Article  Google Scholar 


Acknowledgements

The authors gratefully acknowledge the partial support of the German Research Foundation (Deutsche Forschungsge-meinschaft, DFG) under contract no. HA 2239/2-1.

The authors would like to thank the anonymous reviewer for the comments, which improved the readability of this article.

Author information



Corresponding author

Correspondence to João Paulo Carvalho Lustosa da Costa.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

da Costa, J.P.C.L., Roemer, F., Haardt, M. et al. Multi-dimensional model order selection. EURASIP J. Adv. Signal Process. 2011, 26 (2011). https://doi.org/10.1186/1687-6180-2011-26



  • DOI: https://doi.org/10.1186/1687-6180-2011-26
