
Bayesian EM approach for GNSS parameters of interest estimation under constant modulus interference

Abstract

Interference poses a significant risk to applications that rely on global navigation satellite systems (GNSSs): it can degrade GNSS performance and even cause service disruptions. The most notable types of intentional interference are characterized by a constant modulus, such as chirp and tone interferences. Despite their straightforward structure, identifying their contribution leads to a search space of complex circles, which we characterize through latent variables. To mitigate the interference, we compute the maximum likelihood estimator of the relevant parameters (time delay and Doppler shift) in the presence of these latent variables. To this end, we employ the expectation–maximization algorithm, which has previously demonstrated its effectiveness in similar scenarios. Experiments conducted on synthetic signals confirm the efficiency of the proposed algorithm.

1 Introduction

Global navigation satellite systems (GNSSs) [1] have a wide range of applications, extending beyond navigation and timing to fields like Earth observation, attitude estimation, and space weather characterization. As a result, the accuracy of position, navigation, and timing information is crucial, especially for critical applications like intelligent transportation systems and autonomous unmanned ground/air vehicles. While GNSS has become the primary source of positioning, it was originally designed for optimal performance in clear-sky conditions, making its reliability susceptible to degradation in challenging environments. For instance, phenomena such as multipath (reflections) [2], spoofing, and interference (intentional or unintentional) are the most challenging ones, being a key issue in safety-critical scenarios [3], such as civilian aviation [4]. These effects have been reported in the state of the art, and several mitigation countermeasures have already been proposed [5]. Regarding intentional interference, jammers broadcasting interference characterized by a constant modulus (CM) have been identified in real-world scenarios. Initially, these devices transmitted constant-amplitude tones which, despite their straightforward structure, were able to prevent the receiver from functioning. As a countermeasure, two methods are commonly employed in the time domain:

  • pulse blanking [6], which zeroes out samples of the incoming signal that exceed a predefined power threshold, mitigating the impact of pulsed interference;

  • adaptive notch filtering [7], where the jamming signal’s instantaneous frequency is continuously estimated using a recurrence equation in the time domain, and the corresponding frequency components are filtered out from the incoming signal. This approach avoids the need for frequency-domain transformations.

However, they proved ineffective when the tone’s frequency varied, paving the way for chirp interference, which remains a significant issue today. Notably, notch filters fail to attenuate chirp interference adequately, particularly under nonlinear frequency variations; even with linear variations, these countermeasures encounter challenges. Alternative approaches involve signal processing techniques such as the discrete Fourier transform (DFT), which projects the signal into the frequency domain, allowing the application of a threshold to eliminate suspicious components. Another transformation with potential for interference mitigation is the Karhunen–Loève transform (KLT) [8, 9], which relies on the eigenvalues and eigenvectors of the incoming signal’s autocorrelation. However, these methods often degrade the signal excessively, especially with wide chirp bandwidths.

In this article, we introduce a novel approach to mitigate interference characterized by a CM. This interference category is among the most prominent forms of intentional interference reported in the literature and includes signals like pure tones and chirped signals with time-varying tones. The constant modulus property results in a complex circular search space when attempting to identify the interference at the receiver. To characterize these circles, latent variables are introduced. The primary contribution of this article is the computation of the maximum likelihood estimator (MLE) for the key parameters, specifically the time delay and Doppler shift, in the presence of these latent variables. To calculate the MLE, we choose independent von Mises distributions with unknown parameters for the interference phases and we employ the expectation–maximization (EM) algorithm, which has demonstrated asymptotic efficiency in similar scenarios and has proven effective for N-hypersphere estimation [10]. To evaluate the performance of our proposed algorithm, we compare it against the theoretical limits of time-delay and Doppler shift estimation in the following particular cases: (i) the scenario where no interference corrupts the GNSS signal; this is the best possible scenario, and the theoretical limits are provided by the Cramér–Rao bound (CRB) derived in [11]; (ii) the misspecified conditional model [12, 13], in which the signal is corrupted by an interference but the receiver estimates the parameters of interest without considering it, so the time-delay and Doppler estimates are biased; this is the worst possible case, and the performance limits are characterized by the misspecified CRB (MCRB) derived in [14]. A fair comparison for our algorithm would be against the CRB that takes into account the parameters describing the structure of the interference. However, the corresponding derivation is intractable and we resort to the so-called modified CRB (MoCRB) [15], a looser bound commonly used in problems involving missing variables.

The article is organized as follows: In Sect. 2, we present the GNSS received signal model in the presence of a CM bandlimited interfering signal. In Sect. 3, we derive a closed-form expression of the MoCRB for the parameters of interest, considering that the signal is bandlimited. This expression only depends on the baseband samples and the parameters that define the structure of the interference. Section 4 provides an in-depth description of the proposed EM algorithm, which is employed for mitigating interference under the CM hypothesis. Section 5 presents the simulation results that confirm the effectiveness of the proposed approach for two synthetic signal scenarios. Finally, Sect. 6 offers the concluding remarks.

2 Signal model and complete likelihood function

2.1 Signal model

In this article, we consider a bandlimited signal s(t), with bandwidth B, transmitted over a carrier frequency \(f_c\) and traveling at the speed of light c, from a GNSS satellite to a receiver. The transmitter and receiver are assumed to be in uniform linear motion such that the distance can be modeled by a first-order \(d-v\) distance-velocity model [16]. At the receiver, a narrow-band signal model is assumed and the received signal x(t) at the output of the receiver’s Hilbert filter can be approximated by [11, 17]

$$\begin{aligned} x(t) = \rho e^{j\phi } s(t-\tau )e^{-j2\pi f_c b(t-\tau )}+n(t) \end{aligned}$$
(1)

where \(\rho\) and \(\phi\) are the amplitude and phase of the complex coefficient \(\alpha =\rho e^{j\phi }\in \mathbb {C}\) induced by the propagation characteristics, \(\tau =d/c\) is the unknown propagation delay, \(b=v/c\) is the unknown Doppler shift and n(t) is a zero-mean white complex circular Gaussian noise. An interfering signal I(t), unknown and bandlimited within the frequency band of interest, is also arriving at the receiver. Then, the received signal x becomes:

$$\begin{aligned} x(t) = \alpha s(t-\tau )e^{-j2\pi f_c b(t-\tau )}+I(t)+n(t). \end{aligned}$$
(2)

Considering the acquisition of \(N=N_2-N_1+1\) samples at the sampling frequency \(F_s=B=1/T_s,\) and assuming that the observation window \([N_1T_s,N_2T_s]\) is short enough to consider the amplitude, delay and Doppler shift constant, the discrete signal model is:

$$\begin{aligned} {\varvec{x}}=\alpha {\varvec{\mu }}({\varvec{\eta }})+{\varvec{I}}+{\varvec{n}} \end{aligned}$$
(3)

where \({\varvec{\mu }}({\varvec{\eta }}) = \left[ \ldots , s(kT_s-\tau )e^{-j2\pi f_cb(kT_s-\tau )},\ldots \right] ^T\in \mathbb {C}^N\) with \({\varvec{\eta }}=(\tau ,b)\) and \(k\in \{N_1,\ldots ,N_2\}\), \({\varvec{I}}= \left[ \ldots , I(kT_s), \ldots \right] ^T\in \mathbb {C}^N\) and \({\varvec{n}}= \left[ \ldots , n(kT_s),\ldots \right] ^T \sim \mathcal{C}\mathcal{N}(0,\sigma ^2 I_N)\). Under constant modulus interference, all the components of the vector \({\varvec{I}}\) have the same modulus A. We therefore propose the parametrization

$$\begin{aligned} {\varvec{I}} = A\tilde{{\varvec{I}}} \end{aligned}$$
(4)

such that \(\tilde{{\varvec{I}}}=\begin{bmatrix}\tilde{I}_{1}&\ldots&\tilde{I}_{N}\end{bmatrix}^T\) with \(|\tilde{I}_{k}|=1\). In other words, each component of the vector \(\tilde{{\varvec{I}}}\) belongs to the complex unit circle, meaning that the vector \(\tilde{{\varvec{I}}}\) belongs to the complex hyper-torus of dimension N. Hence, there exists \({\varvec{\theta }}=\begin{bmatrix}{\theta }_1&\ldots&{\theta }_N\end{bmatrix}^T\in (-\pi ,\pi ]^{N}\) such that

$$\begin{aligned} \forall k=1,\ldots ,N\quad \tilde{I}_{k}=e^{j\theta _{k}}. \end{aligned}$$
(5)
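The discrete model (3)–(5) can be simulated with a few lines of NumPy. The sketch below is purely illustrative: the code sequence, amplitudes, delay, and Doppler values are arbitrary toy choices, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy settings (illustrative only, not the paper's GNSS parameters)
N, Fs, fc = 64, 1.0e6, 1575.42e6     # samples, sampling rate (Hz), carrier (Hz)
Ts = 1.0 / Fs
tau, b = 3.2e-7, 1.0e-6              # delay (s) and Doppler shift (dimensionless)
alpha = 0.8 * np.exp(1j * 0.3)       # complex amplitude rho * e^{j phi}
A, sigma2 = 0.5, 0.1                 # interference modulus and noise variance

k = np.arange(N)                     # sample indices (N1 = 0 here)
# Toy bandlimited code s(kTs - tau): a random +/-1 chip sequence
s = np.sign(rng.standard_normal(N))
mu = s * np.exp(-2j * np.pi * fc * b * (k * Ts - tau))   # mu(eta), eq. (3)

theta = rng.uniform(-np.pi, np.pi, N)                    # latent phases, eq. (5)
I = A * np.exp(1j * theta)                               # CM interference, eq. (4)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

x = alpha * mu + I + n                                   # observation, eq. (3)
```

By construction every component of `I` has modulus exactly `A`, which is the constant modulus property the latent phases \(\theta_k\) parametrize.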

2.2 Complete likelihood

The resulting problem has the following likelihood

$$\begin{aligned} p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})=\frac{1}{\pi ^N\sigma ^{2N}} e^{-\frac{1}{\sigma ^2}\left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})-A\tilde{{\varvec{I}}}\right) ^H\left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})-A\tilde{{\varvec{I}}}\right) } \end{aligned}$$
(6)

where \({\varvec{\varepsilon }}=\left\{ {\varvec{\eta }}^T,\rho ,\phi ,A,\sigma ^2\right\}\) is the vector gathering the parameters of interest. We consider the angles \(\theta _k\) as latent (random) variables with independent prior distributions \(p(\theta _k)\). We can therefore form the joint likelihood of observed random variables \({\varvec{x}}\) and unobserved random variables \({\varvec{\theta }}\) as \(p({\varvec{x}},{\varvec{\theta }}|{\varvec{\varepsilon }}) = p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})p({\varvec{\theta }})\) using (6) and the chosen prior \(p({\varvec{\theta }})=\prod _{k=1}^Np(\theta _k)\). One way to overcome the fact that \({\varvec{\theta }}\) is unobserved is to marginalize \(p({\varvec{x}},{\varvec{\theta }}|{\varvec{\varepsilon }})\) w.r.t. \({\varvec{\theta }}\) and to maximize this marginalized likelihood. For the particular case where \(p(\theta _k)\) follows a uniform distribution over \((-\pi ,\pi ]\), the marginalized likelihood expression is:

$$\begin{aligned} p({\varvec{x}}|{\varvec{\varepsilon }}) = \frac{1}{(\pi \sigma ^2)^{N}} e^{-\frac{1}{\sigma ^2}\left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right) ^H\left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right) }e^{-\frac{A^2N}{\sigma ^2}} \times \prod _{i=1}^N {\mathcal {B}}_0\left( \frac{2A}{\sigma ^2}\left| x_i-\alpha \mu _i\left( {\varvec{\eta }}\right) \right| \right) \end{aligned}$$
(7)

where \({\mathcal {B}}_\nu\) is the modified Bessel function of the first kind and order \(\nu\) [18, Chap. 3.5.4]. This expression cannot be maximized in closed form w.r.t. \({\varvec{\varepsilon }}\): one cannot derive closed-form expressions for the maximum likelihood estimators of the parameters in \({\varvec{\varepsilon }}\). Moreover, when the prior chosen for \(\theta _k\) is more involved than the uniform one, the likelihood expression becomes even more intricate and closed-form maximum likelihood estimators remain out of reach. One way to bypass this limitation is to resort to the EM algorithm [19], which is well suited to evaluating the maximum likelihood estimator of parameters when missing variables appear in the estimation framework.
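Although (7) cannot be maximized in closed form, it can still be evaluated numerically, e.g., to compare candidate parameter values on a grid. A minimal sketch under that assumption follows; the function name is our choice, and the exponentially scaled Bessel function `scipy.special.i0e` (with \(\log {\mathcal {B}}_0(z) = \log \mathtt{i0e}(z) + z\)) is used to avoid overflow for large arguments.

```python
import numpy as np
from scipy.special import i0e  # i0e(z) = exp(-z) * I0(z), numerically stable


def log_marginal_likelihood(x, mu, alpha, A, sigma2):
    """Log of the marginalized likelihood (7) under the uniform phase prior."""
    N = x.size
    r = np.abs(x - alpha * mu)            # |x_i - alpha * mu_i(eta)|
    z = 2.0 * A * r / sigma2              # Bessel argument in eq. (7)
    return (-N * np.log(np.pi * sigma2)
            - np.sum(r ** 2) / sigma2
            - N * A ** 2 / sigma2
            + np.sum(np.log(i0e(z)) + z))  # log B0(z) = log i0e(z) + z
```

Setting \(A=0\) recovers the Gaussian log-likelihood without interference, since \({\mathcal {B}}_0(0)=1\), which provides a quick consistency check.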

The complete likelihood of the parameters \({\varvec{\varepsilon }}\) given the observations \({\varvec{x}}\) and missing variables \({\varvec{\theta }}\) can be expressed as:

$$\begin{aligned} \mathcal {L}_c({\varvec{\varepsilon }};{\varvec{x}},{\varvec{\theta }})=p({\varvec{x}},{\varvec{\theta }}|{\varvec{\varepsilon }}) = p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})p({\varvec{\theta }}). \end{aligned}$$
(8)

where \(p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})\) is given in (6). For the prior \(p({\varvec{\theta }})\), in this article we choose independent von Mises distributions with parameters \(\gamma\) and \(\kappa\) for the interference phases \({\varvec{\theta }}\)

$$\begin{aligned} p({\varvec{\theta }})\propto \prod _{k=1}^Np(\theta _k) \end{aligned}$$
(9)

with

$$\begin{aligned} p(\theta _k) = \frac{e^{\kappa \cos {\left( \theta _k-\gamma \right) }}}{2\pi {\mathcal {B}}_0(\kappa )} \end{aligned}$$
(10)

where \(\gamma\) is the mean direction of the von Mises distribution and \(\kappa\) is the concentration parameter. In the following, we use the notation

$$\begin{aligned} \theta _k\sim \mathcal{V}\mathcal{M}\left( \theta _k;\kappa ,\gamma \right) \end{aligned}$$
(11)

to describe the interference phases. One can note that when \(\kappa\) is set to 0, the uniform distribution is recovered, and hence, the results should be the same as those presented in [20]. This prior distribution depends on a set of two hyperparameters \({\varvec{\varphi }}=\left\{ \gamma ,\kappa \right\}\). These hyperparameters may either be set by the user or be unknown. Therefore, in the following, we consider the general case where they are unknown and estimate them jointly with the vector of parameters \({\varvec{\varepsilon }}\). Using the prior (9) and the conditional likelihood (6), we can rewrite the complete likelihood as

$$\begin{aligned} \mathcal {L}_c({\varvec{\varepsilon }},{\varvec{\varphi }};{\varvec{x}},{\varvec{\theta }})= p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})p({\varvec{\theta }}|{\varvec{\varphi }}) \propto \frac{e^{-\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})-A\tilde{{\varvec{I}}}\right\| ^2+\kappa \sum _{k=1}^N\cos {(\theta _k-\gamma )}}}{\left( \sigma ^{2}{\mathcal {B}}_0(\kappa )\right) ^N} , \end{aligned}$$
(12)

where \(\left\| \varvec{z}\right\| ^2=\varvec{z}^H\varvec{z}\) for any \(\varvec{z}\in \mathbb {C}^N\) and where it is assumed all \(\theta _{k}\) are in \((-\pi ,\pi ]\) to avoid the indicator functions.
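As a quick sanity check of the prior (10), the following sketch (with toy values for \(\gamma\) and \(\kappa\), chosen only for illustration) evaluates the density and verifies the \(\kappa =0\) uniform limit noted above.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0


def vonmises_pdf(theta, gamma, kappa):
    """von Mises density of eq. (10): mean direction gamma, concentration kappa."""
    return np.exp(kappa * np.cos(theta - gamma)) / (2.0 * np.pi * i0(kappa))


# kappa = 0 recovers the uniform density 1/(2*pi), as noted in the text
grid = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
assert np.allclose(vonmises_pdf(grid, 0.7, 0.0), 1.0 / (2.0 * np.pi))
```

A Riemann sum of the density over a full period also confirms that (10) integrates to one for any \(\kappa\).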

3 Modified Cramér–Rao bound

To assess the performance of the proposed method, it would be appealing to derive the Cramér–Rao Bound (CRB) of the likelihood (7). However, the corresponding derivations are intractable, mainly due to the expectation of the \({\mathcal {B}}_0\) function, which has no closed-form expression. To alleviate this problem, we propose to resort to the so-called modified CRB (MoCRB) [21], which has been designed for such problems involving missing variables. The MoCRB is a looser bound on the asymptotic estimation performance of the parameters of interest, but it admits a closed-form formulation. The MoCRB of the parameters of interest \({\varvec{\varepsilon }}\) is then defined in its vector form as [15]

$$\begin{aligned} \text{ MoCRB }\left( {\varvec{\varepsilon }}\right)&= E_{{\varvec{x}},{\varvec{\theta }}}\left\{ \frac{\partial \ln {p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})}}{\partial {\varvec{\varepsilon }}}\left[ \frac{\partial \ln {p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})}}{\partial {\varvec{\varepsilon }}}\right] ^T\right\} ^{-1} \nonumber \\&= E_{{\varvec{\theta }}}\left( E_{{\varvec{x}}|{\varvec{\theta }}}\left\{ \frac{\partial \ln {p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})}}{\partial {\varvec{\varepsilon }}}\left[ \frac{\partial \ln {p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})}}{\partial {\varvec{\varepsilon }}}\right] ^T\right\} \right) ^{-1}, \end{aligned}$$
(13)

where the matrix to be inverted is the so-called modified Fisher information matrix (MoFIM) of the vector \({\varvec{\varepsilon }}\), denoted \({\varvec{F}}_M\left( {\varvec{\varepsilon }}\right)\) in the following. Moreover, we have

$$\begin{aligned} {\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }}\sim \mathcal{C}\mathcal{N}\left( {\varvec{x}};\rho e^{j\phi }{\varvec{\mu }}\left( {\varvec{\eta }}\right) +A\tilde{{\varvec{I}}}({\varvec{\theta }}),\sigma ^2I_N\right) . \end{aligned}$$
(14)

Hence, \({\varvec{F}}_{{\varvec{\theta }}}({\varvec{\varepsilon }})=E_{{\varvec{x}}|{\varvec{\theta }}}\left\{ \frac{\partial \ln {p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})}}{\partial {\varvec{\varepsilon }}}\left[ \frac{\partial \ln {p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})}}{\partial {\varvec{\varepsilon }}}\right] ^T\right\}\) is the Fisher information matrix (FIM) of a complex Gaussian model, and we can resort to the Slepian–Bangs formula [22, (8.34)] to find its expression

$$\begin{aligned} \left[ {\varvec{F}}_{{\varvec{\theta }}}({\varvec{\varepsilon }})\right] _{k,l} =&\frac{1}{\sigma ^4}\text{ tr }\left( \frac{\partial \sigma ^2I_N}{\partial \varepsilon _k}\frac{\partial \sigma ^2I_N}{\partial \varepsilon _l}\right) \nonumber \\&+\frac{2}{\sigma ^2}{{\,\mathrm{Re}\,}}{\left( \frac{\partial \left( \rho e^{j\phi }{\varvec{\mu }}\left( {\varvec{\eta }}\right) +A\tilde{{\varvec{I}}}({\varvec{\theta }})\right) ^H}{\partial \varepsilon _k}\frac{\partial \rho e^{j\phi }{\varvec{\mu }}\left( {\varvec{\eta }}\right) +A\tilde{{\varvec{I}}}({\varvec{\theta }})}{\partial \varepsilon _l}\right) }. \end{aligned}$$
(15)

Recalling \({\varvec{\varepsilon }}=\left\{ {\varvec{\eta }}^T,\rho ,\phi ,A,\sigma ^2\right\}\) and given the previous formula, the FIM is

$$\begin{aligned} {\varvec{F}}_{{\varvec{\theta }}}({\varvec{\varepsilon }}) = \begin{bmatrix} {\varvec{F}}_{{\varvec{\theta }}}({\varvec{\eta }}^T,\rho ,\phi ) &{} {\varvec{F}}_{{\varvec{\theta }}}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) ^T &{} 0 \\ {\varvec{F}}_{{\varvec{\theta }}}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) &{} {\varvec{F}}_{{\varvec{\theta }}}(A) &{} 0 \\ 0 &{} 0 &{} {\varvec{F}}_{{\varvec{\theta }}}(\sigma ^2) \end{bmatrix} \end{aligned}$$
(16)

where \({\varvec{F}}_{{\varvec{\theta }}}({{\varvec{\eta }}^T},\rho ,\phi ) = {\varvec{F}}({{\varvec{\eta }}^T},\rho ,\phi )\) is the FIM for the case without interference, \({\varvec{F}}_{{\varvec{\theta }}}(\sigma ^2) = \frac{N}{\sigma ^4}\) and both are derived in [11] and independent of \({\varvec{\theta }}\). On the other hand,

$$\begin{aligned} {\varvec{F}}_{{\varvec{\theta }}}(A)&=\frac{2N}{\sigma ^2}, \end{aligned}$$
(17)
$$\begin{aligned} {\varvec{F}}_{{\varvec{\theta }}}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right)&=\frac{2}{\sigma ^2}{{\,\mathrm{Re}\,}}{\left( \tilde{{\varvec{I}}}^H({\varvec{\theta }})\begin{bmatrix}\rho e^{j\phi }\frac{\partial {\varvec{\mu }}\left( {\varvec{\eta }}\right) }{\partial {\varvec{\eta }}^T},&e^{j\phi }{\varvec{\mu }}\left( {\varvec{\eta }}\right) ,&j\rho e^{j\phi }{\varvec{\mu }}\left( {\varvec{\eta }}\right) \end{bmatrix}\right) }. \end{aligned}$$
(18)

To derive the MoFIM, one has to take the expected value of (16) w.r.t. \({\varvec{\theta }}\). Given the previous results, one only requires the expected value of \(\tilde{{\varvec{I}}}^H({\varvec{\theta }})\), which yields [18, (3.5.25),(3.5.26)]

$$\begin{aligned} E_{{\varvec{\theta }}}\left[ \tilde{{\varvec{I}}}^H({\varvec{\theta }})\right] = e^{-j\gamma }\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}1_N \end{aligned}$$
(19)

with \(1_N\) the \(1\times N\) vector of ones. Then, after some calculations that can be found in “Appendix 1”, (18) yields

$$\begin{aligned}&{\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) ^T \nonumber \\&\quad = \frac{2}{\sigma ^2}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}\begin{bmatrix} 0 ,&\rho {{\,\mathrm{Im}\,}}\left\{ T_s w_c \varvec{s}^T \varvec{D}\varvec{e}_{\phi -\gamma }(f_c b) \right\} ,&{{\,\mathrm{Re}\,}}\left\{ \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} ,&-\rho {{\,\mathrm{Im}\,}}\left\{ \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} \end{bmatrix}, \end{aligned}$$
(20)

with \(w_c = 2\pi f_c\), \(\varvec{s} = \begin{bmatrix} \ldots ,&s(kT_s) ,&\ldots \end{bmatrix}\), \(\varvec{e}_{\phi -\gamma }(f) = \begin{bmatrix} \ldots ,&e^{j(\phi -\gamma -2\pi fkT_s)},&\ldots \end{bmatrix}\) and \(\varvec{D} = \text {diag}(N_1, \dots , N_2)\). Finally, the MoFIM is

$$\begin{aligned} {\varvec{F}}_M({\varvec{\varepsilon }})=\begin{bmatrix} {\varvec{F}}({{\varvec{\eta }}^T},\rho ,\phi ) &{} {\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) ^T &{} 0 \\ {\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) &{} \frac{2N}{\sigma ^2} &{} 0 \\ 0&{}0&{}\frac{N}{\sigma ^4} \end{bmatrix} \end{aligned}$$
(21)

Then, from the matrix inversion lemma [23, 14.11-(a)] we obtain the MoCRB expression for the parameters of interest

$$\begin{aligned} \text {MoCRB}({{\varvec{\eta }}^T},\rho ,\phi )&= {\varvec{F}}({{\varvec{\eta }}^T},\rho ,\phi )^{-1}\nonumber \\&\quad + \frac{\sigma ^2}{2N}{\varvec{F}}({{\varvec{\eta }}^T},\rho ,\phi )^{-1}{\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) ^T{\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) \nonumber \\&\quad \times \, {\varvec{F}}({{\varvec{\eta }}^T},\rho ,\phi )^{-1}. \end{aligned}$$
(22)

Note that when \(\kappa =0\), i.e., when the prior distribution is uniform, the second term vanishes due to the term \(\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}\) which is 0 when \(\kappa =0\), and the MoCRB becomes the CRB in the absence of interference.
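The factor \({\mathcal {B}}_1(\kappa )/{\mathcal {B}}_0(\kappa )\) entering (19)–(22) can be checked by Monte Carlo simulation. The sketch below (toy values of \(\gamma\) and \(\kappa\), sample size and seed arbitrary) verifies (19) empirically and the \(\kappa =0\) limit just discussed.

```python
import numpy as np
from scipy.special import i0, i1

rng = np.random.default_rng(1)
gamma, kappa = 0.7, 4.0
theta = rng.vonmises(gamma, kappa, size=200_000)  # samples from eq. (10)

# Monte Carlo estimate of E[conj(e^{j theta})], to be compared with eq. (19)
m = np.mean(np.exp(-1j * theta))
ref = np.exp(-1j * gamma) * i1(kappa) / i0(kappa)
assert abs(m - ref) < 1e-2

# At kappa = 0 the ratio B1/B0 vanishes, so the second term of (22) drops
assert i1(0.0) / i0(0.0) == 0.0
```

The second assertion mirrors the remark above: with a uniform prior the MoCRB coincides with the interference-free CRB.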

4 EM approach for interference mitigation under the CM hypothesis

In this section, we introduce the proposed EM algorithm. The EM algorithm iterates between the expectation (E) and the maximization (M) steps to obtain a maximum of the likelihood function:

  • E-step: the derivation of the function

    $$\begin{aligned} Q({\varvec{\varepsilon }},{\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) = E_{{\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \log \mathcal {L}_c({\varvec{\varepsilon }},{\varvec{\varphi }};{\varvec{x}}, {\varvec{\theta }})\right] \end{aligned}$$
    (23)
  • M-step: the maximization of this function \(Q({\varvec{\varepsilon }},{\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})\)

    $$\begin{aligned} {\varvec{\varepsilon }}^{(t+1)} ,{\varvec{\varphi }}^{(t+1)}= {\mathop {\mathrm{arg\,max}}\limits _{{\varvec{\varepsilon }},{\varvec{\varphi }}}}\,Q({\varvec{\varepsilon }},{\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}), \end{aligned}$$
    (24)

where t represents the iteration index.

4.1 E-step

4.1.1 Derivation of the Q function

At iteration \(t+1\), the E-step approximates the log-likelihood (which is to be maximized) around the parameters \({\varvec{\varepsilon }}^{(t)}\) and \({\varvec{\varphi }}^{(t)}\). This approximation is given by:

$$\begin{aligned} Q({\varvec{\varepsilon }},{\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) = E_{{\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \log \mathcal {L}_c({\varvec{\varepsilon }},{\varvec{\varphi }};{\varvec{x}}, {\varvec{\theta }})\right] \end{aligned}$$
(25)

where \({\varvec{\varepsilon }}^{(t)}\), resp. \({\varvec{\varphi }}^{(t)}\), represents the current value of the set of parameters, resp. hyperparameters. From (12), we have

$$\begin{aligned} \log \mathcal {L}_c({\varvec{\varepsilon }},{\varvec{\varphi }};{\varvec{x}},{\varvec{\theta }})&= K'-N\log \sigma ^2-N\log {\mathcal {B}}_0(\kappa )\nonumber \\&\quad -\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})-A\tilde{{\varvec{I}}}\right\| ^2+\kappa \sum _{k=1}^N\cos {(\theta _k-\gamma )}\nonumber \\&= K'-N\log \sigma ^2-N\log {\mathcal {B}}_0(\kappa )-\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right\| ^2\nonumber \\&\quad + \frac{2A}{\sigma ^2}{{\,\mathrm{Re}\,}}{\left\{ \left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right) ^H\tilde{{\varvec{I}}}\right\} }-\frac{NA^2}{\sigma ^2}+\kappa \sum _{k=1}^N\cos {(\theta _k-\gamma )} \nonumber \\&= K'-N\log \sigma ^2-N\log {\mathcal {B}}_0(\kappa )-\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right\| ^2\nonumber \\&\quad -\frac{NA^2}{\sigma ^2}+\sum _{k=1}^N\left( \frac{2A}{\sigma ^2}\delta _k\cos {\left( \theta _k-\beta _k\right) }+\kappa \cos {(\theta _k-\gamma )}\right) , \end{aligned}$$
(26)

where \(K'\) gathers terms independent of \(\{{\varvec{\varepsilon }},{\varvec{\varphi }},{\varvec{x}}, {\varvec{\theta }}\}\), and \({{\,\mathrm{Re}\,}}{\left( \left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right) ^H\tilde{{\varvec{I}}}\right) } =\sum _{k=1}^N \delta _k\cos {\left( \theta _k-\beta _k\right) }\) with \(\delta _k = \left| x_k-\alpha \mu _k({\varvec{\eta }})\right|\) and \(\beta _k= \arg {\left( x_k-\alpha \mu _k({\varvec{\eta }})\right) }\), where \(\arg (\cdot ):\mathbb {C}\rightarrow (-\pi ,\pi ]\) denotes the argument of a complex number. The only terms depending on \({\varvec{\theta }}\) in (26) are in the sum of cosines, leading to

$$\begin{aligned}&Q({\varvec{\varepsilon }},{\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) = K'-N\log \sigma ^2-N\log {\mathcal {B}}_0(\kappa ) -\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right\| ^2-\frac{NA^2}{\sigma ^2}\nonumber \\&\quad +\sum _{k=1}^N\frac{2A}{\sigma ^2}\delta _kE_{{\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \cos {\left( \theta _k-\beta _k\right) }\right] +\sum _{k=1}^N\kappa E_{{\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \cos {(\theta _k-\gamma )}\right] . \end{aligned}$$
(27)

4.1.2 Conditional distribution

These expectations are taken w.r.t. the conditional distribution of \({\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}\), which can be expressed as (see “Appendix 2” for details)

$$\begin{aligned} p({\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})\propto&\prod _{k=1}^Np(\theta _k|x_k,{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) \end{aligned}$$
(28)

and

$$\begin{aligned} \theta _k|x_k,{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}\sim \mathcal{V}\mathcal{M}\left( \theta _k;\kappa _k^{(t)},\gamma _k^{(t)}\right) . \end{aligned}$$
(29)

To express the terms in this distribution, we define \(x_k\) as the k-th component of the vector \({\varvec{x}}\) and \(\mu _k\left( {\varvec{\eta }}\right)\) the k-th component of the vector \({\varvec{\mu }}({\varvec{\eta }})\), and then,

$$\begin{aligned} \delta _k^{(t)}&= \left| x_k-\alpha ^{(t)}\mu _k({\varvec{\eta }}^{(t)})\right| , \end{aligned}$$
(30)
$$\begin{aligned} \beta _k^{(t)}&= \arg {\left( x_k-\alpha ^{(t)}\mu _k({\varvec{\eta }}^{(t)})\right) }, \end{aligned}$$
(31)
$$\begin{aligned} \kappa _k^{(t)}&= \sqrt{\left( \frac{2A^{(t)}\delta _k^{(t)}}{{\sigma ^2}^{(t)}}\right) ^2+\left( \kappa ^{(t)}\right) ^2+\frac{4A^{(t)}\delta _k^{(t)}\kappa ^{(t)}}{{\sigma ^2}^{(t)}}\cos {\left( \beta _k^{(t)}-\gamma ^{(t)}\right) }} , \end{aligned}$$
(32)
$$\begin{aligned} \gamma _k^{(t)}&= \text{ atan2 }\Bigg (\frac{2A^{(t)}\delta _k^{(t)}}{{\sigma ^2}^{(t)}}\sin {\left( \beta _k^{(t)}\right) }+\kappa ^{(t)}\sin {\left( \gamma ^{(t)}\right) },\frac{2A^{(t)}\delta _k^{(t)}}{{\sigma ^2}^{(t)}}\cos {\left( \beta _k^{(t)}\right) }+\kappa ^{(t)}\cos {\left( \gamma ^{(t)}\right) }\Bigg ). \end{aligned}$$
(33)
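Equations (30)–(33) amount to combining two planar vectors, one of length \(2A\delta _k/\sigma ^2\) in direction \(\beta _k\) (likelihood term) and one of length \(\kappa\) in direction \(\gamma\) (prior term); the resultant gives \(\kappa _k^{(t)}\) and \(\gamma _k^{(t)}\). A minimal sketch of this computation (function name and vectorized interface are our choices):

```python
import numpy as np


def conditional_vonmises_params(x, mu, alpha, A, sigma2, kappa, gamma):
    """Per-sample posterior von Mises parameters, eqs. (30)-(33)."""
    r = x - alpha * mu
    delta = np.abs(r)                # eq. (30)
    beta = np.angle(r)               # eq. (31)
    c = 2.0 * A * delta / sigma2     # length of the likelihood "vector"
    # Resultant of the likelihood vector (c, beta) and the prior vector (kappa, gamma)
    re = c * np.cos(beta) + kappa * np.cos(gamma)
    im = c * np.sin(beta) + kappa * np.sin(gamma)
    kappa_k = np.hypot(re, im)       # eq. (32), via |c e^{j beta} + kappa e^{j gamma}|
    gamma_k = np.arctan2(im, re)     # eq. (33)
    return kappa_k, gamma_k
```

With \(\kappa =0\) (uniform prior) the posterior concentration reduces to \(2A\delta _k/\sigma ^2\) and the posterior direction to \(\beta _k\), as expected from (32)–(33).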

4.1.3 Expectation

We have for any angle \(\psi \in (-\pi ,\pi ]\)

$$\begin{aligned} E_{\theta _k|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \cos {\left( \theta _k-\psi \right) }\right]&= E_{\theta _k|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \cos {\left( \theta _k-\gamma _k^{(t)}+\gamma _k^{(t)}-\psi \right) }\right] \nonumber \\&= E_{\theta _k|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \cos {\left( \theta _k-\gamma _k^{(t)}\right) }\right] \cos {\left( \gamma _k^{(t)}-\psi \right) }\nonumber \\&-E_{\theta _k|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \sin {\left( \theta _k-\gamma _k^{(t)}\right) }\right] \sin {\left( \gamma _k^{(t)}-\psi \right) }. \end{aligned}$$
(34)

Since \(\theta _k|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}\) follows a von Mises distribution with parameters \(\kappa _k^{(t)}\) and \(\gamma _k^{(t)}\), we have [18]

$$\begin{aligned} E_{\theta _k|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \cos {\left( \theta _k-\gamma _k^{(t)}\right) }\right] = \frac{{\mathcal {B}}_1\left( \kappa _k^{(t)}\right) }{{\mathcal {B}}_0\left( \kappa _k^{(t)}\right) }, \quad E_{\theta _k|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \sin {\left( \theta _k-\gamma _k^{(t)}\right) }\right] =0 \end{aligned}$$
(35)

and thus

$$\begin{aligned} E_{{\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}}\left[ \cos {\left( \theta _k-\psi \right) }\right] = \frac{{\mathcal {B}}_1\left( \kappa _k^{(t)}\right) }{{\mathcal {B}}_0\left( \kappa _k^{(t)}\right) } \cos {\left( \gamma _k^{(t)}-\psi \right) }. \end{aligned}$$
(36)

Therefore,

$$\begin{aligned} Q({\varvec{\varepsilon }},{\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})&= K'-N\log \sigma ^2-N\log {\mathcal {B}}_0(\kappa )-\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right\| ^2-\frac{NA^2}{\sigma ^2}\nonumber \\&\quad +\sum _{k=1}^N\frac{2A}{\sigma ^2}\delta _k\frac{{\mathcal {B}}_1\left( \kappa _k^{(t)}\right) }{{\mathcal {B}}_0\left( \kappa _k^{(t)}\right) } \cos {\left( \gamma _k^{(t)}-\beta _k\right) }+\sum _{k=1}^N\kappa \frac{{\mathcal {B}}_1\left( \kappa _k^{(t)}\right) }{{\mathcal {B}}_0\left( \kappa _k^{(t)}\right) } \cos {\left( \gamma _k^{(t)}-\gamma \right) }\nonumber \\&=K'+ Q({\varvec{\varepsilon }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) + Q({\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}), \end{aligned}$$
(37)

with

$$\begin{aligned} Q({\varvec{\varepsilon }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})&= -N\log \sigma ^2-\frac{\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right\| ^2}{\sigma ^2}-\frac{NA^2}{\sigma ^2}\nonumber \\&\quad +\sum _{k=1}^N\frac{2A}{\sigma ^2}\delta _k\frac{{\mathcal {B}}_1\left( \kappa _k^{(t)}\right) }{{\mathcal {B}}_0\left( \kappa _k^{(t)}\right) } \cos {\left( \gamma _k^{(t)}-\beta _k\right) } \end{aligned}$$
(38)

and

$$\begin{aligned} Q({\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) = -N\log {\mathcal {B}}_0(\kappa )+\sum _{k=1}^N\kappa \frac{{\mathcal {B}}_1\left( \kappa _k^{(t)}\right) }{{\mathcal {B}}_0\left( \kappa _k^{(t)}\right) } \cos {\left( \gamma _k^{(t)}-\gamma \right) }. \end{aligned}$$
(39)

Finally, we can define

$$\begin{aligned} w_k^{(t)} = \frac{{\mathcal {B}}_1\left( \kappa _k^{(t)}\right) }{{\mathcal {B}}_0\left( \kappa _k^{(t)}\right) }, \quad \varvec{a}_t=\begin{bmatrix}w_1^{(t)}e^{j\gamma _1^{(t)}}&\ldots&w_N^{(t)}e^{j\gamma _N^{(t)}}\end{bmatrix}^T, \end{aligned}$$
(40)

so that (38) becomes

$$\begin{aligned} Q({\varvec{\varepsilon }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})&= -N\log \sigma ^2-\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right\| ^2-\frac{NA^2}{\sigma ^2}\nonumber \\&\quad +\frac{2A}{\sigma ^2}{{\,\mathrm{Re}\,}}{\left\{ \left( {\varvec{x}}-\rho e^{j\varphi }{\varvec{\mu }}({\varvec{\eta }})\right) ^H\varvec{a}_t\right\} } \nonumber \\&=-N\log {\sigma ^2} +\frac{A^2}{\sigma ^2}\left( \sum _{k=1}^N\left( w_k^{(t)}\right) ^2-N\right) -\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\rho e^{j\varphi }{\varvec{\mu }}({\varvec{\eta }})-A\varvec{a}_t\right\| ^2 \end{aligned}$$
(41)

and (39) to

$$\begin{aligned} Q({\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) = -N\log {\mathcal {B}}_0(\kappa )+\kappa \sum _{k=1}^N w_k^{(t)}\cos {\left( \gamma _k^{(t)}-\gamma \right) }. \end{aligned}$$
(42)
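As a small numerical illustration of the weights \(w_k^{(t)}\) and vector \(\varvec{a}_t\) defined in (40), the following sketch evaluates the Bessel-function ratio with SciPy. This is not part of the paper's implementation; the sample values of \(\kappa _k^{(t)}\) and \(\gamma _k^{(t)}\) are hypothetical.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel functions I_v

def bessel_ratio(kappa):
    """w(kappa) = B_1(kappa)/B_0(kappa); the scaled 'ive' avoids the
    overflow of I_0, I_1 for large kappa (the scaling factors cancel)."""
    kappa = np.asarray(kappa, dtype=float)
    return ive(1, kappa) / ive(0, kappa)

# hypothetical per-sample von Mises parameters kappa_k^{(t)}, gamma_k^{(t)}
kappa_k = np.array([0.0, 1.0, 10.0, 100.0])
gamma_k = np.array([0.0, np.pi / 4, np.pi / 2, np.pi])

w = bessel_ratio(kappa_k)        # trigonometric moments w_k^{(t)} of (40)
a_t = w * np.exp(1j * gamma_k)   # vector a_t of (40)
```

Note that \(w_k^{(t)}\) grows from 0 (uniform phase, \(\kappa =0\)) toward 1 (fully concentrated phase), which is exactly the shrinkage the E-step applies to the interference phasors.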

4.2 M-step

The second step of the EM algorithm is to maximize (37) w.r.t. \({\varvec{\varepsilon }}\) and \({\varvec{\varphi }}\). Since this equation is the sum of two terms, one depending only on \({\varvec{\varepsilon }}\) and the other only on \({\varvec{\varphi }}\), the two optimizations can be carried out separately.

4.2.1 Optimization with respect to \(\sigma ^2\)

Differentiating (41) w.r.t. \(\sigma ^2\) yields the update equation

$$\begin{aligned} \left( \sigma ^2\right) ^{(t+1)}&= \frac{1}{N}\left\| {\varvec{x}}-\rho ^{(t+1)} e^{j\varphi ^{(t+1)}}{\varvec{\mu }}({\varvec{\eta }}^{(t+1)})-A^{(t+1)}\varvec{a}_t\right\| ^2\nonumber \\&\quad +\left( A^{(t+1)}\right) ^2\left( 1-\frac{1}{N}\sum _{k=1}^N\left( w_k^{(t)}\right) ^2\right) . \end{aligned}$$
(43)
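A minimal NumPy sketch of the noise-variance update (43) is given below. The helper name `sigma2_update` is ours, and the sanity check assumes noise-free data with fully resolved interference phases (\(w_k=1\)), in which case the update returns zero.

```python
import numpy as np

def sigma2_update(x, mu, rho, phi, A, a_t, w):
    """M-step noise-variance update of (43): residual power plus a term
    accounting for the remaining phase uncertainty of the CM interference
    (w holds the weights w_k^{(t)})."""
    N = x.size
    resid = x - rho * np.exp(1j * phi) * mu - A * a_t
    return np.linalg.norm(resid) ** 2 / N + A ** 2 * (1.0 - np.mean(w ** 2))

# sanity check: noise-free data, fully resolved phases (w_k = 1)
rng = np.random.default_rng(0)
mu = np.exp(1j * rng.uniform(0, 2 * np.pi, 64))
a_t = np.exp(1j * rng.uniform(0, 2 * np.pi, 64))   # |a_t,k| = w_k = 1
x = 2.0 * np.exp(1j * 0.3) * mu + 5.0 * a_t
s2 = sigma2_update(x, mu, 2.0, 0.3, 5.0, a_t, np.ones(64))
```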

4.2.2 Optimization with respect to \(\rho ,\varphi ,{\varvec{\eta }}\)

The second term in (41) is independent of \(\rho ,\varphi\), and \({\varvec{\eta }}\), so we can recast the equation as

$$\begin{aligned} Q({\varvec{\varepsilon }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})&=-\frac{1}{\sigma ^2} \left\| {\varvec{x}}-\rho e^{j\varphi }{\varvec{\mu }}({\varvec{\eta }})-A\varvec{a}_t\right\| ^2 +K'' \end{aligned}$$

where \(K''\) gathers the terms independent of \(\rho ,\varphi\) and \({\varvec{\eta }}\). For a matrix \(\textbf{B}\), let \(S = \text {span}(\textbf{B})\) denote the linear span of its column vectors; \(\varvec{\Pi }_{\textbf{B}} = \textbf{B}(\textbf{B}^H \textbf{B})^{-1}\textbf{B}^H\) and \(\varvec{\Pi }_{\textbf{B}}^{\bot } = {\varvec{I}} - \varvec{\Pi }_{\textbf{B}}\) are the orthogonal projectors onto \(S\) and \(S^{\bot }\), respectively. Then,

$$\begin{aligned} \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }&={\varvec{\mu }}\left( {\varvec{\eta }}\right) \left( {\varvec{\mu }}\left( {\varvec{\eta }}\right) ^H{\varvec{\mu }}\left( {\varvec{\eta }}\right) \right) ^{-1}{\varvec{\mu }}\left( {\varvec{\eta }}\right) ^H, \end{aligned}$$
(44)
$$\begin{aligned} \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }^\perp&={\varvec{I}}_N-\varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }, \end{aligned}$$
(45)

and

$$\begin{aligned} \left\| {\varvec{x}}-\rho e^{j\varphi }{\varvec{\mu }}({\varvec{\eta }})-A\varvec{a}_t\right\| ^2&=\left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }\left( {\varvec{x}}-A\varvec{a}_t\right) -\rho e^{j\varphi }{\varvec{\mu }}({\varvec{\eta }})\right\| ^2+\left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }^\perp \left( {\varvec{x}}-A\varvec{a}_t\right) \right\| ^2 \nonumber \\&= \left\| {\varvec{\mu }}\left( {\varvec{\eta }}\right) \left[ \frac{{\varvec{\mu }}\left( {\varvec{\eta }}\right) ^H \left( {\varvec{x}}-A\varvec{a}_t\right) }{{\varvec{\mu }}\left( {\varvec{\eta }}\right) ^H{\varvec{\mu }}\left( {\varvec{\eta }}\right) }-\rho e^{j\varphi }\right] \right\| ^2+\left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }^\perp \left( {\varvec{x}}-A\varvec{a}_t\right) \right\| ^2. \end{aligned}$$
(46)

Note that minimizing the previous expression w.r.t. \(\rho\) and \(\varphi\) yields the following update equations:

$$\begin{aligned} \rho ^{(t+1)}&=\left| \frac{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) ^H \left( {\varvec{x}}-A^{(t+1)}\varvec{a}_t\right) }{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) ^H{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) }\right| , \end{aligned}$$
(47)
$$\begin{aligned} \varphi ^{(t+1)}&=\arg {\left( \frac{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) ^H \left( {\varvec{x}}-A^{(t+1)}\varvec{a}_t\right) }{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) ^H{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) }\right) }. \end{aligned}$$
(48)
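The closed-form updates (47) and (48) are a least-squares fit of a complex amplitude after subtracting the current interference estimate. A minimal sketch, with a hypothetical helper name and synthetic noise-free data, is:

```python
import numpy as np

def amp_phase_update(x, mu, A, a_t):
    """Closed-form rho, varphi of (47)-(48): least-squares complex
    amplitude of mu fitted to the interference-compensated data x - A a_t."""
    alpha = mu.conj() @ (x - A * a_t) / (mu.conj() @ mu)
    return np.abs(alpha), np.angle(alpha)

# synthetic noise-free example: true rho = 2, true varphi = 0.3
rng = np.random.default_rng(1)
mu = np.exp(1j * rng.uniform(0, 2 * np.pi, 128))
a_t = 0.8 * np.exp(1j * rng.uniform(0, 2 * np.pi, 128))
x = 2.0 * np.exp(1j * 0.3) * mu + 5.0 * a_t
rho, phi = amp_phase_update(x, mu, 5.0, a_t)
```

With the interference perfectly compensated, the fit recovers \(\rho\) and \(\varphi\) exactly.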

Moreover, \(\left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }^\perp \left( {\varvec{x}}-A\varvec{a}_t\right) \right\| ^2 = \left\| {\varvec{x}}-A\varvec{a}_t\right\| ^2 - \left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) } \left( {\varvec{x}}-A\varvec{a}_t\right) \right\| ^2\), by the Pythagorean theorem. Then, maximizing \(Q({\varvec{\varepsilon }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})\) w.r.t. \({\varvec{\eta }}\) yields the following update equation

$$\begin{aligned} {\varvec{\eta }}^{(t+1)} = \arg \max _{{\varvec{\eta }}} \left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) } \left( {\varvec{x}}-A^{(t+1)}\varvec{a}_t\right) \right\| ^2 . \end{aligned}$$
(49)

4.3 M-step: optimization with respect to A

To carry out the optimization, we first recall that the objective function (41) can be expressed as:

$$\begin{aligned} Q({\varvec{\varepsilon }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})=K'''-\frac{NA^2}{\sigma ^2} +\frac{2A}{\sigma ^2}{{\,\mathrm{Re}\,}}{\left\{ \left( {\varvec{x}}-\rho e^{j\varphi }{\varvec{\mu }}({\varvec{\eta }})\right) ^H\varvec{a}_t\right\} }, \end{aligned}$$
(50)

where \(K'''\) represents the terms independent of A. Differentiating this expression w.r.t. A yields

$$\begin{aligned} A^{(t+1)}=\frac{1}{N}\text{ Re }\left\{ \varvec{a}_t^H\left( {\varvec{x}}-\rho ^{(t+1)} e^{j\varphi ^{(t+1)}}{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) \right) \right\} . \end{aligned}$$
(51)

We can note that

$$\begin{aligned} \rho ^{(t+1)} e^{j\varphi ^{(t+1)}} = \left( \frac{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) ^H \left( {\varvec{x}}-A^{(t+1)}\varvec{a}_t\right) }{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) ^H{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) }\right) \end{aligned}$$
(52)

can be injected into (51) to give the update equation

$$\begin{aligned} A^{(t+1)}=\frac{\text{ Re }\left\{ \varvec{a}_t^H\varvec{\Pi }^{\perp }_{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) }{\varvec{x}}\right\} }{N-\text{ Re }\left\{ \varvec{a}_t^H\varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}^{(t+1)}\right) }\varvec{a}_t\right\} }. \end{aligned}$$
(53)

Note that (53) can be injected into (49) to provide the update equation

$$\begin{aligned} {\varvec{\eta }}^{(t+1)}&= {\mathop {\mathrm{arg\,max}}\limits _{{\varvec{\eta }}}}\,{\left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) } \left( {\varvec{x}}-\frac{\text{ Re }\left\{ \varvec{a}_t^H\varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }^\perp {\varvec{x}}\right\} }{N-\text{ Re }\left\{ \varvec{a}_t^H\varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) }\varvec{a}_t\right\} }\varvec{a}_t\right) \right\| ^2 } . \end{aligned}$$
(54)
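The joint update (54) can be solved by grid search, as noted later in Sect. 4.5. The sketch below, with hypothetical helper names and a toy one-parameter steering model (a pure tone of integer frequency \(\eta\)), scores each candidate with the projected energy of (54) and the scalar \(A(\eta )\) of (53):

```python
import numpy as np

def eta_update(x, a_t, candidates, steering):
    """Grid search of (54): for each candidate eta, build the rank-one
    projector onto mu(eta), compute the scalar A of (53), and keep the
    candidate maximizing the projected energy."""
    best = (None, -np.inf, 0.0)
    for eta in candidates:
        mu = steering(eta)
        u = mu / np.linalg.norm(mu)
        proj = lambda v: u * (u.conj() @ v)            # Pi_mu applied to v
        A = (np.real(a_t.conj() @ (x - proj(x)))
             / (x.size - np.real(a_t.conj() @ proj(a_t))))
        score = np.linalg.norm(proj(x - A * a_t)) ** 2
        if score > best[1]:
            best = (eta, score, A)
    return best[0], best[2]

# toy model: orthogonal tones over 32 samples, true eta = 2
n = np.arange(32)
steering = lambda eta: np.exp(2j * np.pi * eta * n / 32)
rng = np.random.default_rng(2)
a_t = 0.9 * np.exp(1j * rng.uniform(0, 2 * np.pi, 32))
x = 3.0 * steering(2) + 0.5 * a_t
eta_hat, A_hat = eta_update(x, a_t, [1, 2, 3], steering)
```

The rank-one projector is specific to this single-signal model; in the paper's setting \({\varvec{\mu }}({\varvec{\eta }})\) is the sampled GNSS waveform parametrized by delay and Doppler.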

4.4 M-step: optimization with respect to \(\kappa\) and \(\gamma\)

In the case where the hyperparameters are to be estimated jointly with the model parameters, we can update their values by maximizing \(Q({\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)})\), previously defined in (42). Let us denote

$$\begin{aligned} \overline{C}_t&= \frac{1}{N}\sum _{k=1}^N w_k^{(t)}\cos {\left( \gamma _k^{(t)}\right) }, \end{aligned}$$
(55)
$$\begin{aligned} \overline{S}_t&= \frac{1}{N}\sum _{k=1}^N w_k^{(t)}\sin {\left( \gamma _k^{(t)}\right) }, \end{aligned}$$
(56)
$$\begin{aligned} \overline{R}_t&= \sqrt{\overline{C}_t^2+\overline{S}_t^2}, \end{aligned}$$
(57)
$$\begin{aligned} \overline{\gamma }_t&= \text{ atan2 }\left( \overline{S}_t ,\overline{C}_t \right) . \end{aligned}$$
(58)
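Equations (55) to (58) are the standard weighted circular-mean computations; a minimal sketch (the helper name is ours) is:

```python
import numpy as np

def circular_moments(w, gamma_k):
    """Weighted circular statistics of (55)-(58): mean resultant length
    R_bar and mean direction gamma_bar via atan2."""
    C = np.mean(w * np.cos(gamma_k))   # (55)
    S = np.mean(w * np.sin(gamma_k))   # (56)
    return np.hypot(C, S), np.arctan2(S, C)   # (57), (58)

# concentrated phases give R close to 1; antipodal phases cancel out
R1, g1 = circular_moments(np.ones(4), np.full(4, 0.7))
R0, _ = circular_moments(np.ones(2), np.array([0.0, np.pi]))
```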

Then, \(\overline{C}_t =\overline{R}_t \cos {\left( \overline{\gamma }_t\right) }\) and \(\overline{S}_t =\overline{R}_t \sin {\left( \overline{\gamma }_t\right) }.\) Moreover,

$$\begin{aligned} \sum _{k=1}^N w_k^{(t)}\cos {\left( \gamma _k^{(t)}-\gamma \right) }&= \sum _{k=1}^N w_k^{(t)}\cos {\left( \gamma _k^{(t)}\right) }\cos {\left( \gamma \right) }-\sum _{k=1}^N w_k^{(t)}\sin {\left( \gamma _k^{(t)}\right) }\sin {\left( \gamma \right) } \nonumber \\&= N\overline{C}_t\cos {\left( \gamma \right) }-N\overline{S}_t\sin {\left( \gamma \right) } =N\overline{R}_t \cos {\left( \gamma -\overline{\gamma }_t\right) }, \end{aligned}$$
(59)

yielding

$$\begin{aligned} Q({\varvec{\varphi }}|{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}) = -N\log {\mathcal {B}}_0(\kappa )+\kappa N\overline{R}_t \cos {\left( \overline{\gamma }_t-\gamma \right) }. \end{aligned}$$
(60)

From the previous equation, we recognize the log-likelihood of a von Mises distribution as in [18, (5.3.1)], which leads to the update equations

$$\begin{aligned} \gamma ^{(t+1)}&= \overline{\gamma }_t \end{aligned}$$
(61)
$$\begin{aligned} \kappa ^{(t+1)}&= A^{-1}(\overline{R}_t) \end{aligned}$$
(62)

where the inverse of the function \(A(\kappa )={\mathcal {B}}_1(\kappa )/{\mathcal {B}}_0(\kappa )\) has no closed form, but can be approximated using an iterative algorithm [24].
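A common way to approximate \(A^{-1}(\overline{R})\), in the spirit of the iterative scheme of [24], is an initial guess refined by a few Newton steps; the sketch below assumes \(0< \overline{R} < 1\) and uses \(A'(\kappa ) = 1 - A(\kappa )^2 - A(\kappa )/\kappa\):

```python
import numpy as np
from scipy.special import ive  # scaled modified Bessel functions

def inv_bessel_ratio(R, n_newton=5):
    """Approximate inverse of A(kappa) = B_1(kappa)/B_0(kappa) for
    0 < R < 1: a standard initial guess followed by Newton refinements."""
    kappa = R * (2.0 - R ** 2) / (1.0 - R ** 2)   # initial guess
    for _ in range(n_newton):
        A = ive(1, kappa) / ive(0, kappa)
        kappa -= (A - R) / (1.0 - A ** 2 - A / kappa)   # Newton step
    return kappa

# round trip: kappa = 5 -> R = A(5) -> recovered kappa
R_true = ive(1, 5.0) / ive(0, 5.0)
kappa_hat = inv_bessel_ratio(R_true)
```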

4.5 Implemented algorithm

For \(t=0\), i.e., for initialization, we can compute \(\left\{ \left( {\varvec{\eta }}^{(0)}\right) ^T,\rho ^{(0)},\varphi ^{(0)}, \left( \sigma ^2\right) ^{(0)} \right\}\) thanks to the standard MLE [11]. In other words,

$$\begin{aligned} {\varvec{\eta }}^{(0)}&= \arg \max _{{\varvec{\eta }}} \left\| \varvec{\Pi }_{{\varvec{\mu }}\left( {\varvec{\eta }}\right) } {\varvec{x}} \right\| ^2 \end{aligned}$$
(63)
$$\begin{aligned} \rho ^{(0)}&=\left| \frac{{\varvec{\mu }}\left( {\varvec{\eta }}^{(0)}\right) ^H {\varvec{x}}}{{\varvec{\mu }}\left( {\varvec{\eta }}^{(0)}\right) ^H{\varvec{\mu }}\left( {\varvec{\eta }}^{(0)}\right) }\right| \end{aligned}$$
(64)
$$\begin{aligned} \varphi ^{(0)}&=\arg {\left\{ \frac{{\varvec{\mu }}\left( {\varvec{\eta }}^{(0)}\right) ^H {\varvec{x}}}{{\varvec{\mu }}\left( {\varvec{\eta }}^{(0)}\right) ^H{\varvec{\mu }}\left( {\varvec{\eta }}^{(0)}\right) }\right\} } \end{aligned}$$
(65)
$$\begin{aligned} (\sigma ^2)^{(0)}&= \frac{1}{N} \left\| {\varvec{x}}-\rho ^{(0)} e^{j\varphi ^{(0)}}{\varvec{\mu }}\left( {\varvec{\eta }}^{(0)}\right) \right\| ^2. \end{aligned}$$
(66)
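The interference-free MLE initialization (63) to (66) can be sketched as follows; `mle_init` is a hypothetical helper, and the toy steering model (orthogonal tones) stands in for the sampled GNSS waveform:

```python
import numpy as np

def mle_init(x, candidates, steering):
    """MLE initialization of (63)-(66): grid search on eta, then
    closed-form complex amplitude and noise variance."""
    def score(eta):
        mu = steering(eta)
        return np.abs(mu.conj() @ x) ** 2 / np.real(mu.conj() @ mu)
    eta0 = max(candidates, key=score)                       # (63)
    mu = steering(eta0)
    alpha = mu.conj() @ x / (mu.conj() @ mu)                # (64)-(65)
    sigma2 = np.linalg.norm(x - alpha * mu) ** 2 / x.size   # (66)
    return eta0, np.abs(alpha), np.angle(alpha), sigma2

# noise-free example: true eta = 3, rho = 1.5, varphi = 0.2
n = np.arange(64)
steering = lambda eta: np.exp(2j * np.pi * eta * n / 64)
x = 1.5 * np.exp(1j * 0.2) * steering(3)
eta0, rho0, phi0, s20 = mle_init(x, [1, 2, 3, 4], steering)
```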

Once the interference is detected, and since the interference is much more powerful than the GNSS signal (itself masked in the noise), we keep the MLE estimates for \({\varvec{\eta }}^{(1)}, \rho ^{(1)}, \varphi ^{(1)}\) and \((\sigma ^2)^{(1)}\), and we initialize the interference amplitude as

$$\begin{aligned} A^{(1)} = \frac{1}{N}\sqrt{{\varvec{x}}^H{\varvec{x}}}. \end{aligned}$$
(67)

If we do not know the parameters of the prior distribution, we can set \(\gamma ^{(1)}= 0\) and \(\kappa ^{(1)} = 0\), which corresponds to a uniform prior. Then, we compute the von Mises parameters \(\gamma _k^{(t)}\) and \(\kappa _k^{(t)}\) and the trigonometric moments \(w_k^{(t)}\) and \(\varvec{a}_t\), using Eqs. (32), (33), and (40), respectively. Finally, we update the parameters of interest in the order \({\varvec{\eta }}, A, \rho , \varphi\) and \(\sigma ^2\) with Eqs. (54), (53), (47), (48), and (43), respectively. Note that Eqs. (54) and (63) do not have closed-form expressions; however, a grid search can be used to maximize the corresponding functions. A slight simplification is also possible when optimizing (49): in (54), \(A^{(t+1)}\) is replaced by \(A^{(t)}\) to update \({\varvec{\eta }}\). With this approach, the same function can be used to solve both (54) and (63).

4.6 Simplification of the algorithm based on a uniform prior

The algorithm presented in this article is a generalization of the algorithm presented in [20]. In that article, the prior distribution is considered to be uniform, i.e., \(\kappa ^{(t)} = 0\) and \(\gamma ^{(t)}\) left undefined. Therefore, as demonstrated in “Appendix 2”, \(\kappa _k^{(t)} = \frac{2A}{\sigma ^2}\delta _k^{(t)}\) and \(\gamma _k^{(t)} = \beta _k^{(t)}\), and the M-step presented in Sect. 4.4 is not required. Once these simplifications have been made, the algorithm presented in [20] is the same as the one proposed in Sect. 4.5.

Fig. 1: PSD of a centered linear chirp signal of bandwidth \(B_c=1\) MHz and amplitude \(A=40\). The sampling frequency is set to \(F_s=4\) MHz

5 Experiments

In order to evaluate the performance of the algorithms presented in Sects. 4.5 and 4.6, we consider two possible scenarios.

  • Scenario 1:

Let us consider the case where a GPS L1 C/A signal [1] is attacked by a jammer that is generating a linear frequency modulation (LFM) signal [25], which is defined as

$$\begin{aligned} I(t)=\Pi _T(t) \times e^{j\pi \beta _c t^2+j\phi } ,~ \Pi _T(t) = \left\{ \begin{array}{ll} {A} &{} \text {for } 0 \le t < T \\ 0 &{} \text {otherwise} \end{array} \right. \; \end{aligned}$$
(68)

where \(\beta _c\) is the chirp rate and A is the amplitude. For this particular scenario, we set the waveform period as \(T=N \cdot T_{s}\), i.e., equal to the integration time. The instantaneous frequency is \(f(t)= \frac{1}{2 \pi } \frac{d}{dt}\left( \pi \beta _c t^2\right) =\beta _c t\), and therefore, the waveform bandwidth is \(B_c=\beta _c T\). We consider the case where, after the Hilbert filter, the chirp is located at the baseband frequency, i.e., the central frequency of the chirp is \(f_i=0\). Then, the waveform can be rewritten as:

$$\begin{aligned} I(t)=\Pi _T(t) \times e^{j\pi \beta _c \left( t-T/2\right) ^2+j\phi }. \end{aligned}$$
(69)

We set the chirp bandwidth to \(B_c = 1\) MHz, with initial phase \(\phi =0\) and amplitude \({A} =40\). In Fig. 1, we illustrate the power spectral density (PSD) of the linear chirp considered in scenario 1. We would also like to point out that the phase distribution in this case is uniform.
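The centered LFM waveform of (69), with the paper's values for scenario 1, can be generated as follows (assuming \(T\) equals the 1 ms integration time, as stated above):

```python
import numpy as np

Fs, T = 4e6, 1e-3            # sampling frequency and waveform period (paper values)
Bc, A, phi = 1e6, 40.0, 0.0  # chirp bandwidth, amplitude, initial phase
beta_c = Bc / T              # chirp rate, so that Bc = beta_c * T

t = np.arange(int(Fs * T)) / Fs
I = A * np.exp(1j * (np.pi * beta_c * (t - T / 2) ** 2 + phi))  # centered LFM of (69)

f_inst = beta_c * (t - T / 2)   # instantaneous frequency, sweeping -Bc/2 .. +Bc/2
```

The constant modulus \(|I(t)| = A\) is what makes the phases \(\theta _k\) the only unknowns of the interference contribution.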

  • Scenario 2:

We consider a GPS L1 C/A signal attacked by a jammer generating a nonlinear frequency modulation [26], which is defined as

$$\begin{aligned} I(t)=\Pi _T(t) \times e^{j\pi \varphi (\beta _c ; t)+j\phi } ,~ \Pi _T(t) = \left\{ \begin{array}{ll} {A} &{} \text {for } 0 \le t < T \\ 0 &{} \text {otherwise} \end{array} \right. \; \end{aligned}$$
(70)

where \(\varphi (\beta _c; t)\) is a nonlinear function with \(\beta _c\) a parameter that controls the bandwidth of the chirp. For this particular scenario, we set \(\varphi (\beta _c; T; t) = \sin \left( \frac{\pi }{T}\pi \beta _c t\right)\). Note that the chirp is located at baseband, i.e., the central frequency of the chirp is \(f_i=0\). For this scenario, we set \(\beta _c = 51\), \(T=1\) ms (which is the duration of one GPS L1 C/A PRN period), initial phase \(\phi =0\), and amplitude \({A} =40\). In Fig. 2, we illustrate the power spectral density (PSD) of the nonlinear chirp. For this particular scenario, because the phase is characterized by a nonlinear periodic function, the phase distribution is not uniform. Figure 3 shows the phase histogram. As can be seen, the probability density function of the phase is well represented by a VM distribution.
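The nonlinear chirp of scenario 2 and its wrapped-phase histogram can be reproduced with the sketch below; the sinusoidal FM law is taken literally from the text, \(\varphi (\beta _c; T; t) = \sin ((\pi /T)\,\pi \beta _c t)\):

```python
import numpy as np

Fs, T = 4e6, 1e-3        # paper values
beta_c, A, phi = 51.0, 40.0, 0.0

t = np.arange(int(Fs * T)) / Fs
theta = np.pi * np.sin(np.pi ** 2 * beta_c * t / T) + phi   # phase of (70)
I = A * np.exp(1j * theta)

# the wrapped phase concentrates near +/- pi (arcsine-like shape),
# hence a clearly non-uniform histogram
counts, _ = np.histogram(np.angle(I), bins=8)
```

The non-uniform histogram is precisely what motivates the von Mises prior in this scenario.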

Fig. 2: PSD of a centered nonlinear chirp signal of bandwidth \(B_c=0.16\) MHz and amplitude \(A=40\). The sampling frequency is set to \(F_s=4\) MHz

Fig. 3: Histogram of \(\theta _k\) from a nonlinear chirp signal of bandwidth \(B_c=0.16\) MHz and amplitude \(A=40\). The sampling frequency is set to \(F_s=4\) MHz

The root-mean-squared error (RMSE) for the parameters of interest \({\varvec{\eta }}\) is displayed in Figs. 4 and 5 for scenario 1 and in Figs. 6 and 7 for scenario 2. These figures depict how the RMSE varies with the signal-to-noise ratio (SNR) at the output of the matched filter, denoted \(SNR_{OUT}\). The evaluation is conducted under the following conditions: a GNSS receiver with a sampling frequency of \(F_s = 4\) MHz and an integration time of \(T= {1}\) ms.

The EM algorithm incorporates a stopping criterion that assesses the change in noise variance at each iteration. The maximum iteration limit is set to 15, and 1000 Monte Carlo runs are performed. The results presented in the figures illustrate four different types of curves:

  • the \(\sqrt{\text {CRB}}\) (as referred to in [11]), which signifies the asymptotic estimation performance of the parameters in the absence of interference

  • the \(\sqrt{\text {MCRB} + \text {Bias}^2}\), representing the asymptotic estimation performance of the parameters when the receiver is unaware of the presence of interference (as discussed in [13, 27]). This includes the root MSE of the misspecified maximum likelihood estimator, as mentioned in [12]. Please note that these metrics quantify the root MSE under the assumption that the receiver does not account for any interference, i.e., the receiver assumes a misspecified model with probability distribution that deviates from the true model. The bias is determined by minimizing the Kullback–Leibler divergence between the probability distributions of the true model and the assumed model (i.e., the misspecified model)

  • the \(\sqrt{\text {MoCRB}}\) derived in Sect. 3, which provides a looser bound for the problem of interest. Note that for the evaluated scenarios, the \(\sqrt{\text {MoCRB}}\) coincides with the \(\sqrt{\text {CRB}}\). This is consistent with a numerical evaluation of the MoFIM introduced in (21), since the order of magnitude of the values within the vector \({\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) ^T\) is much smaller than the orders of magnitude of the values in the matrix \(\varvec{F}({{\varvec{\eta }}^T},\rho ,\phi )\)

  • the root MSE (\(\sqrt{\text {MSE}}\)) generated by the proposed EM algorithm in Sect. 4.5 and the simplified algorithm proposed in Sect. 4.6. It is important to note that the proposed EM algorithms appear to be unbiased and capable of correcting interference-induced effects. However, even if the performance of the two algorithms is similar, we can observe slight differences in time-delay estimation depending on the scenario evaluated. In the first scenario, the algorithm assuming a uniform distribution performs slightly better asymptotically. This is mainly because the distribution of \(\theta _k\) for LFM interference is indeed uniform; the asymptotic performance is also enhanced by the fact that a reduced set of parameters is estimated. In the second scenario, the algorithm that considers a prior VM distribution converges around 1 dB earlier. This is because it jointly estimates the parameters of interest together with the parameters of the a priori distribution, which makes it possible to remove the interference at lower \(SNR_{OUT}\). However, the asymptotic performance improvement gained by introducing prior phase information is offset by the need to jointly estimate additional parameters, so the only improvement achieved is in convergence. We would also like to point out that once both algorithms have reached the asymptotic regime (small RMSE), the number of iterations required for convergence is, perhaps surprisingly, the same. Our conclusion is that if one of the two algorithms were to be implemented, complexity considerations favor the uniform-prior method.

Finally, it is evident from the results that the error introduced by the EM algorithms is nearly identical to that of the maximum likelihood estimator (MLE) in scenarios without interference. These findings serve to validate and demonstrate the excellent performance of the proposed algorithm.

Fig. 4: RMSE for time-delay estimation of the GPS L1 C/A signal received along with a centered LFM chirp signal of bandwidth \(B_c=1\) MHz and amplitude \(A=40\). The sampling frequency is set to \(F_s=4\) MHz

Fig. 5: RMSE for Doppler estimation of the GPS L1 C/A signal received along with a centered LFM chirp signal of bandwidth \(B_c=1\) MHz and amplitude \(A=40\). The sampling frequency is set to \(F_s=4\) MHz

Fig. 6: RMSE for time-delay estimation of the GPS L1 C/A signal received along with a centered nonlinear chirp signal with \(\beta _c = 51\), \(T=1\) ms and amplitude \(A=40\). The sampling frequency is set to \(F_s=4\) MHz

Fig. 7: RMSE for Doppler estimation of the GPS L1 C/A signal received along with a centered nonlinear chirp signal with \(\beta _c = 51\), \(T=1\) ms and amplitude \(A=40\). The sampling frequency is set to \(F_s=4\) MHz

6 Conclusion

Numerous studies have established that interference can significantly affect the performance of GNSS receivers. In this article, we introduce an EM algorithm designed to address one of the most prominent interference types, constant modulus (CM) interference. This algorithm enables the simultaneous estimation of the parameters of interest of the received signal and of the CM interference contribution. We demonstrate the estimation effectiveness of this EM algorithm by evaluating its RMSE for the time-delay and Doppler parameters, in scenarios involving chirp interference jamming of a GPS L1 C/A signal. The results clearly illustrate the strong performance of the proposed algorithm. Finally, we would like to point out that numerous array processing solutions leverage Riemannian optimization to mitigate interference while adhering to constant modulus constraints [28, 29]. These methods could represent a promising direction in the search for low-complexity interference mitigation algorithms for GNSS.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

  1. P.J.G. Teunissen, O. Montenbruck, Springer Handbook of Global Navigation Satellite Systems (Springer, Switzerland, 2017)


  2. J. Lesouple, T. Robert, M. Sahmoudi, J.-Y. Tourneret, W. Vigneau, Multipath mitigation for GNSS positioning in an urban environment using sparse estimation. IEEE Trans. Intell. Transp. Syst. 20(4), 1316–1328 (2019). https://doi.org/10.1109/TITS.2018.2848461


  3. M.G. Amin, P. Closas, A. Broumandan, J.L. Volakis, Vulnerabilities, threats, and authentication in satellite-based navigation systems. Proc. IEEE 104(6), 1169–1173 (2016). https://doi.org/10.1109/JPROC.2016.2550638


  4. A. Garcia-Pena, O. Julien, C. Macabiau, M. Mabilleau, P. Durel, GNSS C/N0 degradation model in presence of continuous wave and pulsed interference. NAVIGATION 68(1), 75–91 (2021)


  5. F. Dovis, GNSS Interference Threats & Countermeasures (Artech House, 2015)


  6. D. Borio, Swept GNSS jamming mitigation through pulse blanking, in Proceedings of 2016 European Navigation Conference (ENC), Helsinki, Finland (2016), pp. 1–8. https://doi.org/10.1109/EURONAV.2016.7530549

  7. D. Borio, C. O’Driscoll, J. Fortuny, Tracking and mitigating a jamming signal with an adaptive notch filter. InsideGNSS 9, 67–73 (2014)


  8. A. Szumski, Karhunen–Loève transform as an instrument to detect weak RF signals. InsideGNSS 56–64 (2011)

  9. F. Dovis, L. Musumeci, J. Samson, Performance comparison of transformed-domain techniques for pulsed interference mitigation, in Proceedings of 25th International Technical Meeting of the Satellite Division of The Institute of Navigation, Nashville, TN, USA (2012), pp. 3530–3541

  10. J. Lesouple, B. Pilastre, Y. Altmann, J.-Y. Tourneret, Hypersphere fitting from noisy data using an EM algorithm. IEEE Signal Process. Lett. 28, 314–318 (2021)


  11. D. Medina, L. Ortega, J. Vilà-Valls, P. Closas, F. Vincent, E. Chaumette, Compact CRB for delay, doppler, and phase estimation—application to GNSS SPP and RTK performance characterisation. IET Radar Sonar Navig. 14(10), 1537–1549 (2020)


  12. S. Fortunati, F. Gini, M.S. Greco, C.D. Richmond, Performance bounds for parameter estimation under misspecified models: fundamental findings and applications. IEEE Signal Process. Mag. 34(6), 142–157 (2017)


  13. C. Lubeigt, L. Ortega, J. Vilà-Valls, E. Chaumette, Untangling first and second order statistics contributions in multipath scenarios. Signal Process. 205, 108868 (2023). https://doi.org/10.1016/j.sigpro.2022.108868


  14. L. Ortega, C. Lubeigt, J. Vilà-Valls, E. Chaumette, On GNSS synchronization performance degradation under interference scenarios: bias and misspecified Cramér-Rao bounds. NAVIGATION J. Inst. Navig. 70(4) (2023)

  15. F. Gini, R. Reggiannini, U. Mengali, The modified Cramer-Rao bound in vector parameter estimation. IEEE Commun. Mag. 46(1), 52–60 (1998)


  16. D.W. Ricker, Echo Signal Processing (Kluwer Academic, Springer, New York, 2003)


  17. A. Dogandzic, A. Nehorai, Cramér-Rao bounds for estimating range, velocity, and direction with an active array. IEEE Trans. Signal Process. 6(49), 1122–1137 (2001)


  18. K.V. Mardia, P.E. Jupp, Directional Statistics (Wiley, 1999)


  19. A.P. Dempster, N.M. Laird, D. Rubin, Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B 39(1), 1–38 (1977)


  20. J. Lesouple, L. Ortega, An EM approach for GNSS parameters of interest estimation under constant modulus interference, in Proceedings of the European Signal Processing Conference (EUSIPCO), Helsinki, Finland (2023), pp. 820–824. https://doi.org/10.23919/EUSIPCO58844.2023.10289775

  21. A.N. D’Andrea, U. Mengali, R. Reggiannini, The modified Cramer-Rao bound and its application to synchronization problems. IEEE Trans. Commun. 42(234), 1391–1399 (1994). https://doi.org/10.1109/TCOMM.1994.580247


  22. H.L. Van Trees, Optimum Array Processing (Detection, Estimation, and Modulation Theory, Part IV) (Wiley-Interscience, 2002)


  23. G.A.F. Seber, Matrix Handbook for Statisticians (Wiley Series in Probability and Statistics, 2008)


  24. S. Sra, A short note on parameter approximation for von Mises–Fisher distributions: and a fast implementation of Is(x). Comput. Stat. 27(1), 177–190 (2012)


  25. L. Ortega, J. Vilà-Valls, E. Chaumette, Theoretical Evaluation of the GNSS Synchronization Performance Degradation Under Interferences (Denver, 2022)

  26. E. Sénant, B. Gadat, C. Charbonnieras, S. Roche, M. Aubault, F.-X. Marmet, Tentative new signals and services in upper L1 and S bands for Galileo evolutions, in Proceedings of the 31st International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2018) (2018), pp. 913–942

  27. H. McPhee, L. Ortega, J. Vilà-Valls, E. Chaumette, On the accuracy limits of misspecified delay-doppler estimation. Signal Process. 205, 108872 (2023). https://doi.org/10.1016/j.sigpro.2022.108872


  28. S.T. Smith, Optimum phase-only adaptive nulling. IEEE Trans. Signal Process. 47(7), 1835–1843 (1999)


  29. M.A. ElMossallamy, K.G. Seddik, W. Chen, L. Wang, G.Y. Li, Z. Han, RIS optimization on the complex circle manifold for interference mitigation in interference channels. IEEE Trans. Veh. Technol. 70(6), 6184–6189 (2021)



Acknowledgements

Not applicable.

Funding

This work has been funded by ENAC and TéSA.

Author information


Contributions

The authors contributed equally to this work.

Corresponding author

Correspondence to Julien Lesouple.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: MoFIM derivation for bandlimited signals

In this appendix, we focus on the derivation of the MoFIM. To do so, we need to derive closed-form expressions of the elements within the vector \({\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right)\). For the element \({\varvec{F}}_{M}\left( A,\rho \right)\), it is required to compute:

$$\begin{aligned}&E_{{\varvec{\theta }}}\left[ e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }}){\varvec{\mu }}\left( {\varvec{\eta }}\right) \right] =\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}\sum _{k=N_1}^{N_2}s(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) }\nonumber \\&\quad \Rightarrow {{\,\mathrm{Re}\,}}\left\{ E_{{\varvec{\theta }}}\left[ e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }}){\varvec{\mu }}\left( {\varvec{\eta }}\right) \right] \right\} =\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}\sum _{k=N_1}^{N_2} {{\,\mathrm{Re}\,}}\left\{ s(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) } \right\} \end{aligned}$$
(71)

and applying the Nyquist–Shannon theorem for bandlimited signals, we have

$$\begin{aligned}&\lim _{(N_1,N_2)\rightarrow (-\infty ,+\infty )} \sum _{k=N_1}^{N_2}{{\,\mathrm{Re}\,}}\left\{ s(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) } \right\} \nonumber \\&\quad =F_s{{\,\mathrm{Re}\,}}\left\{ \int _{-\infty }^{+\infty }s(t-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(t-\tau )\right) } \text {d}t \right\} = F_s{{\,\mathrm{Re}\,}}\left\{ e^{j\left( \phi -\gamma \right) }\int _{-\infty }^{+\infty }s(t)e^{-j2\pi f_c b t} \text {d}t \right\} \nonumber \\&\quad =F_s{{\,\mathrm{Re}\,}}\left\{ e^{j\left( \phi -\gamma \right) } S(f_c b) \right\} = {{\,\mathrm{Re}\,}}\left\{ e^{j\left( \phi -\gamma \right) } \sum _{k=N_1}^{N_2}s(kT_s)e^{-j2\pi f_c b kT_s} \right\} = {{\,\mathrm{Re}\,}}\left\{ \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} \end{aligned}$$
(72)

where

$$\begin{aligned} S(f)&= T_s\sum _{k=N_1}^{N_2}s(kT_s)e^{-j2\pi fkT_s},\quad \forall f\in \left[ -\frac{F_s}{2},\frac{F_s}{2}\right] \end{aligned}$$
(73)

and

$$\begin{aligned} \varvec{s}&= \begin{bmatrix} \ldots&, s(kT_s) ,&\ldots \end{bmatrix}^T, \quad k= N_1,\ldots ,N_2, \end{aligned}$$
(74)
$$\begin{aligned} \varvec{e}_{\phi -\gamma }(f)&= \begin{bmatrix} \ldots&, e^{j(\phi -\gamma -2\pi f kT_s)},&\ldots \end{bmatrix}^T, \quad k= N_1,\ldots ,N_2. \end{aligned}$$
(75)

Thus, the closed-form expression of this element is

$$\begin{aligned} {\varvec{F}}_{M}\left( A,\rho \right) =\frac{2}{\sigma ^2}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}{{\,\mathrm{Re}\,}}\left\{ \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} . \end{aligned}$$
(76)

Similarly, to compute the element \({\varvec{F}}_{M}\left( A,\phi \right)\), it is required to compute:

$$\begin{aligned}&E_{{\varvec{\theta }}}\left[ j\rho e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }}){\varvec{\mu }}\left( {\varvec{\eta }}\right) \right] =j\rho \frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}\sum _{k=N_1}^{N_2}s(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) }\nonumber \\&\quad \Rightarrow {{\,\mathrm{Re}\,}}\left\{ E_{{\varvec{\theta }}}\left[ j \rho e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }}){\varvec{\mu }}\left( {\varvec{\eta }}\right) \right] \right\} =\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}\rho \sum _{k=N_1}^{N_2} {{\,\mathrm{Re}\,}}\left\{ js(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) } \right\} \end{aligned}$$
(77)

Then, it is straightforward to check that the closed form is

$$\begin{aligned} {\varvec{F}}_{M}\left( A,\phi \right) =-\frac{2\rho }{\sigma ^2}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}{{\,\mathrm{Im}\,}}\left\{ \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} . \end{aligned}$$
(78)

Now, we compute the elements of the vector \({\varvec{F}}_{M}\left( A,{\varvec{\eta }}^T\right)\). To that end, we need to compute

$$\begin{aligned} \frac{\partial {\varvec{\mu }}\left( {\varvec{\eta }}\right) }{\partial {\varvec{\eta }}^T} =\begin{bmatrix} \ldots &{}, \left( -s^{(1)}(kT_s-\tau )+s(kT_s-\tau )j2\pi f_cb\right) e^{-j2\pi f_cb(kT_s-\tau )}, &{} \ldots \\ \ldots &{}, -j2\pi f_c(kT_s-\tau )s(kT_s-\tau )e^{-j2\pi f_c b(kT_s-\tau )}, &{}\ldots \end{bmatrix}^T \end{aligned}$$

where \(s^{(1)}(t)=\frac{\text {d}s(t)}{\text {d}t}\). Then,

$$\begin{aligned} E_{{\varvec{\theta }}}\left[ \rho e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }})\frac{\partial {\varvec{\mu }}\left( {\varvec{\eta }}\right) }{\partial \tau }\right]&= \rho e^{j(\phi -\gamma )}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )} \left( -\sum _{k=N_1}^{N_2}s^{(1)}(kT_s-\tau ) e^{-j2\pi f_cb(kT_s-\tau )} \right. \nonumber \\&\quad \left. +\sum _{k=N_1}^{N_2} s(kT_s-\tau )j2\pi f_cbe^{-j2\pi f_cb(kT_s-\tau )} \right) \end{aligned}$$
(79)

and

$$\begin{aligned} {{\,\mathrm{Re}\,}}\left\{ E_{{\varvec{\theta }}}\left[ \rho e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }})\frac{\partial {\varvec{\mu }}\left( {\varvec{\eta }}\right) }{\partial \tau }\right] \right\}&= {{\,\mathrm{Re}\,}}\left\{ \rho e^{j(\phi -\gamma )}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )} \left( -\sum _{k=N_1}^{N_2}s^{(1)}(kT_s-\tau ) e^{-j2\pi f_cb(kT_s-\tau )} \right. \right. \nonumber \\&\quad \left. \left. +\sum _{k=N_1}^{N_2} s(kT_s-\tau )j2\pi f_cbe^{-j2\pi f_cb(kT_s-\tau )} \right) \right\} . \end{aligned}$$
(80)

Applying the Nyquist–Shannon theorem for bandlimited signals, we have

$$\begin{aligned}&\lim _{(N_1,N_2)\rightarrow (-\infty ,+\infty )} \sum _{k=N_1}^{N_2}{{\,\mathrm{Re}\,}}\left\{ s^{(1)}(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) } \right\} \nonumber \\&\quad =F_s{{\,\mathrm{Re}\,}}\left\{ \int _{-\infty }^{+\infty }s^{(1)}(t-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(t-\tau )\right) } \text {d}t \right\} \nonumber \\&\quad = F_s{{\,\mathrm{Re}\,}}\left\{ e^{j\left( \phi -\gamma \right) }\int _{-\infty }^{+\infty }s^{(1)}(t)e^{-j2\pi f_c b t} \text {d}t \right\} \nonumber \\&\quad =F_s{{\,\mathrm{Re}\,}}\left\{ j2\pi f_c b \cdot e^{j\left( \phi -\gamma \right) } S(f_c b) \right\} = {{\,\mathrm{Re}\,}}\left\{ j2\pi f_c b \, e^{j\left( \phi -\gamma \right) } \sum _{k=N_1}^{N_2}s(kT_s)e^{-j2\pi f_c b kT_s} \right\} \nonumber \\&\quad = -{{\,\mathrm{Im}\,}}\left\{ 2\pi f_c b \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} \end{aligned}$$
(81)

where

$$\begin{aligned} \int _{-\infty }^{+\infty }s^{(1)}(t)e^{-j2\pi f t} \text {d}t = j2\pi f S(f). \end{aligned}$$
(82)
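The identity (82) can be verified numerically for a pulse whose transform is known in closed form. The sketch below is an illustrative check, assuming a Gaussian pulse \(s(t)=e^{-\pi t^2}\) (so that \(S(f)=e^{-\pi f^2}\)) and a midpoint Riemann sum as a stand-in for the integral; none of these choices come from the paper.

```python
import cmath
import math

def riemann_ft(g, f, T=8.0, n=4000):
    """Midpoint Riemann sum approximating the Fourier integral of g at frequency f
    over the truncated interval [-T, T]."""
    dt = 2.0 * T / n
    return sum(g(-T + (i + 0.5) * dt)
               * cmath.exp(-2j * math.pi * f * (-T + (i + 0.5) * dt))
               for i in range(n)) * dt

# Gaussian pulse s(t) = exp(-pi t^2), with known transform S(f) = exp(-pi f^2)
s = lambda t: math.exp(-math.pi * t * t)
s1 = lambda t: -2.0 * math.pi * t * math.exp(-math.pi * t * t)  # s'(t)

f0 = 0.7
lhs = riemann_ft(s1, f0)                                    # transform of s'(t)
rhs = 2j * math.pi * f0 * math.exp(-math.pi * f0 * f0)      # j 2 pi f S(f)
```

The Gaussian decays fast enough that the truncated sum matches the analytic right-hand side to high accuracy.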

On the other hand, following the derivation in (72)

$$\begin{aligned}&\lim _{(N_1,N_2)\rightarrow (-\infty ,+\infty )} \sum _{k=N_1}^{N_2}{{\,\mathrm{Re}\,}}\left\{ j2\pi f_c b s(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) } \right\} \nonumber \\&\quad = {{\,\mathrm{Re}\,}}\left\{ j2\pi f_c b \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} = - {{\,\mathrm{Im}\,}}\left\{ 2\pi f_c b \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} . \end{aligned}$$
(83)

Combining (81) and (83), the two sums in (80) cancel, so

$$\begin{aligned} {{\,\mathrm{Re}\,}}\left\{ E_{{\varvec{\theta }}}\left[ \rho e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }})\frac{\partial {\varvec{\mu }}\left( {\varvec{\eta }}\right) }{\partial \tau }\right] \right\} = 0 \rightarrow {\varvec{F}}_{M}\left( A,\tau \right) = 0. \end{aligned}$$
(84)

The last element to compute is

$$\begin{aligned}&{{\,\mathrm{Re}\,}}\left\{ E_{{\varvec{\theta }}}\left[ \rho e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }})\frac{\partial {\varvec{\mu }}\left( {\varvec{\eta }}\right) }{\partial b}\right] \right\} \nonumber \\&\quad = -{{\,\mathrm{Re}\,}}\left\{ \rho e^{j(\phi -\gamma )}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )} \sum _{k=N_1}^{N_2}j2\pi f_c(kT_s-\tau )s(kT_s-\tau )e^{-j2\pi f_c b(kT_s-\tau )} \right\} \end{aligned}$$
(85)

Again, we apply the Nyquist–Shannon theorem for bandlimited signals

$$\begin{aligned}&\lim _{(N_1,N_2)\rightarrow (-\infty ,+\infty )} \sum _{k=N_1}^{N_2}{{\,\mathrm{Re}\,}}\left\{ j2\pi f_c(kT_s-\tau ) s(kT_s-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(kT_s-\tau )\right) } \right\} \nonumber \\&\quad =F_s{{\,\mathrm{Re}\,}}\left\{ \int _{-\infty }^{+\infty }j 2 \pi f_c(t-\tau )s(t-\tau )e^{j\left( \phi -\gamma -2\pi f_cb(t-\tau )\right) } \text {d}t \right\} \nonumber \\&\quad = F_s{{\,\mathrm{Re}\,}}\left\{ j 2 \pi f_ce^{j\left( \phi -\gamma \right) }\int _{-\infty }^{+\infty }ts(t)e^{-j2\pi f_c b t} \text {d}t \right\} = F_s{{\,\mathrm{Re}\,}}\left\{ j2\pi f_c e^{j\left( \phi -\gamma \right) }\frac{j}{2\pi } S^{(1)}(f_c b) \right\} \nonumber \\&\quad = F_s{{\,\mathrm{Re}\,}}\left\{ j2\pi f_c e^{j\left( \phi -\gamma \right) } T_s^2\sum _{k=N_1}^{N_2}ks(kT_s)e^{-j2\pi f_c b kT_s} \right\} \nonumber \\&\quad = -{{\,\mathrm{Im}\,}}\left\{ T_s 2\pi f_c \varvec{s}^T \varvec{D}\varvec{e}_{\phi -\gamma }(f_c b) \right\} \end{aligned}$$
(86)

where

$$\begin{aligned} S^{(1)}(f) = \frac{\text {d}S(f)}{\text {d}f}= & {} -j2\pi T_s^2 \sum _{k=N_1}^{N_2}ks(kT_s)e^{-j2\pi fkT_s}, \end{aligned}$$
(87)
$$\begin{aligned} \int _{-\infty }^{+\infty }ts(t)e^{-j2\pi f t} \text {d}t= & {} \frac{j}{2\pi } S^{(1)}(f) \end{aligned}$$
(88)
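The multiplication-by-\(t\) identity (88) admits the same kind of numerical check. The sketch below is illustrative, again assuming a Gaussian pulse \(s(t)=e^{-\pi t^2}\) with \(S(f)=e^{-\pi f^2}\) and hence \(S^{(1)}(f)=-2\pi f e^{-\pi f^2}\); the Riemann-sum helper is an assumption of this example, not part of the paper.

```python
import cmath
import math

def riemann_ft(g, f, T=8.0, n=4000):
    """Midpoint Riemann sum approximating the Fourier integral of g at frequency f."""
    dt = 2.0 * T / n
    return sum(g(-T + (i + 0.5) * dt)
               * cmath.exp(-2j * math.pi * f * (-T + (i + 0.5) * dt))
               for i in range(n)) * dt

# Gaussian pulse s(t) = exp(-pi t^2): S(f) = exp(-pi f^2), S'(f) = -2 pi f S(f)
s = lambda t: math.exp(-math.pi * t * t)
f0 = 0.4
lhs = riemann_ft(lambda t: t * s(t), f0)        # transform of t * s(t)
rhs = (1j / (2 * math.pi)) * (-2 * math.pi * f0) * math.exp(-math.pi * f0 * f0)
```
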

and \(\varvec{D} = \text {diag}\left( N_1,\ldots ,N_2\right)\). Then,

$$\begin{aligned} {{\,\mathrm{Re}\,}}\left\{ E_{{\varvec{\theta }}}\left[ \rho e^{j\phi }\tilde{{\varvec{I}}}^H({\varvec{\theta }})\frac{\partial {\varvec{\mu }}\left( {\varvec{\eta }}\right) }{\partial b}\right] \right\} = \rho \frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )} {{\,\mathrm{Im}\,}}\left\{ T_s 2\pi f_c \varvec{s}^T \varvec{D}\varvec{e}_{\phi -\gamma }(f_c b) \right\} \end{aligned}$$
(89)

and

$$\begin{aligned} {\varvec{F}}_{M}\left( A,b\right) = \frac{2\rho }{\sigma ^2}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )} {{\,\mathrm{Im}\,}}\left\{ T_s 2\pi f_c \varvec{s}^T \varvec{D}\varvec{e}_{\phi -\gamma }(f_c b) \right\} \end{aligned}$$
(90)

Therefore,

$$\begin{aligned}&{\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) ^T = \end{aligned}$$
(91)
$$\begin{aligned}&\frac{2}{\sigma ^2}\frac{{\mathcal {B}}_1(\kappa )}{{\mathcal {B}}_0(\kappa )}\begin{bmatrix} 0 ,&\rho {{\,\mathrm{Im}\,}}\left\{ T_s 2\pi f_c \varvec{s}^T \varvec{D}\varvec{e}_{\phi -\gamma }(f_c b) \right\} ,&{{\,\mathrm{Re}\,}}\left\{ \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} ,&-\rho {{\,\mathrm{Im}\,}}\left\{ \varvec{s}^T\varvec{e}_{\phi -\gamma }(f_c b) \right\} \end{bmatrix}. \end{aligned}$$
(92)

The MFIM is then

$$\begin{aligned} {\varvec{F}}_M({\varvec{\varepsilon }})=\begin{bmatrix} {\varvec{F}}({\varvec{\eta }},\rho ,\phi ) &{} {\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) ^T &{} 0 \\ {\varvec{F}}_{M}\left( A,\left[ {\varvec{\eta }}^T,\rho ,\phi \right] \right) &{} \frac{2N}{\sigma ^2} &{} 0 \\ 0&{}0&{}\frac{N}{\sigma ^4} \end{bmatrix} \end{aligned}$$
(93)

Appendix 2: Computation of the conditional distribution \({\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }}^{(t)},{\varvec{\varphi }}^{(t)}\)

Applying Bayes’ theorem, the following proportionality is obtained

$$\begin{aligned} p({\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }},{\varvec{\varphi }})\propto p({\varvec{x}}|{\varvec{\theta }},{\varvec{\varepsilon }})p({\varvec{\theta }}|{\varvec{\varphi }}) \propto e^{-\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})-A\tilde{{\varvec{I}}}\right\| ^2+\kappa \sum _{k=1}^N\cos {(\theta _k-\gamma )}}. \end{aligned}$$
(94)

Let us expand the quadratic term as:

$$\begin{aligned} \left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})-A\tilde{{\varvec{I}}}\right\| ^2 = \left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right) ^H\left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right) + NA^2 -2A{{\,\mathrm{Re}\,}}{\left( \tilde{{\varvec{I}}}^H\left( {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})\right) \right) }, \end{aligned}$$
(95)

where \(\tilde{{\varvec{I}}}^H\tilde{{\varvec{I}}}=N\). Hence, following the same procedure as in (26)

$$\begin{aligned} p({\varvec{\theta }}|{\varvec{x}},{\varvec{\varepsilon }},{\varvec{\varphi }})\propto&e^{-\frac{1}{\sigma ^2}\left\| {\varvec{x}}-\alpha {\varvec{\mu }}({\varvec{\eta }})-a\tilde{{\varvec{I}}}\right\| ^2+\kappa \sum _{k=1}^N\cos {(\theta _k-\gamma )}} \propto e^{\sum _{k=1}^N \frac{2a}{\sigma ^2}\delta _k\cos {\left( \theta _k-\beta _k\right) }+\kappa \cos {(\theta _k-\gamma )}}\nonumber \\ \propto&\prod _{k=1}^N e^{\frac{2a}{\sigma ^2}\delta _k\cos {\left( \theta _k-\beta _k\right) }+\kappa \cos {(\theta _k-\gamma )}} =\prod _{k=1}^N p(\theta _k|x_k,{\varvec{\varepsilon }},{\varvec{\varphi }}) \end{aligned}$$
(96)

with \(p(\theta _k|x_k,{\varvec{\varepsilon }},{\varvec{\varphi }})\propto e^{\frac{2a}{\sigma ^2}\delta _k\cos {\left( \theta _k-\beta _k\right) }+\kappa \cos {(\theta _k-\gamma )}}.\) Note that we can expand the exponential term as

$$\begin{aligned} \frac{2a}{\sigma ^2}\delta _k\cos {\left( \theta _k-\beta _k\right) }+\kappa \cos {(\theta _k-\gamma )}&=\cos {(\theta _k)}\left( \frac{2a\delta _k}{\sigma ^2}\cos {(\beta _k)}+\kappa \cos {(\gamma })\right) \nonumber \\&\quad +\sin {(\theta _k)}\left( \frac{2a\delta _k}{\sigma ^2}\sin {(\beta _k)}+\kappa \sin {(\gamma })\right) . \end{aligned}$$
(97)

Solving the following system of equations

$$\begin{aligned} \left\{ \begin{array}{l} \frac{2a\delta _k}{\sigma ^2}\cos {(\beta _k)}+\kappa \cos {(\gamma })= \kappa _k\cos {(\gamma _k)} \\ \frac{2a\delta _k}{\sigma ^2}\sin {(\beta _k)}+\kappa \sin {(\gamma })= \kappa _k\sin {(\gamma _k)} \end{array}\right. , \end{aligned}$$
(98)

for \(\kappa _k\) and \(\gamma _k\) leads to

$$\begin{aligned} \kappa _k&= \sqrt{\left( \frac{2a\delta _k}{\sigma ^2}\cos {(\beta _k)}+\kappa \cos {(\gamma })\right) ^2+\left( \frac{2a\delta _k}{\sigma ^2}\sin {(\beta _k)}+\kappa \sin {(\gamma })\right) ^2} \nonumber \\ {}&= \sqrt{\frac{4a^2\delta _k^2}{\sigma ^4}+\kappa ^2+\frac{4a\delta _k\kappa }{\sigma ^2}\cos {(\beta _k-\gamma )}} \end{aligned}$$
(99)
$$\begin{aligned} \gamma _k&= \text{ atan2 }\left( \frac{2a\delta _k}{\sigma ^2}\sin {(\beta _k)}+\kappa \sin {(\gamma }),\frac{2a\delta _k}{\sigma ^2}\cos {(\beta _k)}+\kappa \cos {(\gamma })\right) . \end{aligned}$$
(100)
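Equations (99)–(100) are simply the modulus and argument of a phasor sum: writing \(z = \frac{2a\delta _k}{\sigma ^2}e^{j\beta _k}+\kappa e^{j\gamma }\) gives \(\kappa _k=|z|\) and \(\gamma _k=\arg z\). The Python sketch below illustrates this, with `c1` and `c2` as stand-ins for \(2a\delta _k/\sigma ^2\) and \(\kappa \); the helper name is illustrative, not from the paper.

```python
import cmath
import math

def combine_cosines(c1, beta, c2, gamma):
    """Collapse c1*cos(th - beta) + c2*cos(th - gamma) into kappa_k*cos(th - gamma_k)
    via the phasor sum z = c1 e^{j beta} + c2 e^{j gamma}."""
    z = c1 * cmath.exp(1j * beta) + c2 * cmath.exp(1j * gamma)
    return abs(z), cmath.phase(z)  # kappa_k = |z|, gamma_k = atan2(Im z, Re z)

kappa_k, gamma_k = combine_cosines(1.3, 0.4, 0.8, -1.1)
```

With `c2 = 0` (a uniform prior, \(\kappa =0\)) the phasor sum reduces to `(c1, beta)`, consistent with the special case discussed below (102).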

Introducing the above equations in (97) leads to

$$\begin{aligned} \frac{2a}{\sigma ^2}\delta _k\cos {\left( \theta _k-\beta _k\right) }+\kappa \cos {(\theta _k-\gamma )} = \kappa _k\cos {(\theta _k-\gamma _k)} \end{aligned}$$
(101)

which provides the conditional distributions

$$\begin{aligned} p(\theta _k|x_k,{\varvec{\varepsilon }},{\varvec{\varphi }}) \propto e^{ \kappa _k\cos {(\theta _k-\gamma _k)}} \rightarrow \theta _k|x_k,{\varvec{\varepsilon }},{\varvec{\varphi }}\sim \mathcal{V}\mathcal{M}\left( \theta _k;\kappa _k,\gamma _k\right) . \end{aligned}$$
(102)

Note that when a uniform prior is used (\(\kappa =0\)) we have \(\kappa _k = \frac{2a}{\sigma ^2}\delta _k\) and \(\gamma _k = \beta _k\), which are exactly the results obtained in [20].

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Lesouple, J., Ortega, L. Bayesian EM approach for GNSS parameters of interest estimation under constant modulus interference. EURASIP J. Adv. Signal Process. 2024, 32 (2024). https://doi.org/10.1186/s13634-024-01129-z
