A gradient-adaptive lattice-based complex adaptive notch filter

Abstract

This paper presents a new complex adaptive notch filter for estimating and tracking the frequency of a complex sinusoidal signal. A gradient-adaptive lattice structure is adopted instead of the traditional gradient one to accelerate the convergence rate. Using the ordinary differential equation approach, the proposed algorithm is proved to yield unbiased estimates. Closed-form expressions for the steady-state mean square error and the upper bound of the step size are also derived. Simulations validate the theoretical analysis and demonstrate that the proposed method achieves considerably better convergence and tracking than existing methods, particularly in low signal-to-noise ratio environments.

1 Introduction

The adaptive notch filter (ANF) is an efficient frequency estimation and tracking technique that is utilised in a wide variety of applications, such as communication systems, biomedical engineering, and radar systems [1–12]. The complex ANF (CANF) has recently gained much attention [13–20]. A direct-form, pole-zero-constrained CANF was first developed in [13] with a modified Gauss-Newton algorithm. A recursive least squares (RLS)-based Steiglitz-McBride (RLS-SM) algorithm was later established to accelerate the convergence rate [14]. However, both algorithms are computationally complicated and can yield biased estimates.

To address this problem, numerous efficient and unbiased least mean square (LMS)-based algorithms have been developed, such as the complex plain gradient (CPG) [15], modified CPG (MCPG) [16], lattice-form CANF (LCANF) [17], and arctangent-based [18] algorithms. However, all these LMS-based algorithms converge more slowly than the RLS-based ones. Moreover, to ensure stability, the step size of an LMS-based method must be kept within a limited range that depends on the eigenvalues of the input signal's correlation matrix. These drawbacks limit the practical applications of LMS-based algorithms.

Several normalized LMS (NLMS)-based CANF algorithms have been established, including the normalized CPG (NCPG) algorithm [19] and the improved simplified lattice complex algorithm [20]. However, the former may become unstable in low signal-to-noise ratio (SNR) conditions, and the latter can only estimate positive instantaneous frequencies.

In this paper, we develop a new CANF based on the gradient-adaptive lattice algorithm [21]. Instead of the traditional gradient estimation filter, we propose a normalized lattice predictor that makes both forward and backward predictions. This scheme reduces the computational complexity and enhances the robustness to noise. Furthermore, the convergence rate improves significantly over conventional gradient-based and non-gradient-based methods without sacrificing tracking performance.

A classic ordinary differential equation (ODE) method is applied to confirm the unbiasedness of the proposed algorithm. In addition, theoretical analyses are conducted on the stable range of the step size and the steady-state mean square error (MSE) under different conditions. Computer simulations are conducted to confirm the validity of the theoretical analysis results and the effectiveness of the proposed algorithm.

The following notations are adopted throughout this paper. \(j\) denotes the square root of minus one. \(\ln[\cdot]\) denotes the principal branch of the complex natural logarithm function, and \(\mathrm{Im}\{\cdot\}\) means taking the imaginary part of a complex value. \(Z\{\cdot\}\) and \(E\{\cdot\}\) denote the z-transform operator and the statistical expectation operator, respectively. \(\delta(\cdot)\) represents the Dirac function. The asterisk (\(*\)) denotes complex conjugation, and \(\otimes\) is the convolution operator.

2 Filter structure and adaptive algorithm

We consider the following noisy complex sinusoidal input signal \(x(n)\) with amplitude \(A\), frequency \(\omega_0\), and initial phase \(\phi_0\):

$$ x(n) = A{e^{j({\omega_{0}}n + {\phi_{0}})}} + v(n), $$
(1)

where \(\phi_0\) is uniformly distributed over \([0, 2\pi)\) and \(v(n) = v_r(n) + jv_i(n)\) is assumed to be a zero-mean white complex Gaussian noise process, with \(v_r(n)\) and \(v_i(n)\) uncorrelated zero-mean real white noise processes of identical variance. The first-order, pole-zero-constrained CANF with the transfer function \(H(z) = \frac{{1 - {e^{j\theta }}{z^{- 1}}}}{{1 - \alpha {e^{j\theta }}{z^{- 1}}}}\) is widely used to estimate the frequency \(\omega_0\), where \(\theta\) is the notch frequency and \(\alpha\) is the pole-zero constraint factor that determines the notch filter's 3-dB attenuation bandwidth. Restricting \(\alpha\) to \(0 < \alpha < 1\) keeps the pole inside the unit circle.

We now propose a new structure to implement the complex notch filter. As shown in Fig. 1, the input signal \(x(n)\) is first processed by an all-pole prefilter \(H_p(z) = 1/D(z) = 1/(1 + a_0 z^{-1})\) to obtain \(s_0(n)\), where \(a_0\) is the coefficient of the all-pole filter. Then, a lattice predictor is employed to produce the forward and backward prediction errors \(s_1(n)\) and \(r_1(n)\), respectively. The transfer functions from \(s_0(n)\) to \(s_1(n)\) and \(r_1(n)\) are given by \(H_f(z) = N(z) = 1 + k_0 z^{-1}\) and \(H_b(z) = z^{-1}N^*(1/z^*) = k_0^* + z^{-1}\), where \(k_0\) is the reflection coefficient of the lattice filter. To obtain the desired pole-zero-constrained notch filter, the following relations must be satisfied:

$$ {k_{0}} = - {e^{j\theta }}, $$
(2)
Fig. 1 Structure of a first-order complex notch filter

$$ {a_{0}} = \alpha {k_{0}}. $$
(3)

Thus, \(\theta\) can be computed as \(\theta = \mathrm{Im}\{\ln[-k_0]\}\).

At this point, a normalized stochastic gradient algorithm is derived to update the reflection coefficient k 0. We consider the following cost function:

$$ {J_{fb}} = \frac{1}{2}E\left[{\left| {{s_{1}}(n)} \right|^{2}} + {\left| {{r_{1}}(n)} \right|^{2}}\right]. $$
(4)

We replace the cost function \(J_{fb}\) with its instantaneous estimate, i.e.,

$$ {\hat{J}_{fb}} = \frac{1}{2}\left({\left| {{s_{1}}(n)} \right|^{2}} + {\left| {{r_{1}}(n)} \right|^{2}}\right). $$
(5)

By taking the derivative of \({\hat J_{fb}}\) with respect to θ(n), we obtain

$$ \nabla {\hat{J}_{fb}} = \frac{{d{{\hat{J}}_{fb}}(n)}}{{d{k_{0}}(n)}}\frac{{d{k_{0}}(n)}}{{d\theta (n)}} = - {\text{Im}} \{ {s_{1}}(n){s_{0}}^{*}(n)\}. $$
(6)

Considering that θ(n) is real, the adaptation equation can be written as

$$ \theta (n + 1) = \theta (n) + \mu \cdot {\text{Im}} \{{s_{1}}(n){s_{0}}^{*}(n)\} /\xi (n), $$
(7)

where \(\mu\) is the step size and the normalization term \(\xi(n)\) is calculated recursively as

$$ \xi (n) = \rho \xi (n - 1) + (1 - \rho){s_{0}}^{*}(n){s_{0}}(n), $$
(8)

where ρ denotes the smoothing factor.
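To make the recursion concrete, the following minimal Python sketch (our own illustration, not code from the paper) runs the complete per-sample update: the all-pole prefilter \(1/D(z)\), the forward prediction error of the lattice, and the normalized gradient step of Eqs. 7 and 8. The backward error \(r_1(n)\) is not needed here because the gradient in Eq. 6 reduces to \(-\mathrm{Im}\{s_1(n)s_0^*(n)\}\); the initial values of \(\theta\) and \(\xi\) are our own assumptions.

```python
import numpy as np

def lattice_canf(x, mu=0.1, alpha=0.9, rho=0.8, theta0=0.0):
    """Per-sample sketch of the gradient-adaptive lattice CANF (Eqs. 2-3, 7-8)."""
    theta = theta0
    xi = 1e-6                         # small positive start for the normalizer
    s0_prev = 0.0 + 0.0j
    theta_track = np.empty(len(x))
    for i, xn in enumerate(x):
        k0 = -np.exp(1j * theta)      # Eq. 2: reflection coefficient
        a0 = alpha * k0               # Eq. 3: prefilter coefficient
        s0 = xn - a0 * s0_prev        # prefilter: s0(n) = x(n) - a0 s0(n-1)
        s1 = s0 + k0 * s0_prev        # forward prediction error, N(z)
        xi = rho * xi + (1 - rho) * abs(s0) ** 2      # Eq. 8
        theta += mu * (s1 * np.conj(s0)).imag / xi    # Eq. 7
        s0_prev = s0
        theta_track[i] = theta
    return theta_track
```

With a noisy unit-amplitude tone at \(\omega_0 = 0.4\pi\) as input, theta_track should settle near \(0.4\pi\), in line with the convergence analysis of Section 3.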

Table 1 shows the computational complexities of the proposed algorithm and of four conventional methods [14, 16, 17, 19]. Note that the complexity of the proposed algorithm is comparable to that of LMS-based methods and lower than that of NLMS-based and RLS-based algorithms.

Table 1 Complexities of the proposed algorithm and of four conventional algorithms

3 Convergence analysis

We now use the ODE approach to analyse the convergence properties of the adaptive algorithm; this approach has been applied to analyse several other ANF algorithms [17, 22]. Assuming that the adaptation is sufficiently slow and the input signal is stationary, the associated ODEs for the proposed adaptive algorithm can be expressed as

$$ \frac{d}{{d\tau }}\theta (\tau) = {\xi^{- 1}}(\tau)f(\theta (\tau)), $$
(9)
$$ \frac{d}{{d\tau }}\xi (\tau) = G(\theta (\tau)) - \xi (\tau), $$
(10)

where \(G(\theta(\tau)) = E\{{s_{0}}^{*}(n){s_{0}}(n)\}\) and

$$\begin{array}{@{}rcl@{}} f(\theta (\tau)) &=& {\text{Im}} \{ E[{s_{0}}^{*}(n){s_{1}}(n)]\} \\ &=& {\text{Im}} \left\{\frac{1}{{2\pi }}\int_{- \pi }^{\pi} {\frac{{{S_{x}}(\omega)N({e^{j\omega }})}}{{{D^ * }({e^{j\omega }})D({e^{j\omega }})}}} d\omega \right\} \; \\ &=& {\text{Im}} \left\{\frac{1}{{2\pi}}\int_{- \pi }^{\pi} {\frac{{{\sigma_{v}}^{2}N({e^{j\omega }})}}{{{D^ * }({e^{j\omega }})D({e^{j\omega }})}}} d\omega \right\} \\ & & + {\text{Im}} \left\{ \frac{{{A^{2}}N({e^{j{\omega_{0}}}})}}{{{{\left| {D({e^{j{\omega_{0}}}})} \right|}^{2}}}}\right\} \\ &=& {\text{Im}} \left\{ \frac{{{\sigma_{v}}^{2}}}{{1 + \alpha }}\right\} + \frac{{{A^{2}}\sin ({\omega_{0}} - \theta)}}{{{{\left| {D({e^{j{\omega_{0}}}})} \right|}^{2}}}} \\ &=& \frac{{{A^{2}}\sin ({\omega_{0}} - \theta)}}{{{{\left| {D({e^{j{\omega_{0}}}})} \right|}^{2}}}}. \end{array} $$
(11)

Here, \(S_x(\omega)\) is the power spectral density (PSD) of \(x(n)\): \(S_x(\omega) = 2\pi A^2\delta(\omega - \omega_0) + \sigma_v^2\) [17], and the transfer functions \(N(e^{j\omega})\) and \(1/D(e^{j\omega})\) are as defined in the previous section with \(z\) replaced by \(e^{j\omega}\). Since Eq. 9 is the associated ordinary differential equation of the proposed adaptive algorithm, according to [23], \(\theta(n)\) converges to a stationary point of Eq. 9, and this stationary point must satisfy \(\frac{d}{{d\tau}}\theta(\tau) = 0\). Since \(\xi(\tau)\) is always positive, the stationary point is a solution of \(f(\theta(\tau)) = 0\). Based on Eq. 11, \(\theta = \omega_0\) is the sole stationary point over one period of the function. To confirm that this stationary point is stable, we choose the Lyapunov function \(L(\tau) = [\omega_0 - \theta(\tau)]^2\), which satisfies \(L(\tau) \ge 0\) for all \(\tau\). Meanwhile,

$$\begin{array}{@{}rcl@{}} \frac{{dL(\tau)}}{{d\tau}} &=& \frac{{dL}}{{d\theta }}\frac{{d\theta}}{{d\tau }} \\ &=& \frac{{ - 2{A^{2}}\sin ({\omega_{0}} - \theta (\tau))[{\omega_{0}} - \theta (\tau)]}}{{{{\left| {D({e^{j{\omega_{0}}}})} \right|}^{2}}\xi (\tau)}} \\ &<& 0 \end{array} $$
(12)

holds for all \(\theta(\tau) \ne \omega_0\) with \(|\omega_0 - \theta(\tau)| < \pi\). This implies that \(L(\tau)\) is a decreasing function of \(\tau\) in that region; thus, \(\theta(n)\) always converges to the expected frequency \(\omega_0\) [23].
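As a quick numerical sanity check of this argument (our own, not part of the paper), one can integrate the associated ODEs of Eqs. 9 and 10 directly; the parameter values below are arbitrary assumptions.

```python
import numpy as np

# Euler integration of Eqs. 9-11, with G(theta) taken from Eq. 16;
# theta(tau) should settle at omega_0 regardless of the noise level.
A, sigma2, alpha, omega0 = 1.0, 0.1, 0.9, 0.2 * np.pi
theta, xi, dtau = 0.0, 1.0, 1e-3
for _ in range(100_000):
    D2 = abs(1 - alpha * np.exp(1j * (theta - omega0))) ** 2  # |D(e^{j w0})|^2
    f = A**2 * np.sin(omega0 - theta) / D2                    # Eq. 11
    G = A**2 / D2 + sigma2 / (1 - alpha**2)                   # E{|s0(n)|^2}
    theta += dtau * f / xi                                    # Eq. 9
    xi += dtau * (G - xi)                                     # Eq. 10
print(theta / np.pi)  # -> ~0.2, i.e., theta(tau) has converged to omega_0
```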

We now compute the upper bound of the step size \(\mu\). Taking the expectation of both sides of Eq. 7, we obtain

$$ \bar{\theta} (n + 1) - \bar{\theta} (n) = \mu {\text{Im}} \left\{ E\left[\frac{{{s_{0}}^{*}(n){s_{1}}(n)}}{{\xi (n)}}\right]\right\}, $$
(13)

where \(\bar {\theta } (n) = E\{\theta (n)\} \). Expanding Eq. 8 yields

$$ \xi (n) = (1 - \rho)\sum\limits_{m = 0}^{n} {{\rho^{m}}} {s_{0}}(n - m)s_{0}^{*}(n - m). $$
(14)

Taking ensemble expectations on both sides and assuming that s 0(n) is wide-sense stationary, we have

$$\begin{array}{@{}rcl@{}} \mathop{\lim}\limits_{n \to \infty} E[\xi (n)] &=& \mathop{\lim }\limits_{n \to \infty} \left[(1 - \rho)\sum\limits_{m = 0}^{n} {{\rho^{m}}} {r_{{S_{0}}}}(0)\right] \\ &=& {r_{{S_{0}}}}(0), \end{array} $$
(15)

where

$$\begin{array}{@{}rcl@{}} {r_{{S_{0}}}}(0) &=& \frac{1}{{2\pi }}{\int_{- \pi }^{\pi} {\left| {\frac{1}{{1 + {a_{0}}{e^{- j\omega }}}}} \right|}^{2}}{S_{x}}(\omega)d\omega \\ &=& \frac{{{A^{2}}}}{{{{\left| {1 + {a_{0}}{e^{- j{\omega_{0}}}}} \right|}^{2}}}} + \frac{{{\sigma_{v}}^{2}}}{{1 - {\alpha^{2}}}}. \end{array} $$
(16)

At each step, following [24], we write

$$ \xi (n) = {r_{{S_{0}}}}(0) + \Delta \xi (n), $$
(17)

where Δ ξ(n) is the zero-mean stochastic error sequence that is independent of the input signal. By applying Eq. 17 and disregarding the second-order error, we obtain

$$ \frac{1}{{\xi (n)}} \approx r_{{S_{0}}}^{- 1}(0) - r_{{S_{0}}}^{- 2}(0)\Delta \xi (n). $$
(18)

By substituting Eqs. 11, 16, and 18 into Eq. 13, we get

$$\begin{array}{@{}rcl@{}} \bar{\theta} (n + 1) - \bar{\theta} (n) &=& \mu {\text{Im}} \left\{E\left[{s_{0}}^{*}(n){s_{1}}(n)\left(r_{{S_{0}}}^{- 1}(0)\right.\right.\right.\\ && -\left.\left.\left. r_{{S_{0}}}^{- 2}(0)\Delta \xi (n)\right)\right]\right\}\\ &\approx& \mu {\text{Im}} \left\{ E[{s_{0}}^{*}(n){s_{1}}(n)r_{{S_{0}}}^{- 1}(0)]\right\}\\ &=& \frac{{\frac{{\mu {A^{2}}\sin ({\omega_{0}} - \bar{\theta} (n))}}{{{{\left| {1 - \alpha {e^{j(\bar{\theta} (n) - {\omega_{0}})}}} \right|}^{2}}}}}}{{\frac{{{A^{2}}}}{{{{\left| {1 - \alpha {e^{j(\bar{\theta} (n) - {\omega_{0}})}}} \right|}^{2}}}} + \frac{{{\sigma_{v}}^{2}}}{{1 - {\alpha^{2}}}}}}. \end{array} $$

Considering the approximations \(\frac{{{\sin}(\bar{\theta} - {\omega_{0}})}}{{{{\left| {1 - \alpha {e^{j(\bar{\theta} - {\omega_{0}})}}} \right|}^{2}}}} \approx \frac{{\bar{\theta} - {\omega_{0}}}}{{{{(1 - \alpha)}^{2}}}}\) and \({\sin}(\bar{\theta} - {\omega_{0}})/(\bar{\theta} - {\omega_{0}}) \approx 1\) (for small \(|\bar{\theta} - {\omega_{0}}|\)) [17], we have

$$ {\omega_{0}} - \bar{\theta} (n + 1) = \left(1 - \frac{{\mu {A^{2}}}}{{{A^{2}} + \frac{{1 - \alpha }}{{1 + \alpha }}{\sigma_{v}}^{2}}}\right) [{\omega_{0}} - \bar{\theta} (n)]. $$
(19)

To satisfy \(\left | {{\omega _{0}} - \bar {\theta } (n + 1)} \right | < \left | {{\omega _{0}} - \bar {\theta } (n)} \right |\), the step-size μ should satisfy:

$$ 0 < \mu < 2(1 + \frac{{1 - \alpha }}{{1 + \alpha }}\frac{{{\sigma_{v}}^{2}}}{{{A^{2}}}}). $$
(20)

Furthermore, when \(\text{SNR} \to \infty\) or \(\alpha \to 1\), we have \(\mu \in (0, 2]\), which is independent of the input.
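For instance, the bound of Eq. 20 is easy to evaluate numerically; this small helper (our own illustration) takes the SNR in decibels, with \(\text{SNR} = A^2/\sigma_v^2\).

```python
def mu_upper_bound(alpha: float, snr_db: float) -> float:
    """Upper bound on the step size from Eq. 20."""
    inv_snr = 10.0 ** (-snr_db / 10.0)   # sigma_v^2 / A^2
    return 2.0 * (1.0 + (1.0 - alpha) / (1.0 + alpha) * inv_snr)

print(mu_upper_bound(0.9, 10.0))   # ~2.01: already close to 2 at 10 dB
print(mu_upper_bound(0.9, -10.0))  # ~3.05: the stable range widens at low SNR
```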

4 Steady-state MSE analysis

In this section, a PSD-based method [19, 25] is exploited to derive accurate expressions for the steady-state MSE of the estimated frequency. As discussed in the previous section, the estimated frequency converges to an unbiased value, i.e., \(\mathop{\lim}\limits_{n \to \infty} \theta(n) = \omega_0\). Defining \(\Delta\theta(n) = \theta(n) - \omega_0\), we obtain the following two approximations: \(\mathop{\lim}\limits_{n \to \infty} \sin(\Delta\theta(n)) \approx \Delta\theta(n)\) and \(\mathop{\lim}\limits_{n \to \infty} \cos(\Delta\theta(n)) \approx 1\). Then, the steady-state transfer functions from \(x(n)\) to \(s_1(n)\) and \(s_0(n)\), evaluated at \(\omega_0\), can be written as:

$$ {H_{{s_{1}}}}({e^{j{\omega_{0}}}}) = \frac{{1 - {e^{j\Delta \theta (n)}}}}{{1 - \alpha {e^{j\Delta \theta (n)}}}} \approx \frac{{ - j\Delta \theta (n)}}{{1 - \alpha }}, $$
(21)
$$ {H_{{s_{0}}}}({e^{j{\omega_{0}}}}) = \frac{1}{{1 - \alpha {e^{j\Delta \theta (n)}}}} \approx \frac{1}{{1 - \alpha }}. $$
(22)

The input signal x(n) in Eq. 1 is assumed to be composed of a single frequency part and Gaussian white noise. Thus, the steady-state outputs s 1(n) and s 0(n) can be expressed as:

$$ {s_{1}}(n) = {s_{{s_{1}}}}(n) + {n_{{s_{1}}}}(n), $$
(23)
$$ {s_{0}}(n) = {s_{{s_{0}}}}(n) + {n_{{s_{0}}}}(n), $$
(24)

where \({n_{{s_{1}}}}(n)\) and \({n_{{s_{0}}}}(n)\) are the complex Gaussian parts of s 1(n) and s 0(n), respectively. By using Eqs. 21 and 22, we obtain

$$ {s_{{s_{1}}}}(n) \approx \frac{{A\Delta \theta (n){e^{j({\omega_{0}}n + {\phi_{0}} - \pi /2)}}}}{{1 - \alpha}}, $$
(25)
$$ {s_{{s_{0}}}}(n) \approx \frac{{A{e^{j({\omega_{0}}n + {\phi_{0}})}}}}{{1 - \alpha }}. $$
(26)

By substituting Eqs. 23 and 24 into Eq. 7, the adaptive update equation can be rewritten as

$$ \theta (n + 1) = \theta (n) + \bar{\mu} \cdot \sum\limits_{i = 1}^{4} {{u_{i}}(n)}, $$
(27)

where

$$ \bar \mu = \frac{\mu }{{\xi (n)}} \approx \frac{\mu }{{{A^{2}}/{{(1 - \alpha)}^{2}} + {\sigma_{v}}^{2}/(1 - {\alpha^{2}})}}, $$
(28)
$$ {u_{1}}(n) = {\text{Im}} \{ {s_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n)\}, $$
(29)
$$ {u_{2}}(n) = {\text{Im}} \{ {n_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n)\}, $$
(30)
$$ {u_{3}}(n) = {\text{Im}} \{{s_{{s_{0}}}}^{*}(n){s_{{s_{1}}}}(n)\}, $$
(31)

and

$$ {u_{4}}(n) = {\text{Im}} \{ {n_{{s_{0}}}}^{*}(n){s_{{s_{1}}}}(n)\}. $$
(32)

Substituting Eqs. 25 and 26 into Eq. 31 yields

$$ {u_{3}}(n) \approx - \frac{{{A^{2}}\Delta \theta (n)}}{{{{\left({1 - \alpha} \right)}^{2}}}}. $$
(33)

Meanwhile, Eq. 32 can be rearranged as

$$ {u_{4}}(n) \approx {\text{Im}} ({n_{{s_{0}}}}^{*}(n)\frac{{ - jA\Delta \theta (n){e^{j({\omega_{0}}n + {\phi_{0}})}}}}{{1 - \alpha }}). $$
(34)

Then,

$$\begin{array}{@{}rcl@{}} \left| {\frac{{{u_{3}}(n)}}{{{u_{4}}(n)}}} \right| &\approx& \frac{A}{{(1 - \alpha)\left| {{\text{Im}} (- j{n_{{s_{0}}}}^{*}(n){e^{j({\omega_{0}}n + {\phi_{0}})}})} \right|}}\\ &\ge& \frac{A}{{(1 - \alpha)\left| {{n_{{s_{0}}}}^{*}(n)} \right|}} \end{array} $$
(35)

Assuming that \(\alpha\) is close to unity or the SNR is sufficiently large, it holds that \(\left| {\frac{{{u_{3}}(n)}}{{{u_{4}}(n)}}} \right| \ge \frac{A}{{(1 - \alpha)\left| {{n_{{s_{0}}}}^{*}(n)} \right|}} \gg 1\). Thus, \(u_4(n)\) in Eq. 27 can be neglected.

Therefore, by subtracting \(\omega_0\) from both sides of Eq. 27 and defining \(u(n) = u_1(n) + u_2(n)\) and \(\beta = 1 - \bar{\mu}A^2/(1 - \alpha)^2\), we obtain

$$ \Delta \theta (n + 1) = \beta \Delta \theta (n) + \bar{\mu} u(n). $$
(36)

With Eq. 36, the transfer function from \(u(n)\) to \(\Delta\theta(n)\) is written as:

$$ {H_{u\Delta \theta }}(z) = \frac{{\bar{\mu} {z^{- 1}}}}{{1 - \beta {z^{- 1}}}}. $$
(37)

Hence, the MSE of the estimated frequency can be expressed as [26]:

$$\begin{array}{@{}rcl@{}} &&E\left\{\Delta \theta {(n)^{2}}\right\} = {r_{\Delta \theta }}(0) \\ &=& \frac{{\oint {{H_{u\Delta \theta }}(z)} {H_{u\Delta \theta }}^{*}\left(\frac{1}{{{z^ * }}}\right){R_{u}}(z){z^{- 1}}dz}}{{2\pi j}}, \end{array} $$
(38)

where R u (z) denotes the z-transform of r u (l), which is the autocorrelation sequence of u(n) and can be calculated as:

$$\begin{array}{@{}rcl@{}} {r_{u}}(l)&=&E\{ u(k + l){u^{*}}(k)\} \\ &=&{r_{{u_{1}}}}(l) + {r_{{u_{2}}}}(l) + 2{r_{{u_{1}}{u_{2}}}}(l), \end{array} $$
(39)

where

$$ {r_{{u_{1}}}}(l) = E[{u_{1}}(n + l){u_{1}}(n)], $$
(40)
$$ {r_{{u_{2}}}}(l) = E[{u_{2}}(n + l){u_{2}}(n)], $$
(41)

and

$$ {r_{{u_{1}}{u_{2}}}}(l) = E[{u_{1}}(n + l){u_{2}}(n)]. $$
(42)

Thus R u (z) in Eq. 38 can be divided into three parts:

$$\begin{array}{@{}rcl@{}} {R_{u}}(z) &=& Z\{ {r_{u}}(l)\} \\ &=& {R_{{u_{1}}}}(z) + {R_{{u_{2}}}}(z) + 2{R_{{u_{1}}{u_{2}}}}(z), \end{array} $$
(43)

where \({R_{{u_{1}}}}(z)\), \({R_{{u_{2}}}}(z)\), and \({R_{{u_{1}}{u_{2}}}}(z)\) denote the z-transforms of \({r_{{u_{1}}}}(l)\), \({r_{{u_{2}}}}(l)\), and \({r_{{u_{1}}{u_{2}}}}(l)\), respectively; they are calculated in what follows.

To get \({r_{{u_{1}}}}(l)\), we transform Eq. 29 as:

$$ {u_{1}}(n) = \frac{{{s_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n) - {s_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)}}{{2j}}, $$
(44)

and then Eq. 40 can be rearranged as:

$$ {r_{{u_{1}}}}(l) = E[{u_{1}}(n + l){u_{1}}(n)] = - \frac{1}{4}\sum\limits_{i = 1}^{4} {{p_{i}}(l)}, $$
(45)

where

$$ {p_{1}}(l) = E\left\{{s_{{s_{0}}}}^{*}(n + l){n_{{s_{1}}}}(n + l){s_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n)\right\}, $$
(46)
$$ {p_{2}}(l) = E\left\{ {s_{{s_{0}}}}(n + l){n_{{s_{1}}}}^{*}(n + l){s_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)\right\}, $$
(47)
$$\begin{array}{@{}rcl@{}} {p_{3}}(l) = &-& E\left\{{s_{{s_{0}}}}^{*}(n + l){n_{{s_{1}}}}(n + l)\right. \\ &&\left.\times {s_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)\right\}, \end{array} $$
(48)
$$\begin{array}{@{}rcl@{}} {p_{4}}(l) = &-& E\left\{ {s_{{s_{0}}}}(n + l){n_{{s_{1}}}}^{*}(n + l)\right. \\ &&\left.\times {s_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n)\right\}. \end{array} $$
(49)

By using the results in Appendix A and considering that \({s_{{s_{0}}}}(n)\) and \({n_{{s_{1}}}}(n)\) are uncorrelated, we can rewrite Eqs. 46, 47, 48, and 49 as

$$ {p_{1}}(l) = {\zeta_{{s_{{s_{0}}}}^{*}{s_{{s_{0}}}}^{*}}}(l){\zeta_{{n_{{s_{1}}}}{n_{{s_{1}}}}}}(l) = 0, $$
(50)
$$ {p_{2}}(l) = {\zeta_{{s_{{s_{0}}}}{s_{{s_{0}}}}}}(l){\zeta_{{n_{{s_{1}}}}^{*}{n_{{s_{1}}}}^{*}}}(l) = 0, $$
(51)
$$ {p_{3}}(l) = - {r_{{s_{{s_{0}}}}}}(l){r_{{n_{{s_{1}}}}}}(- l), $$
(52)
$$ {p_{4}}(l) = - {r_{{s_{{s_{0}}}}}}(- l){r_{{n_{{s_{1}}}}}}(l), $$
(53)

where

$$ {r_{{s_{{s_{0}}}}}}(l) = E\{ {s_{{s_{0}}}}(n){s_{{s_{0}}}}^{*}(n - l)\}, $$
(54)
$$ {r_{{n_{{s_{1}}}}}}(l) = E\{ {n_{{s_{1}}}}(n){n_{{s_{1}}}}^{*}(n - l)\}. $$
(55)

Substituting Eqs. 50, 51, 52, and 53 into Eq. 45, we get

$$ {r_{{u_{1}}}}(l) = \frac{1}{4}\left[{r_{{s_{{s_{0}}}}}}(l){r_{{n_{{s_{1}}}}}}(- l) + {r_{{s_{{s_{0}}}}}}(- l){r_{{n_{{s_{1}}}}}}(l)\right]. $$
(56)

Considering Eq. 26, \({r_{{s_{{s_{0}}}}}}(l)\) in Eq. 56 can be written as:

$$ {r_{{s_{{s_{0}}}}}}(l) = E\{ {s_{{s_{0}}}}(n){s_{{s_{0}}}}^{*}(n - l)\} = \frac{{{A^{2}}{e^{j{\omega_{0}}l}}}}{{{{(1 - \alpha)}^{2}}}}. $$
(57)

Substituting Eq. 57 into Eq. 56 yields

$$ {r_{{u_{1}}}}(l) = \frac{{{A^{2}}\left[{r_{{n_{{s_{1}}}}}}(- l){e^{j{\omega_{0}}l}} + {r_{{n_{{s_{1}}}}}}(l){e^{- j{\omega_{0}}l}}\right]}}{{4{{(1 - \alpha)}^{2}}}}. $$
(58)

The z-transform of both sides of Eq. 58 can be expressed as:

$$ {R_{{u_{1}}}}(z) = \frac{{{A^{2}}\left[{R_{{n_{{s_{1}}}}}}\left({z^{- 1}}{e^{j{\omega_{0}}}}\right) + {R_{{n_{{s_{1}}}}}}\left(z{e^{j{\omega_{0}}}}\right)\right]}}{{4{{\left(1 - \alpha\right)}^{2}}}}. $$
(59)

Note that \({R_{{n_{{s_{1}}}}}}(z)\) can be expanded as [26]:

$$ {R_{{n_{{s_{1}}}}}}(z) = {H_{{s_{1}}}}(z){H_{{s_{1}}}}^{*}(1/{z^{*}}){R_{n}}(z), $$
(60)

where \({R_{n}}(z) = {\sigma_{v}^{2}}\) is the z-transform of the autocorrelation of \(v(n)\) and \({H_{{s_{1}}}}(z) = \frac{{1 + {k_{0}}{z^{- 1}}}}{{1 + \alpha {k_{0}}{z^{- 1}}}}\). Utilizing the Taylor series expansion \({e^{j\Delta\theta}} = 1 + j\Delta\theta + O(\Delta\theta^{2})\), we obtain

$$ {R_{{u_{1}}}}(z) \approx \frac{{{A^{2}}{\sigma_{v}^{2}}(z - 1)(1 - z)}}{{2{{(1 - \alpha)}^{2}}(z - \alpha)(1 - \alpha z)}} $$
(61)

Using a method similar to that used in deriving \({R_{{u_{1}}}}(z)\), we obtain the following results (see Appendix B for details):

$$ {R_{{u_{2}}}}(z) = \frac{{{\sigma_{v}^{4}}}}{{2(1 - {\alpha^{2}})}}, $$
(62)

and

$$ {R_{{u_{1}}{u_{2}}}}(z) = 0. $$
(63)

Substituting Eqs. 61, 62, and 63 into Eq. 38, finally we get

$$ E\left\{ \Delta \theta {(n)^{2}}\right\} = \frac{{{{\bar{\mu}}^{2}}\left[\frac{{{A^{2}}{\sigma_{v}^{2}}}}{{1 - \alpha \beta }} + \frac{{{\sigma_{v}^{4}}(1 - \alpha)}}{{2(1 - \beta)}}\right]}}{{(1 + \beta)(1 + \alpha){{(1 - \alpha)}^{2}}}}. $$
(64)

Equation 64 indicates that the estimated MSE is independent of the input frequency \(\omega_0\) and the smoothing factor \(\rho\).
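For reference, Eq. 64 is straightforward to evaluate numerically; the sketch below (our own, with \(\bar{\mu}\) and \(\beta\) taken from Eqs. 28 and 36) returns the theoretical steady-state MSE for a given parameter set.

```python
def theoretical_mse(mu: float, alpha: float, snr_db: float, A: float = 1.0) -> float:
    """Theoretical steady-state MSE of the frequency estimate (Eq. 64)."""
    sigma2 = A**2 * 10.0 ** (-snr_db / 10.0)
    xi_ss = A**2 / (1 - alpha) ** 2 + sigma2 / (1 - alpha**2)  # Eq. 28
    mu_bar = mu / xi_ss
    beta = 1 - mu_bar * A**2 / (1 - alpha) ** 2                # Eq. 36
    num = mu_bar**2 * (A**2 * sigma2 / (1 - alpha * beta)
                       + sigma2**2 * (1 - alpha) / (2 * (1 - beta)))
    return num / ((1 + beta) * (1 + alpha) * (1 - alpha) ** 2)

print(theoretical_mse(mu=0.8, alpha=0.9, snr_db=10.0))
```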

5 Simulation results

Computer simulations are conducted to confirm the effectiveness of the proposed algorithm and the validity of the theoretical analysis results.

5.1 Performance comparisons

In the following two simulations, the proposed algorithm is compared with four conventional algorithms [14, 16, 17, 19] under two different kinds of inputs, namely a fixed-frequency input and a quadratic chirp input. The input signal takes the form \(\phantom{\dot{i}\!}x(n) = {e^{j(\varphi (n) + {\theta_{0}})}} + v(n)\), where \(\varphi(n)\) is the instantaneous phase. The parameters are adjusted to establish an equal steady-state MSE and an equal notch bandwidth for all the algorithms. The initial notch frequency is set to zero for all the methods.
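For reproducibility, a noisy test input of this form can be generated as follows (our own sketch; it assumes unit signal amplitude, so \(\sigma_v^2 = 10^{-\text{SNR}/10}\), and the initial phase \(\theta_0\) can be folded into the phase array).

```python
import numpy as np

def make_input(phase, snr_db, seed=0):
    """x(n) = e^{j phi(n)} + v(n) with circular complex white Gaussian noise."""
    rng = np.random.default_rng(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)    # noise power for A = 1
    v = np.sqrt(sigma2 / 2) * (rng.standard_normal(len(phase))
                               + 1j * rng.standard_normal(len(phase)))
    return np.exp(1j * np.asarray(phase)) + v
```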

Figure 2 presents the MSE curves of the five algorithms for a fixed frequency \(\varphi(n) = 0.4\pi n\) at SNR = 10 and 0 dB, respectively. Note that the proposed algorithm outperforms the other four algorithms. The NCPG algorithm achieves a convergence rate similar to that of the proposed algorithm at SNR = 10 dB but diverges at SNR = 0 dB. This indicates that the proposed algorithm is robust even in very low SNR conditions.

Fig. 2 Comparison of the convergence rates of the estimated MSE under two different SNRs (α = 0.9, ρ = 0.8, and 1000 runs): a SNR = 10 dB and b SNR = 0 dB

Figure 3 presents the tracking behaviour of the five algorithms for a quadratic chirp input signal: \(\varphi(n) = A_c(\phi_1 n + \phi_2 n^2 + \phi_3 n^3)\), where \(\phi_1 = -\pi/4\), \(\phi_2 = \pi/2 \times 10^{-3}\), and \(\phi_3 = -\pi/6 \times 10^{-6}\). The parameter \(A_c\) controls the chirp rate. In this case, the true instantaneous frequency is \(d\varphi(n)/dn = A_c(\phi_1 + 2\phi_2 n + 3\phi_3 n^2)\). Figure 3a depicts the tracking MSE obtained when \(A_c = 1\), and Fig. 3b presents the MSE with an increased chirp rate, \(A_c = 2\). The results imply that in the non-stationary case, the proposed method achieves a faster convergence speed than the other four algorithms. As for tracking, the RLS-SM method and the proposed method maintain a smaller MSE than the other three methods, especially in the high-chirp-rate region. We examined the individual learning curves of the NCPG algorithm and found that it even diverges in some runs.
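The quadratic chirp of Fig. 3 can be reproduced with the make_input helper above (a sketch; the number of samples N is our assumption, as the paper does not state it).

```python
import numpy as np

N, Ac = 4000, 1.0                       # N is assumed; Ac = 1 or 2 as in Fig. 3
n = np.arange(N)
phi1, phi2, phi3 = -np.pi / 4, np.pi / 2 * 1e-3, -np.pi / 6 * 1e-6
phase = Ac * (phi1 * n + phi2 * n**2 + phi3 * n**3)
true_freq = Ac * (phi1 + 2 * phi2 * n + 3 * phi3 * n**2)   # d(phase)/dn
x = make_input(phase, snr_db=0.0)       # SNR = 0 dB as in the figure
```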

Fig. 3 Comparison of the tracking behaviours for a quadratic chirp input under two different chirp rates (α = 0.9, SNR = 0 dB, and 1000 runs): a MSEs when \(A_c = 1\) and b MSEs when \(A_c = 2\)

5.2 Simulations of steady-state estimation MSE

In the following four simulations, the simulated steady-state MSE of the proposed algorithm is compared with the theoretical result in Eq. 64 for different values of the input frequency \(\omega_0\), SNR, pole radius \(\alpha\), and step size \(\mu\). The simulation results are obtained by averaging over 500 trials.

Figure 4 compares the theoretical and simulated steady-state MSEs versus the signal frequency \(\omega_0\) under two different SNRs (60 and 10 dB). The curves show that the theoretical MSEs predict the simulated MSEs precisely and that the steady-state MSEs are independent of the input frequency \(\omega_0\). We also see that a lower SNR leads to a larger MSE, consistent with Eq. 64.

Fig. 4 Comparison of the theoretical and simulated steady-state MSEs versus signal frequency ω0 at SNR = 60 dB and 10 dB (α = 0.9 and μ = 0.8)

Figure 5 compares the theoretical and simulated steady-state MSEs versus SNR under two different parameter settings: (1) \(\alpha = 0.9, \mu = 0.8\) and (2) \(\alpha = 0.98, \mu = 0.1\). The proposed approach predicts the MSEs well, although some discrepancies are observed with \(\alpha = 0.9, \mu = 0.8\) because the CANF can hardly converge when the SNR is very low.

Fig. 5 Comparison of the theoretical and simulated steady-state MSEs versus SNR (ω0 = 0.2π): (1) α = 0.9, μ = 0.8 and (2) α = 0.98, μ = 0.1

Figure 6 compares the theoretical and simulated steady-state MSEs versus the pole radius \(\alpha\). As \(\alpha\) decreases, the MSEs increase and the mismatch between the theoretical and simulated steady-state MSEs grows. This is because Eq. 36 is derived under the assumption that \(\alpha\) is close to unity; when \(\alpha\) is small, the assumption does not hold. The theoretical MSE therefore remains valid only when \(\alpha\) is close to unity.

Fig. 6 Comparison of the theoretical and simulated steady-state MSEs versus pole radius α (ω0 = 0.2π, μ = 0.1, and 500 runs): (1) SNR = 60 dB and (2) SNR = 10 dB

As shown in Fig. 7, the theoretical MSEs predict the simulated steady-state MSEs well, particularly for \(\mu < 1.8\), but a mismatch occurs when \(\mu\) approaches the upper bound of the step size. Moreover, a larger step size yields a larger MSE.

Fig. 7 Comparison of the theoretical and simulated steady-state MSEs versus step size μ (ω0 = 0.2π, α = 0.95, and 500 runs): (1) SNR = 60 dB and (2) SNR = 10 dB

6 Conclusions

This paper has presented a complex adaptive notch filter based on the gradient-adaptive lattice approach. The new algorithm is computationally efficient and provides unbiased estimates. Closed-form expressions for the steady-state MSE and the upper bound of the step size have been derived. Simulation results demonstrate that (1) the proposed algorithm achieves a faster convergence rate than traditional methods, particularly in low SNR conditions, and (2) the theoretical analysis of the proposed algorithm is in good agreement with the computer simulation results. By cascading the proposed first-order gradient-adaptive lattice filters, the algorithm can be extended to handle complex signals with multiple sinusoids, which will be the focus of our future research.

7 Appendix A

Given complex sequences f(n) and g(n), we define a new function ζ fg (l) as

$$ {\zeta_{fg}}(l) = E\left\{ f(n + l)g(n)\right\}. $$
(65)

Thus, for the input signal x(n) defined in Eq. 1, we have

$$\begin{array}{@{}rcl@{}} {\zeta_{xx}}(l) &=& E\left\{ x(n + l)x(n)\right\} \\ &=& {A^{2}}{e^{j{\omega_{0}}l}}E\left\{ {e^{j2({\omega_{0}}n + {\phi_{0}})}}\right\} + {\zeta_{vv}}(l). \end{array} $$
(66)

Given that \(\phi_0\) is uniformly distributed over \([0, 2\pi)\), we have \(\phantom{\dot{i}\!}E\{ {e^{j2({\omega_{0}}n + {\phi_{0}})}}\} = 0\). Recall that \(v(n) = v_r(n) + jv_i(n)\) is a zero-mean white complex Gaussian noise process in which \(v_r(n)\) and \(v_i(n)\) are uncorrelated zero-mean real white noise processes with identical variances. Therefore, we have the following relations:

$$ {r_{{v_{r}}}}(l) = \frac{{{\sigma_{v}^{2}}}}{2}\delta (l), $$
(67)
$$ {r_{{v_{i}}}}(l) = \frac{{{\sigma_{v}^{2}}}}{2}\delta (l), $$
(68)
$$ {r_{{v_{r}}{v_{i}}}}(l) = {r_{{v_{i}}{v_{r}}}}(l) = 0, $$
(69)

where \({r_{{v_{r}}}}(l)\) and \({r_{{v_{i}}}}(l)\) are the autocorrelation sequences of v r (n) and v i (n), respectively. \({r_{{v_{r}}{v_{i}}}}(l)\) is the cross-correlation sequence of v r (n) and v i (n). Consequently, we obtain

$$\begin{array}{@{}rcl@{}} {\zeta_{vv}}(l) &=& E[v(n + l)v(n)] \\ &=& {r_{{v_{r}}}}(l) - {r_{{v_{i}}}}(l) + 2j{r_{{v_{r}}{v_{i}}}}(l) \\ &=& 0. \end{array} $$
(70)

Substituting Eq. 70 into Eq. 66, we get

$$ {\zeta_{xx}}(l) = 0. $$
(71)

Suppose \(y(n) = h(n) \otimes x(n)\), where \(h(n)\) denotes the impulse response of an arbitrary linear system. Then,

$$\begin{array}{@{}rcl@{}} {\zeta_{xy}}(l) &=& E[x(n + l)y(n)] \\ &=& E\left[x(n + l)\sum\limits_{k = - \infty }^{\infty} {h(k)} x(n - k)\right] \\ &=& \sum\limits_{k = - \infty }^{\infty} {h(k)} {\zeta_{xx}}(l + k) \\ &=& \sum\limits_{k = - \infty }^{\infty} {h(- k)} {\zeta_{xx}}(l - k) \\ &=& h(- l) \otimes {\zeta_{xx}}(l). \end{array} $$
(72)

Moreover,

$$\begin{array}{@{}rcl@{}} {\zeta_{yy}}(l) &=& E[y(n + l)y(n)] \\ &=& E\left[y(n + l)\sum\limits_{k = - \infty }^{\infty} {h(k)} x(n - k)\right] \\ &=& \sum\limits_{k = - \infty }^{\infty} {h(k)} E\left[y(n + l)x(n - k)\right] \\ &=& \sum\limits_{k = - \infty }^{\infty} {h(k)} {\zeta_{xy}}\left(- l - k\right). \end{array} $$
(73)

Substituting Eq. 72 into Eq. 73 and considering Eq. 71, we get

$$ {\zeta_{yy}}(l) = h(- l) \otimes h(l) \otimes {\zeta_{xx}}(l) = 0. $$
(74)

By using Eq. 74, it is clear that

$$ E\{ y{(n)^{2}}\} = {\zeta_{yy}}(0) = 0. $$
(75)
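This circularity property is easy to verify numerically; the following check (our own, not part of the paper) passes circular complex white noise through an arbitrary filter and confirms that \(E\{y(n)^2\} \approx 0\) while \(E\{|y(n)|^2\}\) is not.

```python
import numpy as np

rng = np.random.default_rng(1)
v = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) / np.sqrt(2)
h = np.array([1.0, -0.5 + 0.3j, 0.2j])   # arbitrary impulse response
y = np.convolve(v, h, mode="same")
print(np.mean(y**2))                      # ~0 + 0j, matching Eq. 75
print(np.mean(np.abs(y)**2))              # ~1.38 = sum(|h|^2), clearly nonzero
```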

8 Appendix B

To get \({r_{{u_{2}}}}(l)\), we transform Eq. 30 as:

$$ {u_{2}}(n) = \frac{{{n_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n) - {n_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)}}{{2j}}, $$
(76)

and then Eq. 41 can be rearranged as:

$$ {r_{{u_{2}}}}(l) = E[{u_{2}}(n + l){u_{2}}(n)] = - \frac{1}{4}\sum\limits_{i = 1}^{4} {{q_{i}}(l)}, $$
(77)

where

$$ {q_{1}}(l) = E\{ {n_{{s_{0}}}}^{*}(n + l){n_{{s_{1}}}}(n + l){n_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n)\}, $$
(78)
$$ {q_{2}}(l) = E\left\{{n_{{s_{0}}}}(n + l){n_{{s_{1}}}}^{*}(n + l){n_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)\right\}, $$
(79)
$$\begin{array}{@{}rcl@{}} {q_{3}}(l) = &-&E\left\{ {n_{{s_{0}}}}^{*}(n + l){n_{{s_{1}}}}(n + l)\right. \\ &&\left.\times {n_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)\right\}, \end{array} $$
(80)

and

$$\begin{array}{@{}rcl@{}} {q_{4}}(l) = &-&E\left\{ {n_{{s_{0}}}}(n + l){n_{{s_{1}}}}^{*}(n + l)\right. \\ &&\left.\times {n_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n)\right\}. \end{array} $$
(81)

By assuming that \({n_{{s_{0}}}}(n)\) and \({n_{{s_{1}}}}(n)\) are jointly Gaussian stationary processes and utilising the Gaussian moment factoring theorem [27], we get

$$\begin{array}{@{}rcl@{}} {q_{1}}(l) &=& {\text{cum}}\left({n_{{s_{0}}}}^{*}(n + l),{n_{{s_{1}}}}(n + l),{n_{{s_{0}}}}^{*}(n),{n_{{s_{1}}}}(n)\right) \\ && + {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(- l){r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(l) + {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(0){r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(0) \\ && + {\zeta_{n_{{s_{0}}}^{*} n_{{s_{0}}}^{*}}}(l){\zeta_{{n_{{s_{1}}}}{n_{{s_{1}}}}}}(l), \end{array} $$
(82)

where cum(·) denotes the fourth-order cumulant of the complex random variables. We adopt the widely used independence assumption [28], which states that the present sample is independent of past samples; thus, \({\text{cum}}({n_{{s_{0}}}}^{*}(n + l),{n_{{s_{1}}}}(n + l),{n_{{s_{0}}}}^{*}(n),{n_{{s_{1}}}}(n)) = 0\). Furthermore, since \({\zeta_{n_{{s_{0}}}^{*} n_{{s_{0}}}^{*}}}(l)\) and \({\zeta_{{n_{{s_{1}}}}{n_{{s_{1}}}}}}(l)\) are both zero (see Appendix A), \(q_1(l)\) in Eq. 82 can be rewritten as

$$\begin{array}{@{}rcl@{}} {q_{1}}(l) &=& {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(0){r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(0)\\ &&+ {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(- l){r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(l), \end{array} $$
(83)

where \({r_{{n_{{s_{i}}}}{n_{{s_{j}}}}}}(l) = E\{ {n_{{s_{i}}}}(n){n_{{s_{j}}}}^{*}(n - l)\},\;{\text {for}}\;i \ne j.\) Utilizing the same method, we get

$$\begin{array}{@{}rcl@{}} {q_{2}}(l) &=& {r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(0){r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(0) \\ &&+ {r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(- l){r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(l), \end{array} $$
(84)
$$\begin{array}{@{}rcl@{}} {q_{3}}(l) &=& - {r_{{n_{{s_{0}}}}}}(-l){r_{{n_{{s_{1}}}}}}(l) \\ &&- {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(0){r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(0), \end{array} $$
(85)

and

$$\begin{array}{@{}rcl@{}} {q_{4}}(l) &=& - {r_{{n_{{s_{0}}}}}}(l){r_{{n_{{s_{1}}}}}}(-l) \\ &&- {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(0){r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(0), \end{array} $$
(86)

where \({r_{{n_{{s_{i}}}}}}(l) = E\{ {n_{{s_{i}}}}(n){n_{{s_{i}}}}^{*}(n - l)\}\) for \(i \in \{ 0,1\}\). Substituting Eqs. 83, 84, 85, and 86 into Eq. 77, we get

$$\begin{array}{@{}rcl@{}} &&{r_{{u_{2}}}}(l) = - \frac{1}{4}\left[{r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}^{2}(0) + {r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}^{2}(0)\right.\\ &-& 2{r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(0){r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(0) - {r_{{n_{{s_{0}}}}}}(l){r_{{n_{{s_{1}}}}}}(- l)\\ &+& {r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(- l){r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(l) - {r_{{n_{{s_{0}}}}}}(- l){r_{{n_{{s_{1}}}}}}(l)\\ &+&\left. {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(- l){r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(l)\right]. \end{array} $$
(87)

In the following part, the exact forms of \({r_{{n_{{s_{1}}}}}}(l)\), \({r_{{n_{{s_{0}}}}}}(l)\), \({r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(l)\), and \({r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(l)\) are derived. Note that \({R_{{n_{{s_{1}}}}}}(z)\) can be expanded as [26]

$$\begin{array}{@{}rcl@{}} {R_{{n_{{s_{1}}}}}}(z) &=& {H_{{s_{1}}}}(z){H_{{s_{1}}}}^{*}(1/{z^{*}}){R_{n}}(z)\\ &{\mathrm{= }}&{\sigma_{v}^{2}}\frac{{\left({1 + {k_{0}}{z^{- 1}}} \right)\left({1 + k_{0}^{*}z} \right)}}{{\left({1 + \alpha {k_{0}}{z^{- 1}}} \right)\left({1 + \alpha k_{0}^{*}z} \right)}}\\ &=& \frac{{{\sigma_{v}^{2}}(1 - \alpha)}}{{(1 + \alpha)\alpha }}\left[\frac{1}{{1 + {{(\alpha k_{0}^{*})}^{- 1}}{z^{- 1}}}}\right. \\ && +\left. \frac{{1 + \alpha }}{{1 - \alpha }} - \frac{1}{{1 + \alpha {k_{0}}{z^{- 1}}}}\right], \end{array} $$
(88)

where \({R_{n}}(z) = {\sigma _{v}^{2}}\) and \({H_{{s_{1}}}}(z) = \frac {{1 + {k_{0}}{z^{- 1}}}}{{1 + \alpha {k_{0}}{z^{- 1}}}}\). Since \({r_{{n_{{s_{1}}}}}}(l)\) is a two-sided sequence with the region of convergence given by |k 0|/α>|z|>α|k 0|, the inverse z-transform of \({R_{{n_{{s_{1}}}}}}(z)\) can be expressed as

$$\begin{array}{@{}rcl@{}} {r_{{n_{{s_{1}}}}}}(l) &=& \frac{1}{{2\pi j}}\oint {{R_{{n_{{s_{1}}}}}}} (z){z^{l - 1}}dz\\ &=& \frac{{{\sigma_{v}^{2}}(1 - \alpha)}}{{(1 + \alpha)\alpha }}\left[ - {\left(- {(\alpha k_{0}^{*})^{- 1}}\right)^{l}}u(- l - 1)\right. \\ &&+ \frac{{1 + \alpha }}{{1 - \alpha }}\delta (l) - {(- \alpha {k_{0}})^{l}}u(l)], \end{array} $$
(89)

where u(l) denotes the unit step sequence. Using the same method, we have

$$\begin{array}{@{}rcl@{}} {r_{{n_{{s_{0}}}}}}(l)\; &=& \frac{{{\sigma_{v}^{2}}}}{{{\alpha^{2}} - 1}}\left[ - {\left(- {(\alpha k_{0}^{*})^{- 1}}\right)^{l}}u(- l - 1)\right.\\ &&-\left. {(- \alpha {k_{0}})^{l}}u(l)\right], \end{array} $$
(90)
$$\begin{array}{@{}rcl@{}} {r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(l) &=& \frac{{{\sigma_{v}^{2}}}}{{1 + \alpha }}\left[{\left(- {(\alpha k_{0}^{*})^{- 1}}\right)^{l}}u(- l - 1)\right.\\ &&+\left. \frac{{1 + \alpha }}{\alpha }\delta (l) - \frac{1}{\alpha }{(- \alpha {k_{0}})^{l}}u(l)\right], \end{array} $$
(91)

and

$$\begin{array}{@{}rcl@{}} {r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(l) &=& \frac{{{\sigma_{v}^{2}}}}{{1 + \alpha }}\left[\frac{{ - 1}}{\alpha }{(- {\left(\alpha k_{0}^{*}\right)^{- 1}})^{l}}u(- l - 1)\right.\\ &&+ {(- \alpha {k_{0}})^{l}}u(l)]. \end{array} $$
(92)

Substituting Eqs. 89, 90, 91, and 92 into Eq. 87 and taking the z-transform of both sides, we have

$$ {R_{{u_{2}}}}(z) = Z\{ {r_{{u_{2}}}}(l)\} = \frac{{{\sigma_{v}^{4}}}}{{2(1 - {\alpha^{2}})}}. $$
(93)

Substituting Eqs. 44 and 76 into Eq. 42 and considering that \({s_{{s_{0}}}}(n)\) is uncorrelated with \({n_{{s_{1}}}}(n)\) and \({n_{{s_{0}}}}(n)\), we have

$$\begin{array}{@{}rcl@{}} {r_{{u_{1}}{u_{2}}}}(l) = \frac{1}{4}E\{ {s_{{s_{0}}}}(n + l)\} E\left\{ {n_{{s_{1}}}}^{*}(n + l)\right. \\ \left.\times \left[{n_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n) - {n_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)\right]\right\} \\ - \frac{1}{4}E\{ {s_{{s_{0}}}}^{*}(n + l)\} E\left\{ {n_{{s_{1}}}}(n + l)\right. \\ \left.\times [{n_{{s_{0}}}}^{*}(n){n_{{s_{1}}}}(n) - {n_{{s_{0}}}}(n){n_{{s_{1}}}}^{*}(n)]\right\}. \end{array} $$
(94)

Since \({s_{{s_{0}}}}(n)\) is a zero-mean stationary process, it holds that \({r_{{u_{1}}{u_{2}}}}(l){\mathrm {= }}0\). Thus we get

$$ {R_{{u_{1}}{u_{2}}}}(z) = Z\{ {r_{{u_{1}}{u_{2}}}}(l)\} = 0. $$
(95)

References

  1. L-M Li, LB Milstein, Rejection of pulsed CW interference in PN spread-spectrum systems using complex adaptive filters. IEEE Trans. Commun. COM-31, 10–20 (1983).

  2. D Borio, L Camoriano, LL Presti, Two-pole and multi-pole notch filters: a computationally effective solution for GNSS interference detection and mitigation. IEEE Syst. J. 2(1), 38–47 (2008).

  3. RM Ramli, AOA Noor, SA Samad, A review of adaptive line enhancers for noise cancellation. Aust. J. Basic Appl. Sci. 6(6), 337–352 (2012).

  4. R Zhu, FR Yang, J Yang, in 21st Int. Congress on Sound and Vibration (ICSV 2014). A variable coefficients adaptive IIR notch filter for bass enhancement (International Institute of Acoustics and Vibration, USA, 2014).

  5. SW Kim, YC Park, YS Seo, DH Youn, A robust high-order lattice adaptive notch filter and its application to narrowband noise cancellation. EURASIP J. Adv. Signal Process. 2014(1), 1–12 (2014).

  6. A Nehorai, A minimal parameter adaptive notch filter with constrained poles and zeros. IEEE Trans. Acoust. Speech Signal Process. ASSP-33(8), 983–996 (1985).

  7. NI Cho, CH Choi, SU Lee, Adaptive line enhancement using an IIR lattice notch filter. IEEE Trans. Acoust. Speech Signal Process. 37(4), 585–589 (1989).

  8. T Kwan, K Martin, Adaptive detection and enhancement of multiple sinusoids using a cascade IIR filter. IEEE Trans. Circ. Syst. 36(7), 937–947 (1989).

  9. PA Regalia, An improved lattice-based adaptive IIR notch filter. IEEE Trans. Signal Process. 39, 2124–2128 (1991).

  10. Y Xiao, L Ma, K Khorasani, A Ikuta, Statistical performance of the memoryless nonlinear gradient algorithm for the constrained adaptive IIR notch filter. IEEE Trans. Circ. Syst. I 52(8), 1691–1702 (2005).

  11. J Zhou, in Proc. Inst. Elect. Eng., Vis., Image Signal Process., 153. Simplified adaptive algorithm for constrained notch filters with guaranteed stability (The Institution of Engineering and Technology (IET), UK, 2006), pp. 574–580.

  12. L Tan, J Jiang, L Wang, Pole-radius-varying IIR notch filter with transient suppression. IEEE Trans. Instrum. Meas. 61(6), 1684–1691 (2012).

  13. SC Pei, CC Tseng, Complex adaptive IIR notch filter algorithm and its applications. IEEE Trans. Circ. Syst. II 41(2), 158–163 (1994).

  14. Y Liu, TI Laakso, PSR Diniz, in Proc. 2001 Finnish Signal Process. Symp. (FINSIG'01). A complex adaptive notch filter based on the Steiglitz-McBride method (Helsinki University of Technology, Finland, 2001), pp. 5–8.

  15. S Nishimura, HY Jiang, in Proc. IEEE Asia Pacific Conf. Circuits and Systems. Gradient-based complex adaptive IIR notch filters for frequency estimation (IEEE, USA, 1996), pp. 235–238.

  16. A Nosan, R Punchalard, A complex adaptive notch filter using modified gradient algorithm. Signal Process. 92(6), 1508–1514 (2012).

  17. PA Regalia, A complex adaptive notch filter. IEEE Signal Process. Lett. 17(11), 937–940 (2010).

  18. R Punchalard, Arctangent based adaptive algorithm for a complex IIR notch filter for frequency estimation and tracking. Signal Process. 94, 535–544 (2014).

  19. A Mvuma, T Hinamoto, S Nishimura, in Proc. IEEE MWSCAS. Gradient-based algorithms for a complex coefficient adaptive IIR notch filter: steady-state analysis and application (IEEE, USA, 2004).

  20. H Liang, N Jia, CS Yang, in Int. Proc. of Computer Science and Information Technology, 58. Complex algorithms for lattice adaptive IIR notch filter (IACSIT Press, Singapore, 2012), pp. 68–72.

  21. S Haykin, Adaptive Filter Theory, 4th edn. (Prentice-Hall, Upper Saddle River, NJ, 2002).

  22. NI Cho, SU Lee, On the adaptive lattice notch filter for the detection of sinusoids. IEEE Trans. Circ. Syst. II 40(7), 405–416 (1993).

  23. L Ljung, T Soderstrom, Theory and Practice of Recursive Identification (MIT Press, Cambridge, 1983).

  24. PSR Diniz, Adaptive Filtering: Algorithms and Practical Implementation, 3rd edn. (Springer, New York, 2008).

  25. R Punchalard, Steady-state analysis of a complex adaptive notch filter using modified gradient algorithm. AEU-Int. J. Electron. Commun. 68(11), 1112–1118 (2014).

  26. DG Manolakis, VK Ingle, SM Kogon, Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array Processing (McGraw-Hill, New York, 2000).

  27. A Swami, System identification using cumulants. PhD thesis (University of Southern California, Dept. Elec. Eng.-Syst., 1989).

  28. B Farhang-Boroujeny, Adaptive Filters: Theory and Applications (John Wiley & Sons, Chichester, UK, 2013).


Acknowledgements

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant XDA06040501 and in part by the National Science Fund of China under Grant 61501449. We thank the reviewers for their constructive comments and suggestions.

Author information

Corresponding author

Correspondence to Jun Yang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Zhu, R., Yang, F. & Yang, J. A gradient-adaptive lattice-based complex adaptive notch filter. EURASIP J. Adv. Signal Process. 2016, 79 (2016). https://doi.org/10.1186/s13634-016-0377-4
