
Blind CFO estimation based on weighted subspace fitting criterion with fuzzy adaptive gravitational search algorithm

Abstract

This paper deals with blind carrier frequency offset (CFO) estimation based on the weighted subspace fitting (WSF) criterion with a fuzzy adaptive gravitational search algorithm (GSA) for the interleaved orthogonal frequency-division multiple access (OFDMA) uplink system. For the CFO estimation problem, it is well known that WSF has superior statistical characteristics and better estimation performance. However, this type of CFO estimation requires searching a high-dimensional parameter space: optimizing the resulting complex nonlinear multimodal function entails a large computational load, so it is difficult to directly maximize or minimize the cost function over a large parameter space. This paper first applies swarm intelligence (SI) optimization algorithms such as GSA, particle swarm optimization (PSO), and hybrid PSO and GSA (PSOGSA) to improve estimation accuracy and reduce the computational load of the search. In addition, this paper integrates a fuzzy inference system into WSF-GSA to dynamically adjust the gravitational constant, which not only reduces the searching computational load but also improves the global optimization performance and solution accuracy of GSA. Finally, several simulation results are provided to illustrate the effectiveness of the proposed estimator.

1 Introduction

Orthogonal frequency-division multiple access (OFDMA) provides good spectrum efficiency and the ability to resist multipath fading channels [1], and has been widely used in broadband wireless communications in recent years. The carrier frequency offset (CFO) that occurs during signal transmission destroys the orthogonality between subcarriers, resulting in intercarrier interference (ICI) and multiple access interference (MAI). A CFO, caused by imperfect oscillators (whose frequencies differ between transmitter and receiver) and by the Doppler effect, shifts the signal spectrum; this frequency offset leads to a loss of signal amplitude and to ICI. In OFDMA uplink systems, the CFOs of all users have to be estimated simultaneously at the base station (BS) receiver, which is considered a more challenging problem. Many methods have been developed for CFO estimation in OFDMA uplink systems [2]-[11]. These methods can roughly be classified into two categories: those using and those not using training sequences. Methods using training sequences insert a known preamble in front of each data packet, facilitating CFO estimation at the BS receiver [2, 3]. Methods not using training sequences, also referred to as blind methods, manipulate the subcarrier assignment scheme so that the CFO of each user can be estimated individually or jointly [4,5,6,7,8,9,10,11] at the BS receiver. Frequency offset estimation for an OFDMA-modulated signal can be implemented using training symbols, and the longer the cyclic prefix (CP), the stronger the resistance to multipath channel interference and the greater the tolerance to time offset; the drawback, however, is reduced data transmission efficiency. Therefore, blind CFO estimation and compensation in OFDMA systems are necessary; otherwise, the transmission performance of the system will be degraded.

In recent years, the problem of blind CFO estimation for interleaved OFDMA systems has attracted the attention of many scholars. In the literature [4], the CFO is estimated by a blind maximum likelihood (ML) estimator, which uses a series expansion to reduce the computational load of evaluating the ML function. The minimum variance distortionless response (MVDR) method [5] can handle the CFO estimation problem, but it still has a high computational load owing to the conventional spectrum search. The resolution threshold performance of the root-MVDR [5] is better than that of the searching-based MVDR, but in noise-dominant situations the ranking of the two methods' threshold performance is reversed. The literature [6] proposes a blind CFO estimator based on virtual carriers or unused subchannels in an uplink OFDMA system; to avoid a high-dimensional spectrum search, an iterative one-dimensional search estimator is also proposed. In the literature [7], the multiple signal classification (MUSIC) method is used for blind CFO estimation in the interleaved OFDMA uplink system. This estimator does not need pilot symbols and has good estimation performance. Both the searching-based and root-finding MUSIC methods need to perform eigenvalue decomposition (EVD), which requires high computational complexity, and many matrix multiplication and addition operations are needed besides, so the overall computational complexity of the MUSIC method is quite high. The literature [8] presents an oblique projection technique with reduced computational complexity to overcome the adverse effects caused by I/Q imbalances and CFOs. The literature [9] proposes a CFO estimation algorithm with strong interference-resistant capability for OFDMA systems. The literature [10] uses the estimation of signal parameters via rotational invariance techniques (ESPRIT) in the interleaved OFDMA uplink system to estimate the CFO accurately without pilot symbols.
In practice, ESPRIT has a much lower computational load than the MUSIC method. Unfortunately, when the signal-to-noise ratio (SNR) is relatively low and the CFO values are very close, adjacent peaks cannot be distinguished. This means that the original peak of one user on the spectrum can be pulled into the range of an adjacent user, seriously distorting the position of the original peak [11]. For these searching-based estimators, the complexity and estimation accuracy depend on the grid size used for the search: the smaller the grid, the better the performance, but at the cost of higher computational complexity. The literature [12] proposes a fine estimation algorithm based on discrete Fourier transform (DFT) samples and fuzzy logic to enhance the frequency estimation accuracy, which is less affected by the initial frequency offset. A fuzzy logic controller is utilized to generate a weighting factor that adjusts the weights of the main-lobe and side-lobe coefficients in the correction term. Compared with the above-mentioned estimation methods, the blind ML method [4] and the weighted subspace fitting (WSF) method [13] have superior statistical properties and therefore better estimation performance. In general, to obtain accurate solutions, this type of CFO estimation must be carried out by optimizing complex nonlinear multimodal functions in high-dimensional problem spaces. This requires a huge computational load, so directly maximizing or minimizing the nonlinear cost function in a large parameter space is quite difficult.

In order to deal with the computational load of the search process, heuristic evolutionary algorithms have in recent years been introduced into research on CFO estimation for uplink OFDMA systems. The development of these algorithms comes from the observation of natural phenomena, and the inspiration obtained serves as the theoretical basis of the algorithms. In nature, many group-moving organisms set up their “social systems” for food, migration, and organization. Such a social system is composed of simple individuals and the interactive behaviors their groups produce. Many scientists have explored the compositional structure, information communication, and behavior patterns of these biological social systems, such as ants foraging and nesting, the moving formations of birds and fishes, and the gravitational force between particle masses. Swarm intelligence (SI) is intelligence derived from many individuals based on self-organizing group behavior [14]. Particle swarm optimization (PSO) [15] is a stochastic global optimization algorithm that simulates the social behavior of birds. In order to reduce the searching computational load, the literature [16] applies PSO to CFO estimation with MUSIC to replace the traditional spectrum search. The gravitational search algorithm (GSA) [17] is a newer SI optimization algorithm based on the law of gravity between masses, and it can find the best solution for many benchmark functions. The literature [18] uses an adaptive maximum speed limit algorithm in the search space to adjust the particle moving speed of GSA to search for the global optimal solution. The literature [19] proposes a hybrid PSO and GSA (PSOGSA) method; tested on 23 different benchmark functions, it obtains better global optimal solutions than the traditional GSA and PSO.

An SI algorithm should be equipped with two main features to guarantee finding the global optimal solution [20]: exploration and exploitation. Exploration refers to the capacity to search the entire problem space, while exploitation refers to the capacity to converge to the optimal solution, i.e., to find the best solution within the search region obtained so far during exploration. Through these two features, an SI optimization algorithm can find the best solution in all possible search spaces. The goal of all SI optimization algorithms is to effectively balance exploration capacity and exploitation efficiency in order to find the real global optimum. In order to make algorithms execute faster and more accurately, many studies have explored hybrid algorithms that balance exploration and exploitation simultaneously, but it is not easy to set the correct parameters in an SI algorithm. Different problems require different parameters, so the parameters recommended in the literature are not always the best; they may even be the worst when applied to other problems. Moreover, the boundary between exploration and exploitation is not clear-cut in evolutionary computation: strengthening one capacity weakens the other, and vice versa. Due to the above arguments, although existing heuristic optimization algorithms can solve some problems, it has also been proved that no single one can solve all optimization problems. In recent years, GSA combined with fuzzy logic has been successfully applied to various real optimization problems, e.g., data training [21], standard benchmark functions [22]-[26], modern power systems [27], and control engineering [28]. A fuzzy GSA miner is introduced to develop a novel data mining technique [21].
In that research, a fuzzy controller is designed to adaptively control the gravitational coefficient; fuzzy-GSA is then employed to construct a novel data mining algorithm for classification rule discovery from a reference data set. The literature [22] first incorporates a local search technique (LST) into the GSA to enhance its exploitation capacity; fuzzy logic is then introduced into the hybrid GSA-LST. The literature [23] uses fuzzy logic control in GSA to adjust the gravitational constant, achieving better optimization results and a higher convergence rate. In the literature [24], the exponential parameter of the gravitational constant of the GSA is adaptively improved, and a fuzzy system is used to control the global and local search capabilities of the GSA; the optimal parameters are found according to the state of the iterative process and the dispersion of the particles at a given time. The literature [25] uses fuzzy bi-level programming in a chaotic GSA; the basic concept of the fuzzy bi-level GSA is an iterative fuzzy decision-making operation. GSA-Fuzz was proposed in [26] to improve the efficiency of seed mutation strategies with the GSA: it uses GSA to learn the optimal selection probability distributions of operators and mutation positions and designs a position-sensitive strategy to guide seed mutation with the learned distributions. A new improved hybrid PSOGSA algorithm based on fuzzy logic was proposed in [27], in which the speed of all particles in PSOGSA is controlled by using fuzzy logic to adjust the maximum speed limit. A random vibration PSOGSA-based approach is presented in [28] to design an optimal fuzzy proportional–integral controller. So far, the results of our literature survey show that there is almost no relevant research work applying GSA combined with fuzzy techniques to CFO estimation for OFDMA uplink systems.

Due to the nonlinear and high-dimensional nature of the WSF parameter estimation space, such problems seem to be good applications for SI algorithms. This paper mainly discusses the application of the GSA search technique to the fitness function based on the WSF criterion to estimate the CFO accurately and efficiently in the interleaved OFDMA uplink system. GSA also has some important parameters (for example, the gravitational constant and the population size). Among the many SI algorithms, GSA has better exploration capacity, but it still has some shortcomings; for example, its convergence in later iterations is slow, and it may fail to converge. From the search process of GSA, the expected behavior is a global search in the early iterations that gradually changes to a local search toward the end of the iterations. Therefore, configuring the relevant parameters of GSA adaptively and dynamically with a fuzzy system, in accordance with this expected behavior, improves the shortcomings of GSA. In order to enable the GSA to effectively balance exploration and exploitation and find the best solution during the search process, this paper combines a fuzzy inference system into each iteration: a two-level fuzzy inference dynamically adjusts the two parameters of the gravitational constant, namely its initial value and its attenuation coefficient. By enhancing the exploration or exploitation capability as needed, the resulting global search achieves better solution quality, faster convergence, and a smaller number of particles. The proposed estimator is called WSF-fuzzy adaptive GSA (WSF-FAGSA). Simulation results are provided to demonstrate the effectiveness of the proposed WSF-FAGSA estimator.

Notation: Boldfaced lowercase letters denote column vectors and boldfaced uppercase letters denote matrices. The symbols \(( \bullet )^{H}\), \(( \bullet )^{T}\), and \(E\{ \bullet \}\) represent the conjugate transpose operation, transpose operation, and expectation, respectively; \(\otimes\) indicates an element-by-element product; \({diag}\{ {\mathbf{x}}\}\) denotes a diagonal matrix with diagonal entries of \({\mathbf{x}}\); \({{tr}}\{ \bullet \}\) denotes the trace operation of the matrix; \({|| } \bullet { ||}_{{2}}\) denotes the two-norm operation; and \({\mathbf{I}}_{Q}\) denotes an identity matrix with size \(Q \times Q\).

2 Background knowledge

2.1 Signal model

Consider an interleaved allocation OFDMA uplink system with \(N\) subcarriers, in which \(M\) users transmit signals to the BS through independent channels at the same time. Assume that the \(N\) subcarriers are divided into \(Q\) subchannels with each subchannel having \(P = N/Q\) subcarriers, and that the subcarriers assigned to different users are interleaved over the whole bandwidth. The \(m{\text{th}}\) user is assigned to the \(q_{m}\) subchannel, which is a subset of \(P\) subcarriers with the index set \(\{ q_{m} ,Q + q_{m} , \cdots ,(P - 1)Q + q_{m} \}\). For the sake of convenience, we assume that coarse time and frequency synchronization have been completed, so only the fractional time offset and CFO need to be considered. Under this assumption, the effect of time offset can be treated as a linear channel phase shift [1], i.e., the effect of time offset is not discussed here. After passing through the multipath channel and removing the cyclic prefix, the received signal at the BS is the superposition of the signals from all users and is given by

$$y(n) = \sum\limits_{m = 1}^{M} {y_{m} (n) + z(n)} , \, n = 0,1, \cdots ,N - 1$$
(1)

where \(z(n)\) is the additive white Gaussian noise with zero mean and variance \(\sigma_{z}^{2}\). The received baseband signal from the \(m{\text{th}}\) user is given by

$$y_{m} (n) = \sum\limits_{p = 0}^{P - 1} {X_{m} (p)H_{m} (p)e^{{j2\pi n(pQ + q_{m} + \varepsilon_{m} ){/}N}} }$$
(2)

where \(X_{m} (p)\) is a set of P data streams of the \(m{\text{th}}\) user and \(H_{m} (p) = \left. {\overline{H}_{m} (k)} \right|_{{k = pQ + q_{m} }}\). The channel frequency response of the kth subcarrier for the \(m{\text{th}}\) user is \(\overline{H}_{m} (k) = \sum\nolimits_{l = 0}^{{L_{m} - 1}} {h_{m} (l)e^{ - j2\pi lk/N} }\), \(k = 0,1, \cdots ,N - 1\), where \(L_{m}\) is the channel order. \(\theta_{m} = (q_{m} + \varepsilon_{m} )/Q\) denotes the effective CFO of the \(m{\text{th}}\) user and \(\varepsilon_{m} \in ( - 0.5, \, 0.5)\) denotes the \(m{\text{th}}\) user’s CFO normalized by the subcarrier spacing \(2\pi /N\). In (2), the received signal set has a special periodic feature with every \(P = N/Q\) samples, i.e., \(y(n + vP) = \sum\nolimits_{m = 1}^{M} {e^{{j2\pi v\theta_{m} }} y_{m} (n)}\), where \(v \, (0 \le v \le Q - 1)\) is an integer. In one OFDMA block, \(\{ y(n)\}_{n = 0}^{N - 1}\) can be arranged into a \(Q \times P\) matrix as follows:

$${\mathbf{Y}} = \left[ {\begin{array}{*{20}c} {y(0)} & {y(1)} & \cdots & {y(P - 1)} \\ {y(P)} & {y(P + 1)} & \cdots & {y(2P - 1)} \\ \vdots & \vdots & \ddots & \vdots \\ {y(N - P)} & {y(N - P + 1)} & \cdots & {y(N - 1)} \\ \end{array} } \right]$$
(3)

Then, the matrix of (3) in one OFDMA block can be expressed as

$${\mathbf{Y}} = {\mathbf{A}}(\theta ){\mathbf{S + Z}}$$
(4)

where \({\mathbf{A}}(\theta ) = [{\mathbf{a}}(\theta_{1} ) \, {\mathbf{a}}(\theta_{2} ) \, \cdots \, {\mathbf{a}}(\theta_{M} )]\) with \({\mathbf{a}}(\theta_{m} ) = [1, \, e^{{j2\pi \theta_{m} }} , \, \cdots {, }e^{{j2\pi (Q - 1)\theta_{m} }} ]^{T}\) for \(m = 1,2, \cdots ,M\). \({\mathbf{Z}}\) is a \(Q \times P\) noise matrix. \({\mathbf{S}} = {\mathbf{D}} \otimes ({\mathbf{BW}})\) with a \(P \times P\) inverse discrete Fourier transform (IDFT) matrix \({\mathbf{W}}\). \({\mathbf{B}} = [{\mathbf{b}}_{1} \, {\mathbf{b}}_{2} \, \cdots \, {\mathbf{b}}_{M} ]^{T}\) with \({\mathbf{b}}_{m} = [X_{m} (0)H_{m} (0), \, X_{m} (1)H_{m} (1), \, \cdots , \, X_{m} (P - 1)H_{m} (P - 1)]^{T}\) and \({\mathbf{D}} = [{\mathbf{d}}_{1} \, {\mathbf{d}}_{2} \, \cdots \, {\mathbf{d}}_{M} ]^{T}\) with \({\mathbf{d}}_{m} = [1, \, e^{{j{2}\pi \theta_{m} /P}} , \, \cdots {, }e^{{j{2}\pi (P - 1)\theta_{m} /P}} \, ]^{T}\). Then, the ensemble autocorrelation matrix of (4) is \({\mathbf{R}} = E\{ {\mathbf{YY}}^{H} \} = {\mathbf{A}}(\theta ){\mathbf{R}}_{s} {\mathbf{A}}(\theta )^{H} + \sigma_{z}^{2} {\mathbf{I}}_{Q}\), where \({\mathbf{R}}_{s} = E\{ {\mathbf{SS}}^{H} \}\) is the autocorrelation matrix of \({\mathbf{S}}\). When \(B\) OFDMA blocks are collected, the sample autocorrelation matrix of \({\mathbf{Y}}(b)\) can be computed as

$${\hat{\mathbf{R}}} = \frac{1}{BP}\sum\limits_{b = 1}^{B} {{\mathbf{Y}}(b){\mathbf{Y}}(b)^{H} }$$
(5)

where \({\hat{\mathbf{R}}}\) represents the sample version of \({\mathbf{R}}\) computed by using \(B\) OFDMA blocks. Assume that the number of active users \(M\) is known. The EVD of \({\hat{\mathbf{R}}}\) can be expressed as

$${\hat{\mathbf{R}}} = \sum\limits_{i = 1}^{Q} {\lambda_{i} {\mathbf{e}}_{i} {\mathbf{e}}_{i}^{H} } = {\mathbf{E}}_{s} {{\varvec{\Sigma}}}_{s} {\mathbf{E}}_{s}^{H} + {\mathbf{E}}_{z} {{\varvec{\Sigma}}}_{z} {\mathbf{E}}_{z}^{H}$$
(6)

where \(\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{M} \ge \lambda_{M + 1} = \cdots = \lambda_{Q} = \sigma_{z}^{2}\) are the eigenvalues of \({\hat{\mathbf{R}}}\) in descending order, and \({\mathbf{e}}_{i}\) is the orthonormal eigenvector associated with \(\lambda_{i}\) for \(i = 1, \, 2, \, \cdots , \, Q\). The signal subspace \({\mathbf{E}}_{s} = [{\mathbf{e}}_{1} {, }{\mathbf{e}}_{{2}} {, } \cdots , \, {\mathbf{e}}_{M} ]\) and the noise subspace \({\mathbf{E}}_{z} = [{\mathbf{e}}_{M + 1} {, }{\mathbf{e}}_{{M{ + 2}}} {, } \cdots , \, {\mathbf{e}}_{Q} ]\) are orthogonal to each other. The diagonal matrix \({{\varvec{\Sigma}}}_{s} = diag\{ \lambda_{1} , \, \lambda_{{2}} , \, \cdots ,\lambda_{M} \}\) is formed from the signal eigenvalues of all users and \({{\varvec{\Sigma}}}_{z} = diag\{ \lambda_{M + 1} , \, \lambda_{M + 2} , \, \cdots , \, \lambda_{Q} \}\) from the noise eigenvalues.
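As a concrete illustration of Eqs. (3)–(6), the following minimal numpy sketch builds the steering matrix \({\mathbf{A}}(\theta)\), accumulates the sample autocorrelation matrix, and splits its EVD into signal and noise subspaces. The parameter values (\(N\), \(Q\), \(M\), the subchannel indices, the CFOs, and the SNR) are illustrative assumptions of ours, not values from the paper, and the source matrix is drawn at random rather than from an IDFT of actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, M = 64, 8, 2           # subcarriers, subchannels, users (assumed)
P = N // Q                   # subcarriers per subchannel
q = np.array([1, 4])         # assumed subchannel indices q_m
eps = np.array([0.2, -0.3])  # assumed normalized CFOs, |eps_m| < 0.5
theta = (q + eps) / Q        # effective CFOs theta_m = (q_m + eps_m)/Q

def steering_vector(th, Q):
    """a(theta) = [1, e^{j2*pi*theta}, ..., e^{j2*pi*(Q-1)*theta}]^T"""
    return np.exp(1j * 2 * np.pi * th * np.arange(Q))

A = np.column_stack([steering_vector(t, Q) for t in theta])  # Q x M

# Accumulate B blocks of noisy observations Y(b) = A S + Z, Eq. (4)
B, snr_db = 50, 20
sigma2 = 10 ** (-snr_db / 10)
R_hat = np.zeros((Q, Q), dtype=complex)
for _ in range(B):
    S = (rng.standard_normal((M, P)) + 1j * rng.standard_normal((M, P))) / np.sqrt(2)
    Z = np.sqrt(sigma2 / 2) * (rng.standard_normal((Q, P)) + 1j * rng.standard_normal((Q, P)))
    Y = A @ S + Z
    R_hat += Y @ Y.conj().T          # accumulate Y(b) Y(b)^H
R_hat /= (B * P)                     # sample autocorrelation, Eq. (5)

# Eq. (6): EVD with eigenvalues sorted in descending order
lam, E = np.linalg.eigh(R_hat)
idx = np.argsort(lam)[::-1]
lam, E = lam[idx], E[:, idx]
E_s, E_z = E[:, :M], E[:, M:]        # signal / noise subspaces
```

The first \(M\) eigenvalues cluster well above the noise floor \(\sigma_z^2\), and the columns of `E_s` span (approximately) the column space of \({\mathbf{A}}(\theta)\).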

2.2 Weighted subspace fitting estimator

This subsection introduces the WSF spectral-searching estimator [8] for CFO estimation. Assuming that the number of users \(M\) is known, it has been shown in [8] that asymptotically efficient estimates (matching maximum likelihood estimation for large sample size or high SNR) can be obtained by minimizing the WSF problem:

$$F_{{{{WSF}}}} ({{\varvec{\uptheta}}}) = tr\{ {\mathbf{P}}_{{\mathbf{A}}}^{ \bot } ({{\varvec{\uptheta}}}){\mathbf{E}}_{s} {\mathbf{W}}_{o} {\mathbf{E}}_{s}^{H} \}$$
(7)

where \({\mathbf{P}}_{{\mathbf{A}}}^{ \bot } ({{\varvec{\uptheta}}}) = {\mathbf{I}}_{Q} - {\mathbf{A}}({{\varvec{\uptheta}}}){[}{\mathbf{A}}^{H} ({{\varvec{\uptheta}}}){\mathbf{A}}({{\varvec{\uptheta}}})]^{ - 1} {\mathbf{A}}^{H} ({{\varvec{\uptheta}}})\), \({\mathbf{A}}({{\varvec{\uptheta}}})\) is the search matrix and \({{\varvec{\uptheta}}}{ = [}\theta_{1} ,\theta_{2} , \cdots ,\theta_{M} {]}\), and the diagonalized weight matrix \({\mathbf{W}}_{o}\) is given by

$${\mathbf{W}}_{o} = ({{\varvec{\Sigma}}}_{s} - \hat{\sigma }_{z}^{2} {\mathbf{I}}_{M} )^{2} {{\varvec{\Sigma}}}_{s}^{ - 1}$$
(8)

and the estimated noise power is \(\hat{\sigma }_{z}^{2} = \tfrac{1}{Q - M}\sum\nolimits_{i = M + 1}^{Q} {\lambda_{i} }\). Finally, let (7) be the cost function of the WSF estimator with search grid size \(\mu_{{1}}\). For \({\mathbf{A}}(\theta ) = [{\mathbf{a}}(\theta_{1} ), \, {\mathbf{a}}(\theta_{2} ), \, \cdots , \, {\mathbf{a}}(\theta_{M} )]\), the CFOs \(\{ \theta_{m} \}\) (equivalently \(\{ \varepsilon_{m} \}\)) of the \(M\) users are varied simultaneously, and the cost attains its minimum when the CFOs of all users coincide with the correct CFOs. Under the assumption \(|\varepsilon_{m} | < 0.5\), \(\theta_{m}\) lies between \((q_{m} - 0.5)/Q\) and \((q_{m} + 0.5)/Q\), and the \(m{\text{th}}\) user occupies the \(q_{m} {\text{th}}\) subchannel. From the above analysis, the CFO estimation of the \(M\) users is completed by finding the minimum of the cost function. If the BS knows the number of users and the subchannel configuration in advance, it can simply assign the estimated effective CFOs to the different users, because the values \(\{ \theta_{m} \}_{m = 1}^{M}\) within the effective CFO range do not repeat. Since each user stays within its own range, \(\hat{\theta }_{m}\) and \(\hat{\varepsilon }_{m}\) can be paired one-to-one; that is, the estimate of each user's CFO can be expressed as

$$\hat{\varepsilon }_{m} = \hat{\theta }_{m} Q - q_{m}$$
(9)

Although the spectrum search method can be used to minimize the cost function of (7), its computational efficiency is low: the computational load required to evaluate (7) directly is quite large and grows rapidly as the number of users increases.
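Under the definitions above, the WSF cost of Eqs. (7)–(8) can be sketched as a single function; this is a minimal illustration, and the helper name `wsf_cost` and its argument layout are ours:

```python
import numpy as np

def wsf_cost(theta_hat, E_s, lam_s, sigma2_hat, Q):
    """WSF cost of Eq. (7): tr{ P_A_perp(theta) E_s W_o E_s^H }.

    theta_hat  : candidate effective CFOs (length M)
    E_s, lam_s : signal eigenvectors (Q x M) and signal eigenvalues (length M)
    sigma2_hat : estimated noise power (mean of the Q-M smallest eigenvalues)
    """
    # Search matrix A(theta), columns a(theta_m)
    A = np.exp(1j * 2 * np.pi * np.outer(np.arange(Q), theta_hat))
    # Orthogonal projector P_A_perp = I - A (A^H A)^{-1} A^H
    P_perp = np.eye(Q) - A @ np.linalg.solve(A.conj().T @ A, A.conj().T)
    # Eq. (8): diagonal weight W_o = (Sigma_s - sigma2 I)^2 Sigma_s^{-1}
    W_o = np.diag((lam_s - sigma2_hat) ** 2 / lam_s)
    return np.real(np.trace(P_perp @ E_s @ W_o @ E_s.conj().T))
```

At the true CFOs, the projector annihilates the signal subspace and the cost drops to (numerically) zero, which is exactly the minimum the grid or SI search looks for.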

2.3 Proposed methodology

The research methodology aims at low computational load and high-resolution CFO estimation for the WSF method through the proposed fuzzy adaptive GSA approach. The methodology consists of four essential steps: (1) In order to obtain accurate solutions, the CFO estimation must be carried out by optimizing the complex nonlinear multimodal WSF function in high-dimensional problem spaces; however, this is an NP-hard problem and requires a huge computational load. (2) In order to reduce the searching computational load, this study first applies PSO, GSA, and PSOGSA to CFO estimation with the WSF fitness function, replacing the traditional spectrum search. (3) Among these SI algorithms, GSA has better exploration capacity, but it still has some shortcomings, such as slow convergence in later iterations and possible failure to converge. In order to enable the GSA to effectively balance exploration and exploitation and find the best solution during the search process, this paper combines a fuzzy inference system into each iteration: a two-level fuzzy inference dynamically adjusts the two parameters of the gravitational constant, namely its initial value and its attenuation coefficient, enhancing exploration or exploitation as needed to achieve better solution search performance, faster convergence, and a smaller number of particles. (4) Finally, simulation results and discussion are provided to demonstrate the effectiveness of the proposed WSF-FAGSA estimator.

3 Swarm intelligence optimization algorithms

This section presents blind CFO estimation based on WSF with SI searching algorithms. Before calculating the fitness function of the WSF, we transform \(\varepsilon_{m,i}\) through \(\theta_{m,i} = (q_{m} + \varepsilon_{m,i} )/Q\) into the estimate \(\hat{\theta }_{m,i}\) corresponding to the \(m{\text{th}}\) user, substitute it into the fitness function, and find each user’s CFO by minimizing the fitness function. Through the dynamic adjustment of the relative parameters of the SI algorithms, the estimated CFO of the \(i{\text{th}}\) particle (agent) for the \(m{\text{th}}\) user is \(\hat{\theta }_{m,i}^{h}\), and the corresponding fitness value is \({{fitness}}_{i}^{h}\), \(m = 1,2, \cdots ,M\). Let \({\hat{\mathbf{\theta }}}_{i}^{h} { = [}\hat{\theta }_{1,i}^{h} ,\hat{\theta }_{2,i}^{h} , \cdots ,\hat{\theta }_{M,i}^{h} {]}\); then the fitness value \({{fitness}}_{i}^{h}\) of the \(i{\text{th}}\) particle at the \(h{\text{th}}\) iteration in the \(M\)-dimensional space is given by

$${{fitness}}_{i}^{h} = tr\{ {\mathbf{P}}_{{\mathbf{A}}}^{ \bot } ({\hat{\mathbf{\theta }}}_{i}^{h} ){\mathbf{E}}_{s} {\mathbf{W}}_{o} {\mathbf{E}}_{s}^{H} \}$$
(10)

It is noted that increasing the number of iterations and the number of particles yields higher estimation performance, but the computational load also increases accordingly.
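The mapping between a particle position \(\varepsilon_{m,i}\) and the effective CFO \(\theta_{m,i}\) described above, together with its inverse from Eq. (9), can be sketched as follows; the subchannel indices and CFO values are assumed purely for illustration:

```python
import numpy as np

Q = 8
q = np.array([1, 4])             # assumed subchannel indices q_m
eps = np.array([0.2, -0.3])      # assumed particle positions (CFO guesses)

# Forward map used before evaluating the fitness of Eq. (10)
theta = (q + eps) / Q            # theta_m = (q_m + eps_m) / Q

# Eq. (9): recover each user's normalized CFO from the effective CFO
eps_back = theta * Q - q
```

Because each \(\theta_m\) stays inside its own subchannel range \(((q_m - 0.5)/Q, (q_m + 0.5)/Q)\), this round trip is unambiguous.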

3.1 Gravitational search algorithm

This subsection introduces the WSF with GSA for blind CFO estimation. The GSA is inspired by the law of gravity and mass interactions [17]. In GSA, the search particles (agents) are considered objects, and their performance is measured by their masses. Due to the gravitational force, all objects attract each other, which causes a global movement of all objects toward the objects with heavier masses. Through the gravitational force, the heavier masses (corresponding to good solutions) move more slowly than the lighter ones. In GSA, the mass of the \(i{\text{th}}\) particle for the \(m{\text{th}}\) user has four specifications: position \(\varepsilon_{m,i}^{{}}\), inertial mass \(M_{{m_{ii} }}\), active gravitational mass \(M_{{m_{ai} }}\), and passive gravitational mass \(M_{{m_{pi} }}\). The position of each particle represents a solution to the problem, and a fitness function is used to determine its gravitational and inertial masses. The algorithm is guided by adjusting the gravitational and inertial masses appropriately. Over the iterations, particles are attracted to the particle with the heaviest mass, which represents an optimal solution in the search space. GSA can be regarded as an isolated system of particles that behave like objects in a small artificial world and obey Newton's laws of gravity and motion. This subsection discusses its application to the problem of CFO estimation.

Let the \(m{\text{th}}\) user contain \(N_{m,p}\) particles and \(\varepsilon_{m,i}^{{}}\) denotes the position of the \(i{\text{th}}\) particle in the \(m{\text{th}}\) dimension and \(i = 1, \, 2, \cdots ,N_{m,p}\). At the \(h{\text{th}}\) iteration, the gravitational force acting on particle \(i\) from particle \(j\) can be defined as

$$F_{{m_{ij} }}^{h} = G^{h} \frac{{M_{{m_{pi} }}^{h} \times M_{{m_{aj} }}^{h} }}{{R_{{m_{ij} }}^{h} + c_{3} }}(\varepsilon_{m,j}^{h} - \varepsilon_{m,i}^{h} )$$
(11)

where \(M_{{m_{aj} }}^{h}\) is the active gravitational mass associated with particle \(j\), \(M_{{m_{pi} }}^{h}\) is the passive gravitational mass associated with particle \(i\), and \(G^{h}\) is the gravitational constant at iteration \(h\). \(c_{3}\) is a small constant and \(R_{{m_{ij} }}^{h} = ||\varepsilon_{m,i}^{h} - \varepsilon_{m,j}^{h} ||_{2}\) is the Euclidean distance between the \(i{\text{th}}\) and \(j{\text{th}}\) particles. In order to give the algorithm random characteristics, the total force acting on the \(i{\text{th}}\) particle in the \(m{\text{th}}\) dimension of the problem space is assumed to be a randomly weighted sum, \(F_{m,i}^{h} = \sum\nolimits_{j = 1, \, j \ne i}^{{N_{m,p} }} {{{rand}}_{j} \times F_{{m_{ij} }}^{h} }\), of the forces exerted by the other particles, where \({{rand}}_{j}\) is a random variable in the interval \([0,1]\). Reducing the number of attracting particles over time during the search process is a good compromise between exploration and exploitation. In order to improve the performance of GSA by controlling exploration and exploitation, the literature [29] suggests that only the first \(K\) particles, those with the best fitness values and maximum masses, attract the other particles. Therefore, \(F_{m,i}^{h}\) can be rewritten as

$$F_{m,i}^{h} = \sum\limits_{j \in Kbest, \, j \ne i}^{{}} {{rand}}_{j} \times F_{{m_{ij} }}^{h}$$
(12)

where \({{Kbest}}\) is the set of the first \(K\) particles with the best fitness values and biggest masses. \({{Kbest}}\) is a function of time: it has an initial value \(K_{0}\) and decreases over time. In this way, all particles exert forces at the beginning, \({{Kbest}}\) decreases linearly over time, and at the end only one particle exerts a force. In accordance with the law of motion, the acceleration \(a_{m,i}^{h}\) of the \(i{\text{th}}\) particle for the \(m{\text{th}}\) user at the \(h{\text{th}}\) iteration can be expressed as
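A minimal sketch of the force computation of Eqs. (11)–(12), restricted to a single dimension; the helper name `gsa_forces` and the `kbest` argument (the current size of the \(Kbest\) set) are our own conventions:

```python
import numpy as np

def gsa_forces(pos, masses, G, kbest, rng, c3=1e-9):
    """Total stochastic force on each particle, Eqs. (11)-(12).

    pos    : (Np,) particle positions in one dimension
    masses : (Np,) normalized masses (active = passive mass assumed, Eq. (17))
    G      : gravitational constant at this iteration
    kbest  : number of best (heaviest) particles allowed to attract others
    rng    : numpy random Generator supplying the rand_j weights
    """
    Np = pos.size
    best = np.argsort(masses)[::-1][:kbest]   # indices of the Kbest set
    F = np.zeros(Np)
    for i in range(Np):
        for j in best:
            if j == i:
                continue
            R = abs(pos[i] - pos[j])          # Euclidean distance in 1-D
            # Eq. (11) weighted by rand_j as in Eq. (12)
            F[i] += rng.random() * G * masses[i] * masses[j] / (R + c3) * (pos[j] - pos[i])
    return F
```

With `kbest = 1`, only the heaviest particle attracts the rest, matching the end-of-run behavior described above.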

$$a_{m,i}^{h} = F_{m,i}^{h} /M_{{m_{ii} }}^{h}$$
(13)

where \(M_{{m_{ii} }}^{h}\) is the inertial mass of the \(i{\text{th}}\) particle. In other words, the acceleration of a particle is directly proportional to the applied force and inversely proportional to its mass. Additionally, the particle’s next velocity is considered to be a fraction of its current velocity plus its acceleration, so its velocity and position can be calculated as follows:

$$v_{m,i}^{h + 1} = {{rand}}_{i} \times v_{m,i}^{h} + a_{m,i}^{h}$$
(14)
$$\varepsilon_{m,i}^{h + 1} = \varepsilon_{m,i}^{h} + v_{m,i}^{h + 1}$$
(15)

where \({{rand}}_{i}\) is a random variable in the interval \([0,1]\) that gives the search mechanism a stochastic quality.
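The kinematic update of Eqs. (13)–(15) can be sketched as follows, one dimension per user; the function name `gsa_move` is ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def gsa_move(pos, vel, F, inertial_mass):
    """One kinematic GSA step.

    Eq. (13): a = F / M_ii
    Eq. (14): v <- rand * v + a   (random fraction of the old velocity)
    Eq. (15): x <- x + v
    """
    a = F / inertial_mass
    vel = rng.random(pos.size) * vel + a
    pos = pos + vel
    return pos, vel
```

With zero force and zero velocity the particle stays put, as the update equations require.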

The gravitational constant \(G\) adjusts the accuracy of the search, so it reduces with time. \(G\) is initialized at the beginning and is given by

$$G^{h} = G(G_{0} , \, h) = G_{0} e^{{ - \alpha h/h_{\max } }}$$
(16)

where \(G_{0}\) is the initial value of the gravitational constant \(G\), chosen randomly, \(\alpha\) is a user-specified constant, and \(h_{\max }\) is the total number of iterations [17]. The gravitational mass and the inertial mass are obtained by a simple calculation from the fitness function, so a heavier mass corresponds to a more efficient particle (solution): better particles hold better solutions and move more slowly. Assuming that the gravitational mass and the inertial mass are equal, each particle updates them via the following equations:

$$M_{{m_{ai} }} = M_{{m_{pi} }} = M_{{m_{ii} }} = M_{m,i} , \, i = 1,2, \cdots ,N_{m,p}$$
(17)
$$\tilde{m}_{m,i}^{h} = \frac{{{{fitness}}_{m,i}^{h} - {{worst}}_{{}}^{h} }}{{{{best}}_{{}}^{h} - {{worst}}_{{}}^{h} }}$$
(18)
$$M_{m,i}^{h} = \frac{{\tilde{m}_{m,i}^{h} }}{{\sum\nolimits_{j = 1}^{{N_{m,p} }} {\tilde{m}_{m,j}^{h} } }}$$
(19)

where \(M_{m,i}^{h}\) is the mass of the \(i{\text{th}}\) particle for the \(m{\text{th}}\) user at the \(h{\text{th}}\) iteration and \({{fitness}}_{m,i}^{h}\) represents the fitness value of the \(i{\text{th}}\) particle for the \(m{\text{th}}\) user at the \(h{\text{th}}\) iteration. The worst fitness value \({{worst}}_{{}}^{h}\) and the best fitness value \({{best}}_{{}}^{h}\) of the minimization problem are defined as follows:

$${{best}}_{{}}^{h} = \mathop {\min }\limits_{{j \in \{ 1,2, \cdots ,N_{m,p} \} }} {{fitness}}_{m,j}^{h} \;{{and}}\;{{worst}}_{{}}^{h} = \mathop {\max }\limits_{{j \in \{ 1,2, \cdots ,N_{m,p} \} }} {{fitness}}_{m,j}^{h}$$
(20)
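The mass, force, and motion updates in (11)-(19) can be sketched as follows. This is a minimal single-user (one-dimensional) illustration with a toy quadratic cost standing in for the WSF fitness; the shrinking-\(Kbest\) schedule and parameter values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def gsa_step(pos, vel, fitness, h, h_max, rng, G0=100.0, alpha=20.0, c3=1e-10):
    """One GSA iteration in a single dimension (one user), following (12)-(19).
    pos, vel, fitness: arrays of shape (N,); smaller fitness is better."""
    N = len(pos)
    G = G0 * np.exp(-alpha * h / h_max)              # gravitational constant, (16)
    best, worst = fitness.min(), fitness.max()       # (20)
    denom = best - worst
    m_tilde = (fitness - worst) / denom if denom != 0 else np.ones(N)   # (18)
    M = m_tilde / m_tilde.sum()                      # normalized masses, (19)
    K = max(1, int(round(N * (1.0 - h / h_max))))    # linearly shrinking Kbest
    kbest = np.argsort(fitness)[:K]                  # heaviest (best) particles
    acc = np.zeros(N)
    for i in range(N):
        for j in kbest:
            if j == i:
                continue
            R = abs(pos[i] - pos[j])                             # distance
            F = G * M[i] * M[j] * (pos[j] - pos[i]) / (R + c3)   # (11)
            acc[i] += rng.random() * F / (M[i] + c3)             # (12)-(13)
    vel = rng.random(N) * vel + acc                  # (14)
    pos = np.clip(pos + vel, -0.5, 0.5)              # (15), CFO search range
    return pos, vel
```

Minimizing the toy cost \((\varepsilon - 0.2)^{2}\), repeated calls drive the best particle toward 0.2 while the decaying \(G\) freezes the swarm in the late iterations.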

3.2 Particle swarm optimization

This subsection presents the WSF with PSO searching algorithm for blind CFO estimation. PSO [15] is a well-known SI optimization method that searches the domain based on swarm-intelligent behavior, with the whole search and update process guided by the current best solution. For the \(M\) users' CFO estimation problem, each particle can be treated as a point in the \(M\)-dimensional space; a swarm consists of \(N_{m,p}\) particles that search for the best position by updating over iterations until reaching a relatively steady position or exceeding the iteration limit. Each particle's position in the \(m{\text{th}}\) dimension represents one solution for the \(m{\text{th}}\) user; that is, the \(m{\text{th}}\) dimension is the search space of the \(m{\text{th}}\) user. For the \(M\)-dimensional search space, the position and the velocity of the \(i{\text{th}}\) particle are \({{\varvec{\upvarepsilon}}}_{i} = [\varepsilon_{1,i} ,\varepsilon_{2,i} , \cdots ,\varepsilon_{M,i} ]\) and \({\mathbf{v}}_{i} = [v_{1,i} ,v_{2,i} , \cdots ,v_{M,i} ]\), respectively. Before computing the fitness function, we transform \(\varepsilon_{m,i}\) through \(\theta_{m,i} = (q_{m} + \varepsilon_{m,i} )/Q\) into the estimate \(\hat{\theta }_{m,i}\). Substituting it into the fitness function for evaluation and comparison, with the dynamic adjustment of uniformly distributed random variables and acceleration terms, the best previous position of the \(i{\text{th}}\) particle, the one giving the best fitness value, is recorded as \({\mathbf{p}}_{i}\). The best position among all the particles in the population is denoted by \({\mathbf{g}}\) and called the global best location. The velocity and the position of the \(i{\text{th}}\) particle at the \((h + 1){\text{th}}\) iteration for \(i = 1, \, 2, \cdots ,N_{m,p}\) and \(h = 1, \, 2, \, \cdots , \, h_{\max }\) are updated according to the following equations:

$${\mathbf{v}}_{i}^{h + 1} = w^{h} {\mathbf{v}}_{i}^{h} + c_{1} {\mathbf{r}}_{1}^{h} \otimes [{\mathbf{p}}_{i}^{h} - {{\varvec{\upvarepsilon}}}_{i}^{h} ] + c_{2} {\mathbf{r}}_{2}^{h} \otimes [{\mathbf{g}}^{h} - {{\varvec{\upvarepsilon}}}_{i}^{h} ]$$
(21)
$${{\varvec{\upvarepsilon}}}_{i}^{h + 1} = {{\varvec{\upvarepsilon}}}_{i}^{h} + {\mathbf{v}}_{i}^{h + 1}$$
(22)

where (21) consists of three parts. The first part is the previous inertial velocity \(w^{h} {\mathbf{v}}_{i}^{h}\) of the \(i{\text{th}}\) particle; the inertial weight \(w^{h}\) is assumed to decrease linearly from the maximum value \(w_{\max } = 0.9\) to the minimum value \(w_{\min } = 0.4\) [15]. All particle positions must be limited to \([\varepsilon_{\min } , \, \varepsilon_{\max } ] = [ - 0.5, \, 0.5]\) to avoid infeasible positions (subcarriers) that would slow the PSO search; after the position update (22), any particle that leaves this range is reset to \(\varepsilon_{\min }\) or \(\varepsilon_{\max }\). The second part, \(c_{1} {\mathbf{r}}_{1}^{h} \otimes ({\mathbf{p}}_{i}^{h} - {{\varvec{\upvarepsilon}}}_{i}^{h} )\), pulls the particle toward its own historical best position and constitutes a self-learning mode. The third part, \(c_{2} {\mathbf{r}}_{2}^{h} \otimes [{\mathbf{g}}^{h} - {{\varvec{\upvarepsilon}}}_{i}^{h} ]\), is the influence of the global historical best position on the velocity and can be regarded as a social learning mode. \(c_{1} {\mathbf{r}}_{1}^{h}\) and \(c_{2} {\mathbf{r}}_{2}^{h}\) are the random acceleration terms of \({\mathbf{p}}_{i}^{h} = [p_{1,i}^{h} ,p_{2,i}^{h} , \cdots ,p_{M,i}^{h} ]\) and \({\mathbf{g}}^{h} = [g_{1,g}^{h} ,g_{2,g}^{h} , \cdots ,g_{M,g}^{h} ]\), respectively, where \({\mathbf{r}}_{1}^{h}\) and \({\mathbf{r}}_{2}^{h}\) are vectors of uniformly distributed random variables between 0 and 1 at the \(h{\text{th}}\) iteration. The learning factors \(c_{1}\) and \(c_{2}\) affect the acceleration of the particles, whose purpose is to push them toward the optimal position. In this paper, the original PSO learning-factor values yield poor performance; after continuous adjustment, the learning factors are set to \(c_{1} = 0.9\) and \(c_{2} = 0.1\).
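The update rules (21)-(22) with the settings above can be sketched as a minimal one-dimensional PSO; the toy quadratic cost is a stand-in for the WSF fitness, and the particle and iteration counts are illustrative.

```python
import numpy as np

def pso_search(cost, n_particles=40, n_iter=150, c1=0.9, c2=0.1, seed=0):
    """Minimal 1-D PSO sketch of (21)-(22) with the paper's settings:
    inertia w decays linearly from 0.9 to 0.4 and positions are clipped
    to the CFO range [-0.5, 0.5]."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.5, 0.5, n_particles)
    vel = np.zeros(n_particles)
    pbest, pbest_f = pos.copy(), cost(pos)           # personal bests
    g = pbest[np.argmin(pbest_f)]                    # global best location
    for h in range(n_iter):
        w = 0.9 - (0.9 - 0.4) * h / (n_iter - 1)     # linear inertia decay
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)   # (21)
        pos = np.clip(pos + vel, -0.5, 0.5)                             # (22)
        f = cost(pos)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g
```

For example, `pso_search(lambda e: (e - 0.3) ** 2)` should return a value close to 0.3, the minimizer of the toy cost.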

3.3 Hybrid PSO and GSA

For blind CFO estimation, this subsection introduces the WSF with hybrid PSO and GSA searching method. The hybrid of PSO and GSA [19] is an SI optimization algorithm called PSOGSA. The basic idea of PSOGSA is to combine the social-thinking capacity (\(g_{m,g}^{h}\)) of PSO with the local search capability of GSA. To combine these heuristics, the velocity update is formulated as

$$v_{m,i}^{h + 1} = w \times v_{m,i}^{h} + c^{\prime}_{1} \times {{rand}} \times a_{m,i}^{h} + c^{\prime}_{2} \times {{rand}} \times (g_{m,g}^{h} - \varepsilon_{m,i}^{h} )$$
(23)

where \(v_{m,i}^{h}\) is the velocity of the \(i{\text{th}}\) particle at iteration \(h\), \(c^{\prime}_{1}\) and \(c^{\prime}_{2}\) are weighting factors, \(w\) is a weighting function, and \({{rand}}\) is a uniform random variable in the interval \([0, \, 1]\). \(a_{m,i}^{h}\) is the acceleration of the \(i{\text{th}}\) particle at iteration \(h\), and \(g_{m,g}^{h}\) is the best solution found so far. In the next iteration, the position of the \(i{\text{th}}\) particle is updated as

$$\varepsilon_{m,i}^{h + 1} = \varepsilon_{m,i}^{h} + v_{m,i}^{h + 1}$$
(24)

In the PSOGSA method, all particles are first randomly initialized within the specified maximum and minimum limits; each particle in the population represents a candidate solution. The gravitational constant, the particle masses, the gravitational forces, and the particle accelerations are computed by (16), (19), (12), and (13), respectively. After calculating the accelerations and updating the best solution found so far, the velocities of all particles are calculated by (23). Finally, the particle positions are updated by (24), and the procedure is repeated until the stopping criterion is met.
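One step of the hybrid update (23)-(24) can be sketched as follows; the weights `w`, `c1p`, and `c2p` are illustrative placeholders, not values taken from the paper, and the GSA acceleration `acc` is assumed to be supplied by a step such as (13).

```python
import numpy as np

def psogsa_update(pos, vel, acc, gbest, w=0.6, c1p=0.5, c2p=1.5, rng=None):
    """One PSOGSA update, Eqs. (23)-(24): GSA acceleration (local search)
    plus a PSO-style social pull toward the best solution found so far."""
    rng = rng or np.random.default_rng()
    vel = (w * vel
           + c1p * rng.random(len(pos)) * acc             # gravitational term
           + c2p * rng.random(len(pos)) * (gbest - pos))  # social-thinking term
    return pos + vel, vel                                 # position update, (24)
```

With zero initial velocity and zero acceleration, the social term alone moves every particle toward `gbest`, which is the intended coupling between the two heuristics.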

4 Fuzzy adaptive GSA

As we all know, early iterations need to strengthen exploration capabilities, and later iterations need to strengthen exploitation capabilities. Additionally, the best particles should have high exploitation capabilities, while the worst particles should have high exploration capabilities. GSA is designed for exploration, while local search is designed for exploitation. To balance these capabilities, this section explores the adjustment of GSA combined with fuzzy inference, called fuzzy adaptive GSA (FAGSA). The main idea is to improve the internal parameter values of the GSA iterations through fuzzy inference; that is, fuzzy "IF–THEN" rules dynamically adjust the two important parameters of the gravitational constant \(G^{h}\), namely \(G_{0}^{h}\) and \(\alpha^{h}\), at each iteration of GSA. The gravitational constant \(G\) controls the dynamic behavior of the particle swarm, including its propensity for exploration and exploitation. In fact, the gravitational constant prevents the velocity from growing unboundedly because of the effect of the object's inertia. Without a gravitational constant, objects with accumulated velocities might explore the search space but lose the capacity to fine-tune the results; on the other hand, preventing objects from moving too fast may compromise the exploration of the search space. Therefore, the value of the gravitational constant affects both the global and the local search capabilities of GSA. It can also be concluded from (11) that the gravitational value of an object is determined by the \({\text{Kbest}}\) positions found in the current iteration. This means that the convergence properties of GSA can be controlled through the gravitational constant.
As the fitness value of the object becomes better and better, the search space explored by the object should become smaller and smaller, which means that \(G\) should be reduced to emphasize local exploitation rather than global exploration. A small improvement in object’s fitness value results in a larger search space for exploration, which means that the value of \(G\) should be increased to emphasize global exploration rather than local exploitation.

For the CFO estimation problem, it is necessary to capture the above GSA search behavior in a linguistic description, which makes fuzzy logic a good choice for dynamically tuning the parameters of GSA. The proposed FAGSA therefore dynamically adjusts the two important parameters of the gravitational constant \(G^{h}\), the initial gravitational constant term \(G_{0}^{h}\) and the decay exponent term \(\alpha^{h}\), at the \(h{\text{th}}\) iteration of the CFO estimation problem. Accordingly, (16) can be rewritten as

$$G^{h} = G_{{0}}^{h} e^{{ - \alpha^{h} \times h/h_{\max } }}$$
(25)

The most used fuzzy inference systems are Mamdani [30] and Takagi–Sugeno–Kang (TSK) [31]. This paper adopts the Mamdani fuzzy system to control the GSA parameters. The fuzzy inference system consists of two levels. The first level takes the normalized current iteration number (\(T = h/N_{m,h}\), \(0 \le T \le 1\)) and the normalized effective object number (\(K = {{Kbest}}/N_{m,p}\), \(0 \le K \le 1\)) as input variables and outputs the value of the gravitational constant \(G_{0}^{h}\). The membership functions (MFs) of the fuzzy sets are triangular; each input is described by three linguistic values, L (Low), M (Medium), and H (High), and the output by five, L (Low), ML (Medium Low), M (Medium), MH (Medium High), and H (High). The fuzzy rules used to select \(G_{0}^{h}\) are listed in Table 1, and the membership functions of each input and output are shown in Fig. 1. Each rule maps the input space to the output space; with two inputs of three linguistic values each, there are nine possible rules. The rules are written in "IF–THEN" form; for example, the \(i{\text{th}}\) rule reads: Rule \(i\): If \(T\) is \({{MF}}_{t}^{i}\) and \(K\) is \({{MF}}_{k}^{i}\), then \(G_{0}^{h}\) is \({{MF}}_{g}^{i}\), \(\forall i\), where the T-norm, implication, and aggregation are given by the min, min, and max operations, respectively, and the membership functions of \(T\), \(K\), and \(G_{0}^{h}\) are denoted by \({{MF}}_{t}^{i} \in \{{\text{L}},\;{\text{M}},\;{\text{H}}\}\), \({{MF}}_{k}^{i} \in \{ {\text{L}},\;{\text{M}},\;{\text{H}}\}\), and \({{MF}}_{g}^{i} (x) \in \{{\text{L}},\;{\text{ML}},\;{\text{M}},\;{\text{MH}},\;{\text{H}}\}\).

Table 1 Fuzzy rules for the gravitational constant \(G_{{0}}^{h}\)
Fig. 1
figure 1

Input membership functions: a T and b K, and output membership function c \(G_{{0}}^{h}\)

In the defuzzification step, the center of gravity (COG) method [32] transforms the aggregated fuzzy result into a crisp value, which can be evaluated as

$$G_{0}^{h} = \frac{{\int\limits_{0}^{100} {x\mathop {\max }\limits_{x} \{ \mu_{1} (T,\;K)MF_{g}^{1} (x),\;\mu_{2} (T,\;K)MF_{g}^{2} (x), \cdots ,\;\mu_{9} (T,\;K)MF_{g}^{9} (x)\;\} dx} }}{{\int\limits_{0}^{100} {\mathop {\max }\limits_{x} \{ \mu_{1} (T,\;K)MF_{g}^{1} (x),\;\mu_{2} (T,\;K)MF_{g}^{2} (x), \cdots ,\;\mu_{9} (T,\;K)MF_{g}^{9} (x)\;\} dx} }}$$
(26)

where \(\mu_{i} (T,\;K) = \min \{ MF_{t}^{i} (T),\;MF_{k}^{i} (K)\} \ge 0\) and \(\sum\limits_{i} {\mu_{i} (T,\;K)} > 0\).
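The min/max Mamdani machinery with COG defuzzification of (26) can be sketched as follows. The triangular MF breakpoints and the rule table here are illustrative assumptions (the paper's actual rules are in Table 1 and Fig. 1); the sketch only reproduces the inference mechanics.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak b and support [a, c]."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_g0(T, K, x=np.linspace(0.0, 100.0, 1001)):
    """First-level Mamdani inference for G0^h as in (26): min T-norm and
    implication, max aggregation, COG defuzzification. Breakpoints and
    the rule table are illustrative, not the paper's Table 1."""
    in_mfs = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]   # L, M, H
    out_mfs = [(-25, 0, 25), (0, 25, 50), (25, 50, 75),
               (50, 75, 100), (75, 100, 125)]                       # L..H
    # rows: T in {L, M, H}; cols: K in {L, M, H}; entries index out_mfs.
    # Early iterations (small T) favor a large G0 (exploration), late ones small.
    rule = [[4, 4, 3],
            [3, 2, 1],
            [1, 0, 0]]
    agg = np.zeros_like(x)
    for i, mf_t in enumerate(in_mfs):
        for j, mf_k in enumerate(in_mfs):
            mu = float(min(tri(T, *mf_t), tri(K, *mf_k)))        # rule firing
            agg = np.maximum(agg, np.minimum(mu, tri(x, *out_mfs[rule[i][j]])))
    return float((x * agg).sum() / (agg.sum() + 1e-12))          # COG, (26)
```

With this rule table, an early-search input (small \(T\), large \(K\)) yields a larger \(G_{0}^{h}\) than a late-search input, matching the exploration-to-exploitation schedule described above.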

The second level takes the normalized \(T\) and the first-level output \(G_{0}^{h}\) as input variables and outputs the attenuation exponent \(\alpha^{h}\). The membership functions of the fuzzy sets are again triangular; each input is described by the three linguistic values L (Low), M (Medium), and H (High), and the output by the five linguistic values L (Low), ML (Medium Low), M (Medium), MH (Medium High), and H (High). The fuzzy rules are shown in Table 2, the membership functions of each input and output are shown in Fig. 2, and defuzzification again uses the COG method [32]. As shown in Table 2, with two inputs of three linguistic values each, there are nine possible rules in total, each in "IF–THEN" form. For example, the \(j{\text{th}}\) rule reads: Rule \(j\): If \(T\) is \(MF_{t}^{j}\) and \(G_{0}^{h}\) is \(MF_{{\overline{g}}}^{j}\), then \(\alpha^{h}\) is \(MF_{\alpha }^{j}\), \(\forall j\), where the membership functions of \(G_{0}^{h}\) and \(\alpha^{h}\) are denoted by \(MF_{{\overline{g}}}^{j} \in \{{\text{L}},\;{\text{M}},\;{\text{H}}\}\) and \(MF_{\alpha }^{j} (x) \in \{{\text{L}},\;{\text{ML}},\;{\text{M}},\;{\text{MH}},\;{\text{H}}\}\), shown in Fig. 2. The fuzzy rules in Table 2 select the value of the attenuation exponent \(\alpha^{h}\), which can be evaluated as:

$$\alpha^{h} = \frac{{\int\limits_{0}^{50} {x\mathop {\max }\limits_{x} \{ \mu_{1} (T,\;G_{0}^{h} )MF_{\alpha }^{1} (x),\;\mu_{2} (T,\;G_{0}^{h} )MF_{\alpha }^{2} (x),\; \cdots ,\;\;\mu_{9} (T,\;G_{0}^{h} )MF_{\alpha }^{9} (x)\} dx} }}{{\int\limits_{0}^{50} {\mathop {\max }\limits_{x} \{ \mu_{1} (T,\;G_{0}^{h} )MF_{\alpha }^{1} (x),\;\mu_{2} (T,\;G_{0}^{h} )MF_{\alpha }^{2} (x),\; \cdots ,\;\;\mu_{9} (T,\;G_{0}^{h} )MF_{\alpha }^{9} (x)\} dx} }}$$
(27)

where \(\mu _{j} (T,\;G_{0}^{h} ) = \min \{ MF_{t}^{j} (T),\;MF_{{\bar{g}}}^{j} (G_{0}^{h} )\} \ge 0\) and \(\sum\limits_{j} {\mu_{j} (T,\;G_{0}^{h} )} > 0\).

Table 2 Fuzzy rules for the attenuation coefficient \(\alpha^{h}\)
Fig. 2
figure 2

Input membership functions: a \(T\) and b \(G_{{0}}^{h}\), and output membership function c \(\alpha^{h}\)

Although the proposed fuzzy inference system does not guarantee a better final solution for GSA, it gives GSA a more stable convergence behavior, speeds up convergence, and reduces the required numbers of iterations and particles. Finally, the WSF-FAGSA performs CFO estimation for \(M\) users by the following steps:

Step 1. Calculate the autocorrelation matrix \({\hat{\mathbf{R}}}\) of an OFDMA data block, and then perform EVD on \({\hat{\mathbf{R}}}\).

Step 2. Obtain the signal subspace \({\mathbf{E}}_{s}\) and construct the weighting matrix \({\mathbf{W}}_{o}\) from the eigenvalue matrix \({{\varvec{\Lambda}}}_{s}\) and the noise power \(\hat{\sigma }_{z}^{2}\).

Step 3. Set the number of particles \(N_{m,p}\) and the maximum number of iterations \(N_{m,h}\); generate the particle positions randomly and set the initial particle velocities to 0.

Step 4. Calculate \(T = h/N_{m,h}\) and \(K = {\text{Kbest}}/N_{m,p}\); feed \(T\) and \(K\) through the first-level fuzzy system to output \(G_{{0}}^{h}\); then feed \(T\) and \(G_{{0}}^{h}\) through the second-level fuzzy system to output \(\alpha^{h}\).

Step 5. Calculate the gravitational constant of (25), the particle mass of (19), the gravitational force \(F_{m,i}^{h}\) of (12), and the acceleration \(a_{m,i}^{h}\) of (13).

Step 6. Convert \(\varepsilon_{m,i}^{h}\) into \(\theta_{m,i}^{h}\), substitute it into (10), and update the best and worst fitness values according to the fitness value of (10).

Step 7. Update the velocity \(v_{m,i}^{h + 1}\) and position \(\varepsilon_{m,i}^{h + 1}\) according to (14) and (15), and correct the position of any particle that escapes the search range.

Step 8. If the termination condition, reaching the maximum number of iterations, is met, output the optimal solution; otherwise, go back to Step 4 and execute the next iteration.
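The key difference between Steps 4-5 and plain GSA is that the decay law is re-evaluated with fuzzy-tuned parameters each iteration, as in (25). The sketch below contrasts the fixed schedule of (16) with a hypothetical sequence of fuzzy outputs; the \((G_{0}^{h}, \alpha^{h})\) pairs are illustrative values, not outputs of the paper's rule tables.

```python
import math

def gravitational_constant(h, h_max, g0, alpha):
    """G^h = G0 * exp(-alpha * h / h_max): Eq. (16) with fixed (g0, alpha),
    and Eq. (25) when both are re-tuned by the fuzzy system each iteration."""
    return g0 * math.exp(-alpha * h / h_max)

# Fixed GSA schedule of (16) with the paper's G0 = 100 and alpha = 20.
fixed = [gravitational_constant(h, 100, 100.0, 20.0) for h in (0, 25, 50, 75, 100)]

# FAGSA schedule of (25): hypothetical per-iteration fuzzy outputs (G0^h, alpha^h)
# that shrink G0 and enlarge alpha as the search matures (illustrative values).
fuzzy_out = [(90.0, 10.0), (75.0, 15.0), (50.0, 20.0), (30.0, 30.0), (20.0, 40.0)]
adaptive = [gravitational_constant(h, 100, g0, a)
            for h, (g0, a) in zip((0, 25, 50, 75, 100), fuzzy_out)]
```

Both schedules decay toward zero, but the adaptive one can start gentler and tighten faster, which is the stabilizing effect the fuzzy system is meant to provide.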

5 Computational complexity analysis

This section first analyzes the computational complexity of the CFO estimator based on the WSF criterion, using the required number of complex multiplications (CM) as the evaluation metric. Assume an interleaved OFDMA uplink system with \(M\) users and \(Q\) subchannels, each subchannel having \(P\) subcarriers. In constructing the WSF matrix, the EVD of the \(Q \times Q\) autocorrelation matrix needs \(12Q^{3}\) CM [33], and the weight matrix \({\mathbf{W}}_{o} = {(}{{\varvec{\Lambda}}}_{s} - \sigma_{z}^{2} {\mathbf{I}}_{Q} )^{2} {{\varvec{\Lambda}}}_{s}^{ - 1}\) needs \(2M^{3}\) CM. Let \(\mu_{{1}}\) be the search grid size; the number of search grids in each dimension (user) of WSF is then \(F_{1} = \mu_{1}^{ - 1} + 1\), so the search number over \(M\) users is \(F_{1}^{M}\). Evaluating \({\mathbf{P}}_{A}^{ \bot } ({{\varvec{\uptheta}}}) = {\mathbf{I}}_{Q} - {\mathbf{A}}({{\varvec{\uptheta}}})[{\mathbf{A}}^{H} {(}{{\varvec{\uptheta}}}{)}{\mathbf{A}}{(}{{\varvec{\uptheta}}}{)}]^{ - 1} {\mathbf{A}}^{H} {(}{{\varvec{\uptheta}}}{)}\) requires \(F_{1}^{M} (2Q^{2} + 2Q^{2} M + QM^{2} )\) CM, and evaluating \(F_{WSF} = {{tr(}}{\mathbf{P}}_{{\mathbf{A}}}^{ \bot } {(}{{\varvec{\uptheta}}}{)}{\mathbf{E}}_{s} {\mathbf{WE}}_{s}^{H} )\) brings the count to about \(F_{1}^{M} (4Q^{2} M + 3QM^{2} + 2Q^{2} )\) CM, so the total CM required to implement the spectrum-search WSF estimator is about \(12Q^{3} + 2M^{3} + F_{1}^{M} (4Q^{2} M + 3QM^{2} + 2Q^{2} )\). Let \(F_{1,m}\) be the number of conventional spectrum searches performed for the \(m{\text{th}}\) user. For polynomial root-finding estimators such as root-MVDR [5] and root-MUSIC [7], with a highest polynomial order of \(2Q - 2\), the root-finding procedure requires about \(8Q^{3}\) CM [4].
Compared with the root-finding procedure, the computational complexity of calculating \(\hat{\varepsilon }_{m}\) is negligible and is therefore ignored here. In addition, this paper also lists the CM required by the MVDR [5], root-MVDR [5], MUSIC [7], root-MUSIC [7], and ESPRIT [10] estimators in Table 3.

Table 3 Computational complexity analysis for conventional estimators

Compared with the computational load of constructing the WSF fitness function in each iteration, the cost of the parameter updates in all the SI and fuzzy adaptive algorithms evaluated in this paper is quite small and can be ignored. When an SI algorithm converges, the computational complexity for \(M\) users is determined by the number of particles \(N_{m,p}^{{{\text{SI}}}}\) and the number of iterations \(N_{m,h}^{{{\text{SI}}}}\), where the superscript "SI" covers PSO, GSA, PSOGSA, and FAGSA. In other words, the WSF fitness function with SI search needs to be calculated \(N_{m,p}^{{{\text{SI}}}} N_{m,h}^{{{\text{SI}}}}\) times. Finally, the CM required by the WSF-PSO, WSF-GSA, WSF-PSOGSA, and WSF-FAGSA estimators is listed in Table 4.

Table 4 Computational complexity for the WSF with SI searching estimators
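As a back-of-envelope check on the counts above, the following sketch compares the grid-search WSF cost with an SI-based search for the simulation setting used later (\(Q = 32\), \(M = 8\), \(\mu_1 = 10^{-5}\)). The per-evaluation count follows the formula derived in this section; per-user bookkeeping is simplified, so Table 4 in the paper remains the authoritative count.

```python
# Complex-multiplication (CM) counts from Section 5, for Q = 32 subchannels,
# M = 8 users, and grid size mu1 = 1e-5 (F1 = 1/mu1 + 1 points per dimension).
Q, M = 32, 8
F1 = 10**5 + 1

per_eval = 4 * Q**2 * M + 3 * Q * M**2 + 2 * Q**2     # CM per WSF cost evaluation
common = 12 * Q**3 + 2 * M**3                         # EVD plus weighting matrix

# Full M-dimensional spectrum search over the WSF cost.
wsf_grid_cm = common + F1**M * per_eval

# SI search: the fitness is evaluated Np * Nh times; Np = 70, Nh = 125 are the
# WSF-FAGSA convergence values reported in Section 6.
Np, Nh = 70, 125
wsf_fagsa_cm = common + Np * Nh * per_eval

print(f"grid-search cost evaluations: {float(F1**M):.3e}")
print(f"FAGSA cost evaluations      : {Np * Nh}")
```

The gap is striking: the exhaustive grid requires about \(10^{40}\) cost evaluations, while the SI search needs only \(N_{m,p} N_{m,h} = 8750\).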

6 Result and discussion

This section provides computer simulation results to demonstrate the effectiveness of the proposed WSF-FAGSA estimator for CFO estimation. For comparison, the results of the MVDR [5], root-MVDR [5], MUSIC [7], root-MUSIC [7], ESPRIT [10], WSF-PSO, WSF-GSA, and WSF-PSOGSA estimators are also provided. All OFDMA signals are generated with binary phase shift keying (BPSK) modulation, and the average received signal power from all users is the same. The BS fully knows the subcarrier configuration of each user, and each user transmits signals to the BS through an independent multipath channel. The channel taps \(h_{m} (l)\) are modeled as statistically independent Gaussian random variables with zero mean and an exponentially decaying power profile, \(E\{ |h_{m} (l)|^{2} \} = \alpha_{l} e^{ - l/5} ,\;0 \le l \le L_{m} - 1\), where \(\alpha_{l}\) is the normalization factor that sets the channel power to unity and \(L_{m} = 10\). For all simulation results, the total number of subcarriers is \(N = 1024\), and the subcarriers are allocated to all users by interleaving. The allocated subchannels are all contiguous, the total number of subchannels is \(Q = 32\), and the number of subcarriers in each subchannel is \(P = 32\). It is also assumed in each Monte Carlo test that the channel state within an OFDMA block does not change with time. The input SNR and mean square error (MSE) are defined as \({\text{SNR}} = 10\log_{10} (E\{ |y_{m} (n)|^{2} \} /\sigma_{z}^{2} )\) and \({\text{MSE}} = \frac{1}{M\Pi }\sum\nolimits_{\rho = 1}^{\Pi } {\sum\nolimits_{m = 1}^{M} {(\hat{\varepsilon }_{m}^{\rho } - \varepsilon_{m}^{\rho } )^{2} } }\), respectively, where \(\Pi\) is the number of Monte Carlo tests and the number of users is \(M = 8\). The CFOs of the simulated active users are {\(- 0.4041\), \(0.3355\), \(- 0.0407\), \(0.2375\), \(- 0.1254\), \(0.2293\), \(- 0.3612\), \(0.4595\)}, and the users' CFOs are independent of each other.
In each simulation, the results are obtained with \(B = 1\) OFDMA block and \(\Pi = 500\) Monte Carlo tests. Since a smaller search grid yields better estimation resolution, the grid size is set to \(\mu_1 = 10^{-5}\) over the SNR values \([0\;{\text{dB}},\;10\;{\text{dB}},\;20\;{\text{dB}},\;30\;{\text{dB}}]\). The parameters chosen for PSO are \(c_{1} = 0.9\) and \(c_{2} = 0.1\), with \(w^{h}\) decreasing linearly from 0.9 to 0.4. The parameters chosen for GSA are \(G_{0} = 100\), \(\alpha = 20\), and \(c_{3} = 10^{ - 10}\). The parameters selected for PSOGSA are the same as those used in PSO and GSA.

Figures 3 and 4 illustrate the selection of the iteration number and the particle number for the SI-based search estimators, respectively. Figure 3a, b shows the MSE performance versus the number of iterations for WSF-PSO and WSF-GSA, and for WSF-PSOGSA and WSF-FAGSA, respectively; Fig. 4a, b shows the MSE performance versus the number of particles for WSF-PSO and WSF-PSOGSA, and for WSF-GSA and WSF-FAGSA, respectively. Because PSO retains the influence of the historical best position on the velocity, it has more memory in the search process and its convergence is more stable in the later stage of the search. The results indicate that the MSE performance of WSF-PSO converges with \(N_{m,p} = 230\) particles and \(N_{m,h} = 190\) iterations, while WSF-GSA converges with \(N_{m,p} = 70\) particles and \(N_{m,h} = 190\) iterations. PSOGSA has both the exploration capacity of GSA and the exploitation capacity of PSO; WSF-PSOGSA converges with \(N_{m,p} = 50\) particles and \(N_{m,h} = 150\) iterations, giving it good behavior in both search and convergence. WSF-FAGSA converges with \(N_{m,p} = 70\) particles and \(N_{m,h} = 125\) iterations. Note that the basic performance of the SI-based estimators depends on the selected numbers of iterations and particles: more particles increase the probability of finding the best solution, but require more evaluations and hence a higher computational load. The population size is one of the most important parameters in an SI search algorithm.
When the population size is too small, optimum solutions are often harder to find because the particles do not adequately cover the entire search space, so the global optimum may be missed. The \({{\{ }}N_{m,p} , \, N_{m,h} , \, N_{m,p} N_{m,h} \}\) required by the WSF with SI searching estimators to obtain a relatively stable estimation performance is shown in Table 5. For a fair performance comparison over the SNR values \([0\;{\text{dB}},\;10\;{\text{dB}},\;20\;{\text{dB}},\;30\;{\text{dB}}]\), the conventional spectrum-search estimators MVDR, MUSIC, and WSF use the grid size \(\mu_{1} = 10^{ - 5}\), so the total number of searches is \(MF_{1,m} = 8 \times 100{,}001 = 800{,}008\) for MVDR and MUSIC and \(F_{1}^{M} = 100{,}001^{8}\) for WSF.

Fig. 3
figure 3

MSE versus number of iterations. a WSF-PSO and WSF-GSA. b WSF-PSOGSA and WSF-FAGSA

Fig. 4
figure 4

MSE versus number of particles. a WSF-PSO and WSF-PSOGSA. b WSF-GSA and WSF-FAGSA

Table 5 Number of calculated fitness (cost) functions analysis

The iteration numbers \(N_{m,h}\) and the particle numbers \(N_{m,p}\) of the WSF with SI estimators are given in Table 5. Figure 5 shows the MSE versus the number of blocks for SNR = \(15{\text{ dB}}\) and \(M = 8\) users. Clearly, increasing the number of blocks improves the performance of all estimators, and the figure shows that all estimators have the same convergence speed. It also shows that all the WSF with SI estimators perform slightly better than the MUSIC estimator, especially at low SNR. Figure 6 shows the MSE versus the number of users for SNR = \(15{\text{ dB}}\), with the CFOs \(\varepsilon\) of the active users generated as uniformly distributed over the interval \(( - 0.5, \, 0.5)\). As the number of users increases, the MAI also increases and the estimation performance of all estimators deteriorates. The proposed WSF-FAGSA estimator remains more accurate than the other estimators as the number of active users increases, especially for \(M > 8\), and it achieves better accuracy than the ESPRIT, MVDR, and root-MVDR estimators. Figure 7 shows the MSE of the CFO estimation versus SNR. Note that in the presence of noise and MAI, locating the roots precisely becomes more ambiguous for the root-MUSIC estimator in uplink cases; improper root selection due to the local minimum/maximum problem often results in serious bias in the parameter estimation, especially in low-SNR environments. We observe from the figure that the MSE performance of the proposed WSF-FAGSA estimator is relatively stable, especially at lower SNR, and slightly better than that of MUSIC and root-MUSIC.
Moreover, the CFO estimation accuracy of the search-based MUSIC and MVDR estimators is governed by the search grid size, whereas the SI-based estimators do not suffer from this limitation. In short, the proposed WSF-FAGSA estimator reduces the huge computational load by reducing the product of the number of particles and the number of iterations, while maintaining near-ideal estimation performance.

Fig. 5
figure 5

MSE versus number of blocks

Fig. 6
figure 6

MSE versus number of users

Fig. 7
figure 7

MSE versus SNR

7 Conclusion

In the interleaved OFDMA uplink system, the CFO estimation problem based on the WSF criterion is nonlinear and high-dimensional, so the number of complex multiplications required for a spectrum search grows exponentially, imposing a huge computational load. The simulation results have confirmed that WSF-GSA not only reduces the computational load but also performs slightly better than MUSIC. In addition, to improve the performance of WSF-GSA without strict initialization requirements, this paper proposes dynamically adjusting the gravitational constant in GSA with a fuzzy adaptive method. The proposed WSF-FAGSA greatly reduces the required number of iterations, and thereby the computational complexity of the spectrum search. Moreover, the proposed WSF-FAGSA requires only one OFDMA data block to achieve high-resolution CFO estimation.

Availability of data and materials

Data supporting the findings of this study can be found in the supplementary material to this article (e.g., figures, parameters within chapters).

Code availability

Data supporting the findings of this study can be found in the supplementary material to this article (e.g., figures, parameters within chapters).

References

  1. H. Abdzadeh-Ziabari, W.P. Zhu, M.N.S. Swamy, Timing and frequency synchronization and doubly selective channel estimation for OFDMA uplink. IEEE Trans. Circuits Syst. II Express Briefs 67(1), 62–66 (2020)


  2. M.O. Pun, M. Morelli, C.C.J. Kuo, Maximum-likelihood synchronization and channel estimation for OFDMA uplink transmissions. IEEE Trans. Commun. 54(4), 726–736 (2006)


  3. Z. Wang, Y. Xin, G. Mathew, Iterative carrier-frequency offset estimation for generalized OFDMA uplink transmission. IEEE Trans. Wirel. Commun. 8(3), 1373–1383 (2009)


  4. H.T. Hsieh, W.R. Wu, Blind maximum-likelihood carrier-frequency-offset estimation for interleaved OFDMA uplink systems. IEEE Trans. Vehicular Technol. 60(1), 160–173 (2010)


  5. C.C. Shen, A.C. Chang, Blind CFO estimation based on decision directed MVDR approach for interleaved OFDMA uplink systems. IEICE Trans. Commun. 97(1), 137–145 (2014)


  6. S.S. Li, S.M. Phoong, Blind estimation of multiple carrier frequency offsets in OFDMA uplink systems employing virtual carriers. IEEE Access 8, 2915–2923 (2020)


  7. Z. Cao, U. Tureli, Y.D. Yao, Deterministic multiuser carrier-frequency offset estimation for interleaved OFDMA uplink. IEEE Trans. Commun. 52(9), 1585–1594 (2004)


  8. P. Sheeba, P. Muneer, V.P.T. Ijyas, M. Usman, M. Wajid, Equalization techniques for SC-FDMA systems under radio imbalances at both transmitter and receiver. Wirel. Pers. Commun. 129(4), 2563–2581 (2023)


  9. N.H. Cheng, C.C. Chen, Y.F. Wang, Y.F. Chen, Adaptive carrier frequency offset estimation in interference environments for OFDMA uplink systems. IEEJ Trans. Electric. Electronic Eng. 18(10), 1664–1672 (2023)

    Article  Google Scholar 

  10. J.H. Lee, S. Lee, K.J. Bang, Carrier frequency offset estimation using ESPRIT for interleaved OFDMA uplink systems. IEEE Trans. Vehicular Technol. 56(5), 3227–3231 (2007)

    Article  Google Scholar 

  11. R. Miao, J. Xiong, L. Gui, J. Sun, Iterative approach for multiuser carrier frequency offset estimation in interleaved OFDMA uplink. IEEE Trans. Consumer Electron. 55(3), 1039–1044 (2009)

    Article  Google Scholar 

  12. Y. He, X. Shi, Y. Wang, Y. Shen, A fine frequency estimation algorithm based on DFT samples and fuzzy logic for a real sinusoid. IET Radar Sonar Navig. 16(8), 1364–1375 (2022)

    Article  Google Scholar 

  13. K.A. Kumar, M.V.R. Vittal, A new approach for transient CFO estimation by weighted subspace fitting in OFDM communications system. Int. J. Sci. Eng. Technol. Res. 4(22), 4295–4299 (2015)

    Google Scholar 

  14. S. Keerthi, K. Ashwini, M.V. Vijaykumar, Survey paper on swarm intelligence. Int. J. Comput. Appl. 115(5), 8–12 (2015)

    Google Scholar 

  15. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in IEEE International Conference on Neural Networks, pp. 1942–1948, 1995

  16. A.C. Chang, C.C. Shen, Blind carrier frequency offset estimation based on particle swarm optimization searching for interleaved OFDMA uplink. IEICE Trans. Fundament. Electron. Commun. Comput. Sci. 99(9), 1740–1744 (2016)

    Article  Google Scholar 

  17. E. Rashedi, S. Nezamabadi, S. Saryazdi, GSA: A gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)

    Article  Google Scholar 

  18. G.Y. Ding, D.Q. Zhang, and H. Liu, “An adaptive disruption based gravitational search algorithm with time-varying velocity limitation,” in 2016 35th Chinese Control Conference, pp. 9201–9206, 2016

  19. S. Mirjalili and S.Z.M. Hashim, “A new hybrid PSOGSA algorithm for function optimization,” in International Conference on Computer and Information Application, pp. 374–377, 2010

  20. A.E. Eiben, C.A. Schippers, On evolutionary exploration and exploitation. Fund. Inform. 35(1–4), 35–50 (1998)

    Google Scholar 

  21. S.H. Zahiri, Fuzzy gravitational search algorithm an approach for data mining. Iran. J. Fuzzy Syst. 9(1), 21–37 (2012)

    MathSciNet  Google Scholar 

  22. K. Qian, W. Li, W. Qian, Hybrid gravitational search algorithm based on fuzzy logic. IEEE Access 5, 24520–24532 (2017)

    Article  Google Scholar 

  23. F.S. Saeidi-Khabisi and E. Rashedi, “Fuzzy gravitational search algorithm,” in International Conference on Computer and Knowledge Engineering, pp. 156–160, 2013

  24. F. Olivas, F. Valdez, and O. Castillo, “A fuzzy system for dynamic parameter adaptation in gravitational search algorithm,” in 2016 IEEE 8th International Conference on Intelligent Systems, pp. 146–151, 2016

  25. N. Das and A.P. P., “FB-GSA: A fuzzy bi-level programming based gravitational search algorithm for unconstrained optimization,” Appl. Intell., 51(4), 1857–1887, 2021

  26. M. Lin, Y. Zeng, T. Wu, Q. Wang, L. Fang, S. Guo, GSA-fuzz: Optimize seed mutation with gravitational search algorithm. Sec. Commun. Netw. 15, 2022 (2022)

    Google Scholar 

  27. S. Duman, N. Yorukeren, I.H. Altas, A novel modified hybrid PSOGSA based on fuzzy logic for non-convex economic dispatch problem with valve-point effect. Int. J. Electr. Power Energy Syst. 64, 121–135 (2015)

    Article  Google Scholar 

  28. B. Song, Y. Xiao, X. Lin, Design of fuzzy PI controller for brushless DC motor based on PSO-GSA algorithm. Syst. Sci. Control Eng. 8(1), 67–77 (2020)

    Article  Google Scholar 

  29. J.S. Wang, J.D. Song, Function optimization and parameter performance analysis based on gravitation search algorithm. Algorithms 9(3), 1–13 (2015)

    Google Scholar 

  30. E.H. Mamdani, S. Assilian, An experiment in linguistic synthesis with a fuzzy logic controller. Int. J. Man Mach. Stud. 7(1), 1–13 (1975)

    Article  Google Scholar 

  31. T. Takagi, M. Sugeno, Fuzzy identification of systems and its application to modeling and control. IEEE Trans. Systems Man and Cybernetics 15(1), 116–132 (1985)

    Article  Google Scholar 

  32. T.J. Ross, Fuzzy Logic with Engineering Applications. 4th Edition, Wiley, NJ, Sept. 2016.

  33. R.A. Horn, C.R. Johnson, Matrix Analysis (Cambridge University Press, Cambridge, 1985)

    Book  Google Scholar 

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Contributions

C-CS was involved in conceptualization, methodology, validation, project administration, supervision, writing—original draft, and writing—reviewing and editing. M-HZ was responsible for software, investigation, visualization, and writing—original draft.

Corresponding author

Correspondence to Chih-Chang Shen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Shen, CC., Zhang, MH. Blind CFO estimation based on weighted subspace fitting criterion with fuzzy adaptive gravitational search algorithm. EURASIP J. Adv. Signal Process. 2024, 5 (2024). https://doi.org/10.1186/s13634-023-01091-2
