A low complexity Hopfield neural network turbo equalizer
- Hermanus C Myburgh^{1} and
- Jan C Olivier^{2}
DOI: 10.1186/1687-6180-2013-15
© Myburgh and Olivier; licensee Springer. 2013
Received: 11 June 2012
Accepted: 15 January 2013
Published: 8 February 2013
Abstract
In this article, it is proposed that a Hopfield neural network (HNN) can be used to jointly equalize and decode information transmitted over a highly dispersive Rayleigh fading multipath channel. It is shown that a HNN MLSE equalizer and a HNN MLSE decoder can be merged in order to realize a low complexity joint equalizer and decoder, or turbo equalizer, without additional computational complexity due to the decoder. The computational complexity of the Hopfield neural network turbo equalizer (HNN-TE) is almost quadratic in the coded data block length and approximately independent of the channel memory length, which makes it an attractive choice for systems with extremely long memory. Results show that the performance of the proposed HNN-TE closely matches that of a conventional turbo equalizer in systems with short channel memory, and achieves near-matched filter performance in systems with extremely large memory.
Keywords
Turbo equalizer; Hopfield neural network; Rayleigh fading; Low complexity
Introduction
Turbo equalization has its roots in turbo coding, first proposed in [1] for the iterative decoding of concatenated convolutional codes. In [2, 3], the idea of turbo decoding was applied, with great success, to systems transmitting convolutionally coded information through multipath channels in order to improve the bit-error rate (BER) performance. Because they contain a maximum a posteriori (MAP) equalizer and a MAP decoder, the computational complexity of these turbo equalizers is exponential in the channel impulse response (CIR) length as well as the encoder constraint length, limiting their effective use in systems where the channel memory and/or the encoder constraint length is large; the MAP equalizer is the main culprit due to long channel delay spreads.
To mitigate the high computational complexity of the MAP equalizer, several authors have proposed suboptimal equalizers to replace it in the turbo equalizer structure, with complexity that is linear in the channel memory length. In [4, 5], it was shown how a minimum mean squared error (MMSE) equalizer can be used in a turbo equalizer by modifying it to make use of prior information provided in the form of extrinsic information. Various authors have also proposed the use of decision feedback equalizers (DFE) that use extrinsic information as prior information to improve the BER performance after each iteration [6–10]. In [11, 12] it was proposed that a soft interference canceler (SIC) be modified to make use of soft information so that it can serve as a low complexity equalizer in a turbo equalizer, and in [13] the way in which a SIC incorporates soft information was modified to improve performance. These equalizers inherently suffer from noise enhancement (MMSE) and error propagation (DFE and SIC), which limit their performance and hence the overall performance of the turbo equalizers in which they are used. Because none of them produce exact MAP estimates of the transmitted coded information, the performance of the turbo equalizer in which they are implemented will ultimately be worse than when an optimal MAP equalizer is utilized, due to the performance loss incurred at the output of these suboptimal equalizers. This trade-off always exists: what one gains in complexity, one loses in performance.
In this article, we propose to combat the performance loss due to suboptimal (or non-MAP) equalizer output, by combining the equalizer and the decoder into one equalizer/decoder structure, so that all information can be processed as a whole, and not be passed between the equalizer and the decoder. This vision has successfully been implemented and demonstrated by the authors in [14] using a dynamic Bayesian network (DBN) as basis. In this paper, however, we show that using the Hopfield neural network (HNN) [15] as the underlying structure also works well, and has a number of advantages as discussed in [16].
In [16], the authors proposed a maximum likelihood sequence estimation (MLSE) equalizer which is able to equalize M-ary quadrature amplitude modulation (M-QAM) modulated signals in systems with extremely long memory. The complexity of the equalizer proposed in [16] is quadratic in the data block length and approximately independent of the channel memory length. Its superior computational complexity is due to the high parallelism of its underlying neural network structure. It uses the HNN structure which enables fast parallel processing of information between neurons, producing ML sequence estimates at the output. It was shown in [16] that the performance of the HNN MLSE equalizer closely matches that of the Viterbi MLSE equalizer in short channels, and near-optimally recombines the energy spread across the channel in order to achieve near-matched filter performance when the channel is extremely long.
The HNN has also been shown by several authors to be able to decode balanced check codes [17, 18]. These codes, together with methods for encoding and decoding, were first proposed in [19], but it was later shown in [17, 18] that single codeword decoding can also be performed using the HNN. To date, balanced codes are the only class of codes that can be decoded with the HNN. The ability of the HNN to detect binary patterns allows it to determine the ML codeword from a predefined set of codewords. In this article it is shown that the HNN ML decoder can be extended to allow for the ML estimation of a sequence of balanced check codes. It is therefore extendable to an MLSE decoder.
In this article, a novel turbo equalizer is developed by combining the HNN MLSE equalizer developed in [16] with an HNN MLSE decoder (used to decode balanced codes, and only balanced codes), resulting in the Hopfield neural network turbo equalizer (HNN-TE). The HNN-TE can replace a conventional turbo equalizer (CTE), made up of an equalizer/decoder pair, in systems with extremely long memory, where the coded symbols are interleaved before transmission through the multipath channel. The HNN-TE is able to equalize and decode (balanced codes) in systems with extremely long memory, since its computational complexity is nearly independent of the channel memory length. Like the HNN MLSE equalizer, its superior complexity characteristics are due to the high parallelism of its underlying neural network structure.
This article is structured as follows. Section 2 presents a brief discussion on Turbo Equalization. Section 3 discusses the HNN in general, while the HNN MLSE equalizer and the HNN MLSE decoder are discussed in Section 4, followed by a discussion on the fusion of the two in order to realize the HNN-TE. In Section 5, the results of a computational complexity analysis of the HNN-TE and a CTE are presented, followed by a memory requirements analysis in Section 6. Simulation results are presented in Section 7 and conclusions are drawn in Section 8.
Turbo equalization
The turbo equalizer uses two maximum a posteriori (MAP) algorithms, one to equalize the ISI-corrupted received symbols and one to decode the equalized coded symbols, which iteratively exchange information. With each iteration of the system, extrinsic information is exchanged between the two MAP algorithms in order to improve the ability of each algorithm to produce correct estimates. This principle was first applied to turbo coding, where both MAP algorithms were MAP decoders [3], but has since been applied to iterative equalization and decoding (today known as turbo equalization) to improve the BER performance of the coded multipath communication system [2–5].
${L}_{e}^{D}\left({\widehat{\mathbf{s}}}^{\prime}\right)$ is interleaved to produce ${L}_{e}^{D}\left(\widehat{\mathbf{s}}\right)$. ${L}_{e}^{D}\left(\widehat{\mathbf{s}}\right)$ is used together with the received symbols r in the MAP equalizer, with ${L}_{e}^{D}\left(\widehat{\mathbf{s}}\right)$ serving to provide prior information on the received symbols. The equalizer again produces posterior information ${L}^{E}\left(\widehat{\mathbf{s}}\right)$ of the interleaved coded symbols. This process continues until the outputs of the decoder settle, or until a predefined stop-criterion is met [3]. After termination, the output $L\left(\widehat{\mathbf{u}}\right)$ of the decoder gives an estimate of the source symbols.
The proposed HNN-TE is modeled on one HNN structure, implying that there is no exchange of extrinsic information between its constituent parts. Rather, all information is intrinsically processed in an iterative fashion.
The Hopfield neural network
The HNN was first proposed in [15], where it was shown that the HNN can be used to solve combinatorial optimization problems as well as pattern recognition problems. In [15] Hopfield and Tank derived an energy function and showed how the HNN can be used to minimize this energy function, thus producing near-ML sequence estimates at the output of the neurons. To enable the HNN to solve an optimization problem, the cost function of that problem is mapped to the HNN energy function, whereafter the HNN iteratively minimizes its energy function and performs near-MLSE. Similarly, to enable the HNN to solve a binary pattern recognition problem, the autocorrelation matrix of the set of patterns is used as the weights between the HNN neurons, while the noisy pattern to be recognized is used as the input to the HNN. Again, the HNN iterates in order to produce the near-ML pattern at the output of the HNN.
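As a toy illustration of the pattern-recognition mode described above, the sketch below stores bipolar patterns in the autocorrelation (Hebbian) weight matrix and recovers a stored pattern from a noisy probe. The function name and the synchronous sign-update rule are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def hopfield_recall(patterns, probe, iters=10):
    """Recover the stored bipolar (+/-1) pattern closest to a noisy probe.

    Weights are the autocorrelation (Hebbian) matrix of the stored
    patterns, with self-connections zeroed out.
    """
    P = np.asarray(patterns, dtype=float)  # shape (num_patterns, N)
    W = P.T @ P                            # autocorrelation weight matrix
    np.fill_diagonal(W, 0.0)               # no self-feedback
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(iters):
        s = np.sign(W @ s)                 # synchronous bipolar update
        s[s == 0] = 1.0                    # break ties
    return s
```

With two stored patterns and a probe equal to one of them with a single bit flipped, the network settles on the correct stored pattern.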
Energy function
Equation (9) is used to derive the HNN MLSE equalizer, decoder, and eventually the HNN-TE.
Iterative system
where u = {u_{1}, u_{2}, …, u_{N}}^{T} is the internal state of the HNN, s = {s_{1}, s_{2}, …, s_{N}}^{T} is the vector of estimated symbols, g(·) is the decision function associated with each neuron, and i indicates the iteration number. β(·) is a function used for optimization as in [14].
The estimated symbol vector $\left[{\mathbf{s}}_{i}^{T}|{\mathbf{s}}_{q}^{T}\right]$ is updated with each iteration. $\left[{\mathbf{I}}_{i}^{T}|{\mathbf{I}}_{q}^{T}\right]$ contains the best blind estimate for s, and is therefore used as input to the network, while $\left[\begin{array}{cc}{\mathbf{X}}_{i}& {\mathbf{X}}_{q}^{T}\\ {\mathbf{X}}_{q}& {\mathbf{X}}_{i}\end{array}\right]$contains the cross-correlation information of the received symbols. The system produces the MLSE estimates in s after Z iterations.
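A minimal sketch of this iterative update, assuming tanh for the decision function g(·) and a simple increasing annealing schedule for β (both are illustrative stand-ins for the exact choices in (11) and [14]):

```python
import numpy as np

def hnn_mlse_iterate(X, I, Z=50):
    """Iterate the HNN system: u <- X s + I, then s <- g(beta(z) * u).

    g is taken to be tanh and beta(z) a simple increasing (annealing)
    schedule; both are assumed, illustrative choices.
    """
    s = np.zeros(len(I))                  # start from a neutral state
    for z in range(Z):
        beta = 0.1 * (z + 1)              # assumed annealing schedule
        u = X @ s + I                     # internal state update
        s = np.tanh(beta * u)             # neuron decision function
    return np.sign(s)                     # hard symbol decisions
```

For a small cross-correlation matrix with positive couplings and a positive input vector, the iterates saturate toward the all-ones decision.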
The Hopfield neural network turbo equalizer
In this section, the derivation of the HNN-TE is discussed, by first deriving its constituent parts—the HNN MLSE equalizer and the HNN MLSE decoder—and then showing how the HNN-TE is finally realized by combining the two.
HNN MLSE equalizer
The HNN MLSE equalizer was developed by the authors in [16], where it was applied to a single-carrier M-QAM modulated system with extremely long memory, with CIR lengths as long as L = 250, although this is not a limit. Its ability to equalize signals in highly dispersive channels stems from the fact that its complexity grows quadratically with the transmitted data block size while being approximately independent of the channel memory length. In the following, the HNN MLSE equalizer developed in [16] is presented without repeating its derivation.
where k = 1, 2, 3, …, L - 1 and i and q denote the in-phase and quadrature components of the CIR coefficients.
where r^{(i)} and r^{(q)} are the respective in-phase and quadrature components of the received symbols r = {r_{1}, r_{2}, …, r_{N + L - 1}}^{T}.
By deriving the cross-correlation matrix X and the input vector I in (10), the model in (9) is complete, and the iterative system in (11) can be used to equalize M-QAM modulated symbols transmitted through a channel with large CIR lengths. The HNN MLSE equalizer was evaluated in [16] for BPSK and 16-QAM with performance reaching the matched-filter bound in extremely long channels.
HNN MLSE decoder
The HNN has been shown to be able to decode balanced codes [17, 18]. A binary word of length m is said to be balanced if it contains exactly m / 2 ones and m / 2 zeros [19]. In addition, balanced codes have the property that no codeword is contained in another word, which simply means that positions of ones in one codeword will never be a subset of the positions of ones in another codeword [19].
The encoding process described in [19] flips the first k bits of the uncoded word in order to ensure the resulting codeword is “balanced,” whereafter the position k is appended to the balanced codeword before transmission. This encoding process is not followed here; instead, the set of m = 2^{n} balanced codewords is determined beforehand, after which encoding is performed by mapping a set of n bits to 2^{n} balanced binary phase-shift keying (BPSK) symbols, or by mapping a set of 2n bits to 2^{n} balanced quaternary quadrature amplitude modulation (4-QAM) symbols.
The HNN decoder developed here uses the set of predetermined codewords to determine the connection weights describing the level of connection between the neurons. It has previously been shown how a HNN can be used to decode one balanced code at a time, but the HNN MLSE decoder we derive here is able to simultaneously decode any number of concatenated codewords in order to provide the ML transmitted sequence of codewords. After HNN MLSE decoding, the ML BPSK or 4-QAM codewords of length 2^{n} are demapped to n bits (or 2n bits for 4-QAM), which completes the decoding process.
Codeword selection
The authors have found that Walsh-Hadamard codes, widely used in code division multiple access (CDMA) systems [20], are desirable codes for this application, due to their near-balance and orthogonality characteristics. Walsh-Hadamard codes are linear codes that map n bits to 2^{n} codewords, where any two codewords are separated by a Hamming distance of 2^{n-1} and each codeword has a Hamming weight of 2^{n-1}.
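The distance property above is easy to verify numerically. The sketch below builds the 0/1 Walsh-Hadamard codeword set via the standard Sylvester construction and checks that any two distinct codewords differ in exactly 2^{n-1} positions; the function names and the ±1-to-0/1 mapping convention are assumptions for illustration.

```python
import numpy as np

def walsh_hadamard(n):
    """Sylvester construction of the 2^n x 2^n Hadamard matrix,
    returned as 0/1 codewords (one codeword per row)."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])   # Sylvester doubling step
    return ((1 - H) // 2).astype(int)      # map +1 -> 0, -1 -> 1

def pairwise_distances(C):
    """All pairwise Hamming distances between rows of C."""
    m = len(C)
    return [int(np.sum(C[a] != C[b]))
            for a in range(m) for b in range(a + 1, m)]
```

For n = 3, every pair of distinct codewords is at Hamming distance 2^{n-1} = 4.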
1. Reverse the order in which the first 2^{n-1} codewords appear in the new set.
2. Flip the bits of the reversed set of 2^{n-1} codewords.
It is clear that C_{8} is balanced in the sense that the rows (codewords) as well as the columns are balanced. It has been found that the HNN decoder performs better when the rows as well as the columns are balanced. The Hamming weight of C_{8} is still 2^{n-1} = 2^{2}, while the Hamming distance becomes slightly larger than 2^{n-1} = 2^{2}.
By following the steps described above, any set of Walsh-Hadamard codes of length 2^{ n } can be used to create a new set of 2^{ n } balanced codes of length m = 2^{ n }.
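The row- and column-balance property that the construction above aims for can be confirmed with a small helper; the function name is hypothetical:

```python
import numpy as np

def is_balanced_code(C):
    """Check that every row (codeword) and every column of the 0/1
    code matrix C contains an equal number of ones and zeros."""
    C = np.asarray(C)
    rows_ok = np.all(C.sum(axis=1) == C.shape[1] // 2)  # row balance
    cols_ok = np.all(C.sum(axis=0) == C.shape[0] // 2)  # column balance
    return bool(rows_ok and cols_ok)
```

A 2 × 2 example: the identity-like set {01, 10} is balanced in both senses, while {00, 11} fails the row test.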
Encoding
Encoding is performed by mapping a group of n bits to 2^{ n } BPSK symbols, or a group of 2n bits to 2^{ n } 4-QAM symbols. Before encoding, the set of codewords ${\mathbf{C}}_{{2}^{n}}$ derived from the set of Walsh-Hadamard codes ${\mathbf{H}}_{{2}^{n}}$ is made bipolar by converting the 0’s to -1.
BPSK encoding
Input-output relationship for BPSK encoder
| n | 2^{n} | R_s | R_c |
|---|---|---|---|
| 1 | 2 | 1/2 | 1/2 |
| 2 | 4 | 1/2 | 1/2 |
| 3 | 8 | 3/8 | 3/8 |
| 4 | 16 | 1/4 | 1/4 |
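The BPSK mapping described above can be sketched as follows, assuming a bipolar codeword set C with 2^{n} rows (one codeword per row); the helper name and toy codebook are illustrative:

```python
import numpy as np

def encode_bpsk(bits, C):
    """Map each group of n bits to one of the 2^n bipolar codewords
    in C (rows), concatenating the selected codewords."""
    C = np.asarray(C)
    n = int(np.log2(len(C)))              # bits consumed per codeword
    out = []
    for k in range(0, len(bits), n):
        idx = int("".join(str(b) for b in bits[k:k + n]), 2)
        out.extend(C[idx])                # emit the selected codeword
    return np.array(out)
```

With a toy n = 1 codebook of two bipolar codewords, the bit stream 0, 1 becomes the concatenation of codewords 0 and 1.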
4-QAM encoding
Input-output relationship for 4-QAM encoder
| 2n | 2^{n} | R_s | R_c |
|---|---|---|---|
| 2 | 2 | 1 | 1/2 |
| 4 | 4 | 1 | 1/2 |
| 6 | 8 | 3/4 | 3/8 |
| 8 | 16 | 1/2 | 1/4 |
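A corresponding 4-QAM sketch: each group of 2n bits selects one codeword for the in-phase component and one for the quadrature component, giving a complex codeword C^{(i)} + jC^{(q)}. The split of the 2n bits into in-phase and quadrature index bits is an assumed convention.

```python
import numpy as np

def encode_4qam(bits, C):
    """Map each group of 2n bits to a complex codeword: the first n
    bits select the in-phase codeword and the next n bits the
    quadrature codeword, giving c = C[i] + 1j*C[q]."""
    C = np.asarray(C)
    n = int(np.log2(len(C)))
    out = []
    for k in range(0, len(bits), 2 * n):
        i_idx = int("".join(map(str, bits[k:k + n])), 2)       # I index
        q_idx = int("".join(map(str, bits[k + n:k + 2 * n])), 2)  # Q index
        out.extend(C[i_idx] + 1j * C[q_idx])
    return np.array(out)
```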
Decoder
The HNN is known to be able to recognize input patterns from a set of stored patterns [15, 21]. In the context of the HNN decoder, the patterns are the balanced codewords, and the HNN is able to determine the ML codeword from a set of codewords. This has been demonstrated before but only for one codeword at a time [17]. Therefore, if a received data block contains P codewords, the HNN will have to be applied P times in order to determine P ML codewords. However, the HNN MLSE decoder developed here is able to determine the most likely sequence of codewords using a single HNN. The HNN MLSE decoder is therefore applied once to a received data block containing any number of codewords.
After the HNN MLSE decoder has determined the sequence of most likely transmitted codewords, the codewords are demapped by calculating the Euclidean distance between each ML codeword and each codeword in ${\mathbf{C}}_{{2}^{n}}$ for BPSK modulation, or each codeword in ${\mathbf{C}}_{{2}^{n}}^{\left(i\right)}\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}j{\mathbf{C}}_{{2}^{n}}^{\left(q\right)}$ for 4-QAM modulation. For each received codeword, the index of the stored codeword with the smallest Euclidean distance is converted to bits, which completes the decoding phase.
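The demapping step can be sketched as a nearest-codeword search; the function name and the binary index-to-bits convention are illustrative assumptions:

```python
import numpy as np

def demap(codeword_est, C, n):
    """Find the stored codeword closest (in Euclidean distance) to the
    decoder output and convert its index back to n bits."""
    C = np.asarray(C, dtype=complex)
    d = np.linalg.norm(C - codeword_est, axis=1)  # distance to each codeword
    idx = int(np.argmin(d))                       # ML codeword index
    return [int(b) for b in format(idx, "0{}b".format(n))]
```

For a two-codeword set, a noisy estimate close to the second codeword demaps to the bit 1.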
The derivation of the HNN MLSE decoder entails the calculation of the cross-correlation matrices X _{ i } and X _{ q }, and the input vectors I _{ i } and I _{ q } in (10). The HNN MLSE decoder is first derived for the decoding of a single codeword, after which it will be extended to enable the decoding of any number of codewords simultaneously. Derivations are performed for 4-QAM only, since the BPSK HNN MLSE decoder is a simplification of its 4-QAM counterpart.
Single codeword decoding
where c is of length 2^{n} and n is a vector containing complex samples from the distribution $\mathcal{N}(\mu ,{\sigma}^{2})$, where μ = 0 and σ is the noise standard deviation. After the ML codeword is detected, each detected codeword (of length 2^{n}) can be mapped back to n bits for BPSK modulation and 2n bits for 4-QAM modulation.
Multiple codeword decoding
It was shown how the HNN can be used to decode single codewords, but the HNN decoder can be extended in order to detect ML transmitted sequences of codewords. This step is crucial in our quest of merging the HNN decoder with the HNN MLSE equalizer, since the HNN MLSE equalizer detects ML sequences of transmitted symbols. If the transmitted information is encoded, these sequences contain multiple codewords, and hence the HNN decoder must be extended to detect not only single codewords, but codeword sequences.
where $\mathbf{X}=\left[\begin{array}{ll}{\mathbf{X}}_{i}& {\mathbf{X}}_{q}^{T}\\ {\mathbf{X}}_{q}& {\mathbf{X}}_{i}\end{array}\right]$ is repeated on the diagonal P times and ∅ implies that the rest of X ^{(P)} is empty, containing only 0’s.
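The block-diagonal extension of the single-codeword matrix can be sketched with a Kronecker product; the helper name is hypothetical:

```python
import numpy as np

def extend_block_diagonal(X, P):
    """Repeat the single-codeword cross-correlation matrix X on the
    diagonal P times; all off-diagonal blocks are zero."""
    return np.kron(np.eye(P, dtype=X.dtype), X)
```

For P = 2 and a 2 × 2 matrix X, the result is a 4 × 4 matrix with X on the diagonal blocks and zeros elsewhere.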
where c _{ p } is the p th codeword of length 2^{ n }, where p = 1, 2, …, P, and n is of length 2^{ n } P and contains complex samples from the distribution $\mathcal{N}(\mu ,{\sigma}^{2})$, where μ = 0 and σ is the noise standard deviation.
The extended cross-correlation matrix and input vector in (36) and (37) can now be used to estimate the ML sequence of transmitted codewords, after which each detected codeword (of length 2^{ n }) can be mapped back to n bits for BPSK modulation and 2n bits for 4-QAM modulation.
HNN turbo equalizer
The HNN-TE is an amalgamation of the HNN MLSE equalizer and the HNN MLSE decoder, which were discussed in the previous sections. In this section it is explained how the HNN MLSE equalizer and the HNN MLSE decoder are combined in order to perform iterative joint equalization and decoding (turbo equalization) using a single HNN structure. The HNN-TE is able to jointly equalize and decode BPSK and 4-QAM coded modulated signals in systems with highly dispersive multipath channels, with extremely low computational complexity compared to traditional turbo equalizers which employ a MAP equalizer/decoder pair.
System model
Since we already have complete models for the HNN MLSE equalizer and decoder, the combination of the two is fairly straight-forward. In order to distinguish between equalizer and decoder parameters a number of redefinitions are in order. For the HNN MLSE equalizer the correlation matrix and input vector relating to (10), as derived in (22) and (27), are now X _{ E } and I _{ E }, respectively, and will henceforth be referred to as “equalizer correlation matrix” and “equalizer input vector”. Similarly the HNN MLSE decoder correlation matrix and input vector relating to (10), as derived in (36) and (37), are now X _{ D } and I _{ D }, respectively, and will henceforth be referred to as “decoder correlation matrix” and “decoder input vector”.
The rationale behind the addition of the equalizer correlation matrix and the normalized decoder correlation matrix is that the connection weights in the decoder correlation matrix should bias those of the equalizer correlation matrix. Since X _{TE} contains X _{ E } offset by ${\mathbf{X}}_{D}^{\left(\text{norm}\right)}$, joint equalization and decoding is made possible.
With the new correlation matrix X _{TE} and input vector I _{TE}, the HNN-TE model is complete, and the iterative system in (11) can be used to jointly equalize and decode (turbo equalize) the transmitted coded information.
Transformation
Consequently the new channel matrix Q, rather than the conventional channel matrix H in (3), is used in the calculation of the equalizer correlation matrix X_{E} derived in (22). Due to the above transformation, Q does not contain the CIR coefficients on regular diagonals as H does. Rather, each column in Q (of length N_{c}) contains a unique random arrangement of all CIR coefficients (with the remaining N_{c} - L elements in the column equal to 0), dictated by the randomization introduced by the random interleaver. This randomization results from first multiplying the channel matrix H with the interleaving matrix J and then deinterleaving by multiplying the result with J^{T} (see (44)). Deinterleaving places the first CIR coefficient (h_{0}) on the diagonal of Q, restoring the one-to-one relationship between each element in r and each corresponding coded transmitted symbol in c.
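The diagonal property can be checked numerically under simplifying assumptions: a square banded channel matrix stands in for H, and a random permutation matrix stands in for the interleaver J of (44). A permutation similarity transform leaves the diagonal entries equal to h_{0}.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 8, 3
h = np.array([1.0, 0.5, 0.25])        # toy CIR, h[0] first

# Banded convolution matrix with h[0] on the main diagonal (toy model).
H = sum(np.eye(N, k=-k) * h[k] for k in range(L))

# Random interleaver modeled as a permutation matrix J.
perm = rng.permutation(N)
J = np.eye(N)[perm]

# Interleave, then deinterleave: Q = J^T (H J).
Q = J.T @ H @ J
```

Every diagonal entry of Q equals h_{0}, while the remaining CIR coefficients are scattered within each column by the interleaver.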
Computational complexity analysis
The computational complexity of the HNN-TE is compared to that of the CTE by calculating the number of computations performed for each received data block, for a fixed set of system parameters. The number of computations is normalized by the coded data block length so as to factor out the effect of the length of the transmitted data block, which allows us to present the computational complexity in terms of the number of computations required per received coded symbol. Since the complexity of the HNN-TE is quadratically related to the coded data block length, a change in N_{c} will still affect the normalized computational complexity.
where N_{c} is the coded data block length, L is the CIR length, M is the modulation constellation alphabet size (2 for BPSK and 4 for 4-QAM), Z_{HNN-TE} is the number of iterations, and k is the codeword length, which was chosen as k = 8 for a code rate of R_{c} = 3/8. The first term in (46) is associated with the calculation of X_{i} in (19) and X_{q} in (21). The second term is associated with the calculation of Λ in (28) and Ω in (29). The third term is for the iterative calculation of the ML coded symbols in (11), while the second-to-last term in (46) is for the trivial ML detection of codewords after joint iterative MLSE equalization and decoding. The last term is due to the transformation in (43) through (45). Note that in the first and last terms of (46) the exponent is 2.376: it has been shown in [23] that the complexity of multiplying two N × N matrices can be reduced from O(N^{3}) to O(N^{2.376}). However, since cubic-complexity matrix multiplication is still preferred in practical applications for its ease of implementation, (46) serves as a lower bound on the HNN-TE computational complexity.
Therefore, the computational complexity of the HNN-TE is approximately quadratic at best, or more realistically cubic in the coded data block length (N _{ c }), quadratic in the modulation constellation alphabet size (M), quadratic in the codeword length k, and approximately independent of the channel memory length (L).
where Z _{CTE} is the number of iterations and Q is the number of equalizer states, determined by 2^{ L-1} for BPSK modulation and 4^{ L-1} for 4-QAM. The first term in (47) is associated with the equalizer while the second term is associated with MAP decoding. The computational complexity of the CTE is therefore linear in the coded data block length (N _{ c }), exponential in the channel memory length (L) and quadratic in the codeword length (k).
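The contrast in scaling can be illustrated with crude proxies that keep only the dominant orders stated above; these are not equations (46) and (47) themselves, and all constants are dropped.

```python
# Simplified scaling proxies (assumed stand-ins, not (46)/(47)):
# HNN-TE ~ Nc^2.376 per block, CTE ~ Nc * M^(L-1) per block,
# both normalized by Nc as in the text.

def hnnte_proxy(Nc):
    """Per-symbol HNN-TE cost proxy: independent of channel memory L."""
    return Nc ** 2.376 / Nc

def cte_proxy(Nc, M, L):
    """Per-symbol CTE cost proxy: exponential in channel memory L."""
    return Nc * M ** (L - 1) / Nc
```

For N_c = 1280 and BPSK (M = 2), the CTE proxy is far cheaper at L = 5 but dwarfs the HNN-TE proxy at L = 50, mirroring the crossover described in the text.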
Memory requirements analysis
HNN-TE memory requirements
| Description | Size (floats) |
|---|---|
| Correlation matrices X_{i}, X_{q} | $2{N}_{c}^{2}$ |
| Input vectors I_{i}, I_{q} | 2N_{c} |
| HNN internal state u | 2N_{c} |
| HNN output s | 2N_{c} |
| Channel matrix HJ | (N_{c} + L - 1)^{2} |
| Interleaver matrix J | (N_{c} + L - 1)^{2} |
| Received vector r | N_{c} + L - 1 |
CTE memory requirements
| Description | Size (floats) |
|---|---|
| Equalizer forward and backward messages | N_{c} M^{L-1} |
| Decoder forward and backward messages | $\left(\frac{{N}_{c}}{k}\right)k={N}_{c}$ |
| Equalizer output L^{E}(ŝ) | N_{c} |
| Decoder output L^{D}(ŝ’) | N_{c} |
| Channel impulse response H | L |
| Received vector r | N_{c} |
Simulation results
The proposed HNN-TE was evaluated in a mobile fading environment for BPSK and 4-QAM modulation at a code rate of R_{c} = n/k = 3/8. To simulate the fading effect of mobile channels, the Rayleigh fading simulator proposed in [24] was used to generate uncorrelated fading vectors. When imperfect channel state information (CSI) was assumed, least squares channel estimation was performed using various numbers of training symbols in the transmitted data block. When perfect CSI was assumed, the CIR coefficients were “estimated” by taking the mean of the uncorrelated fading vectors. Simulations were performed for short and long channels at various mobile speeds, and to compare the performance of the HNN-TE and a CTE in short mobile fading channels for BPSK modulation. For all simulations the uncoded data block length was N_{u} = 480 and the coded data block length was N_{c} = 1280. In all simulations the frequency was hopped four times during each data block in order to further reduce the BER. For the CTE the number of iterations was Z = 5; instead of using a fixed number of iterations for the HNN-TE, we use the function $Z({E}_{b}/{N}_{0})=2\left({5}^{({E}_{b}/{N}_{0})/5}\right)$ (which produces Z(E_{b}/N_{0}) = {2, 4, 8, 10, 22, 55} for E_{b}/N_{0} = {0, 2.5, 5, 7.5, 10}) to determine the number of iterations to be used at a given E_{b}/N_{0}.
It is clear from Figures 10, 11, and 12 that the performance of the HNN-TE is superior to that of a CTE in short channels at varying mobile speeds, for both perfect and imperfect CSI. The HNN-TE outperforms the CTE in short channels, but at higher computational complexity: Figure 6 shows that the HNN-TE is more computationally complex than the CTE for short channels (L < 10) when the coded data block length is relatively small (N_{c} < 1280). However, the complexity of the HNN-TE is vastly superior to that of the CTE for long channels. It might be argued that the HNN-TE performs better than the CTE simply because more iterations are used, but this is not the case: it is stated in [3] that the performance of the CTE cannot be improved significantly beyond Z = 3 iterations in Rayleigh fading channels. The performance gain of the HNN-TE over the CTE is therefore probably due to the fact that the HNN-TE processes all the available information internally as a whole, without having to exchange information between the equalizer and the decoder, as is the case in a CTE.
From Figures 13, 14, 15, 16 and 17 it is clear that the HNN-TE is able to jointly equalize and decode BPSK and 4-QAM modulated signals transmitted through extremely long mobile fading channels. While the data rate with 4-QAM modulation is twice that with BPSK modulation, the performance is worse for 4-QAM, because Gray coding cannot be applied during coded modulation.
Conclusion
In this article, a low complexity turbo equalizer was developed which is able to jointly equalize and decode BPSK and 4-QAM coded-modulated signals in systems transmitting interleaved information through multipath fading channels. It uses the Hopfield neural network as framework and hence was fittingly named the Hopfield neural network turbo equalizer, or HNN-TE. The HNN-TE is able to turbo equalize coded modulated BPSK and 4-QAM signals in short as well as long multipath channels, slightly outperforming the CTE for short channels, although at higher computational cost. In long channels, however, the computational complexity of the HNN-TE is vastly superior to that of the CTE. The computational complexity of the HNN-TE is almost quadratic in the coded data block length, while being approximately independent of the CIR length, which enables it to turbo equalize signals in systems with several hundred multipath elements. It was also demonstrated that the HNN-TE is less susceptible than the CTE to channel estimation errors, and that it outperforms the CTE in fast fading channels. The performance of the HNN-TE for BPSK modulation is better than for 4-QAM modulation, since Gray coding cannot be employed due to the coded modulation explained in this article, while the complexity for 4-QAM is slightly higher.
References
- Berrou C, Glavieux A, Thitimajshima P: Near Shannon limit error-correcting coding and decoding: Turbo-Codes. Int. Conf. Commun. 1993, 1064-1070.
- Douillard C, Jezequel M, Berrou C, Picart A, Didier P, Glavieux A: Iterative correction of intersymbol interference: turbo-equalization. Europ. Trans. Telecommun. 1995, 6: 507-511. doi:10.1002/ett.4460060506
- Bauch G, Khorram H, Hagenauer J: Iterative equalization and decoding in mobile communication systems. Proceedings of the European Personal Mobile Communications Conference (EPMCC) 1997, 307-312.
- Koetter R, Singer AC, Tuchler M: Turbo equalization. IEEE Signal Process. Mag. 2004, 21(1): 67-80. doi:10.1109/MSP.2004.1267050
- Tuchler M, Koetter R, Singer AC: Turbo equalization: principles and new results. IEEE Trans. Commun. 2002, 50(5): 754-767. doi:10.1109/TCOMM.2002.1006557
- Lopes RR, Barry JR: The soft feedback equalizer for turbo equalization of highly dispersive channels. IEEE Trans. Commun. 2006, 54(5): 783-788.
- Duel-Hallen A, Heegard C: Delayed decision-feedback sequence estimation. IEEE Trans. Commun. 1989, 37(5): 428-436. doi:10.1109/26.24594
- Eyuboglu MV, Qureshi SU: Reduced-state sequence estimation with set partitioning and decision feedback. IEEE Trans. Commun. 1988, 36(1): 13-20. doi:10.1109/26.2724
- Wu J, Leong S, Lee K, Xiao C, Olivier JC: Improved BDFE using a priori information for turbo equalization. IEEE Trans. Wirel. Commun. 2008, 7(1): 233-240.
- Lou H, Xiao C: Soft-decision feedback turbo equalization for multilevel modulations. IEEE Trans. Signal Process. 2011, 59(1): 186-195.
- Fijalkow I, Pirez D, Roumy A, Ronger S, Vila P: Improved interference cancellation for turbo-equalization. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing 2000, 416-419.
- Wang X, Poor HV: Iterative (turbo) soft interference cancellation and decoding for coded CDMA. IEEE Trans. Commun. 1999, 47(7): 1046-1061. doi:10.1109/26.774855
- Ampeliotis D, Berberidis K: Low complexity turbo equalization for high data rate. EURASIP J. Commun. Network. 2006, 2006(ID 25686): 1-12.
- Myburgh HC, Olivier JC: Reduced complexity turbo equalization using a dynamic Bayesian network. EURASIP J. Adv. Signal Process. 2012. (Submitted for publication)
- Hopfield JJ, Tank DW: Neural computation of decisions in optimization problems. Biol. Cybern. 1985, 52: 1-25. doi:10.1007/BF00336930
- Myburgh HC, Olivier JC: Low complexity MLSE equalization in highly dispersive Rayleigh fading channels. EURASIP J. Adv. Signal Process. 2010, 2010(ID 874874). http://asp.eurasipjournals.com/content/2010/1/874874
- Wiberg N: A class of Hopfield decodable codes. Proceedings of the IEEE-SP Workshop on Neural Networks for Signal Processing 1993, 88-97.
- Wang Q, Bhargava VK: An error correcting neural network. IEEE Pacific Rim Conference on Communications, Computers and Signal Processing 1989, 530-533.
- Knuth D: Efficient balanced codes. IEEE Trans. Inf. Theory 1986, IT-32(1): 51-53.
- Proakis JG: Digital Communications. New York: McGraw-Hill, International Edition; 2001.
- Hopfield JJ: Artificial neural networks. IEEE Circ. Dev. Mag. 1988, 4(5): 3-10.
- Hebb DO: The Organization of Behavior. New York: Wiley; 1949.
- Coppersmith D, Winograd S: Matrix multiplication via arithmetic progressions. J. Symbolic Comput. 1990, 9(3): 251-280. doi:10.1016/S0747-7171(08)80013-2
- Zheng YR, Xiao C: Improved models for the generation of multiple uncorrelated Rayleigh fading waveforms. IEEE Commun. Lett. 2002, 6: 256-258.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.