
A low complexity Hopfield neural network turbo equalizer

Abstract

In this article, it is proposed that a Hopfield neural network (HNN) can be used to jointly equalize and decode information transmitted over a highly dispersive Rayleigh fading multipath channel. It is shown that a HNN MLSE equalizer and a HNN MLSE decoder can be merged in order to realize a low complexity joint equalizer and decoder, or turbo equalizer, without additional computational complexity due to the decoder. The computational complexity of the Hopfield neural network turbo equalizer (HNN-TE) is almost quadratic in the coded data block length and approximately independent of the channel memory length, which makes it an attractive choice for systems with extremely long memory. Results show that the performance of the proposed HNN-TE closely matches that of a conventional turbo equalizer in systems with short channel memory, and achieves near-matched filter performance in systems with extremely large memory.

Introduction

Turbo equalization has its roots in turbo coding, first proposed in [1] for the iterative decoding of concatenated convolutional codes. In [2, 3], the idea of turbo decoding was applied to systems transmitting convolutional coded information through multipath channels in order to improve the bit-error rate (BER) performance, with great success. Due to the computational complexity of its constituent maximum a posteriori (MAP) equalizer and MAP decoder, the computational complexity of these turbo equalizers is exponentially related to the channel impulse response (CIR) length as well as the encoder constraint length, limiting their effective use in systems where the channel memory and/or the encoder constraint length is large, with the MAP equalizer being the main culprit due to long channel delay spreads.

To mitigate the high computational complexity exhibited by the MAP equalizer, several authors have proposed suboptimal equalizers to replace the optimal MAP equalizer in the turbo equalizer structure, with complexity that is linearly related to the channel memory length. In [4, 5], it was shown how a minimum mean squared error (MMSE) equalizer can be used in a turbo equalizer by modifying it to make use of prior information provided in the form of extrinsic information. Various authors have also proposed the use of decision feedback equalizers (DFE) while using extrinsic information as prior information to improve the BER performance after each iteration [6–10]. Also, in [11, 12] it was proposed that a soft interference canceler (SIC) be modified to make use of soft information in order to be used as a low complexity equalizer in a turbo equalizer, and in [13] the way in which a SIC incorporates soft information was modified to improve performance. These equalizers inherently suffer from noise enhancement (MMSE) and error propagation (DFE and SIC), which limit their performance, and hence the overall performance of the turbo equalizers in which they are used. Because none of these equalizers are able to produce exact MAP estimates of the transmitted coded information, the performance of the turbo equalizer in which they are implemented will ultimately be worse than when an optimal MAP equalizer is utilized, due to the performance loss incurred at the output of these suboptimal equalizers. This trade-off always exists: if one gains in terms of complexity, one loses in terms of performance.

In this article, we propose to combat the performance loss due to suboptimal (or non-MAP) equalizer output by combining the equalizer and the decoder into one equalizer/decoder structure, so that all information can be processed as a whole rather than being passed between the equalizer and the decoder. This vision has successfully been implemented and demonstrated by the authors in [14] using a dynamic Bayesian network (DBN) as a basis. In this paper, however, we show that using the Hopfield neural network (HNN) [15] as the underlying structure also works well, and has a number of advantages as discussed in [16].

In [16], the authors proposed a maximum likelihood sequence estimation (MLSE) equalizer which is able to equalize M-ary quadrature amplitude modulation (M-QAM) modulated signals in systems with extremely long memory. The complexity of the equalizer proposed in [16] is quadratic in the data block length and approximately independent of the channel memory length. Its superior computational complexity is due to the high parallelism of its underlying neural network structure. It uses the HNN structure which enables fast parallel processing of information between neurons, producing ML sequence estimates at the output. It was shown in [16] that the performance of the HNN MLSE equalizer closely matches that of the Viterbi MLSE equalizer in short channels, and near-optimally recombines the energy spread across the channel in order to achieve near-matched filter performance when the channel is extremely long.

The HNN has also been shown by several authors to be able to decode balanced check codes [17, 18]. These codes, together with methods for encoding and decoding, were first proposed in [19], but it was later shown in [17, 18] that single codeword decoding can also be performed using the HNN. To date, balanced codes are the only class of codes that can be decoded with the HNN. The ability of the HNN to detect binary patterns allows it to determine the ML codeword from a predefined set of codewords. In this paper it is shown that the HNN ML decoder can be extended to allow for the ML estimation of a sequence of balanced check codes; it is therefore extendable to an MLSE decoder.

In this article, a novel turbo equalizer is developed by combining the HNN MLSE equalizer developed in [16] and a HNN MLSE decoder (used to decode balanced codes, and only balanced codes), resulting in the Hopfield neural network turbo equalizer (HNN-TE), which can be used as a replacement for a conventional turbo equalizer (CTE), made up of an equalizer/decoder pair, in systems with extremely long memory, where the coded symbols are interleaved before transmission through the multipath channel. The HNN-TE is able to equalize and decode (balanced codes) in systems with extremely long memory, since its computational complexity is nearly independent of the channel memory length. Like the HNN MLSE equalizer, its superior complexity characteristics are due to the high parallelism of its underlying neural network structure.

This article is structured as follows. Section 2 presents a brief discussion on Turbo Equalization. Section 3 discusses the HNN in general, while the HNN MLSE equalizer and the HNN MLSE decoder are discussed in Section 4, followed by a discussion on the fusion of the two in order to realize the HNN-TE. In Section 5, the results of a computational complexity analysis of the HNN-TE and a CTE are presented, followed by a memory requirements analysis in Section 6. Simulation results are presented in Section 7 and conclusions are drawn in Section 8.

Turbo equalization

Turbo equalizers are used in multipath communication systems that make use of encoders, usually convolutional encoders, to encode the source symbol sequence $\mathbf{s}$ of length $N_u$ (using some generator matrix $\mathbf{G}$) at a rate $R_c$ to produce coded information symbols $\mathbf{c}$ of length $N_c = N_u / R_c$, after which the coded symbols $\mathbf{c}$ are interleaved with a random interleaver before modulation and transmission. The interleaved coded symbols $\acute{\mathbf{c}}$ are transmitted through a multipath channel with a CIR length of L, causing inter-symbol interference (ISI) among adjacent transmitted symbols at the receiver. At the receiver, the received ISI-corrupted coded symbols are matched filtered and used as input to the turbo equalizer. The received symbol sequence is given by

$$\mathbf{r} = \mathbf{H}\acute{\mathbf{c}} + \mathbf{n},$$
(1)

where $\mathbf{n}$ is a vector containing complex Gaussian noise samples and $\acute{\mathbf{c}}$ is the interleaved coded symbol sequence, given by

$$\acute{\mathbf{c}} = \mathbf{J}\mathbf{G}^T\mathbf{s},$$
(2)

where $\mathbf{J}$ is an $N_c \times N_c$ interleaver matrix, and $\mathbf{H}$ is the $N_c \times N_c$ channel matrix

$$\mathbf{H} = \begin{bmatrix}
h_0 & 0 & \cdots & & & 0\\
h_1 & h_0 & \ddots & & & \vdots\\
\vdots & h_1 & \ddots & \ddots & & \\
h_{L-1} & \vdots & \ddots & \ddots & 0 & \vdots\\
0 & h_{L-1} & & h_1 & h_0 & 0\\
\vdots & \ddots & \ddots & \vdots & h_1 & h_0
\end{bmatrix}.$$
(3)

The turbo equalizer uses two maximum a posteriori (MAP) algorithms, one to equalize the ISI-corrupted received symbols and one to decode the equalized coded symbols, which iteratively exchange information. With each iteration of the system, extrinsic information is exchanged between the two MAP algorithms in order to improve the ability of each algorithm to produce correct estimates. This principle was first applied to turbo coding, where both MAP algorithms were MAP decoders [1], but has since been applied to iterative equalization and decoding (today known as turbo equalization) to reduce the BER of the coded multipath communication system [2–5].

Figure 1 shows the structure of the turbo equalizer. The MAP equalizer takes as input the ISI-corrupted received symbols $\mathbf{r}$ and the extrinsic information $L_e^D(\hat{\acute{s}})$ (where $\hat{\acute{s}}$ denotes the interleaved coded symbol estimates), and produces a sequence of posterior transmitted symbol log-likelihood ratio (LLR) estimates $L^E(\hat{\acute{s}})$ (note that $L_e^D(\hat{\acute{s}})$ is zero during the first iteration). Extrinsic information $L_e^E(\hat{\acute{s}})$ is determined by

$$L_e^E(\hat{\acute{s}}) = L^E(\hat{\acute{s}}) - L_e^D(\hat{\acute{s}}),$$
(4)
Figure 1

Turbo equalizer. Shows the structure of the turbo equalizer.

which is deinterleaved to produce $L_e^E(\hat{s})$, used as input to the MAP decoder to produce a sequence of posterior coded symbol LLR estimates $L^D(\hat{s})$. $L^D(\hat{s})$ is used together with $L_e^E(\hat{s})$ to determine the extrinsic information

$$L_e^D(\hat{s}) = L^D(\hat{s}) - L_e^E(\hat{s}).$$
(5)

$L_e^D(\hat{s})$ is interleaved to produce $L_e^D(\hat{\acute{s}})$, which is used together with the received symbols $\mathbf{r}$ in the MAP equalizer, where it serves as prior information on the received symbols. The equalizer again produces posterior information $L^E(\hat{\acute{s}})$ on the interleaved coded symbols. This process continues until the outputs of the decoder settle, or until a predefined stop criterion is met [3]. After termination, the output $L(\hat{u})$ of the decoder gives an estimate of the source symbols.

The proposed HNN-TE is modeled on one HNN structure, implying that there is no exchange of extrinsic information between its constituent parts. Rather, all information is intrinsically processed in an iterative fashion.

The Hopfield neural network

The HNN was first proposed in [15], where it was shown that the HNN can be used to solve combinatorial optimization problems as well as pattern recognition problems. In [15] Tank and Hopfield derived an energy function and showed how the HNN can be used to minimize this energy function, thus producing near-ML sequence estimates at the output of the neurons. To enable the HNN to solve an optimization problem, the cost function of that problem is mapped to the HNN energy function, whereafter the HNN iteratively minimizes its energy function and performs near-MLSE. Similarly, to enable the HNN to solve a binary pattern recognition problem, the autocorrelation matrix of the set of patterns is used as the weights between the HNN neurons, while the noisy pattern to be recognized is used as the input to the HNN. Again, the HNN iterates to perform pattern recognition, producing the near-ML pattern at the output of the HNN.

Energy function

The Hopfield energy function can be written as [16]

$$L = -\frac{1}{2}\mathbf{s}^T\mathbf{X}\mathbf{s} - \mathbf{I}^T\mathbf{s},$$
(6)

where $\mathbf{I}$ is a column vector with N elements and $\mathbf{X}$ is an $N \times N$ matrix. Assuming that $\mathbf{s}$, $\mathbf{I}$, and $\mathbf{X}$ contain complex values, these variables can be written as [16]

$$\mathbf{s} = \mathbf{s}_i + j\mathbf{s}_q, \qquad \mathbf{I} = \mathbf{I}_i + j\mathbf{I}_q, \qquad \mathbf{X} = \mathbf{X}_i + j\mathbf{X}_q,$$
(7)

where s and I are column vectors of length N, and X is an N × N matrix, where subscripts i and q are used to denote the respective in-phase and quadrature components. X is the cross-correlation matrix of the complex received symbols such that

$$\mathbf{X}^H = \mathbf{X}_i^T - j\mathbf{X}_q^T = \mathbf{X}_i + j\mathbf{X}_q,$$
(8)

implying that it is Hermitian. Therefore $\mathbf{X}_i^T = \mathbf{X}_i$ is symmetric and $\mathbf{X}_q^T = -\mathbf{X}_q$ is skew-symmetric [16]. By using the symmetry properties of $\mathbf{X}_i$ and $\mathbf{X}_q$, (6) can be expanded and rewritten as

$$L = -\frac{1}{2}\left(\mathbf{s}_i^T\mathbf{X}_i\mathbf{s}_i + \mathbf{s}_q^T\mathbf{X}_i\mathbf{s}_q + 2\mathbf{s}_q^T\mathbf{X}_q\mathbf{s}_i\right) - \mathbf{s}_i^T\mathbf{I}_i - \mathbf{s}_q^T\mathbf{I}_q,$$

which in turn can be rewritten as [16]

$$L = -\frac{1}{2}\begin{bmatrix}\mathbf{s}_i^T \,|\, \mathbf{s}_q^T\end{bmatrix}\begin{bmatrix}\mathbf{X}_i & \mathbf{X}_q^T\\ \mathbf{X}_q & \mathbf{X}_i\end{bmatrix}\begin{bmatrix}\mathbf{s}_i\\ \mathbf{s}_q\end{bmatrix} - \begin{bmatrix}\mathbf{I}_i^T \,|\, \mathbf{I}_q^T\end{bmatrix}\begin{bmatrix}\mathbf{s}_i\\ \mathbf{s}_q\end{bmatrix}.$$
(9)

It is clear that (9) is in the form of (6), where the variables in (6) are substituted as follows:

$$\mathbf{s}^T = \begin{bmatrix}\mathbf{s}_i^T \,|\, \mathbf{s}_q^T\end{bmatrix}, \qquad \mathbf{I}^T = \begin{bmatrix}\mathbf{I}_i^T \,|\, \mathbf{I}_q^T\end{bmatrix}, \qquad \mathbf{X} = \begin{bmatrix}\mathbf{X}_i & \mathbf{X}_q^T\\ \mathbf{X}_q & \mathbf{X}_i\end{bmatrix}.$$
(10)

Equation (9) is used to derive the HNN MLSE equalizer, decoder, and eventually the HNN-TE.

Iterative system

The HNN minimizes the energy function (6) with the following iterative system:

$$\mathbf{u}^{(i)} = \mathbf{X}\mathbf{s}^{(i)} + \mathbf{I}, \qquad \mathbf{s}^{(i+1)} = g\!\left(\beta(i)\,\mathbf{u}^{(i)}\right),$$
(11)

where $\mathbf{u} = \{u_1, u_2, \ldots, u_N\}^T$ is the internal state of the HNN, $\mathbf{s} = \{s_1, s_2, \ldots, s_N\}^T$ is the vector of estimated symbols, $g(\cdot)$ is the decision function associated with each neuron, and $i$ indicates the iteration number. $\beta(\cdot)$ is a function used for optimization as in [14].

The estimated symbol vector $\begin{bmatrix}\mathbf{s}_i^T \,|\, \mathbf{s}_q^T\end{bmatrix}^T$ is updated with each iteration. $\begin{bmatrix}\mathbf{I}_i^T \,|\, \mathbf{I}_q^T\end{bmatrix}^T$ contains the best blind estimate of $\mathbf{s}$, and is therefore used as input to the network, while $\begin{bmatrix}\mathbf{X}_i & \mathbf{X}_q^T\\ \mathbf{X}_q & \mathbf{X}_i\end{bmatrix}$ contains the cross-correlation information of the received symbols. The system produces the MLSE estimates in $\mathbf{s}$ after Z iterations.
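To make the update rule concrete, the following is a minimal NumPy sketch of the iterative system in (11). The tanh decision function and the linear annealing schedule for $\beta(i)$ are illustrative assumptions; they are design choices not prescribed by the equations above.

```python
import numpy as np

def hnn_mlse(X, I, Z=25):
    """Minimal sketch of the HNN iterative system in (11).

    X : (N, N) connection (cross-correlation) matrix
    I : (N,) input vector (the best blind estimate of s)
    Z : number of iterations
    """
    s = np.zeros(len(I))                 # neuron outputs start at zero
    for i in range(Z):
        u = X @ s + I                    # internal state: u = X s + I
        beta = (i + 1) / Z               # assumed annealing schedule beta(i)
        s = np.tanh(beta * u)            # decision function g(.) taken as tanh
    return np.sign(s)                    # hard decisions on the bipolar symbols
```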

The Hopfield neural network turbo equalizer

In this section, the derivation of the HNN-TE is discussed, by first deriving its constituent parts—the HNN MLSE equalizer and the HNN MLSE decoder—and then showing how the HNN-TE is finally realized by combining the two.

HNN MLSE equalizer

The HNN MLSE equalizer was developed by the authors in [16]. It was applied to a single-carrier M-QAM modulated system with extremely long memory, where the CIR length was as long as L = 250, although this is not a hard limit. The ability of the HNN MLSE equalizer to equalize signals in systems with highly dispersive channels stems from the fact that its complexity grows quadratically with the transmitted data block size while being approximately independent of the channel memory length. In the following, the HNN MLSE equalizer developed in [16] is presented without repeating its derivation.

It was shown in [16] that the correlation matrices $\mathbf{X}_i$ and $\mathbf{X}_q$ in (10), for a single-carrier system transmitting a data block of length N through a multipath channel of length L, with the data block initiated and terminated by L - 1 known tail symbols of value 1 for BPSK modulation and $\frac{1}{\sqrt{2}} + j\frac{1}{\sqrt{2}}$ for M-QAM modulation, can be determined by

$$\mathbf{X}_i = -\begin{bmatrix}
0 & \alpha_1 & \cdots & \alpha_{L-1} & & 0\\
\alpha_1 & 0 & \alpha_1 & & \ddots & \\
\vdots & \alpha_1 & \ddots & \ddots & & \alpha_{L-1}\\
\alpha_{L-1} & & \ddots & & \alpha_1 & \vdots\\
& \ddots & & \alpha_1 & 0 & \alpha_1\\
0 & & \alpha_{L-1} & \cdots & \alpha_1 & 0
\end{bmatrix}$$
(12)

and

$$\mathbf{X}_q = -\begin{bmatrix}
0 & \gamma_1 & \cdots & \gamma_{L-1} & & 0\\
-\gamma_1 & 0 & \gamma_1 & & \ddots & \\
\vdots & -\gamma_1 & \ddots & \ddots & & \gamma_{L-1}\\
-\gamma_{L-1} & & \ddots & & \gamma_1 & \vdots\\
& \ddots & & -\gamma_1 & 0 & \gamma_1\\
0 & & -\gamma_{L-1} & \cdots & -\gamma_1 & 0
\end{bmatrix}$$
(13)

where $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_{L-1}\}$ and $\gamma = \{\gamma_1, \gamma_2, \ldots, \gamma_{L-1}\}$ are, respectively, determined by

$$\alpha_k = \sum_{j=0}^{L-k-1} h_j^{(i)} h_{j+k}^{(i)} + \sum_{j=0}^{L-k-1} h_j^{(q)} h_{j+k}^{(q)},$$
(14)

and

$$\gamma_k = \sum_{j=0}^{L-k-1} h_j^{(q)} h_{j+k}^{(i)} - \sum_{j=0}^{L-k-1} h_j^{(i)} h_{j+k}^{(q)},$$
(15)

where k = 1, 2, 3, …, L - 1 and i and q denote the in-phase and quadrature components of the CIR coefficients.

Upon inspection of (12) through (15) it is easy to see that $\mathbf{X}_i$ and $\mathbf{X}_q$ can be determined using the respective in-phase and quadrature components of the $N \times N$ channel matrix, with the in-phase and quadrature components of the CIR, $\mathbf{h}^{(i)} = \{h_0^{(i)}, h_1^{(i)}, \ldots, h_{L-1}^{(i)}\}^T$ and $\mathbf{h}^{(q)} = \{h_0^{(q)}, h_1^{(q)}, \ldots, h_{L-1}^{(q)}\}^T$, on the diagonals such that

$$\mathbf{H}^{(i)} = \begin{bmatrix}
h_0^{(i)} & 0 & \cdots & & & 0\\
h_1^{(i)} & h_0^{(i)} & \ddots & & & \vdots\\
\vdots & h_1^{(i)} & \ddots & \ddots & & \\
h_{L-1}^{(i)} & \vdots & \ddots & \ddots & 0 & \vdots\\
0 & h_{L-1}^{(i)} & & h_1^{(i)} & h_0^{(i)} & 0\\
\vdots & \ddots & \ddots & \vdots & h_1^{(i)} & h_0^{(i)}
\end{bmatrix}$$
(16)

and

$$\mathbf{H}^{(q)} = \begin{bmatrix}
h_0^{(q)} & 0 & \cdots & & & 0\\
h_1^{(q)} & h_0^{(q)} & \ddots & & & \vdots\\
\vdots & h_1^{(q)} & \ddots & \ddots & & \\
h_{L-1}^{(q)} & \vdots & \ddots & \ddots & 0 & \vdots\\
0 & h_{L-1}^{(q)} & & h_1^{(q)} & h_0^{(q)} & 0\\
\vdots & \ddots & \ddots & \vdots & h_1^{(q)} & h_0^{(q)}
\end{bmatrix}.$$
(17)

Using H (i) and H (q) the correlation matrices in (12) and (13) can be determined by

$$\mathbf{X}_i = -\left(\mathbf{H}^{(i)T}\mathbf{H}^{(i)} + \mathbf{H}^{(q)T}\mathbf{H}^{(q)}\right),$$
(18)

which is simply

$$\mathbf{X}_i = -\,\mathrm{Re}\{\mathbf{H}^H\mathbf{H}\}.$$
(19)

Also

$$\mathbf{X}_q = -\left(\mathbf{H}^{(q)T}\mathbf{H}^{(i)} - \mathbf{H}^{(i)T}\mathbf{H}^{(q)}\right),$$
(20)

which is

$$\mathbf{X}_q = -\,\mathrm{Im}\{\mathbf{H}^H\mathbf{H}\}.$$
(21)

$\mathbf{X}_i$ and $\mathbf{X}_q$ are then used to construct the combined correlation matrix in (10):

$$\mathbf{X} = \begin{bmatrix}\mathbf{X}_i & \mathbf{X}_q^T\\ \mathbf{X}_q & \mathbf{X}_i\end{bmatrix}.$$
(22)

It was also shown in [16] that the input vectors I i and I q in (10) are determined by

$$\mathbf{I}_i = \begin{bmatrix}
\lambda_1 - \rho(\alpha_1 + \gamma_1 + \cdots + \alpha_{L-1} + \gamma_{L-1})\\
\lambda_2 - \rho(\alpha_2 + \gamma_2 + \cdots + \alpha_{L-1} + \gamma_{L-1})\\
\lambda_3 - \rho(\alpha_3 + \gamma_3 + \cdots + \alpha_{L-1} + \gamma_{L-1})\\
\vdots\\
\lambda_{L-1} - \rho(\alpha_{L-1} + \gamma_{L-1})\\
\lambda_L\\
\vdots\\
\lambda_{N-L+1}\\
\lambda_{N-L+2} - \rho(\alpha_{L-1} - \gamma_{L-1})\\
\vdots\\
\lambda_{N-2} - \rho(\alpha_3 - \gamma_3 + \cdots + \alpha_{L-1} - \gamma_{L-1})\\
\lambda_{N-1} - \rho(\alpha_2 - \gamma_2 + \cdots + \alpha_{L-1} - \gamma_{L-1})\\
\lambda_N - \rho(\alpha_1 - \gamma_1 + \cdots + \alpha_{L-1} - \gamma_{L-1})
\end{bmatrix}$$
(23)

and

$$\mathbf{I}_q = \begin{bmatrix}
\omega_1 - \rho(\alpha_1 - \gamma_1 + \cdots + \alpha_{L-1} - \gamma_{L-1})\\
\omega_2 - \rho(\alpha_2 - \gamma_2 + \cdots + \alpha_{L-1} - \gamma_{L-1})\\
\omega_3 - \rho(\alpha_3 - \gamma_3 + \cdots + \alpha_{L-1} - \gamma_{L-1})\\
\vdots\\
\omega_{L-1} - \rho(\alpha_{L-1} - \gamma_{L-1})\\
\omega_L\\
\vdots\\
\omega_{N-L+1}\\
\omega_{N-L+2} - \rho(\alpha_{L-1} + \gamma_{L-1})\\
\vdots\\
\omega_{N-2} - \rho(\alpha_3 + \gamma_3 + \cdots + \alpha_{L-1} + \gamma_{L-1})\\
\omega_{N-1} - \rho(\alpha_2 + \gamma_2 + \cdots + \alpha_{L-1} + \gamma_{L-1})\\
\omega_N - \rho(\alpha_1 + \gamma_1 + \cdots + \alpha_{L-1} + \gamma_{L-1})
\end{bmatrix},$$
(24)

where $\rho = 1/\sqrt{2}$ for M-QAM modulation, while $\rho = 1$ in $\mathbf{I}_i$ and $\rho = 0$ in $\mathbf{I}_q$ for BPSK modulation, and $\Lambda = \{\lambda_1, \lambda_2, \ldots, \lambda_N\}$ is determined by

$$\lambda_k = \sum_{j=0}^{L-1} r_{j+k}^{(i)} h_j^{(i)} + \sum_{j=0}^{L-1} r_{j+k}^{(q)} h_j^{(q)},$$
(25)

and Ω = {ω 1, ω 2, …, ω N } is determined by

$$\omega_k = \sum_{j=0}^{L-1} r_{j+k}^{(q)} h_j^{(i)} - \sum_{j=0}^{L-1} r_{j+k}^{(i)} h_j^{(q)},$$
(26)

where k = 1, 2, 3, …, N with i and q again denoting the in-phase and quadrature components of the respective elements. The combined input vector in (10) is therefore constructed as

$$\mathbf{I} = \begin{bmatrix}\mathbf{I}_i\\ \mathbf{I}_q\end{bmatrix}.$$
(27)

Note that Λ and Ω can easily be determined by

$$\Lambda = \mathbf{H}^{(i)T}\mathbf{r}^{(i)} + \mathbf{H}^{(q)T}\mathbf{r}^{(q)},$$
(28)

and

$$\Omega = \mathbf{H}^{(i)T}\mathbf{r}^{(q)} - \mathbf{H}^{(q)T}\mathbf{r}^{(i)},$$
(29)

where $\mathbf{r}^{(i)}$ and $\mathbf{r}^{(q)}$ are the respective in-phase and quadrature components of the received symbols $\mathbf{r} = \{r_1, r_2, \ldots, r_{N+L-1}\}^T$.

By deriving the cross-correlation matrix X and the input vector I in (10), the model in (9) is complete, and the iterative system in (11) can be used to equalize M-QAM modulated symbols transmitted through a channel with large CIR lengths. The HNN MLSE equalizer was evaluated in [16] for BPSK and 16-QAM with performance reaching the matched-filter bound in extremely long channels.
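As a concrete illustration, the sketch below assembles the equalizer parameters directly from the CIR and the received vector via (19), (21), (22), and (27)–(29). It assumes $\mathbf{H}$ is the $(N + L - 1) \times N$ banded convolution matrix with the CIR on its diagonals, and it omits the tail-symbol correction terms of (23) and (24) for brevity; the function name and interface are illustrative.

```python
import numpy as np

def equalizer_params(h, r, N):
    """Sketch of the HNN MLSE equalizer parameters.

    h : complex CIR of length L
    r : complex received vector of length N + L - 1
    N : data block length
    Returns the combined correlation matrix X of (22) and input vector I
    of (27), with the tail-symbol corrections of (23)-(24) omitted.
    """
    L = len(h)
    H = np.zeros((N + L - 1, N), dtype=complex)
    for col in range(N):
        H[col:col + L, col] = h               # h_0..h_{L-1} down each column

    G = H.conj().T @ H                        # H^H H
    Xi, Xq = -np.real(G), -np.imag(G)         # (19) and (21)
    X = np.block([[Xi, Xq.T], [Xq, Xi]])      # (22)

    lam = np.real(H.conj().T @ r)             # Lambda, equivalent to (28)
    omg = np.imag(H.conj().T @ r)             # Omega, equivalent to (29)
    I = np.concatenate([lam, omg])            # (27)
    return X, I
```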

HNN MLSE decoder

The HNN has been shown to be able to decode balanced codes [17, 18]. A binary word of length m is said to be balanced if it contains exactly m / 2 ones and m / 2 zeros [19]. In addition, balanced codes have the property that no codeword is contained in another word, which simply means that positions of ones in one codeword will never be a subset of the positions of ones in another codeword [19].

The encoding process described in [19] flips the first k bits of the uncoded word to ensure that the resulting codeword is “balanced,” whereafter the position k is appended to the balanced codeword before transmission. This encoding process is not followed here; instead, the set of $m = 2^n$ balanced codewords is determined beforehand, after which encoding is performed by mapping a set of n bits to a balanced binary phase-shift keying (BPSK) codeword of $2^n$ symbols, or a set of 2n bits to a balanced quaternary quadrature amplitude modulation (4-QAM) codeword of $2^n$ symbols.

The HNN decoder developed here uses the set of predetermined codewords to determine the connection weights describing the level of connection between the neurons. It has previously been shown how a HNN can be used to decode one balanced codeword at a time, but the HNN MLSE decoder derived here is able to simultaneously decode any number of concatenated codewords in order to provide the ML transmitted sequence of codewords. After HNN MLSE decoding, the ML BPSK or 4-QAM codewords of length $2^n$ are demapped to n bits (or 2n bits for 4-QAM), which completes the decoding process.

Codeword selection

The authors have found that Walsh-Hadamard codes, widely used in code division multiple access (CDMA) systems [20], are desirable codes for this application, due to their near-balance and orthogonality characteristics. Walsh-Hadamard codes are linear codes that map n bits to one of $2^n$ codewords of length $2^n$, where the codewords have a Hamming distance of $2^{n-1}$ and a Hamming weight of $2^{n-1}$.

Walsh-Hadamard codes are not “balanced” as described above: the first codeword is always all-ones, while the positions of ones in some codewords are subsets of those in others, violating both requirements for balance. Instead of using the complete set of Walsh-Hadamard codes to map n bits to $2^n$ codewords, a subset of codes in the Walsh-Hadamard matrix is selected, duplicated, and modified so as to construct a new set of $2^n$ codewords of length $2^n$. Consider the set of length $2^n = 8$ Walsh-Hadamard codes

$$\mathbf{H}_8 = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1\\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1\\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1\\
1 & 0 & 0 & 1 & 0 & 1 & 1 & 0
\end{bmatrix}.$$
(30)

To construct a set of balanced codewords from $\mathbf{H}_8$, a subset of $2^{n-1}$ codewords is selected and used as the first $2^{n-1}$ codewords in the new set. The second set of $2^{n-1}$ codewords is constructed as follows:

1. Reverse the order in which the first $2^{n-1}$ codewords appear in the new set.

2. Flip the bits of the reversed set of $2^{n-1}$ codewords.

Assuming the subset selected from $\mathbf{H}_8$ above is the set $\mathbf{H}_{8,4:7}$ (implying that codewords in rows 4 through 7 are selected), the resulting set of $2^n$ balanced codewords is

$$\mathbf{C}_8 = \begin{bmatrix}
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1\\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1\\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1\\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0\\
0 & 1 & 0 & 1 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 1 & 1 & 0 & 0 & 1 & 1 & 0
\end{bmatrix}.$$
(31)

It is clear that $\mathbf{C}_8$ is balanced in the sense that the rows (codewords) as well as the columns are balanced; it has been found that the HNN decoder performs better when both are. The Hamming weight of the codewords in $\mathbf{C}_8$ is still $2^{n-1} = 4$, while the Hamming distance increases to slightly more than $2^{n-1} = 4$.

By following the steps described above, any set of Walsh-Hadamard codes of length $2^n$ can be used to create a new set of $2^n$ balanced codes of length $m = 2^n$.
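A sketch of this construction is given below. It assumes SciPy's Sylvester-ordered Hadamard matrix, which for $2^n = 8$ matches $\mathbf{H}_8$ in (30); the default row selection reproduces the $\mathbf{H}_{8,4:7}$ example, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def balanced_codewords(n, rows=None):
    """Build 2^n balanced codewords of length 2^n from Walsh-Hadamard
    codes by the reverse-and-flip steps described above."""
    m = 2 ** n
    H = (hadamard(m) + 1) // 2            # 0/1 Walsh-Hadamard codewords
    if rows is None:
        rows = range(m // 2 - 1, m - 1)   # rows 4..7 (1-indexed) when m = 8
    top = H[list(rows)]                   # first 2^(n-1) codewords
    bottom = 1 - top[::-1]                # reversed order, bits flipped
    return np.vstack([top, bottom])

print(balanced_codewords(3))              # yields C_8 of (31) for n = 3
```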

Encoding

Encoding is performed by mapping a group of n bits to $2^n$ BPSK symbols, or a group of 2n bits to $2^n$ 4-QAM symbols. Before encoding, the set of codewords $\mathbf{C}_{2^n}$ derived from the set of Walsh-Hadamard codes $\mathbf{H}_{2^n}$ is made bipolar by converting the 0’s to -1’s.

BPSK encoding

When BPSK modulation is used, n bits are mapped to $2^n$ BPSK symbols. The n bits are used to determine an index k in the range 1 to $2^n$, which is then used to select a codeword from the set of codewords in $\mathbf{C}_{2^n}$ such that the selected codeword is $\mathbf{c} = \mathbf{C}_{2^n}(k)$. Table 1 shows the number of uncoded bits, codeword length, uncoded bit to coded symbol rate $R_s$, and the uncoded bit to coded bit rate $R_c$ (code rate) for different n.

Table 1 Input-output relationship for BPSK encoder
4-QAM encoding

When 4-QAM modulation is used, 2n bits are mapped to $2^n$ 4-QAM symbols. The first and second groups of n bits (out of 2n bits) are used to determine two indices, $k^{(i)}$ and $k^{(q)}$, in the range 1 to $2^n$, one for the in-phase part and the other for the quadrature part of the codeword. The first index $k^{(i)}$ selects a codeword from $\mathbf{C}_{2^n}^{(i)}$, where $\mathbf{C}_{2^n}^{(i)}$ is derived as before, and the second index $k^{(q)}$ selects a codeword from $\mathbf{C}_{2^n}^{(q)}$, which can be equal to $\mathbf{C}_{2^n}^{(i)}$ or can be uniquely determined as explained earlier. The 4-QAM “codeword” is then calculated as $\mathbf{c} = \mathbf{C}_{2^n}^{(i)}(k^{(i)}) + j\,\mathbf{C}_{2^n}^{(q)}(k^{(q)})$, which is much like the result of coded modulation, where groups of coded bits (in this case uncoded bits) are mapped to signal constellation points to improve spectral efficiency [20]. Table 2 shows the number of uncoded bits, codeword length, uncoded bit to coded symbol rate $R_s$, and code rate $R_c$ for different 2n. Even though the code rate remains the same as with BPSK modulation, the throughput doubles as expected. A sketch of both encoders follows Table 2.

Table 2 Input-output relationship for 4-QAM encoder
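The following hedged sketch implements both mappings. It assumes bipolar codebooks (obtained, e.g., as `2 * balanced_codewords(n) - 1` from the earlier sketch) and 0-based codeword indices, whereas the text counts indices from 1.

```python
import numpy as np

def encode_bpsk(bits, C):
    """Map n bits to one balanced BPSK codeword of length 2^n.
    C is the bipolar codebook (0 -> -1) of shape (2^n, 2^n)."""
    k = int("".join(map(str, bits)), 2)   # bits -> 0-based codeword index
    return C[k]

def encode_4qam(bits, Ci, Cq):
    """Map 2n bits to one balanced 4-QAM codeword of length 2^n: the first
    n bits index the in-phase codebook Ci, the last n bits the quadrature
    codebook Cq, as described above."""
    n = len(bits) // 2
    ki = int("".join(map(str, bits[:n])), 2)
    kq = int("".join(map(str, bits[n:])), 2)
    return Ci[ki] + 1j * Cq[kq]
```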

Decoder

The HNN is known to be able to recognize input patterns from a set of stored patterns [15, 21]. In the context of the HNN decoder, the patterns are the balanced codewords, and the HNN is able to determine the ML codeword from a set of codewords. This has been demonstrated before but only for one codeword at a time [17]. Therefore, if a received data block contains P codewords, the HNN will have to be applied P times in order to determine P ML codewords. However, the HNN MLSE decoder developed here is able to determine the most likely sequence of codewords using a single HNN. The HNN MLSE decoder is therefore applied once to a received data block containing any number of codewords.

After the HNN MLSE decoder has determined the sequence of most likely transmitted codewords, the codewords are demapped by calculating the Euclidean distance between each ML codeword and each codeword in $\mathbf{C}_{2^n}$ for BPSK modulation, or in $\mathbf{C}_{2^n}^{(i)} + j\,\mathbf{C}_{2^n}^{(q)}$ for 4-QAM modulation. The index corresponding to the codeword with the smallest Euclidean distance is converted to bits, which completes the decoding phase.

The derivation of the HNN MLSE decoder entails the calculation of the cross-correlation matrices X i and X q , and the input vectors I i and I q in (10). The HNN MLSE decoder is first derived for the decoding of a single codeword, after which it will be extended to enable the decoding of any number of codewords simultaneously. Derivations are performed for 4-QAM only, since the BPSK HNN MLSE decoder is a simplification of its 4-QAM counterpart.

Single codeword decoding

To enable the HNN to store a set of codewords, the average correlation between all patterns must be stored in the weights between the neurons. According to Hebb’s rule of auto-associative memory [22], the connection weight matrix, or correlation matrix, is calculated by taking the cross-correlation of the patterns to be stored. Since we are working with complex symbols, there are two weight matrices to be calculated. The cross-correlation matrices in (9) are calculated as

$$\mathbf{X}_i = \mathrm{Re}\{\mathbf{C}^H\mathbf{C}\} = \mathbf{C}_{2^n}^{(i)T}\mathbf{C}_{2^n}^{(i)} + \mathbf{C}_{2^n}^{(q)T}\mathbf{C}_{2^n}^{(q)}$$
(32)

and

$$\mathbf{X}_q = \mathrm{Im}\{\mathbf{C}^H\mathbf{C}\} = \mathbf{C}_{2^n}^{(q)T}\mathbf{C}_{2^n}^{(i)} - \mathbf{C}_{2^n}^{(i)T}\mathbf{C}_{2^n}^{(q)},$$
(33)

where $\mathbf{C} = \mathbf{C}_{2^n}^{(i)} + j\,\mathbf{C}_{2^n}^{(q)}$, and $\mathbf{C}_{2^n}^{(i)}$ and $\mathbf{C}_{2^n}^{(q)}$ are the matrices containing the generated codewords as before, used for the in-phase and quadrature components of the codeword, respectively. Note the similarity between the correlation matrices in (32) and (33) and those in (18) and (20). Also, the two input vectors are simply the real and imaginary components of the noise-corrupted received codeword, such that

$$\mathbf{I}_i = \mathrm{Re}\{\mathbf{c}\} + \mathrm{Re}\{\mathbf{n}\}$$
(34)

and

$$\mathbf{I}_q = \mathrm{Im}\{\mathbf{c}\} + \mathrm{Im}\{\mathbf{n}\},$$
(35)

where $\mathbf{c}$ is of length $2^n$ and $\mathbf{n}$ is a vector containing complex samples from the distribution $N(\mu, \sigma^2)$, where $\mu = 0$ and $\sigma$ is the noise standard deviation. After the ML codeword is detected, each detected codeword (of length $2^n$) can be mapped back to n bits for BPSK modulation or 2n bits for 4-QAM modulation.
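The following sketch ties (32)–(35) together for a single 4-QAM codeword, reusing the `hnn_mlse` routine sketched earlier; the nearest-codeword demapping follows the Euclidean-distance rule described above, and the function name is illustrative.

```python
import numpy as np

def decode_single(r, C, Z=25):
    """Sketch of single-codeword HNN decoding per (32)-(35).

    r : received noisy codeword (complex, length 2^n)
    C : bipolar complex codebook Ci + j*Cq, shape (2^n, 2^n)
    Returns the 0-based index of the ML codeword.
    """
    m = len(r)
    G = C.conj().T @ C                        # Hebb rule on the stored patterns
    Xi, Xq = np.real(G), np.imag(G)           # (32) and (33)
    X = np.block([[Xi, Xq.T], [Xq, Xi]])
    I = np.concatenate([r.real, r.imag])      # (34) and (35)
    s = hnn_mlse(X, I, Z)                     # iterate (11), sketched earlier
    c_hat = s[:m] + 1j * s[m:]
    return int(np.argmin(np.linalg.norm(C - c_hat, axis=1)))
```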

Multiple codeword decoding

It was shown how the HNN can be used to decode single codewords, but the HNN decoder can be extended to detect ML transmitted sequences of codewords. This step is crucial in merging the HNN decoder with the HNN MLSE equalizer, since the HNN MLSE equalizer detects ML sequences of transmitted symbols. If the transmitted information is encoded, these sequences contain multiple codewords, and hence the HNN decoder must be extended to detect not only single codewords, but codeword sequences.

This extension is easily achieved by using the HNN parameters already derived in (32) through (35). Consider a system transmitting a sequence of P balanced codewords of length $2^n$, where n is the length of the uncoded bit-words. The new correlation matrix is constructed by copying $\mathbf{X}$ in (10) along the diagonal according to the number of transmitted codewords P, such that

$$\mathbf{X}^{(P)} = \begin{bmatrix}
\begin{bmatrix}\mathbf{X}_i & \mathbf{X}_q^T\\ \mathbf{X}_q & \mathbf{X}_i\end{bmatrix} & & \\
& \ddots & \\
& & \begin{bmatrix}\mathbf{X}_i & \mathbf{X}_q^T\\ \mathbf{X}_q & \mathbf{X}_i\end{bmatrix}
\end{bmatrix},$$
(36)

where $\mathbf{X} = \begin{bmatrix}\mathbf{X}_i & \mathbf{X}_q^T\\ \mathbf{X}_q & \mathbf{X}_i\end{bmatrix}$ is repeated on the diagonal P times, and the rest of $\mathbf{X}^{(P)}$ contains only zeros.

The input vector $\mathbf{I}$ in (10), consisting of $\mathbf{I}_i$ and $\mathbf{I}_q$, is likewise extended according to the number of transmitted codewords P such that

$$\mathbf{I}^{(P)} = \begin{bmatrix}\mathbf{I}_i^{(P)}\\ \mathbf{I}_q^{(P)}\end{bmatrix},$$
(37)

where

$$\mathbf{I}_i^{(P)} = \left[\mathrm{Re}\{\mathbf{c}_1\},\ \mathrm{Re}\{\mathbf{c}_2\},\ \ldots,\ \mathrm{Re}\{\mathbf{c}_P\}\right]^T + \mathrm{Re}\{\mathbf{n}\}$$
(38)

and

$$\mathbf{I}_q^{(P)} = \left[\mathrm{Im}\{\mathbf{c}_1\},\ \mathrm{Im}\{\mathbf{c}_2\},\ \ldots,\ \mathrm{Im}\{\mathbf{c}_P\}\right]^T + \mathrm{Im}\{\mathbf{n}\},$$
(39)

where $\mathbf{c}_p$ is the p-th codeword of length $2^n$, with p = 1, 2, …, P, and $\mathbf{n}$ is of length $2^nP$ and contains complex samples from the distribution $N(\mu, \sigma^2)$, where $\mu = 0$ and $\sigma$ is the noise standard deviation.

The extended cross-correlation matrix and input vector in (36) and (37) can now be used to estimate the ML sequence of transmitted codewords, after which each detected codeword (of length $2^n$) can be mapped back to n bits for BPSK modulation or 2n bits for 4-QAM modulation.
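A sketch of the block-diagonal extension follows, assuming SciPy's `block_diag`. One assumption to flag: the input vector here is ordered per codeword ([Re; Im] for each), which is a fixed permutation of the ordering written in (37)–(39) and matches the per-codeword blocks of (36).

```python
import numpy as np
from scipy.linalg import block_diag

def decode_sequence(r, C, P, Z=25):
    """Sketch of HNN MLSE decoding of P concatenated codewords via (36)-(37).

    r : received sequence of length 2^n * P (complex)
    C : bipolar complex codebook, shape (2^n, 2^n)
    Returns the list of 0-based ML codeword indices.
    """
    m = C.shape[1]                            # codeword length 2^n
    G = C.conj().T @ C
    Xi, Xq = np.real(G), np.imag(G)
    X1 = np.block([[Xi, Xq.T], [Xq, Xi]])     # per-codeword block of (10)
    XP = block_diag(*[X1] * P)                # (36): X repeated P times
    IP = np.concatenate([np.concatenate([c.real, c.imag])
                         for c in r.reshape(P, m)])
    s = hnn_mlse(XP, IP, Z)                   # one HNN over all P codewords
    ks = []
    for p in range(P):
        sp = s[2 * m * p: 2 * m * (p + 1)]
        c_hat = sp[:m] + 1j * sp[m:]
        ks.append(int(np.argmin(np.linalg.norm(C - c_hat, axis=1))))
    return ks
```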

HNN turbo equalizer

The HNN-TE is an amalgamation of the HNN MLSE equalizer and the HNN MLSE decoder, which were discussed in the previous sections. In this section it is explained how the HNN MLSE equalizer and the HNN MLSE decoder are combined in order to perform iterative joint equalization and decoding (turbo equalization) using a single HNN structure. The HNN-TE is able to jointly equalize and decode BPSK and 4-QAM coded modulated signals in systems with highly dispersive multipath channels, with extremely low computational complexity compared to traditional turbo equalizers which employ a MAP equalizer/decoder pair.

System model

Since we already have complete models for the HNN MLSE equalizer and decoder, the combination of the two is fairly straightforward. In order to distinguish between equalizer and decoder parameters, a number of redefinitions are in order. For the HNN MLSE equalizer, the correlation matrix and input vector relating to (10), as derived in (22) and (27), are now $\mathbf{X}_E$ and $\mathbf{I}_E$, respectively, and will henceforth be referred to as the “equalizer correlation matrix” and “equalizer input vector”. Similarly, the HNN MLSE decoder correlation matrix and input vector relating to (10), as derived in (36) and (37), are now $\mathbf{X}_D$ and $\mathbf{I}_D$, respectively, and will henceforth be referred to as the “decoder correlation matrix” and “decoder input vector”.

When a coded data block of length $N_c$ is transmitted through a multipath channel, $\mathbf{X}_E$ and $\mathbf{X}_D$ are determined according to (22) and (36), where both matrices are of size $N_c \times N_c$. Since the functions of the equalizer and the decoder have to be merged, it makes sense to combine $\mathbf{X}_E$ and $\mathbf{X}_D$ to enable the equalizer to perform decoding, or the decoder to perform equalization. This combination is performed by first normalizing $\mathbf{X}_D$ with respect to $\mathbf{X}_E$, to account for the varying energy between received data blocks in a multipath fading channel, such that

$$\mathbf{X}_D^{(\mathrm{norm})} = \frac{\|\mathbf{X}_E\|}{\|\mathbf{X}_D\|}\,\mathbf{X}_D.$$
(40)

Next the new correlation matrix is determined as

$$\mathbf{X}_{\mathrm{TE}} = \mathbf{X}_E + \mathbf{X}_D^{(\mathrm{norm})}.$$
(41)

The rationale behind the addition of the equalizer correlation matrix and the normalized decoder correlation matrix is that the connection weights in the decoder correlation matrix should bias those of the equalizer correlation matrix. Since $\mathbf{X}_{\mathrm{TE}}$ contains $\mathbf{X}_E$ offset by $\mathbf{X}_D^{(\mathrm{norm})}$, joint equalization and decoding is made possible.
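In code, the combination of (40) and (41) is a two-liner. The Frobenius norm used for the normalization is an assumption, since the text only specifies that $\mathbf{X}_D$ is normalized with respect to $\mathbf{X}_E$; the function name is illustrative.

```python
import numpy as np

def combine_correlations(XE, XD):
    """Sketch of (40)-(41): scale the decoder correlation matrix to the
    equalizer's energy (Frobenius norm assumed), then add."""
    XD_norm = (np.linalg.norm(XE) / np.linalg.norm(XD)) * XD   # (40)
    return XE + XD_norm                                        # (41)
```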

The new input vector also needs to be calculated. $\mathbf{I}_D$ contains the noise-corrupted coded symbols, while $\mathbf{I}_E$ contains not only received coded symbol information but also the ISI information. Note that when there is no multipath or fading (L = 1 and $h_0$ = 1), $\mathbf{I}_E$ reduces to $\mathbf{I}_D$. The new input vector used in the HNN-TE is therefore simply

$$\mathbf{I}_{\mathrm{TE}} = \mathbf{I}_E.$$
(42)

With the new correlation matrix X TE and input vector I TE, the HNN-TE model is complete, and the iterative system in (11) can be used to jointly equalize and decode (turbo equalize) the transmitted coded information.

Transformation

Upon reception, the received symbol vector has to be deinterleaved to restore the one-to-one relationship between each element in $\mathbf{r}$ and $\mathbf{c}$ with respect to the first coefficient $h_0$ of the CIR $\mathbf{h} = \{h_0, h_1, \ldots, h_{L-1}\}^T$. Deinterleaving $\mathbf{r}$ transforms the transmission model in (1). Substituting (2) into (1) and applying the deinterleaver, which is simply the Hermitian transpose of the interleaver matrix $\mathbf{J}$, gives

$$\mathbf{J}^H\mathbf{r} = \mathbf{J}^H\mathbf{H}\mathbf{J}\mathbf{G}^T\mathbf{s} + \mathbf{J}^H\mathbf{n},$$
(43)

which is equivalent to transmitting the coded symbol sequence $\mathbf{c} = \mathbf{G}^T\mathbf{s}$ through a channel

$$\mathbf{Q} = \mathbf{J}^H\mathbf{H}\mathbf{J}.$$
(44)

Therefore (43) can be written as

$$\mathbf{J}^H\mathbf{r} = \mathbf{Q}\mathbf{G}^T\mathbf{s} + \mathbf{J}^H\mathbf{n}.$$
(45)

Consequently, the new channel matrix $\mathbf{Q}$, rather than the conventional channel matrix $\mathbf{H}$ in (3), is used in the calculation of the equalizer correlation matrix $\mathbf{X}_E$ derived in (22). Due to the above transformation, $\mathbf{Q}$ does not contain the CIR on its diagonals as $\mathbf{H}$ does. Rather, each column in $\mathbf{Q}$ (of length $N_c$) contains a unique random combination of all CIR coefficients (where the rest of the $N_c - L$ elements in a column are equal to 0), dictated by the randomization effect of the random interleaver. This randomization results from first multiplying the channel $\mathbf{H}$ by the interleaving matrix $\mathbf{J}$ and then deinterleaving by multiplying the result by $\mathbf{J}^H$ (see (44)). Deinterleaving places the first CIR coefficient ($h_0$) on the diagonal of $\mathbf{Q}$, restoring the one-to-one relationship between each element in $\mathbf{r}$ and each corresponding coded transmitted symbol in $\mathbf{c}$.

To illustrate this concept, consider the three-dimensional representations of $|\mathbf{HJ}|$ and $|\mathbf{Q}|$ in Figures 2a,b, 3a,b, 4a,b, and 5a,b, for a hypothetical system transmitting coded information through a multipath channel with CIR lengths of L = 1, L = 5, L = 10, and L = 20, respectively, with a block length $N_c = 80$. Figure 2a,b shows $|\mathbf{HJ}|$ and $|\mathbf{Q}|$ for a channel of length L = 1, where Figure 2a is clearly interleaved. It is also clear that the new channel $\mathbf{Q}$ in Figure 2b is deinterleaved, since the first coefficient $h_0$ of the CIR has been restored to the diagonal of $\mathbf{Q}$. Figures 3a, 4a, and 5a show the interleaved channels for L = 5, L = 10, and L = 20, while Figures 3b, 4b, and 5b show the corresponding new channels $\mathbf{Q}$, again with the first CIR coefficient $h_0$ restored to the diagonal. Even though $h_0$ is restored to the diagonal of $\mathbf{Q}$, the rest of the CIR coefficients $h_1, h_2, \ldots, h_{L-1}$ are scattered throughout $\mathbf{Q}$. As stated before, each column in $\mathbf{Q}$ contains a unique random combination of all CIR coefficients (with $h_0$ on the diagonal for each column), where the rest of the $N_c - L$ elements in each column are equal to 0. A sketch of this transformation is given below.
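The sketch below reproduces the transformation for a random interleaver. It assumes an $N_c \times N_c$ banded $\mathbf{H}$ as in (3) and a permutation matrix for $\mathbf{J}$; the diagonal of $\mathbf{Q}$ then carries $h_0$ while the other taps are scattered, as in the figures.

```python
import numpy as np

def transformed_channel(h, Nc, seed=0):
    """Sketch of (44): Q = J^H H J for a random interleaver J."""
    L = len(h)
    H = np.zeros((Nc, Nc), dtype=complex)
    for k in range(L):
        H += np.diag(np.full(Nc - k, h[k]), -k)   # CIR taps on sub-diagonals
    rng = np.random.default_rng(seed)
    J = np.eye(Nc)[:, rng.permutation(Nc)]        # random interleaver matrix
    return J.conj().T @ H @ J                     # h_0 stays on the diagonal
```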

Figure 2

|HJ| and |Q| for systems with L = 1 CIR coefficients. (a) |HJ| (b) |Q|.

Figure 3

|HJ| and |Q| for systems with L = 5 CIR coefficients. (a) |HJ| (b) |Q|.

Figure 4

|HJ| and |Q| for systems with L = 10 CIR coefficients. (a) |HJ| (b) |Q|.

Figure 5

|HJ| and |Q| for systems with L = 20 CIR coefficients. (a) |HJ| (b) |Q|.

Computational complexity analysis

The computational complexity of the HNN-TE is compared to that of the CTE by calculating the number of computations performed for each received data block, for a fixed set of system parameters. The number of computations is normalized by the coded data block length so as to factor out the effect of the transmitted data block length, which allows us to present the computational complexity in terms of the number of computations required per received coded symbol. The complexity of the HNN-TE is quadratically related to the coded data block length, so a change in $N_c$ will still affect the normalized computational complexity.

The computational complexity of the HNN-TE was calculated as

$$CC_{\mathrm{HNN\text{-}TE}} = 2N_c^{2.376} + 8(N_c + L - 1) + Z_{\mathrm{HNN\text{-}TE}}\left(\left(N_cM/2\right)^2 + \left(N_cM/2\right)\right) + 4N_ck^2 + 2(N_c + L - 1)^{2.376},$$
(46)

where $N_c$ is the coded data block length, L is the CIR length, M is the modulation constellation alphabet size (2 for BPSK and 4 for 4-QAM), $Z_{\mathrm{HNN\text{-}TE}}$ is the number of iterations, and k is the codeword length, which was chosen as k = 8 for a code rate of $R_c = 3/8$. The first term in (46) is associated with the calculation of $\mathbf{X}_i$ in (19) and $\mathbf{X}_q$ in (21). The second term is associated with the calculation of $\Lambda$ in (28) and $\Omega$ in (29). The third term is for the iterative calculation of the ML coded symbols in (11), while the second to last term in (46) is for the trivial ML detection of codewords after joint iterative MLSE equalization and decoding. The last term is due to the transformation in (43) through (45). Note that in the first and last terms of (46) the exponent is 2.376: it has been shown in [23] that the complexity of multiplication of two $N \times N$ matrices can be reduced from $O(N^3)$ to $O(N^{2.376})$. However, because cubic-complexity matrix multiplication is still preferred in practical applications for its ease of implementation, (46) serves as a lower bound on the HNN-TE computational complexity.

Therefore, the computational complexity of the HNN-TE is approximately quadratic at best, or more realistically cubic, in the coded data block length ($N_c$), quadratic in the modulation constellation alphabet size (M), quadratic in the codeword length (k), and approximately independent of the channel memory length (L).

The complexity of the CTE was determined as

$$CC_{\mathrm{CTE}} = Z_{\mathrm{CTE}}\left(4N_cLQ + 4N_ck^2\right),$$
(47)

where $Z_{\mathrm{CTE}}$ is the number of iterations and Q is the number of equalizer states, determined by $2^{L-1}$ for BPSK modulation and $4^{L-1}$ for 4-QAM. The first term in (47) is associated with the equalizer while the second term is associated with MAP decoding. The computational complexity of the CTE is therefore linear in the coded data block length ($N_c$), exponential in the channel memory length (L), and quadratic in the codeword length (k).
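For reference, the two complexity expressions are straightforward to evaluate numerically. The sketch below plugs in the iteration counts and codeword length used later ($Z_{\mathrm{HNN\text{-}TE}} = 25$, $Z_{\mathrm{CTE}} = 5$, k = 8); the function names are illustrative.

```python
def cc_hnn_te(Nc, L, M, Z=25, k=8):
    """Evaluate (46): HNN-TE computations per block, O(N^2.376) variant."""
    return (2 * Nc ** 2.376 + 8 * (Nc + L - 1)
            + Z * ((Nc * M / 2) ** 2 + Nc * M / 2)
            + 4 * Nc * k ** 2 + 2 * (Nc + L - 1) ** 2.376)

def cc_cte(Nc, L, M, Z=5, k=8):
    """Evaluate (47): CTE computations per block; Q = M^(L-1) states."""
    return Z * (4 * Nc * L * M ** (L - 1) + 4 * Nc * k ** 2)

# Normalized (per coded symbol) complexity at Nc = 1280, L = 10, BPSK (M = 2)
Nc, L = 1280, 10
print(cc_hnn_te(Nc, L, 2) / Nc, cc_cte(Nc, L, 2) / Nc)
```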

Figure 6 shows the normalized computational complexity of the HNN-TE and the CTE for coded data block lengths of $N_c$ = 80, 160, 320, 640, 1280, and 2560, where $Z_{\mathrm{HNN\text{-}TE}} = 25$ and $Z_{\mathrm{CTE}} = 5$, for BPSK and 4-QAM modulation when $O(N^{2.376})$ matrix multiplication complexity is considered. Figure 7 shows the same information as Figure 6, but with $O(N^3)$ matrix multiplication complexity. It is clear that the computational complexity of the HNN-TE increases with an increase in coded data block length, but for realistic data block lengths the complexity of the HNN-TE is superior to that of the CTE for channels with long memory. The HNN-TE is computationally less complex for BPSK modulation than for 4-QAM, but only slightly so. On the other hand, the complexity of the CTE grows exponentially with an increase in modulation order. From Figure 6 it is clear that the complexity of the HNN-TE is almost quadratically related to the coded data block length and approximately independent of the channel memory length, which is more evident when L is increased. The normalized computational complexity of the HNN-TE and the CTE (for $O(N^{2.376})$ and $O(N^3)$ matrix multiplication complexity) for $N_c$ = 1280 using BPSK and 4-QAM in extremely long channels is shown in Figure 8, where the complexity of the CTE is orders of magnitude higher than that of the HNN-TE, for both BPSK and 4-QAM modulation.

Figure 6

HNN-TE and CTE normalized computational complexity for short channels and varying coded block length assuming O ( N c 2 . 376 ) matrix multiplication complexity. Blue circle: CTE (BPSK); Black square: CTE (4-QAM); Red circle: HNN-TE - N c  = 80 (BPSK); Red square: HNN-TE - N c  = 160 (BPSK); Red diamond: HNN-TE - N c  = 320 (BPSK); Red down triangle: HNN-TE - N c  = 640 (BPSK); Red left triangle: HNN-TE - N c  = 1280 (BPSK); Red right triangle: HNN-TE - N c  = 2560 (BPSK); Green circle: HNN-TE - N c  = 80 (4-QAM); Green square: HNN-TE - N c  = 160 (4-QAM); Green diamond: HNN-TE - N c  = 320 (4-QAM); Green down triangle: HNN-TE - N c  = 640 (4-QAM); Green left triangle: HNN-TE - N c  = 1280 (4-QAM); Green right triangle: HNN-TE - N c  = 2560 (4-QAM).

Figure 7

HNN-TE and CTE normalized computational complexity for short channels and varying coded block length assuming O ( N c 3 ) HNN-TE matrix multiplication complexity. Blue circle: CTE (BPSK); Black square: CTE (4-QAM); Red circle: HNN-TE - N c  = 80 (BPSK); Red square: HNN-TE - N c  = 160 (BPSK); Red diamond: HNN-TE - N c  = 320 (BPSK); Red down triangle: HNN-TE - N c  = 640 (BPSK); Red left triangle: HNN-TE - N c  = 1280 (BPSK); Red right triangle: HNN-TE - N c  = 2560 (BPSK); Green circle: HNN-TE - N c  = 80 (4-QAM); Green square: HNN-TE - N c  = 160 (4-QAM); Green diamond: HNN-TE - N c  = 320 (4-QAM); Green down triangle: HNN-TE - N c  = 640 (4-QAM); Green left triangle: HNN-TE - N c  = 1280 (4-QAM); Green right triangle: HNN-TE - N c  = 2560 (4-QAM).

Figure 8

HNN-TE and CTE normalized computational complexity for long channels and N c = 1280 for both O ( N c 2 . 376 ) and O ( N c 3 ) HNN-TE matrix multiplication complexity. Blue circle: CTE - BPSK; Black square: CTE - 4-QAM; Red circle: HNN-TE - BPSK (O( N c 2 . 376 )); Green square: HNN-TE - 4-QAM (O( N c 2 . 376 )); Red square: HNN-TE - BPSK (O( N c 3 )); Green circle: HNN-TE - 4-QAM (O( N c 3 )).

Memory requirements analysis

The memory requirements of the HNN-TE and the CTE are closely related to their respective computational complexities due to the structures employed by these algorithms. Table 3 describes the memory requirements of the HNN-TE for each received data block. The total memory requirement of the HNN-TE is $2N_c^2 + 6N_c + N_c + L - 1 + 2(N_c + L - 1)^2$ variables, where each variable is of type float, which uses 32 bits (4 bytes). The memory requirements of the CTE per data block are shown in Table 4; the total is $N_cM^{L-1} + 4N_c + L$ variables. Figure 9 shows the memory requirement of the HNN-TE and the CTE in bytes for coded data block sizes of $N_c$ = 160, 640, and 2560 and CIR lengths increasing from L = 1 to L = 25. From Figure 9 it is clear that the memory requirement of the HNN-TE remains constant over all channel lengths and modulation alphabet sizes, with less than 1 MB of memory required for $N_c$ = 160, 6.6 MB for $N_c$ = 640, and 100 MB for $N_c$ = 2560. The memory requirement of the CTE, however, grows exponentially with the channel memory length, since the size of the trellis structure used in the MAP equalizer grows according to the same measure. The break-even point between the BPSK CTE and the HNN-TE (for both BPSK and 4-QAM) is L = 10.40 for $N_c$ = 160, L = 12.35 for $N_c$ = 640, and L = 14.30 for $N_c$ = 2560, beyond which the HNN-TE requires less memory than the CTE. Similarly, the break-even point between the 4-QAM CTE and the HNN-TE is L = 5.68 for $N_c$ = 160, L = 6.66 for $N_c$ = 640, and L = 7.66 for $N_c$ = 2560. The memory requirements of the HNN-TE are therefore more favorable when higher order modulation alphabets are employed.

Table 3 HNN-TE memory requirements
Table 4 CTE memory requirements
Figure 9

HNN-TE and CTE memory requirements per coded data block in bytes. Blue circle: CTE - N c  = 160 (BPSK); Blue square: CTE - N c  = 640 (BPSK); Blue diamond: CTE - N c  = 2560 (BPSK); Blue circle: CTE - N c  = 160 (4-QAM); Blue square: CTE - N c  = 640 (4-QAM); Blue diamond: CTE - N c  = 2560 (4-QAM); Red circle: HNN-TE - N c  = 160 (BPSK); Red square: HNN-TE - N c  = 640 (BPSK); Red diamond: HNN-TE - N c  = 2560 (BPSK); Green circle: HNN-TE - N c  = 160 (4-QAM); Green square: HNN-TE - N c  = 640 (4-QAM); Green diamond: HNN-TE - N c  = 2560 (4-QAM).

Simulation results

The proposed HNN-TE was evaluated in a mobile fading environment for BPSK and 4-QAM modulation at a code rate of $R_c = n/k = 3/8$. To simulate the fading effect of mobile channels, the Rayleigh fading simulator proposed in [24] was used to generate uncorrelated fading vectors. When imperfect channel state information (CSI) was assumed, least squares channel estimation was performed using various numbers of training symbols in the transmitted data block. When perfect CSI was assumed, the CIR coefficients were “estimated” by taking the mean of the uncorrelated fading vectors. Simulations were performed for short and long channels at various mobile speeds. Simulations were also performed to compare the performance of the HNN-TE and a CTE in short mobile fading channels for BPSK modulation. For all simulations the uncoded data block length was $N_u$ = 480 and the coded data block length was $N_c$ = 1280. In all simulations the frequency was hopped four times during each data block in order to further reduce the BER. For the CTE the number of iterations was Z = 5, while for the HNN-TE, instead of a fixed number of iterations, the function $Z(E_b/N_0) = 2\left(5^{(E_b/N_0)/5}\right)$ (which, rounded, produces $Z(E_b/N_0)$ = {2, 4, 10, 22, 50} for $E_b/N_0$ = {0, 2.5, 5, 7.5, 10} dB) was used to determine the number of iterations at a given $E_b/N_0$.
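The iteration schedule is reproduced below; the integer rounding is an assumption, since the text does not state how the fractional values are handled.

```python
def hnn_te_iterations(ebno_db):
    """Assumed integer rounding of Z(Eb/N0) = 2 * 5^((Eb/N0)/5)."""
    return round(2 * 5 ** (ebno_db / 5))

print([hnn_te_iterations(x) for x in (0, 2.5, 5, 7.5, 10)])  # [2, 4, 10, 22, 50]
```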

Figure 10 shows the performance of the HNN-TE and the CTE for channel lengths of L = 4, L = 6, and L = 8 at a fixed mobile speed of 20 km/h, assuming perfect CSI. The performance of the HNN-TE is slightly better than that of the CTE at high SNR levels.

Figure 10

HNN-TE and CTE BPSK performance in short channels at a fixed mobile speed assuming perfect CSI. Shows the HNN-TE and CTE performance in systems with CIR lengths of L = 4, L = 6, and L = 8 at a mobile speed of 20 km/h. Red diamond: CTE - L = 4; Red square: CTE - L = 6; Red circle: CTE - L = 8; Blue diamond: HNN-TE - L = 4; Blue square: HNN-TE - L = 6; Blue circle: HNN-TE - L = 8; Black dashed: coded AWGN bound.

Figure 11 shows the performance of the HNN-TE and the CTE for a channel of length L = 6 at mobile speeds of 3 km/h, 50 km/h, 80 km/h, 140 km/h, and 200 km/h, assuming perfect CSI. It is clear that the HNN-TE outperforms the CTE at mobile speeds greater than 20 km/h, with the performance advantage growing as the mobile speed increases. The HNN-TE appears less affected by increasing mobile speeds, which suggests that it is able to perform well in fast-fading mobile environments.

Figure 11

HNN-TE and CTE BPSK performance in a short channel at various mobile speeds assuming perfect CSI. Shows the HNN-TE and CTE performance in a system with CIR length L = 6 at mobile speeds of 3 km/h, 50 km/h, 80 km/h, 140 km/h, and 200 km/h. Red circle: CTE - v = 3 km/h; Red square: CTE - v = 50 km/h; Red diamond: CTE - v = 80 km/h; Red down triangle: CTE - v = 140 km/h; Red left triangle: CTE - v = 200 km/h; Blue circle: HNN-TE - v = 3 km/h; Blue square: HNN-TE - v = 50 km/h; Blue diamond: HNN-TE - v = 80 km/h; Blue down triangle: HNN-TE - v = 140 km/h; Blue left triangle: HNN-TE - v = 200 km/h; Black dashed: coded AWGN bound.

Figure 12 shows the performance of the HNN-TE and the CTE for a channel of length L = 6 at a mobile speed of 20 km/h, assuming imperfect CSI. To estimate the channel, training sequences of length 4L, 6L, 8L, and 10L were used. From Figure 12 it is clear that the HNN-TE is superior to the CTE at high SNR levels when perfect CSI is not available; the HNN-TE appears less sensitive to channel estimation errors.

Figure 12

HNN-TE and CTE BPSK performance in a short channel at a fixed mobile speed for various amounts of training symbols for channel estimation. Shows the HNN-TE and CTE performance in systems with CIR length L = 6 at a fixed mobile speed of 20 km/h using 4L, 6L, 8L, and 10L symbols for channel estimation. Red circle: CTE - 4L pilots; Red square: CTE - 6L pilots; Red diamond: CTE - 8L pilots; Red down triangle: CTE - 10L pilots; Red left triangle: CTE - Perfect CSI; Blue circle: HNN-TE - 4L pilots; Blue square: HNN-TE - 6L pilots; Blue diamond: HNN-TE - 8L pilots; Blue down triangle: HNN-TE - 10L pilots; Blue left triangle: HNN-TE - Perfect CSI; Black dashed: coded AWGN bound.

It is clear from Figures 10, 11, and 12 that the performance of the HNN-TE is superior to that of a CTE in short channels at varying mobile speeds, for both perfect and imperfect CSI. The HNN-TE outperforms the CTE in short channels, but at higher computational cost: Figure 6 shows that the HNN-TE is more computationally complex than the CTE for short channels (L < 10) when the coded data block length is relatively small ($N_u$ < 1280). However, the complexity of the HNN-TE is vastly superior to that of the CTE for long channels. It might be argued that the HNN-TE performs better than the CTE simply because more iterations are used, but this is not the case: it is stated in [3] that the performance of the CTE cannot be improved significantly beyond Z = 3 iterations in Rayleigh fading channels, so the performance gain of the HNN-TE over the CTE is probably due to the fact that the HNN-TE is able to process all the available information internally as a whole, without having to exchange information between the equalizer and the decoder, as is the case in a CTE.

Figure 13 shows the performance of the HNN-TE for channels of length L = 10, L = 20, L = 50, and L = 100 at a fixed mobile speed of 20 km/h for BPSK and 4-QAM modulation, assuming perfect CSI. It is clear that the performance for BPSK modulation is better than that for 4-QAM, which is due to the fact that Gray coding cannot be applied in the encoding process described in Section 4.2.2. The performance loss is therefore expected.

Figure 13

HNN-TE BPSK and 4-QAM performance in a long channel at a fixed speed assuming perfect CSI. Shows the HNN-TE BPSK and 4-QAM performance in systems with CIR lengths of L = 10, L = 20, L = 50, and L = 100 at a mobile speed of 20 km/h. Blue circle: BPSK HNN-TE - L = 10; Blue square: BPSK HNN-TE - L = 20; Blue diamond: BPSK HNN-TE - L = 50; Blue down triangle: BPSK HNN-TE - L = 100; Red circle: 4-QAM HNN-TE - L = 10; Red square: 4-QAM HNN-TE - L = 20; Red diamond: 4-QAM HNN-TE - L = 50; Red down triangle: 4-QAM HNN-TE - L = 100; Black dashed: coded AWGN bound.

Figure 14 shows the performance of the HNN-TE for a channel of length L = 50 at mobile speeds of 20 km/h, 80 km/h, 140 km/h, and 200 km/h for BPSK and 4-QAM modulation, assuming perfect CSI. It is clear that an increase in mobile speed leads to a performance degradation, although not as much as expected. Again BPSK modulation performs better than 4-QAM modulation.

Figure 14

HNN-TE BPSK and 4-QAM performance in a long channel at various mobile speeds assuming perfect CSI. Shows the HNN-TE BPSK and 4-QAM performance in a system with CIR length L = 50 at mobile speeds of 20 km/h, 80 km/h, 140 km/h, and 200 km/h. Blue circle: BPSK HNN-TE - v = 20 km/h; Blue square: BPSK HNN-TE - v = 80 km/h; Blue diamond: BPSK HNN-TE - v = 140 km/h; Blue down triangle: BPSK HNN-TE - v = 200 km/h; Red circle: 4-QAM HNN-TE - v = 20 km/h; Red square: 4-QAM HNN-TE - v = 80 km/h; Red diamond: 4-QAM HNN-TE - v = 140 km/h; Red down triangle: 4-QAM HNN-TE - v = 200 km/h; Black dashed: coded AWGN bound.

Figure 15 shows the performance of the HNN-TE for a channel of length L = 50 at a mobile speed of 20 km/h for BPSK and 4-QAM modulation, assuming imperfect CSI. To estimate the channel, training sequences of length 4L, 6L, 8L, and 10L were used. As expected, a performance loss is incurred with a decrease in the number of training symbols. Again BPSK modulation outperforms 4-QAM modulation.

Figure 15

HNN-TE BPSK and 4-QAM performance in a long channel at a fixed speed for various amounts of training symbols for channel estimation. Shows the HNN-TE BPSK and 4-QAM performance in systems with CIR length L = 50 at a fixed mobile speed of 20 km/h using 4L, 6L, 8L, and 10L symbols for channel estimation. Blue circle: BPSK HNN-TE - 10L; Blue square: BPSK HNN-TE - 8L; Blue diamond: BPSK HNN-TE - 6L; Blue down triangle: BPSK HNN-TE - 4L; Red circle: 4-QAM HNN-TE - 10L; Red square: 4-QAM HNN-TE - 8L; Red diamond: 4-QAM HNN-TE - 6L; Red down triangle: 4-QAM HNN-TE - 4L; Black dashed: coded AWGN bound.

Figure 16 shows the performance of the HNN-TE for a channel of length L = 25 at a mobile speed of 20 km/h for BPSK and 4-QAM modulation, assuming perfect CSI, for different numbers of iterations. The number of iterations was chosen as Z = 5, Z = 10, Z = 20, and Z = 50. The BER performance improves with an increase in the number of iterations. Since the performance degradation due to a decrease in the number of iterations is small at low signal levels, we adopt an iteration schedule that depends on the signal level. As stated before, the function $Z(E_b/N_0) = 2\left(5^{(E_b/N_0)/5}\right)$ is used to determine the number of iterations.

Figure 16

HNN-TE BPSK and 4-QAM performance in a long channel at a fixed speed for various numbers of iterations. Shows the HNN-TE BPSK and 4-QAM performance in systems with CIR length L = 50 at a fixed mobile speed of 20 km/h using Z = 5, Z = 10, Z = 20, and Z = 50 iterations.

Figure 17 shows the performance of the HNN-TE for a channel of length L = 50 at a mobile speed of 20 km/h for BPSK and 4-QAM modulation, assuming perfect CSI, for different code rates. The code rates were R c  = 1 / 2 (2 / 4), R c  = 3 / 8, R c  = 1 / 4 (4/16), and R c  = 5 / 32. From Figure 17 it is clear that the performance of the HNN-TE increases with a decrease in the code rate, with 4-QAM modulation performing worse than BPSK modulation.

Figure 17

HNN-TE BPSK and 4-QAM performance in a long channel for different code rates, at a fixed speed assuming perfect CSI. Shows the HNN-TE BPSK and 4-QAM performance in systems with CIR length L = 25 at a fixed mobile speed of 20 km/h for code rates of $R_c$ = 2/4, $R_c$ = 3/8, $R_c$ = 4/16, and $R_c$ = 5/32. Blue circle: BPSK HNN-TE - $R_c$ = 2/4; Blue square: BPSK HNN-TE - $R_c$ = 3/8; Blue diamond: BPSK HNN-TE - $R_c$ = 4/16; Blue down triangle: BPSK HNN-TE - $R_c$ = 5/32; Red circle: 4-QAM HNN-TE - $R_c$ = 2/4; Red square: 4-QAM HNN-TE - $R_c$ = 3/8; Red diamond: 4-QAM HNN-TE - $R_c$ = 4/16; Red down triangle: 4-QAM HNN-TE - $R_c$ = 5/32; Black dashed circle: coded AWGN bound - $R_c$ = 2/4; Black dashed square: coded AWGN bound - $R_c$ = 3/8; Black dashed diamond: coded AWGN bound - $R_c$ = 4/16; Black dashed down triangle: coded AWGN bound - $R_c$ = 5/32.

From Figures 13, 14, 15, 16, and 17 it is clear that the HNN-TE is able to jointly equalize and decode BPSK and 4-QAM modulated signals transmitted through extremely long mobile fading channels. While the data rate using 4-QAM modulation is twice that of BPSK modulation, the performance is worse for 4-QAM modulation, due to the fact that Gray coding cannot be applied during coded modulation.

Conclusion

In this article, a low complexity turbo equalizer was developed which is able to jointly equalize and decode BPSK and 4-QAM coded-modulated signals in systems transmitting interleaved information through a multipath fading channel. It uses the Hopfield neural network as its framework and was hence fittingly named the Hopfield neural network turbo equalizer, or HNN-TE. The HNN-TE is able to turbo equalize coded modulated BPSK and 4-QAM signals in short as well as long multipath channels, slightly outperforming the CTE in short channels, although at higher computational cost. However, the computational complexity of the HNN-TE in long channels is vastly superior to that of the CTE. The computational complexity of the HNN-TE is almost quadratically related to the coded data block length, while being approximately independent of the CIR length, which enables it to turbo equalize signals in systems with multiple hundreds of multipath elements. It was also demonstrated that the HNN-TE is less susceptible than the CTE to channel estimation errors, and that it outperforms the CTE in fast fading channels. The performance of the HNN-TE for BPSK modulation is better than for 4-QAM modulation, since Gray coding cannot be employed due to the coded modulation explained in this paper, while the complexity for 4-QAM is slightly higher.

References

1. Berrou C, Glavieux A, Thitimajshima P: Near Shannon limit error-correcting coding and decoding: Turbo-codes. Proc. IEEE Int. Conf. Commun. 1993, 1064-1070.

2. Douillard C, Jezequel M, Berrou C, Picart A, Didier P, Glavieux A: Iterative correction of intersymbol interference: turbo-equalization. Europ. Trans. Telecommun. 1995, 6:507-511. doi:10.1002/ett.4460060506

3. Bauch G, Khorram H, Hagenauer J: Iterative equalization and decoding in mobile communication systems. Proc. European Personal Mobile Communications Conference (EPMCC) 1997, 307-312.

4. Koetter R, Tuchler M, Singer AC: Turbo equalization. IEEE Signal Process. Mag. 2004, 21(1):67-80. doi:10.1109/MSP.2004.1267050

5. Koetter R, Tuchler M, Singer AC: Turbo equalization: principles and new results. IEEE Trans. Commun. 2002, 50(5):754-767. doi:10.1109/TCOMM.2002.1006557

6. Lopes RR, Barry JR: The soft feedback equalizer for turbo equalization of highly dispersive channels. IEEE Trans. Commun. 2006, 54(5):783-788.

7. Duel-Hallen A, Heegard C: Delayed decision-feedback sequence estimation. IEEE Trans. Commun. 1989, 37(5):428-436. doi:10.1109/26.24594

8. Eyuboglu MV, Qureshi SU: Reduced-state sequence estimation with set partitioning and decision feedback. IEEE Trans. Commun. 1988, 36(1):13-20. doi:10.1109/26.2724

9. Wu J, Leong S, Lee K, Xiao C, Olivier JC: Improved BDFE using a priori information for turbo equalization. IEEE Trans. Wirel. Commun. 2008, 7(1):233-240.

10. Lou H, Xiao C: Soft-decision feedback turbo equalization for multilevel modulations. IEEE Trans. Signal Process. 2011, 59(1):186-195.

11. Fijalkow I, Pirez D, Roumy A, Ronger S, Vila P: Improved interference cancellation for turbo-equalization. Proc. IEEE International Conference on Acoustics, Speech and Signal Processing 2000, 416-419.

12. Wang X, Poor HV: Iterative (turbo) soft interference cancellation and decoding for coded CDMA. IEEE Trans. Commun. 1999, 47(7):1046-1061. doi:10.1109/26.774855

13. Ampeliotis D, Berberidis K: Low complexity turbo equalization for high data rate wireless communications. EURASIP J. Commun. Network 2006, 2006(ID 25686):1-12.

14. Myburgh HC, Olivier JC: Reduced complexity turbo equalization using a dynamic Bayesian network. EURASIP J. Adv. Signal Process. 2012. (Submitted for publication)

15. Hopfield JJ, Tank DW: Neural computation of decisions in optimization problems. Biol. Cybern. 1985, 52:141-152. doi:10.1007/BF00336930

16. Myburgh HC, Olivier JC: Low complexity MLSE equalization in highly dispersive Rayleigh fading channels. EURASIP J. Adv. Signal Process. 2010, 2010(ID 874874). http://asp.eurasipjournals.com/content/2010/1/874874

17. Wiberg N: A class of Hopfield decodable codes. Proc. IEEE-SP Workshop on Neural Networks for Signal Processing 1993, 88-97.

18. Wang Q, Bhargava VK: An error correcting neural network. Proc. IEEE Pacific Rim Conference on Communications, Computers and Signal Processing 1989, 530-533.

19. Knuth DE: Efficient balanced codes. IEEE Trans. Inf. Theory 1986, IT-32(1):51-53.

20. Proakis JG: Digital Communications. New York: McGraw-Hill, International Edition; 2001.

21. Hopfield JJ: Artificial neural networks. IEEE Circ. Dev. Mag. 1988, 4(5):3-10.

22. Hebb DO: The Organization of Behavior. New York: Wiley; 1949.

23. Coppersmith D, Winograd S: Matrix multiplication via arithmetic progressions. J. Symbolic Comput. 1990, 9(3):251-280. doi:10.1016/S0747-7171(08)80013-2

24. Zheng YR, Xiao C: Improved models for the generation of multiple uncorrelated Rayleigh fading waveforms. IEEE Commun. Lett. 2002, 6:256-258.


Author information

Correspondence to Hermanus C Myburgh.

Additional information

Competing interests

Both authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Myburgh, H.C., Olivier, J.C. A low complexity Hopfield neural network turbo equalizer. EURASIP J. Adv. Signal Process. 2013, 15 (2013). https://doi.org/10.1186/1687-6180-2013-15