
Time–frequency based feature selection for discrimination of non-stationary biosignals

Abstract

This research proposes a generic methodology for dimensionality reduction of time–frequency representations applied to the classification of different types of biosignals. The methodology directly deals with the highly redundant and irrelevant data contained in these representations, combining a first stage of irrelevant data removal by variable selection with a second stage of redundancy reduction using methods based on linear transformations. The study addresses two techniques that provided a similar performance: the first one is based on the selection of a set of the most relevant time–frequency points, whereas the second one selects the most relevant frequency bands. The first methodology needs a lower quantity of components, leading to a lower-dimensional feature space, whereas the second improves the capture of the time-varying dynamics of the signal and therefore provides a more stable performance. In order to evaluate the generalization capabilities of the proposed methodology, it has been applied to two types of biosignals with different kinds of non-stationary behaviors: electroencephalographic and phonocardiographic recordings. Even though these two databases contain samples with different degrees of complexity and a wide variety of characterizing patterns, the results demonstrate a good accuracy for the detection of pathologies, above 98%. These results open the possibility of extrapolating the methodology to the study of other biosignals.

Introduction

Biosignal recordings are useful to extract information about the functional state of the human organism. For this reason, such recordings are widely used to support diagnosis, making automatic decision systems important tools to improve pathology detection and evaluation. Nonetheless, since the underlying biological systems tend to have a time-dependent response to environmental excitations, non-stationarity can be considered an inherent property of biosignals [1, 2]. Moreover, changes in physiological or pathological conditions may produce significant variations along time. For instance, the normal blood flow inside the heart is mainly laminar and therefore silent; but when the flow becomes turbulent it causes vibration of the surrounding tissues and hence becomes noisy, giving rise to murmurs, which can be detected by analyzing phonocardiographic (PCG) recordings. Thus, PCG recordings are non-stationary signals that exhibit sudden frequency changes and transients [3]. In another example, electroencephalographic (EEG) signals represent the clinical signs of the synchronous activity of the neurons in the brain; but in the case of epileptic seizures, there is a sudden and recurrent malfunction of the brain that exhibits considerable short-term non-stationarities [4], which can be detected by analyzing these recordings.

However, in the aforementioned examples, the conventional analysis in the time or frequency domains does not provide sufficient relevant information for feature extraction and classification, limiting an automatic analysis for diagnostic purposes. Moreover, the main difficulty in automatically detecting physiological or pathological conditions lies in the wide variety of patterns that tend to appear under non-stationary conditions. Thus, for example, the possibility of automatically detecting epileptic seizures from EEG signals is limited by the wide variety of frequencies, amplitudes, spikes, and waves that tend to appear [5] along time with no precise localization. Likewise, in PCG signals, murmurs appear overlapped with the cardiac beat, and sometimes cannot be easily distinguished even by the human ear [3]. Thereby, the performance of automatic decision support systems strongly depends on an adequate choice of those features that accurately parameterize the non-stationary behaviors that are present. Thus, a current challenging problem is to detect a variety of non-stationary biosignal activities with a low computational complexity, in order to provide tools for efficient biosignal database management and annotation.

As commented before, it is well known that non-stationary conditions give rise to temporal changes in the spectral content of the biosignals [2]. In this sense, the literature reports different features for examining the dynamic properties during transient physiological or pathological episodes. These features are usually extracted from the time–frequency (t–f) representations [1, 3, 4] of the signals under analysis. In order to estimate such t–f representations, both parametric and nonparametric estimations are generally employed. Among the most popular nonparametric approaches are the short-time Fourier transform (STFT), the wavelet transform (WT), matching pursuit (MP), the Choi–Williams distribution (CWD), and the Wigner–Ville distribution (WVD) [2, 6]; among the parametric models are time-variant autoregressive models and adaptive filtering [3, 4].

The features that are extracted from t–f representations are expected to characterize abnormal behaviors [7]. Previous studies on EEG or PCG have shown that techniques such as matching pursuit are efficient for describing the t–f representations with a reduced number of atoms [8, 9]. Nonetheless, a signal decomposition grounded on matching pursuit does not necessarily provide the same number of t–f atoms for each recording, hence dimensionality reduction arises as an additional issue when handling dynamic features of different lengths. Additionally, two-dimensional time–frequency/scale approaches, such as the t–f distributions (linear or quadratic) or even wavelet analysis, have also been widely used in biosignal processing, in particular for EEG [5, 10] and PCG [6, 11]. In this sense, an approach to create optimized quadratic t–f representations is proposed in [12] by designing kernels that lead to the maximum separability among classes. Moreover, recent approaches allow an EEG data representation with adaptive and sparse decompositions [13].

However, despite the flexibility provided by two-dimensional t–f representations, and regarding their use for classification purposes, some issues still remain open. For instance, the intrinsic dimensionality of t–f representations is huge, and thus the extraction of relevant and non-redundant features becomes essential for classification. For this purpose, [5] proposes a straightforward approach to compute a set of t–f tiles that represent the fractional energy of the biosignal in a specific frequency band and time window; thus, the energy can be evaluated by a simple measure, like the mean energy in each tile. Nonetheless, there is a noteworthy unsolved issue associated with local-based analysis in the tiling approach, namely the selection of the size of the local relevant regions [2]. As a result, the choice of features over the t–f representations is highly dependent on the final application. In this sense, linear decomposition methods have also been considered to extract features over t–f planes [1, 14], by arranging the t–f matrix into a single feature vector; however, in this case, it is highly convenient to previously fix a confined region of relevance over the t–f representations [3]. Thus, in [15], a t–f region is selected by a two-dimensional weighting function based on a mutual information criterion developed to obtain the maximum separability among classes, so that the weighted space is mapped to a set of one-dimensional features, although the methodology is restricted to a specific class of t–f representations.

Therefore, the extraction of relevant information from bi-dimensional t–f features has been discussed in the past as a means to improve performance during and after training in the learning process. Namely, as pointed out in [16], two main issues have to be solved to obtain an effective feature selection algorithm: the estimation of the measure associated with a given relevance function (i.e., a measure of distance among t–f planes), and the calculation of the multivariate transformation that may maximize the differences among classes pointed out by the relevance measures, projecting the features onto a new space [1].

This research proposes a new methodology for dimensionality reduction of t–f based representations. The proposed methodology combines, in two consecutive stages, a feature selection step with a linear decomposition of the time–frequency planes. First, the most relevant features (best localized points, or frequency bands over the t–f representations) are selected by means of a relevance measure. As a result, both the irrelevant information and the computational burden of the later transformation and/or classification stages are significantly decreased. Then, data are projected onto a lower-dimensional subspace using orthogonal transformations. For the sake of comparison, techniques based on principal component analysis (PCA) and partial least squares (PLS) were considered throughout this study as unsupervised and supervised transformations, respectively.

In order to evaluate the generalization capabilities of the proposed methodology, it has been tested using two different databases under three classification scenarios: the first uses a database of PCG recordings to detect heart murmurs; the second uses EEG recordings to detect epilepsy; and the third differentiates between five different types of EEG segments.

The article is organized as follows: the first section is dedicated to an overview of linear decomposition methods with an extension to matrix data; second, the concept of relevance, in terms of relevant mappings, and the selection of t–f based features by means of relevance measures are described. Then, comparative results against other t–f based methods are provided [3, 5, 6].

Methods

The methodology introduced throughout this article relies on a prior segmentation of the signals, followed by their characterization by means of a t–f representation. Later, the t–f planes are significantly reduced by means of a feature selection procedure followed by a linear decomposition. The stages considered are described next.

For the sake of simplicity, the time–frequency analysis carried out in this study has been estimated using spectrograms based on the classical STFT [5]. A t–f representation of a segment of a non-stationary signal can be seen as a matrix of features with column- and row-wise relationships, holding discriminant information about the underlying process.

In this sense, consider a set of t–f representations, $\mathcal{X}=\{X^{(k)}: k=1,\ldots,K\}$ (comprising $K$ observations), where each $X^{(k)}$ is associated with one and only one class label $c^{(k)} \in \mathbb{N}$, belonging to the class label set $C$. The $k$-th t–f representation is described by its corresponding feature matrix, $X^{(k)} \in \mathbb{R}^{F \times T}$, defined as follows:

$$X^{(k)} = \left[ x_{c1}^{(k)}, x_{c2}^{(k)}, \ldots, x_{cT}^{(k)} \right] = \begin{bmatrix} x_{r1}^{(k)} \\ x_{r2}^{(k)} \\ \vdots \\ x_{rF}^{(k)} \end{bmatrix} = \begin{bmatrix} x_{11}^{(k)} & x_{12}^{(k)} & \cdots & x_{1T}^{(k)} \\ x_{21}^{(k)} & x_{22}^{(k)} & \cdots & x_{2T}^{(k)} \\ \vdots & \vdots & \ddots & \vdots \\ x_{F1}^{(k)} & x_{F2}^{(k)} & \cdots & x_{FT}^{(k)} \end{bmatrix},$$

where each column vector $x_{cj}^{(k)}$ represents the power content at the $F$ frequencies at time instant $j=1,\ldots,T$, while each row vector $x_{ri}^{(k)}$ represents the power change along the $T$ time instants for frequency band $i=1,\ldots,F$. The real-valued $x_{ij}^{(k)}$ is the power content at frequency $i$ and time $j$.

Nonetheless, the main drawbacks of these arranged features are their large size and the huge quantity of redundant data they contain. Thereby, data reduction methods are required to accurately parameterize the activity of time-varying features, while preserving the information contained in the column- and row-wise relationships of the matrix data [14].

Dimensionality reduction of t–f representations using linear decomposition approaches

A straightforward dimensionality reduction approach on input matrix data by means of orthogonal transformations can be carried out by stacking matrix columns into a single vector, as follows:

$$\chi^{(k)} = \left[ (x_{c1}^{(k)})^{\top}, (x_{c2}^{(k)})^{\top}, \ldots, (x_{cT}^{(k)})^{\top} \right], \qquad \chi^{(k)} \in \mathbb{R}^{1 \times FT}$$
(1)

Thus, to reduce the dimensionality of the input data, a transformation matrix $W \in \mathbb{R}^{FT \times p}$, with $p \ll FT$, can be defined to map the original feature space $\mathbb{R}^{1 \times FT}$ into a reduced feature space $\mathbb{R}^{1 \times p}$ by means of the linear operation $z^{(k)} = \chi^{(k)} W$, where $z^{(k)}$ is the transformed feature vector. The transformation matrix $W$ can be obtained using an unsupervised approach such as PCA, or using a supervised approach such as PLS [17]. The vectorization approach in Equation (1) will be referred to next as vectorized PCA/PLS, depending on the specific transformation used.
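To make the vectorization concrete, the following sketch shows how the stacked vectors $\chi^{(k)}$ could be projected with scikit-learn's PCA or PLS. It is illustrative only: it assumes the $K$ t–f planes are stored in a NumPy array of shape (K, F, T) and that integer class labels are available; the function name and parameter values are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

def vectorized_reduction(tf_planes, labels=None, p=20, supervised=False):
    """Stack each F x T plane column-wise into a 1 x (F*T) vector and keep p components."""
    K, F, T = tf_planes.shape
    # transpose so that reshaping concatenates the T columns, as in Equation (1)
    chi = tf_planes.transpose(0, 2, 1).reshape(K, F * T)
    if supervised:
        # PLS uses the class labels to build the projection (supervised)
        z, _ = PLSRegression(n_components=p).fit_transform(chi, labels)
    else:
        # PCA ignores the labels (unsupervised)
        z = PCA(n_components=p).fit_transform(chi)
    return z
```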

On the other hand, given the input feature matrix $X^{(k)}$, a transformation matrix $U \in \mathbb{R}^{q \times F}$ can also be used to reduce the number of rows in the data matrix, i.e., $Z_r^{(k)} = U X^{(k)}$, with $Z_r^{(k)} \in \mathbb{R}^{q \times T}$. Likewise, a transformation matrix $V \in \mathbb{R}^{T \times p}$ can be used to reduce the number of columns of the data matrix, $Z_c^{(k)} = X^{(k)} V$, with $Z_c^{(k)} \in \mathbb{R}^{F \times p}$. If both transforms are combined, a further dimensionality reduction can be achieved as:

$$Z^{(k)} = U X^{(k)} V$$
(2)

where $Z^{(k)} \in \mathbb{R}^{q \times p}$ is the matrix of features with a reduced dimensionality. The estimation of the transformation matrices $U$ and $V$ is carried out from the data matrices $X^{(c)}$ and $X^{(r)}$, respectively, which are defined as:

$$X^{(c)} = \begin{bmatrix} X^{(1)} \\ \vdots \\ X^{(k)} \\ \vdots \\ X^{(K)} \end{bmatrix}; \qquad X^{(r)} = \begin{bmatrix} X^{(1)\top} \\ \vdots \\ X^{(k)\top} \\ \vdots \\ X^{(K)\top} \end{bmatrix}$$

Finally, the feature vector $z^{(k)}$ is obtained by stacking the columns of $Z^{(k)}$ into a new single feature vector. As described in [3], these approaches are termed 2D-PCA or 2D-PLS [18, 19], depending on the orthogonal transformation used to compute the matrices $U$ and $V$ in Equation (2).
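A minimal sketch of the bilinear reduction in Equation (2) is given below, using a common 2D-PCA construction (eigenvectors of the averaged column- and row-wise covariance matrices); this is an illustrative implementation and is not necessarily identical to the exact estimation used in [18, 19].

```python
import numpy as np

def two_d_pca(tf_planes, q=10, p=10):
    """Bilinear reduction Z^(k) = U X^(k) V via a common 2D-PCA construction."""
    K, F, T = tf_planes.shape
    X0 = tf_planes - tf_planes.mean(axis=0)           # center over the K observations
    # column (time) covariance: average of X^T X -> its eigenvectors give V (T x p)
    cov_c = np.mean(np.einsum('kft,kfs->kts', X0, X0), axis=0)
    # row (frequency) covariance: average of X X^T -> its eigenvectors give U (q x F)
    cov_r = np.mean(np.einsum('kft,kgt->kfg', X0, X0), axis=0)
    V = np.linalg.eigh(cov_c)[1][:, -p:]               # top-p eigenvectors (T x p)
    U = np.linalg.eigh(cov_r)[1][:, -q:].T             # top-q eigenvectors, transposed (q x F)
    Z = np.einsum('qf,kft,tp->kqp', U, tf_planes, V)   # Z^(k) = U X^(k) V for every k
    return Z.reshape(K, -1), U, V                      # vectorized feature vectors z^(k)
```

A supervised 2D-PLS variant would follow the same structure, replacing the eigendecompositions by projection directions computed against the class labels.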

Relevance analysis over t–f based features

According to some evaluation measure, a relevance analysis distinguishes those variables which effectively represent the underlying physiological phenomena. Such variables are named relevant features, whereas the evaluation measure is known as the relevance measure. In this sense, variable selection tries to reject those variables whose contribution to representing a target is null or negligible (irrelevant features), as well as those that carry repeated information (redundant features).

The notion of relevance can be cast into a supervised framework by considering that, for each one of the $x_{ij}$ features belonging to the feature subset, the relevance function $\rho$ is defined as follows [1]:

$$\rho: \mathbb{R}^{F \times T \times K} \to \mathbb{R}^{F \times T}, \qquad (\mathcal{X}, C, x_{ij}) \mapsto \rho(\mathcal{X}, C, x_{ij}) \in \mathbb{R}^{+}$$
(3)

where the relevance function ρ should satisfy the following properties:

  • Non-negativity: $\rho(\mathcal{X}, C, x_{ij}) \geq 0$.

  • Nullity: the function $\rho(\mathcal{X}, C, x_{ij})$ is null if the feature $x_{ij}$ has no relevance at all.

  • Non-redundancy: if $x_{i'j'} = \alpha x_{ij} + \varsigma$, where the real-valued $\alpha \neq 0$ and $\varsigma$ is some zero-mean noise with unit variance, then $|\rho(\mathcal{X}, C, x_{i'j'}) - \rho(\mathcal{X}, C, x_{ij})| \to 0$.

The evaluation of $\rho(\mathcal{X}, C, x_{ij})$ is called the relevance weight, and the main assumption is that the largest weight is associated with the most relevant feature. So, when the whole set of features is considered, a relevance matrix $R = [\rho_{ij}]$, $\rho_{ij} \in \mathbb{R}^{+}$, can be built. Also, to measure the contribution of each frequency band, a simple average can be computed, i.e.,

$$\rho_{rF} = E\{\rho_{ij} : \forall j\}, \qquad \rho_{rF} \in \mathbb{R}^{1 \times F}.$$

Then, the variable selection process is carried out by selecting those $x_{ij}$ features or those $x_{rF}$ frequency bands whose relevance values, $\rho_{ij}$ or $\rho_{rF}$, are above a certain threshold $\eta \in \mathbb{R}^{+}$. For this purpose, the following relevance measures can be assessed as evaluation criteria [20]:

  a. Linear correlation, given by:

$$\rho_{lc}(x_{ij}|c) = \frac{E\{(x_{ij}^{(k)} - \overline{x_{ij}})(c^{(k)} - \bar{c}) : \forall k\}}{\sqrt{E\{(x_{ij}^{(k)} - \overline{x_{ij}})^2 : \forall k\}\, E\{(c^{(k)} - \bar{c})^2 : \forall k\}}},$$
(4)

where $\overline{x_{ij}} = E\{x_{ij}^{(k)} : \forall k\}$ is the value of $x_{ij}$ averaged over the $K$ objects, $k=1,\ldots,K$, and $\bar{c} = E\{c^{(k)} : \forall k\}$. Likewise, $c^{(k)}$ is the label of the $k$-th object, associated with $X^{(k)}$. The notation $E\{\cdot : \forall \lambda\}$ stands for the expectation operator over the variable $\lambda$.

  b. Symmetrical uncertainty, which is a measure of the uncertainty of a random variable based on the information-theoretic concept of entropy, given by:

$$\rho_{su}(x_{ij}|c) = \frac{H\{x_{ij}^{(k)} : \forall k\} - H\{x_{ij}^{(k)} | c^{(k)} : \forall k\}}{H\{x_{ij}^{(k)} : \forall k\} + H\{c^{(k)} : \forall k\}}$$
(5)

where $H\{\cdot : \forall \lambda\}$ is the entropy operator over the variable $\lambda$, defined as:

$$H\{x_{ij}^{(k)} : \forall k\} = -\sum_{k} P(x_{ij}^{(k)}) \log P(x_{ij}^{(k)}), \qquad k=1,\ldots,K.$$
(6)

Likewise, the conditional entropy operator is given by:

$$H\{x_{ij}^{(k)} | c^{(k)} : \forall k\} = -\sum_{k} P(c^{(k)}) \sum_{k} P(x_{ij}^{(k)} | c^{(k)}) \log P(x_{ij}^{(k)} | c^{(k)}), \qquad k=1,\ldots,K,$$
(7)

where $P(x_{ij}^{(k)})$ and $P(c^{(k)})$ are the probability distribution functions (PDF) of the features of interest and of the labels, respectively, and $P(x_{ij}^{(k)} | c^{(k)})$ is the conditional PDF. For computing these functions, histogram-driven estimators were used, and the sums in Equations (6) and (7) were carried out along the histogram bins. However, if the number of recordings is lower than a certain threshold, other kinds of estimators, such as kernel-based ones, could be used.
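As an illustration of both relevance measures, the following sketch computes point-wise relevance maps with histogram-driven entropy estimators. It assumes the t–f planes are stored in an array of shape (K, F, T) and that the labels are non-negative integers 0,…,C−1; the number of histogram bins is an arbitrary choice for the example, and the function name is hypothetical.

```python
import numpy as np

def relevance_maps(tf_planes, labels, bins=16):
    """Point-wise relevance maps: linear correlation (4) and symmetrical uncertainty (5)."""
    labels = np.asarray(labels)
    K, F, T = tf_planes.shape
    X = tf_planes.reshape(K, -1)                       # one column per t-f point

    # Linear correlation of each t-f point with the class label
    Xc = X - X.mean(axis=0)
    cc = labels.astype(float) - labels.mean()
    rho_lc = np.abs(Xc.T @ cc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(cc) + 1e-12)

    def entropy(counts):
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))

    # Symmetrical uncertainty with histogram estimators
    Hc = entropy(np.bincount(labels))
    rho_su = np.empty(F * T)
    for j in range(F * T):
        edges = np.histogram_bin_edges(X[:, j], bins=bins)
        xq = np.digitize(X[:, j], edges)               # quantized feature values
        Hx = entropy(np.bincount(xq))
        Hxc = 0.0
        for cl in np.unique(labels):
            mask = labels == cl
            Hxc += mask.mean() * entropy(np.bincount(xq[mask]))
        rho_su[j] = (Hx - Hxc) / (Hx + Hc + 1e-12)

    return rho_lc.reshape(F, T), rho_su.reshape(F, T)
```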

Selection of the most informative areas from t–f representations

Once the relevance measure is properly determined, the selection of the features (t–f points or frequency bands) is carried out by choosing those variables with a relevance that exceeds a given threshold $\eta$, the result being termed $\hat{X}^{(k)}$, or $\hat{\chi}^{(k)}$ in its vectorized form. Due to the large size of the vectors, the threshold is varied as a function of the total number of features at hand, i.e., the higher the number of selected features, the lower the relevance threshold $\eta$. Nonetheless, handling the t–f representations requires special attention since the features considered are no longer organized as vectors. With this restriction in mind, two different approaches are proposed:

  i) The first one consists of evaluating the relevance of each point of the t–f representation, and then selecting the set of the most relevant points to build a reduced feature vector that is later transformed by conventional one-dimensional (1-D) linear decomposition methods. This approach is described in Algorithm 1 and will be referred to later as 1D-PCA or 1D-PLS (depending on the transformation technique used).

  ii) The second one consists of evaluating the relevance of the time-varying spectral components of the t–f representation, and then selecting the most relevant frequency bands to build a t–f based feature matrix, which is further reduced using a two-dimensional (2-D) matrix-based approach. This approach is described in Algorithm 2 and will be referred to later as 2D-PCA or 2D-PLS (depending on the transformation technique used).

Algorithm 1 Selection of t–f based features using relevance measures and dimensionality reduction (1–D approach)

Input: t–f dataset $\{X^{(1)}, X^{(2)}, \ldots, X^{(K)}\}$, relevance threshold $\eta$.

Output: Reduced feature vector set $\{z^{(1)}, z^{(2)}, \ldots, z^{(K)}\}$.

1. Estimate the relevance measure $\rho(x_{ij}|c)$ of the t–f points, using one of the relevance measures defined in Equations (4) or (5).

2. Select the most relevant t–f variables:

for k=1 to K do

  $\hat{X}^{(k)} = \left[ x_{ij}^{(k)} \;\; \forall i,j : \rho(x_{ij}|c) \geq \eta \right]$

end for

3. Convert the t–f matrices into vectors:

for k=1 to K do

  $\chi^{(k)} = \mathrm{vec}(\hat{X}^{(k)}) = \left[ (\hat{x}_{c1}^{(k)})^{\top}, (\hat{x}_{c2}^{(k)})^{\top}, \ldots, (\hat{x}_{cT}^{(k)})^{\top} \right]$

end for

4. Compute the transformation matrix $V$ of 1D-PCA or 1D-PLS using the relevant feature vector set $\{\chi^{(1)}, \chi^{(2)}, \ldots, \chi^{(K)}\}$.

5. Transform the feature vectors $\chi^{(k)}$ into the reduced feature vectors $z^{(k)}$:

for k=1 to K do

  $z^{(k)} = \chi^{(k)} V$

end for
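The steps of Algorithm 1 can be summarized in a few lines. The sketch below is illustrative: it reuses the hypothetical relevance_maps helper from the previous section, sets the threshold η through an example percentile, and uses scikit-learn's PCA for the 1D transformation (a 1D-PLS variant would simply substitute the transform).

```python
import numpy as np
from sklearn.decomposition import PCA

def algorithm1_point_selection(tf_planes, labels, percentile=85, n_components=20):
    """Select the most relevant t-f points, then reduce them with 1D-PCA."""
    rho_lc, rho_su = relevance_maps(tf_planes, labels)   # from the sketch above
    rho = rho_su                                          # e.g., symmetrical uncertainty
    eta = np.percentile(rho, percentile)                  # relevance threshold
    mask = rho >= eta                                     # most relevant t-f points
    K = tf_planes.shape[0]
    chi = tf_planes[:, mask]                              # reduced feature vectors (K x n_rel)
    z = PCA(n_components=n_components).fit_transform(chi)
    return z, mask
```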

Algorithm 2 Frequency band selection from t–f representations using relevance measures and dimensionality reduction by matricial approach (2–D approach)

Input: t–f matrix dataset {X(1),X(2),…,X(K)}, relevance threshold η.

Output: Reduced feature vector set {z(1),z(2),…,z(K)}.

1. Estimate the relevance measure $\rho(x_{ij}|c)$ of the t–f points, building the relevance map $R$:

$$R = \begin{bmatrix} \rho(x_{11}|c) & \rho(x_{12}|c) & \cdots & \rho(x_{1T}|c) \\ \rho(x_{21}|c) & \rho(x_{22}|c) & \cdots & \rho(x_{2T}|c) \\ \vdots & \vdots & \ddots & \vdots \\ \rho(x_{F1}|c) & \rho(x_{F2}|c) & \cdots & \rho(x_{FT}|c) \end{bmatrix}$$

2. Compute the average relevance value over the time axis for each frequency band:

$$\rho_{rF} = E\{\rho_{ij} : \forall j\}$$

3. Select the most relevant frequency bands:

for k=1 to K do

  $\hat{X}^{(k)} = \left[ x_{rF}^{(k)} \;\; \forall r : \rho_{rF} \geq \eta \right]$

end for

4. Compute the transformation matrices $U$ and $V$ of 2D-PCA (or 2D-PLS, respectively), using the reduced t–f matrix set $\{\hat{X}^{(1)}, \hat{X}^{(2)}, \ldots, \hat{X}^{(K)}\}$.

5. Transform the reduced t–f matrices $\hat{X}^{(k)}$ into the reduced feature vectors $z^{(k)}$:

for k=1 to K do

  $Z^{(k)} = U \hat{X}^{(k)} V$

  $z^{(k)} = \mathrm{vec}(Z^{(k)})$

end for
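Analogously, an illustrative sketch of Algorithm 2 is given below, reusing the hypothetical relevance_maps and two_d_pca helpers introduced earlier; the percentile used to set the threshold η and the numbers of components are example values only.

```python
import numpy as np

def algorithm2_band_selection(tf_planes, labels, percentile=60, q=10, p=10):
    """Select the most relevant frequency bands, then apply the 2D reduction."""
    rho_lc, rho_su = relevance_maps(tf_planes, labels)   # relevance map R (F x T)
    rho_band = rho_su.mean(axis=1)                        # average relevance per frequency band
    eta = np.percentile(rho_band, percentile)             # relevance threshold
    bands = rho_band >= eta                                # most relevant frequency bands
    X_hat = tf_planes[:, bands, :]                         # reduced t-f matrices
    z, U, V = two_d_pca(X_hat, q=q, p=p)                   # 2D transform from the earlier sketch
    return z, bands
```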

A schematic representation of each approach for the selection of relevant t–f features is shown in Figure 1. The approach described in Algorithm 1 is illustrated in Figure 1A, while Figure 1B depicts the procedure described in Algorithm 2.

Figure 1

Proposed approaches. Graphical representation of both approaches considered for the selection of the relevant variables from matricial data.

Table 1 summarizes the eight different combinations accomplished for the proposed approaches, including the algorithm, the transformation and the relevance measure used in each case.

Table 1 Summary of the proposed approaches

Experimental set–up

The approach used to adjust the proposed feature extraction method for the discrimination of non-stationary biosignals is shown in Figure 2. The methodology is divided into three consecutive steps: (i) estimation of the t–f representation; (ii) feature selection, which encompasses the selection of the relevant variables and a data transformation by linear decomposition methods; and (iii) classification, where a simple ǩ-nearest neighbors (ǩ-NN) classifier was used.

Figure 2

Methodology. Experimental outline, and methods subject to investigation.

Database acquisition and preprocessing

The proposed method was tested using two databases of non-stationary biosignal recordings, each comprising pathologies with different degrees of complexity and a wide variety of characterizing patterns. These two databases hold EEG and PCG recordings separately. For the PCG database, the unit used for classification is a heart beat, whereas for the EEG database the unit is a segment of 23.6 s. Their main characteristics are summarized in Table 2, where K stands for the number of segments stored in each database, K_C is the number of segments per class, f_s is the sampling frequency, N_bits is the number of quantization bits, and l is the length of each recording. For the PCG database, because of the difference in heart beat length after the segmentation step, a zero-padding length-normalization process was applied, according to the length of the longest recording. For the EEG database, since discontinuities between the end and the beginning of a time series are known to cause spurious spectral frequency components, segments of 4396 samples were first cut out of the recordings. Within these longer intervals, the beginning of each of the final segments of l=4096 samples was chosen in such a way that the amplitude differences of consecutive data points, and the slopes at the end and the beginning of the time series, had the same sign [21]. Finally, for both databases, after the segmentation step, an amplitude normalization process was carried out. Some additional details of each database are given next.

Table 2 Summary of the characteristics of the database
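A minimal sketch of the two normalization steps mentioned above (zero-padding of the segmented PCG beats to the length of the longest one, and amplitude normalization of each segment); the function names are illustrative.

```python
import numpy as np

def length_normalize(beats):
    """Zero-pad each segmented PCG beat to the length of the longest one."""
    max_len = max(len(b) for b in beats)
    return np.stack([np.pad(b, (0, max_len - len(b))) for b in beats])

def amplitude_normalize(x):
    """Scale a segment to unit maximum absolute amplitude."""
    return x / (np.max(np.abs(x)) + 1e-12)
```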

PCG database

This collection is made up of 45 adult subjects, who gave their informed consent approved by an ethics committee, and underwent a medical examination. A diagnosis was carried out for each patient, and the severity of the valve lesion was evaluated by cardiologists according to clinical routine. A set of 26 patients were labeled as normal, while 19 were tagged as pathological, with evidence of systolic and diastolic murmurs caused by valve disorders. Furthermore, eight phonocardiographic recordings, corresponding to the four traditional auscultation focuses, were taken per patient in the phases of post-expiratory and post-inspiratory apnea. Every recording lasted approximately 12 s, and was obtained with the patient lying in dorsal decubitus position. Next, after visual and audio inspection by cardiologists, some of the eight signals were removed because of artifacts and undesired noise. An electronic stethoscope (WelchAllyn Meditron model) was used to acquire the PCG simultaneously with a standard 3-lead electrocardiogram (EKG); since the QRS complex is clearly determined, the DII lead is used as a synchronized time reference. Both signals were sampled at a 44.1 kHz rate with an amplitude resolution of 16 bits. Preprocessing was carried out, including downsampling to 4000 Hz, amplitude normalization, and inter-beat segmentation, as described in [3]. Finally, after the segmentation process, the database holds 548 heart beats in total: 274 with murmurs and 274 labeled as normal. The selection of the 548 beats used for training and validation was carried out by expert cardiologists, who chose the most representative beats of normal and pathological patients (with murmurs) without taking into account the number of heart beats provided by each patient. The database belongs to both Universidad Nacional de Colombia and Universidad de Caldas. Recording was carried out following the rules fixed by the Research Ethics Committee of the Universidad de Caldas, which provides guidelines and supervision for procedures involving human beings.

EEG database

The EEG signals correspond to 29 patients with medically intractable focal epilepsies. They were recorded by the Department of Epileptology of the University of Bonn by means of intracranially implanted electrodes [21]. All EEG signals were recorded with a 128-channel acquisition system, using an average common reference. Data were digitized at 173.61 Hz, with 12 bits of resolution. The database comprises five sets (denoted Z, O, N, F, S), each composed of 100 single-channel EEG segments of 23.6 s and 4096 time points, which were selected and extracted after visual inspection from continuous multichannel EEG to avoid artifacts (e.g., muscular activity or eye movements). Sets Z and O consist of segments taken from scalp EEG recordings of five healthy subjects using the standard 10–20 electrode placement. Volunteers were awake and relaxed, with their eyes open (Z) and eyes closed (O), respectively. Sets N, F, and S were selected from presurgical diagnostic EEG recordings. The signals were selected from five patients who achieved complete control of the epileptic episodes after resection of one of the hippocampal formations, which was thus correctly diagnosed as the epileptogenic zone. Segments of set F were recorded in the epileptogenic zone, and segments of set N in the hippocampal formation on the opposite side of the brain. While sets N and F only contain activity measured during inter-ictal intervals, set S only contains recordings with ictal activity. In this set, all segments were selected from recording sites exhibiting ictal activity.

Estimation of the t–f representations

According to [5, 6], and for the sake of simplicity, the time–frequency analysis has been carried out using a quadratic t–f representation, namely the spectrogram. This representation, based on the classical Fourier transform, introduces a time-localization concept by sliding a tapering window function of short duration along the signal, thus adding a time dimension. In particular, the analysis is performed within the ranges of 0 to 83 Hz and 0 to 2 kHz for the EEG and PCG recordings, respectively. The spectrogram is computed using, as tapering functions, Gaussian windows with lengths of 2.9 and 0.05 s for the EEG and PCG recordings, respectively, with 50% overlap. For both databases, the spectrograms are computed using a resolution of 512 points. To illustrate the difficulty of the problems addressed, Figures 3 and 4 show some EEG and PCG recordings belonging to the normal and pathological classes, along with their respective spectrograms. It can be seen that there are some normal segments whose patterns resemble pathological ones, and vice versa.
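For illustration, spectrograms with the parameters quoted above could be computed as follows with SciPy; the standard deviation of the Gaussian window is an assumption, since the text only specifies the window length, the 50% overlap and the 512-point resolution.

```python
import numpy as np
from scipy.signal import spectrogram, get_window

def tf_representation(x, fs, win_seconds, nfft=512, fmax=None):
    """Spectrogram with a Gaussian tapering window and 50% overlap."""
    nperseg = int(round(win_seconds * fs))
    win = get_window(('gaussian', nperseg / 6), nperseg)   # window std: illustrative choice
    f, t, Sxx = spectrogram(x, fs=fs, window=win, nperseg=nperseg,
                            noverlap=nperseg // 2, nfft=nfft)
    if fmax is not None:                                    # keep only the band of interest
        keep = f <= fmax
        f, Sxx = f[keep], Sxx[keep, :]
    return f, t, Sxx

# e.g., EEG: tf_representation(x_eeg, fs=173.61, win_seconds=2.9, fmax=83)
#      PCG: tf_representation(x_pcg, fs=4000, win_seconds=0.05, fmax=2000)
```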

Figure 3

Examples. Examples of PCG recordings labeled as normal (A) and heart murmur (B), respectively.

Figure 4

EEG examples. Examples of EEG recordings labeled as normal (Z,O), interictal (N,F) and ictal (S), respectively.

Evaluation of the classification performance

In order to test the aforementioned approaches, three different scenarios of the evaluation were proposed:

  i. Scenario 1. Murmur detection from PCG signals. The PCG recordings were arranged into two classes (normal and pathological).

  ii. Scenario 2. Classification of EEG signals into three categories. The EEG segments were sorted into three different classes: Z and O types of EEG segments were combined into a single class; N and F types were also combined into a single class; and type S constituted the third class. This scenario, with only three categories, is close to real medical applications. Following this criterion, the database was split into: normal (i.e., types Z and O), containing 200 recordings; seizure-free (i.e., types N and F), with 200 recordings; and seizure (i.e., type S), with 100 recordings.

  iii. Scenario 3. Classification of EEG signals into five different categories. In this scenario, each type of EEG segment (Z, O, N, F, S) was considered as a single class, each containing 100 recordings.

The evaluation of the classification accuracy of each method was carried out using a simple ǩ-NN classifier, evaluated following a cross-validation scheme [22]. Several reasons justify the use of this classifier: it is straightforward to implement; it generally leads to good recognition performance thanks to the nonlinearity of its decision boundaries; and its complexity is assumed to be independent of the number of classes. The cross-validation approach used to evaluate the performance of the methodology consists of the division of each dataset into 10 folds containing different recordings and an even quantity of records from each class. Nine of these folds were used for training and the remaining one for validation purposes. The methods enumerated in Table 1 were applied to the training folds, and the resulting feature spaces were used to train the ǩ-NN classifier. Then, the relevance measures, the transformation matrices, and the classifier obtained during the training phase were used to categorize the recordings of the validation fold. This procedure was repeated, changing the training and validation folds, until all 10 folds had been used for validation.
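A simplified sketch of this cross-validation loop is shown below, using a stratified 10-fold split and a ǩ-NN classifier from scikit-learn, and assuming the features and labels are NumPy arrays. For brevity it only shows the classification step, whereas in the methodology the relevance measures and transformation matrices are also re-estimated on each training fold.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

def cross_validated_accuracy(features, labels, k_neighbors=1, n_folds=10, seed=0):
    """10-fold cross-validation of a k-NN classifier, stratified per class."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    accs = []
    for train_idx, val_idx in skf.split(features, labels):
        knn = KNeighborsClassifier(n_neighbors=k_neighbors)
        knn.fit(features[train_idx], labels[train_idx])
        accs.append(knn.score(features[val_idx], labels[val_idx]))
    return np.mean(accs), np.std(accs)
```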

For the scenario 1, with only two classes, the classification performance was measured by means of the accuracy, sensitivity and specificity figures of merit, defined by:

$$\text{Accuracy (\%)} = \frac{n_C}{n_T} \times 100; \qquad \text{Sensitivity (\%)} = \frac{n_{TP}}{n_{TP} + n_{FN}} \times 100; \qquad \text{Specificity (\%)} = \frac{n_{TN}}{n_{TN} + n_{FP}} \times 100$$

where $n_C$ is the number of correctly classified patterns, $n_T$ is the total number of patterns used to feed the classifier, $n_{TP}$ is the number of true positives (objective class accurately classified), $n_{FN}$ is the number of false negatives (objective class classified as control class), $n_{TN}$ is the number of true negatives (control class accurately classified), and $n_{FP}$ is the number of false positives (control class classified as objective class). In this study, the pathological classes correspond to the objective class, while the normal classes correspond to the control class. The accuracy, sensitivity and specificity were calculated for each validation fold, and their mean and standard deviation were used as figures of merit.

For the multi–class classification problems (scenarios 2 and 3), the sensitivity and specificity were computed taking each class as the target and the remaining ones as the control classes.
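For reference, a small helper computing the three figures of merit of the two-class scenario from predicted and true labels (illustrative only, assuming the pathological class is coded as the positive label).

```python
import numpy as np

def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity and specificity (%) for the two-class scenario."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    tn = np.sum((y_pred != positive) & (y_true != positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    accuracy = 100.0 * (tp + tn) / len(y_true)
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return accuracy, sensitivity, specificity
```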

Results

This section analyzes the tuning of the parameters that characterize the proposed methods: the number of neighbors of the ǩ-NN classifier, the number of components used by the linear decomposition approaches, and the relevance threshold. For the sake of comparison, the mean and the standard deviation of the accuracy obtained for the different methods were computed. For those configurations that provided the best accuracy, the sensitivity and the specificity were also computed.

The tuning of the proposed methods was carried out for the PCG database using scenario 1, whereas for the EEG database the procedure was carried out using scenario 2; finally, with the best configurations obtained for scenario 2, scenario 3 was tested.

Tuning of the ǩ-NN based classifier

By stepwise increasing the number of neighbors, ǩ, the optimal value was determined as the one which provided the highest accuracy. The procedure was carried out for each algorithm, using all the t–f representations available in each scenario, and the relevance threshold was set to η=100% (i.e., no relevance criterion was introduced). Additionally, the number of components for PCA/PLS (n for the 1D methods, n_r and n_c for the 2D methods) was selected as the number of components that describes 90% of the total variability of the dataset.
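The 90% variability criterion for the initial number of components can be expressed compactly as below (an illustrative sketch for the PCA case; the PLS case would use an analogous criterion).

```python
import numpy as np
from sklearn.decomposition import PCA

def components_for_variability(features, target=0.90):
    """Smallest number of principal components explaining the target variability."""
    pca = PCA().fit(features)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, target) + 1)
```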

Figures 5A,B show the accuracy using the point selection methods described in Algorithm 1, while Figures 5C,D show the results for the methods based on the selection of frequency bands described in Algorithm 2.

Figure 5

Tuning of the ǩ-NN classifier. Accuracy of the classifier versus number of neighbors.

In the framework of scenario 1, Figure 5A shows that, applying Algorithm 1 to the PCG data, the accuracy of the classifier decreases as the number of neighbors increases. Moreover, the standard deviation is lower for intermediate values and becomes larger as the number of neighbors increases. In the context of scenario 2, Figure 5B shows similar conclusions for the EEG signals.

Similar trends appear using the method based on the selection of frequency bands (Algorithm 2). Figures 5C,D show that the performance decreases as the number of neighbors increases for the PCG and EEG databases (scenarios 1 and 2). Note that the results using Algorithm 2 are more stable than with Algorithm 1. These results reflect the overall structure of the feature spaces obtained.

Accordingly, for both algorithms, the optimal number of neighbors was fixed at ǩ=1 for the PCG database and ǩ=3 for the EEG database. After the feature selection stage, the decision boundary among classes is expected to be clearer than when no relevance measures are used. Thus, after the relevance analysis, the number of neighbors ǩ of the classifier could be tuned to a higher value; however, the initial estimation (no relevance) is an admissible approximation.

Selection of the relevant features

The variable selection was carried out by choosing the most relevant features according to the proposed relevance measures: linear correlation, ρ_lc, and symmetrical uncertainty, ρ_su. The training sets of the t–f representations of the PCG and EEG signals were used to compute point-wise relevance measures, yielding a relevance matrix, which expresses the dependence of each t–f point on its respective label. As a result, a global measure of the degree of dependence is obtained. Therefore, the number of features is selected according to a universal threshold fixed a priori over the relevance map, which shows the t–f areas or frequency bands with a stronger relation to the phenomena under study. Additionally, as explained above, the threshold is varied as a function of the total number of features at hand, i.e., the higher the number of selected features, the lower the relevance threshold.

Figure 6 shows the results of each relevance measure for scenarios 1 and 2. In particular, the relevance measures shown in Figures 6A,B for the PCG database (scenario 1) demonstrate that a large span of the time–frequency range is poorly relevant; only some small, clearly defined areas can be regarded as highly relevant. The relevance measures based on linear correlation and symmetrical uncertainty select those time instants in the systole and diastole where heart murmurs, if present, should appear. Regarding the EEG database, in the context of scenario 2, Figures 6C,D demonstrate that the most relevant zones are in the low frequency bands (0–40 Hz). This result is more evident for the symmetrical uncertainty relevance measure, and is consistent with the fact that epilepsy is directly related to low-frequency components [5].

Figure 6

Relevance maps. Relevance of the time–frequency plane and frequency axis obtained for EEG and PCG databases.

In order to find the most relevant features, the estimated relevance measures were reshaped as vectors and then sorted from highest to lowest values. For both databases, the sorted relevance vectors for the measures considered are shown at the bottom of each subfigure in Figure 6. In the case of t–f point selection, using the methodology described in Algorithm 1, the variables were selected according to their relevance, keeping those with a value above a certain threshold η. Such a threshold should be adjusted to optimize the accuracy of the classifier.
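For clarity, the sweep over percentages of retained features translates into a threshold η as in the following sketch (illustrative; a 15% fraction is used only as an example, following the observation below).

```python
import numpy as np

def select_top_fraction(rho, fraction=0.15):
    """Threshold eta that keeps the given fraction of the most relevant variables."""
    sorted_rho = np.sort(rho.ravel())[::-1]            # relevance values, highest to lowest
    n_keep = max(1, int(round(fraction * sorted_rho.size)))
    eta = sorted_rho[n_keep - 1]
    return rho >= eta, eta                              # boolean mask and threshold
```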

Regarding the selection of the frequency bands described in Algorithm 2, the relevance measures were averaged over the time axis. As a result, a vector corresponding to the relevance of the frequency axis is calculated. The relevance values, $\rho_{rF}^{lc}$ and $\rho_{rF}^{su}$, corresponding to the frequency axis for both databases are shown in the left plots of each subfigure in Figure 6. Thus, the frequency bands were selected according to their relevance.

In order to assess the effectiveness of each relevance measure, and using the accuracy of the classifier as a figure of merit, the number of relevant features selected was increased as a percentage of the total number of variables. The percentages were varied from 5 to 100% in steps of 5%. This test was carried out for both relevance measures (linear correlation and symmetrical uncertainty), both dimensionality reduction methods (PCA and PLS), and for scenarios 1 and 2 (PCG: Figures 7A,B; EEG: Figures 7C,D) using Algorithm 1. For both scenarios, the most stable measure is the symmetrical uncertainty, demonstrating that it is possible to accurately classify the PCG and EEG signals using around 15% of the information given by each t–f representation.

Figure 7

Tuning relevant features 1D. Classifier accuracy vs. number of relevant variables for 1D methods. (A), (B): scenario 1; (C), (D): scenario 2.

A similar test was carried out for Algorithm 2, varying the threshold on the relevance of the frequency axis and selecting the most relevant frequency bands. The percentages were varied from 5 to 100% in steps of 5%. The results are shown in Figure 8: Figures 8A,B correspond to scenario 1, whereas Figures 8C,D correspond to scenario 2. For both scenarios, there is only a small and gradual performance drop as the number of relevant features diminishes. Once again, the symmetrical uncertainty provided more stable and selective results, giving high accuracy rates while using a very small portion of the t–f representation.

Figure 8

Tuning relevant features 2D. Classifier accuracy versus number of relevant variables for 2D methods. (A), (B): scenario 1; (C), (D): scenario 2.

Data transformation by linear decomposition methods

After selecting the most relevant variables, the dataset obtained (comprising the most relevant variables) was further reduced using the linear transformation methods discussed before. The number of latent components n for 1D-PCA (methods 1 and 2) and 1D-PLS (methods 3 and 4), as well as the number of time (n_c) and frequency (n_r) components for 2D-PCA (methods 5 and 6) and 2D-PLS (methods 7 and 8), was selected according to the maximum classification rate obtained.

For the point-wise approach given in Algorithm 1, Figures 9A,D illustrate the classifier accuracy as the number of components of the linear decomposition methods (PCA and PLS) changes. In the framework of scenario 1, for the PCG signals, Figure 9A shows that using PCA a good accuracy is achieved with a number of components above n=12, whereas n=21 components are needed for PLS. In the context of scenario 2, for the EEG signals, the results shown in Figure 9D demonstrate that, for both approaches, the performance tends to be steady for a relatively small number of components: around n=13 for PLS and n=25 for PCA.

Figure 9

Tuning number of components. Accuracy of the classifier versus number of components for the methods considered. (A), (B), (C): scenario 1; (D), (E), (F): scenario 2.

Figures 9B,C,E,F show the performance of the classifier vs. the number of column and row components of the 2D methods used in Algorithm 2. Figures 9B,E show the classifier outcomes using the 2D-PCA methods, while Figures 9C,F show the results using the proposed 2D-PLS methods.

In scenario 1, the number of row and column components of the t–f representation of the PCG signals must be increased to achieve a stable behavior. In the case of the EEG signals used in scenario 2, both methods provided a stable behavior as the number of row and column components increased. Furthermore, the accuracy increased with a small number of column components, whereas it stabilized as the number of row components increased. Since the column components are related to the temporal activity, and the row components are associated with the spectral variability, the behavior exhibited by the EEG signals can be interpreted as a smooth temporal activity with a higher spectral variability, while in PCG both the temporal and the spectral activities present a large variability.

Summary of results

For scenarios 1 and 2, Tables 3 and 4 summarize the values of the parameters that provided the best performance for the methods under study. Both tables show the sizes of the original datasets, the number of neighbors of the ǩ-NN classifier, the relevance threshold ρ_min, the number of relevant variables selected n_rel, the dimension of the feature space obtained n, the number of row and column components of the 2D linear decomposition methods, n_r and n_c, and the figures of merit of the classification. The results are presented as the mean and standard deviation of the accuracy, sensitivity and specificity. The tables allow a direct comparison of the approaches used to reduce the dimensionality (PCA, PLS, 2D-PCA, and 2D-PLS) and of the relevance measures (linear correlation and symmetrical uncertainty) for each kind of signal and for the eight methods presented in this article (Table 1). Besides, for the sake of comparison, the performance is also presented with no relevance analysis, as proposed in [14], as well as for the tiling approach of the t–f based feature extraction method described in [5], where n_r and n_c stand for the number of time and frequency partitions, respectively. In the case of the EEG database (scenarios 2 and 3), the splitting in the frequency domain was carried out according to the δ (0.4–4 Hz), θ (4–8 Hz), α (8–12 Hz), β (12–30 Hz), and γ (>30 Hz) bands, whereas for the PCG database (scenario 1), the number of splits in both the time and frequency axes was fixed empirically.

Table 3 Best performance obtained for the methodologies studied using the PCG database (scenario 1)
Table 4 Results for the EEG database (scenario 2)

Finally, scenario 3 was tested for methods 4 and 8, since they proved to be the best approaches under study. This scenario presents a more complex classification task involving five different classes. The results obtained are summarized in Table 5, in comparison with the approach found in [5].

Table 5 Results for the five class problem with the EEG database (scenario 3)

Discussion

Several tests were carried out to assess the behavior of the proposed methodologies described in Algorithms 1 and 2. Two kinds of signals with different stochastic behaviors were tested: PCG signals, with a well-defined temporal structure and well-localized events; and EEG signals, whose stochastic structure is not fixed. The relevance measures clearly reflected the particular stochastic behavior of each kind of signal.

Figure 6 demonstrates that, for EEG signals, the information content is distributed along the time axis, whereas it is well localized in the case of PCG signals. The relevance analysis also demonstrates the presence of informative and non-informative frequency bands. The selectivity of each relevance measure is different and also depends on the specific signal, as shown in Figure 6.

In scenario 1, for PCG signals, the symmetrical uncertainty is the most selective relevance measure; linear correlation provided some peaks of relevance, but in general is very disperse. This is also reflected in a faster decrease of the performance for the linear correlation measure (Figure 7A), and a more sustained performance for the symmetrical uncertainty (Figure 7B). Since the values of the relevance measures are very low over a large span of the time–frequency plane of the PCG signals, a large amount of points can be interpreted as uninformative. Moreover, there is a zone with a lower accuracy after a peak of performance around 20 to 30% of the relevant features. Regarding the 2D methodology, the symmetrical uncertainty is the most stable measure, since its drop in performance is very small; nevertheless, the method based on linear correlation reveals a similar performance. The larger stability of the symmetrical uncertainty can be explained because it spans larger time–frequency areas, including high-frequency components. Therefore, according to the previous results using Algorithms 1 and 2, the best relevance measure is the symmetrical uncertainty, given its selectivity and effectiveness for feature selection, and the stability of the accuracy it provides.

In scenario 2, for EEG signals, the behavior of both relevance measures is quite similar. The relevance measure based on the symmetrical uncertainty is the most selective, which is reflected in a more sustained accuracy rate (Figure 7D), compared with the fast decline of the performance shown in Figure 7C, where the linear correlation relevance measure is considered. Regarding the 2D methods, both the linear correlation and the symmetrical uncertainty showed the highest relevance values in the lower frequency bands. Regarding the linear decomposition methods, the 1D-PLS and 2D-PLS methods show, in general, the best performance, exhibiting a difference with respect to PCA of around 2 or 3 points of accuracy. However, PCA tends to stabilize the performance of the classifier with a lower number of components, both in the 1D and 2D versions. In any case, the performance of the 1D-PLS and 2D-PLS methods converges with a small number of components and remains stable.

For scenario 2, using the 1D methodology (methods 1 to 4), the EEG data needed a small number of features to achieve high performance rates. Also, the number of temporal components of the 2D methodology is very low, which means that the stochastic activity is easier to parameterize using the 2D approaches. On the other hand, the PCG signals need more components in both the 1D and 2D approaches, given that the local events and specific stochastic behaviors that these signals exhibit must be modeled.

Additionally, as shown in Tables 3 and 4, the proposed methodologies perform better (by about 3–4 points, in terms of accuracy) than recent approaches discussed in the literature [5, 14]. These results were expected due to the capability of both approaches to capture the most informative relevant points or bands over the t–f planes, which additionally brings computational stability to the dimensionality reduction process. Algorithm 2 with no relevance criterion (η=100%) provides a performance similar to that of the best approaches (methods 7 and 8); however, the feature selection stage based on relevance measures (linear correlation and symmetrical uncertainty) allows reducing the computational burden of the process, because the size of the matrices is further reduced before the dimensionality reduction stage.

Regarding scenario 3, the results obtained with methods 4 and 8 outperformed those obtained using the algorithm in [5] (by up to 10 points of accuracy). Nevertheless, the band selection methodology described in Algorithm 2 (method 8) is more suitable to discriminate among the different classes.

The values that gave the best accuracy rates for each database and for each methodology (1D and 2D) are summarized in Table 6 (methods 4 and 8). The percentage of reduction is computed as the ratio of the number of features removed to the total number of features. This measure is computed for the first stage of variable selection by relevance analysis (% Reduc. 1), as well as for the second stage of linear transformation by PCA or PLS (% Reduc. 2).

Table 6 Summary of best performance rates for each database

The feature selection stage allows an effective selection of the most relevant features. In accordance with the results shown in Table 6, for scenario 1 an accuracy of 99.64% was obtained with only 10% of the features extracted from the PCG signals; and for the EEG database, accuracies of 98.80 and 94.40% were obtained for scenarios 2 and 3, respectively, using 40% of the t–f features.

On the other hand, the methodology of Algorithm 1 needed a lower number of components, which is reflected in a lower feature space dimensionality; but Algorithm 2 handles larger matrices with almost the same performance. In the case of the 1D methodology (methods 1 to 4), and for data matrices of size (F·T)×K, it is necessary to compute a transformation matrix of size (F·T)×n, while for the 2D methodology (methods 5 to 8), two transformation matrices of sizes F×n_r and T×n_c are needed, working with two data matrices of sizes T×(F·K) and F×(T·K).
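To put these sizes in perspective, consider an illustrative example (the values of F and T are chosen only for this comparison and are not those of the databases): with F=256 frequency bins and T=170 time frames, the 1D methodology operates on vectors of length F·T=43520 and a transformation matrix with 43520 rows, whereas the 2D methodology only requires transformation matrices with 256 and 170 rows, respectively, which considerably lowers the memory and computational requirements of the decomposition step.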

Conclusions

This research proposes a new and promising approach for feature selection over t–f based features that can be applied to non-stationary biosignal classification. The results obtained showed a high performance under different scenarios and demonstrated that the accuracy is stable for EEG and PCG signals, giving evidence of the generalization capabilities of the proposed methodology for different signals with diverse non-stationary behaviors. The results open the possibility to extrapolate the methodology to the study of other biosignals.

The method directly deals with the highly redundant and irrelevant data contained in the bi-dimensional t–f representations, combining a first stage of irrelevant data removal by variable selection using a relevance measure, with a second stage of redundancy reduction by linear transformation methods. Under these premises, two methodologies have been derived: the first one aimed at finding the most relevant t–f points; the second one devised to select the frequency bands with a higher relevance. Each methodology needs a particular linear decomposition approach: in the first case, PCA and PLS methods were used, whereas in the second case a generalization of these methods to matrix data was used.

Although this work uses spectrograms, the proposed approaches can be applied to other kinds of real-valued t–f representations, such as time–frequency distributions, wavelet transforms, and matching pursuit, among others.

The relevance analysis was evaluated using two supervised measures: linear correlation and symmetrical uncertainty. Under the same premises, the application of these measures demonstrated a significant improvement in comparison with the case where no relevance measure was used. Besides, the relevance measure based on the symmetrical uncertainty provided a better performance, allowing an effective selection of the most relevant variables and thus diminishing the computational burden of the linear decomposition methods and of the classifier. In addition, the relevance analysis serves itself as an interpretation tool, giving information about those t–f patterns more closely related to abnormalities and pathological behavior.

On the other hand, it was found that the use of a supervised method (such as PLS) clearly improved the performance of the classifier. Moreover, the performance of the 1D and 2D versions was found to be very similar. Although the 1D methodology needs a lower number of components, which is reflected in a lower feature space dimensionality, the 2D methodology allows taking into account the dynamic information of each spectral component over the t–f planes, which was reflected in more stable results.

As a future study, the introduction of the relevance measure directly into the linear decomposition method should be evaluated, so that the relevance and redundancy analyses could be carried out in the same step, although probably at the expense of a larger computational burden and memory requirements. Additionally, the use of other linear or nonlinear decomposition techniques, such as linear discriminant analysis or locally linear embedding, should be evaluated. Moreover, the use of other relevance measures, such as mutual information, might also be considered, since it is an effective criterion for feature selection algorithms.

References

  1. Sepulveda-Cano LM, Acosta-Medina CD, Castellanos-Dominguez G: Relevance analysis of stochastic biosignals for identification of pathologies. EURASIP J. Adv. Signal Process. 2011, 2011: 10. 10.1186/1687-6180-2011-10

  2. Sejdic E, Djurovic I, Jiang J: Time-frequency feature representation using energy concentration: an overview of recent advances. Digital Signal Process. 2009, 19: 153-183. 10.1016/j.dsp.2007.12.004

  3. Avendano-Valencia L, Godino-Llorente J, Blanco-Velasco M, Castellanos-Dominguez G: Feature extraction from parametric time-frequency representations for heart murmur detection. Ann. Biomed. Eng. 2010, 38(8): 2716-2732. 10.1007/s10439-010-0077-4

  4. Tarvainen MP, Georgiadis S, Lipponen JA, Hakkarainen M, Karjalainen PA: Time-varying spectrum estimation of heart rate variability signals with Kalman smoother algorithm. 2009, 1-4.

  5. Tzallas A, Tsipouras M, Fotiadis D: Epileptic seizure detection in electroencephalograms using time-frequency analysis. IEEE Trans. Inf. Technol. Biomed. 2009, 13(5): 703-710.

  6. Quiceno-Manrique AF, Godino-Llorente JI, Blanco-Velasco M, Castellanos-Dominguez G: Selection of dynamic features based on time-frequency representations for heart murmur detection from phonocardiographic signals. Ann. Biomed. Eng. 2010, 38: 118-137. 10.1007/s10439-009-9838-3

  7. Debbal S, Bereksi-Reguig F: Time-frequency analysis of the first and the second heartbeat sounds. Appl. Math. Comput. 2007, 128(2): 1041-1052.

  8. Jabbari S, Ghassemian H: Modeling of heart systolic murmurs based on multivariate matching pursuit for diagnosis of valvular disorders. Comput. Biol. Med. 2011, 41: 802-811. 10.1016/j.compbiomed.2011.06.016

  9. Durka PJ, Matysiak A, Martínez-Montes E, Valdes-Sosa P, Blinowska KJ: Multichannel matching pursuit and EEG inverse solutions. J. Neurosci. Methods 2005, 148: 49-59. 10.1016/j.jneumeth.2005.04.001

  10. Zandi AS, Javidan M, Dumont GA, Tafreshi R: Automated real-time epileptic seizure detection in scalp EEG recordings using an algorithm based on wavelet packet transform. IEEE Trans. Biomed. Eng. 2010, 57(7): 1639-1651.

  11. Cvetkovic D, Übeyli ED, Cosic I: Wavelet transform feature extraction from human PPG, ECG, and EEG signal responses to ELF PEMF exposures: a pilot study. Digital Signal Process. 2008, 18(5): 861-874. 10.1016/j.dsp.2007.05.009

  12. Gillespie B, Atlas L: Optimizing time-frequency kernels for classification. IEEE Trans. Signal Process. 2001, 49(3): 485-496. 10.1109/78.905863

  13. Haufe S, Tomioka R, Dickhaus T, Sannelli C, Blankertz B, Nolte G, Müller KR: Large-scale EEG/MEG source localization with spatial flexibility. NeuroImage 2011, 54: 851-859. 10.1016/j.neuroimage.2010.09.003

  14. Bernat E, Williams W, Gehring W: Decomposing ERP time-frequency energy using PCA. Clin. Neurophysiol. 2005, 116: 1314-1334. 10.1016/j.clinph.2005.01.019

  15. Grall-Maes E, Beauseroy P: Mutual information-based feature extraction on the time-frequency plane. IEEE Trans. Signal Process. 2002, 50(4): 779-790. 10.1109/78.992120

  16. Zhao Y, Zhang S: Generalized dimension-reduction framework for recent-biased time series analysis. IEEE Trans. Knowl. Data Eng. 2006, 18(2): 231-244.

  17. Barker M, Rayens W: Partial least squares for discrimination. J. Chemometrics 2003, 17(3): 166-173. 10.1002/cem.785

  18. Yang J, Zhang D, Frangi A, Yang J: Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26: 131-137. 10.1109/TPAMI.2004.1261097

  19. Zhang D, Zhou ZH: (2D)2PCA: two-directional two-dimensional PCA for efficient face representation and recognition. Neurocomputing 2005, 69(1-3): 224-231.

  20. Yu L, Liu H: Efficient feature selection via analysis of relevance and redundancy. J. Mach. Learn. Res. 2004, 5: 1205-1224.

  21. Andrzejak R, Lehnertz K, Rieke C, Mormann F, David P, Elger C: Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state. Phys. Rev. E 2001, 64: 71-86.

  22. Duda R, Hart P, Stork D: Pattern Classification, 2nd edn. Wiley; 2001.


Acknowledgements

This research was supported by “Centro de Investigación e Innovación de Excelencia - ARTICA, Programa Nacional de Formación de Investigadores ”GENERACIÓN DEL BICENTENARIO”, 2011,” funded by COLCIENCIAS, “Servicio de monitoreo remoto de actividad cardiaca para el tamizaje clinico en la red de telemedicina del departamento de Caldas” funded by Universidad Nacional de Colombia and Universidad de Caldas and through the project grant TEC2009-14123-C04-02 financed by the Spanish government.

Author information


Corresponding author

Correspondence to Juan D Martínez-Vargas.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

D Martínez-Vargas, J., I Godino-Llorente, J. & Castellanos‐Dominguez, G. Time–frequency based feature selection for discrimination of non-stationary biosignals. EURASIP J. Adv. Signal Process. 2012, 219 (2012). https://doi.org/10.1186/1687-6180-2012-219


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/1687-6180-2012-219

Keywords