Due to the increasing complexity of electromagnetic signals, recognizing radar emitter signals has become a significant challenge. In this article, a hybrid recognition approach is presented that classifies radar emitter signals by exploiting the different separability of samples. The proposed approach comprises two steps, i.e., the primary signal recognition and the advanced signal recognition. In the former, a rough k-means classifier built on rough set theory is proposed to cluster the radar emitter signal samples. In the latter, the samples within the rough boundary are used to train a support vector machine (SVM), which then recognizes the samples in the uncertain area, thereby improving the classification accuracy. Simulation results show that, for recognizing radar emitter signals, the proposed hybrid recognition approach is more accurate and has a lower time complexity than traditional approaches.
Radar emitter recognition, which determines the type of a radar emitter, is a critical function in radar electronic support systems[1]. Emitter classification based on a collection of received radar signals is a subject of wide interest in both civil and military applications. For example, in battlefield surveillance, radar emitter classification provides an important means to detect targets employing radars, especially those of hostile forces. In civilian applications, the technology can be used to detect and identify navigation radars deployed on ships, or cars used for criminal activities[2].
The recent proliferation and complexity of electromagnetic signals encountered in modern environments is greatly complicating the recognition of radar emitter signals[1], and traditional recognition methods are becoming inefficient against this emerging issue[3]. Many new radar emitter recognition methods have been proposed, e.g., intrapulse feature analysis[4], stochastic context-free grammar analysis[1], and artificial intelligence analysis[5–8]. The artificial intelligence approaches in particular have attracted much attention; among them, neural networks and the support vector machine (SVM) are widely used for radar emitter recognition. In[6], Zhang et al. proposed a method based on rough set theory and radial basis function (RBF) neural networks. Yin et al.[7] proposed a radar emitter recognition method using the single-parameter dynamic search neural network. However, the prediction accuracy of the neural network approaches is not high, and their application requires large training sets, which may be infeasible in practice. Compared to neural networks, the SVM yields higher prediction accuracy while requiring fewer training samples. Ren et al.[2] proposed a recognition method using a fuzzy C-means clustering SVM. Lin et al. proposed to recognize radar emitter signals using the probabilistic SVM[8] and multiple SVM classifiers[9]. These SVM approaches can improve the recognition accuracy; unfortunately, the time complexity of the SVM increases rapidly with the number of training samples. Classification methods with high accuracy and low time complexity have therefore become the focus of research.
Classifiers can be categorized into linear and nonlinear classifiers. A linear classifier can classify linearly separable samples, but cannot classify linearly inseparable samples efficiently. A nonlinear classifier can classify linearly inseparable samples; nevertheless, its time complexity increases when it also has to process linearly separable samples. In practice, radar emitter signals consist of both linearly separable and linearly inseparable samples, which makes classification challenging. Traditional recognition approaches use only one classifier, so it is difficult to classify all radar emitter signal samples well. In this article, a hybrid recognition method based on rough k-means and the SVM is proposed. To deal with this drawback of the traditional approaches, we apply two classifiers to recognize linearly separable and linearly inseparable samples, respectively: samples are first recognized by the rough k-means classifier, while linearly inseparable samples are picked out and further recognized by the RBF-SVM in the advanced recognition. The simulation results show that the proposed approach recognizes radar emitter signals more accurately and has a lower time complexity than existing approaches.
The rest of the article is organized as follows. In Section ‘Basic concepts’, some basic concepts are reviewed. In Section ‘Radar emitter recognition system’, a novel radar emitter recognition model is proposed. The performance of the proposed approach is analyzed in Section ‘Simulation results’, and conclusions are given in Section ‘Conclusions’.
Basic concepts
Rough sets
An information system can be expressed by a four-parameter group[10]: S = {U, R, V, f}. U is a finite and nonempty set of objects called the universe, and R = C ∪ D is a finite set of attributes, where C denotes the condition attributes and D denotes the decision attributes. V = ∪ v_r, (r ∈ R), is the domain of the attributes, where v_r denotes the set of values that the attribute r may take. f: U × R → V is an information function. The equivalence relation R partitions the universe U into subsets; such a partition of the universe is denoted by U/R = {E_1, E_2, …, E_n}, where E_i is an equivalence class of R. If two elements u, v ∈ U belong to the same equivalence class E ⊆ U/R, u and v are indistinguishable, denoted by ind(R). If ind(R) = ind(R − r), r is unnecessary in R; otherwise, r is necessary in R.
Since it is not possible to differentiate the elements within the same equivalence class, one may not obtain a precise representation for an arbitrary set X ⊆ U. A set X that can be expressed as a union of some R-basis categories is called definable, and the others are rough sets. A rough set can be characterized by its upper and lower approximations: elements in the lower approximation of X definitely belong to X, and elements in the upper approximation possibly belong to X. The upper and lower approximations of X with respect to R can be defined as follows[11]:

$\underset{\_}{R}\left(X\right)=\left\{x\in U:{\left[x\right]}_{R}\subseteq X\right\},\qquad \overline{R}\left(X\right)=\left\{x\in U:{\left[x\right]}_{R}\cap X\ne \varnothing \right\}$

where $\underset{\_}{R}\left(X\right)$ represents the set of elements that can be merged into X positively, and $\overline{R}\left(X\right)$ represents the set of elements that are possibly merged into X.
Suppose P and Q are both equivalence relations on the universe U, and the knowledge systems determined by them are U/P = {[x]_P : x ∈ U} and U/Q = {[y]_Q : y ∈ U}. If for any [x]_P ∈ U/P, $\overline{Q}\left({\left[x\right]}_{P}\right)=\underset{\_}{Q}\left({\left[x\right]}_{P}\right)={\left[x\right]}_{P}$, then knowledge P depends on knowledge Q completely; that is, whenever the object under study has some characteristic of Q, it must also have the corresponding characteristic of P, and P and Q are in a definite relationship. If knowledge P depends on knowledge Q only partly, P and Q are in an uncertain relationship. The degree to which knowledge P depends on knowledge Q is therefore defined as[10]

${\gamma}_{Q}\left(P\right)=\frac{\mathrm{card}\left(\text{PO}{S}_{Q}\left(P\right)\right)}{\mathrm{card}\left(U\right)}$

where $\text{PO}{S}_{Q}\left(P\right)=\cup \underset{\_}{Q}\left(x\right)$ and 0 ≤ γ_Q ≤ 1. The value of γ_Q reflects the degree of dependence of knowledge P on knowledge Q: γ_Q = 1 shows that knowledge P depends on knowledge Q completely; γ_Q close to 1 shows that knowledge P depends on knowledge Q highly; γ_Q = 0 shows that knowledge P is independent of knowledge Q.
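To make the dependency degree concrete, the following sketch (hypothetical helper names; samples represented as plain attribute dictionaries, not the article's data structures) computes γ as the fraction of objects whose Q-equivalence class falls entirely within a single P-class:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices into equivalence classes by the given attributes."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency_degree(rows, cond_attrs, dec_attrs):
    """gamma = card(POS_Q(P)) / card(U): fraction of objects whose block under
    the condition attributes lies inside one decision class."""
    pos = 0
    label = {i: tuple(rows[i][a] for a in dec_attrs) for i in range(len(rows))}
    for block in partition(rows, cond_attrs):
        if len({label[i] for i in block}) == 1:  # block is in a lower approximation
            pos += len(block)
    return pos / len(rows)

# toy universe: attribute 'a' determines 'type' completely, 'c' does not
rows = [{'a': 1, 'c': 0, 'type': 1},
        {'a': 1, 'c': 1, 'type': 1},
        {'a': 2, 'c': 0, 'type': 2},
        {'a': 2, 'c': 1, 'type': 2}]
g_a = dependency_degree(rows, ['a'], ['type'])   # 1.0: complete dependence
g_c = dependency_degree(rows, ['c'], ['type'])   # 0.0: independence
```

This mirrors the attribute-reduction test used later: an attribute whose removal leaves γ unchanged is unnecessary.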
Rough k-means algorithm
The k-means algorithm is one of the most popular iterative descent clustering algorithms[12]. The basic idea is to make the samples highly similar within a class and dissimilar between classes. The center of a cluster can be given by:

${t}_{i}=\frac{\sum _{x\in {X}_{i}}x}{\mathrm{card}\left({X}_{i}\right)},\qquad 1\le i\le I$

where x denotes the sample to cluster, X_i denotes cluster i, card(X_i) denotes the number of elements in X_i, and I denotes the number of clusters.
The k-means algorithm is efficient for clustering, but it has the following problems:

1. The number of clusters must be given before clustering[13].

2. The algorithm is very sensitive to the initial center selection and can easily end up in a local minimum[13, 14].

3. The algorithm is also sensitive to isolated points[15].
To overcome the problem of isolated points, Lingras and West[15] proposed the rough k-means algorithm, which introduces the upper and lower approximations into the k-means clustering algorithm. The improved cluster center is given by[15]:

${t}_{i}={\omega}_{\text{lower}}\frac{\sum _{x\in \underset{\_}{{X}_{i}}}x}{\mathrm{card}\left(\underset{\_}{{X}_{i}}\right)}+{\omega}_{\text{upper}}\frac{\sum _{x\in \overline{{X}_{i}}-\underset{\_}{{X}_{i}}}x}{\mathrm{card}\left(\overline{{X}_{i}}-\underset{\_}{{X}_{i}}\right)}$

where the parameters ω_{lower} and ω_{upper} are the lower and upper membership weights of the samples relative to their cluster centers. For each sample x, d(x,t_i) denotes the distance between x and the center t_i of cluster i. Whether x belongs to the lower or the upper approximation of its cluster is based on the value of d(x,t_i) − d_min(x) (1 ≤ i ≤ I), where d_min(x) = min_{i∈[1,I]} d(x,t_i). If d(x,t_i) − d_min(x) ≥ λ for every cluster other than the nearest one, the sample x is assigned to the lower approximation of its nearest cluster, where λ denotes the threshold separating the upper and lower approximations; otherwise, x is assigned to the upper approximations of all clusters within the threshold. The weights can be determined by the numbers of elements in the lower approximation set and the upper approximation set, as follows:
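As an illustration of the assignment rule and the weighted center update, here is a minimal one-iteration sketch (the function name, weight values, and toy data are illustrative assumptions, not taken from the article; NumPy is assumed):

```python
import numpy as np

def rough_kmeans_step(X, centers, lam, w_lower=0.7, w_upper=0.3):
    """One rough k-means iteration: a sample goes to the lower approximation
    of its nearest cluster when no other center is within lam of the minimum
    distance; otherwise it goes to the upper approximations of all close
    clusters. Centers are then recomputed as weighted means."""
    k = len(centers)
    lower = [[] for _ in range(k)]
    upper = [[] for _ in range(k)]
    for x in X:
        d = np.array([np.linalg.norm(x - c) for c in centers])
        near = int(np.argmin(d))
        close = [i for i in range(k) if d[i] - d[near] < lam]
        if len(close) == 1:              # unambiguous sample
            lower[near].append(x)
            upper[near].append(x)
        else:                            # ambiguous: rough-boundary sample
            for i in close:
                upper[i].append(x)
    new_centers = []
    for i in range(k):
        boundary = [x for x in upper[i] if not any(x is y for y in lower[i])]
        if lower[i] and boundary:
            c = w_lower * np.mean(lower[i], axis=0) + w_upper * np.mean(boundary, axis=0)
        else:                            # degenerate cases: plain mean
            c = np.mean(lower[i] or upper[i], axis=0)
        new_centers.append(c)
    return new_centers, lower, upper

# toy data: two tight groups and one point midway between the centers
X = [np.array(p) for p in ([0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [2.5, 0.0])]
centers = [np.array([0.0, 0.0]), np.array([5.0, 0.0])]
new_c, lower, upper = rough_kmeans_step(X, centers, lam=1.0)
# the midway point lands only in the upper approximations of both clusters
```

Note how the boundary point pulls each new center toward it only with weight ω_upper, which is what keeps isolated samples from dominating the centroid.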
Support vector machine

In this section, we give a very brief introduction to the SVM. Let (x_i, y_i), 1 ≤ i ≤ N, be a set of training examples; each example x_i ∈ R^d, with d the dimension of the input space, belongs to a class labeled by y_i ∈ {−1, 1}. Training amounts to finding w and b which satisfy

${y}_{i}\left(\mathbf{w}\cdot {\mathbf{x}}_{i}+b\right)\ge 1,\qquad i=1,\dots ,N$
The aim of the SVM is to find the hyperplane that places the samples with the same label on the same side of the hyperplane. The quantity $\frac{2}{\left\Vert \mathbf{w}\right\Vert}$ is named the margin, and the optimal separating hyperplane (OSH) is the separating hyperplane that maximizes the margin. The larger the margin, the better the generalization is expected to be[16].
To maximize the margin, i.e., to minimize $\frac{{\left\Vert \mathbf{w}\right\Vert}^{2}}{2}$, Lagrange multipliers are usually used, leading to maximizing

$W\left(\alpha \right)=\sum _{i=1}^{N}{\alpha}_{i}-\frac{1}{2}\sum _{i,j=1}^{N}{\alpha}_{i}{\alpha}_{j}{y}_{i}{y}_{j}\,{\mathbf{x}}_{i}\cdot {\mathbf{x}}_{j}$
where α = (α_1,…,α_N) denotes the nonnegative Lagrange multipliers, x_i denotes the input of the training data, and y_i denotes the output of the training data[17].
In the nonlinear case, the approach adapted to noisy data is to allow a soft margin. We introduce the slack variables (ξ_1,…,ξ_N) with ξ_i ≥ 0, and minimize

$\frac{1}{2}{\left\Vert \mathbf{w}\right\Vert}^{2}+C\sum _{i=1}^{N}{\xi}_{i}$

subject to (12) and ξ_i ≥ 0. The term ∑ξ_i is an upper bound on the number of training errors, and C is the penalty parameter that controls the trade-off between the margin width and the training errors.
In the nonlinear SVM, a kernel function is introduced to map the initial data into a high-dimensional feature space, in which the data should be linearly separable. The quadratic optimization problem is then converted to maximizing

$W\left(\alpha \right)=\sum _{i=1}^{N}{\alpha}_{i}-\frac{1}{2}\sum _{i,j=1}^{N}{\alpha}_{i}{\alpha}_{j}{y}_{i}{y}_{j}K\left({\mathbf{x}}_{i},{\mathbf{x}}_{j}\right)$

subject to (10) and 0 ≤ α_i ≤ C, where K(x, x_i) is the kernel function. As one of the most popular kernel functions, the RBF kernel is considered in this article; it takes the following form[18, 19]:

$K\left(\mathbf{x},{\mathbf{x}}_{i}\right)=\mathrm{exp}\left(-\gamma {\left\Vert \mathbf{x}-{\mathbf{x}}_{i}\right\Vert}^{2}\right)$
The result of the minimization is determined by the selection of parameters C and γ. Usually, C and γ are determined by using cross validation.
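As a concrete illustration of selecting C and γ by cross-validation, the following sketch grid-searches an RBF-SVM on the Iris data (scikit-learn and the particular parameter grid are our assumptions; the article does not prescribe an implementation):

```python
# Sketch: choosing (C, gamma) for an RBF-SVM by 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(
    SVC(kernel="rbf"),                        # K(x, xi) = exp(-gamma * ||x - xi||^2)
    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=5,                                     # 5-fold cross-validation
)
grid.fit(X, y)
best = grid.best_params_                      # selected (C, gamma) pair
acc = grid.best_score_                        # mean cross-validated accuracy
```

In practice the grid would be refined around the best cell; the article's experiments use the same Iris data, so this is a reasonable stand-in.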
Radar emitter recognition system
In this section, a hybrid radar emitter recognition approach is proposed, consisting of a rough k-means classifier for the primary recognition and an SVM classifier for the advanced recognition. The approach is based on the fact that, in k-means clustering, the linearly inseparable samples lie mostly at the margins of clusters, which makes it difficult to determine which cluster they belong to. To solve this problem, a linear classifier based on the rough k-means and a nonlinear SVM classifier are adopted: the former classifies linearly separable samples and picks out the linearly inseparable ones, which are then classified in the advanced recognition by the SVM.
After sorting and feature extraction, radar emitter signals are described by pulse description words, on which the radar emitter recognition is based. The process of the hybrid radar emitter recognition approach is shown in Figure 1. Based on the pulse description words, an information sheet of radar emitter signals is obtained; by attribute discretization and attribute reduction, the classification rules are extracted. These classification rules are the basis of the initial centers of the rough k-means classifier, i.e., they determine the initial centers and the number of clusters. After that, the known radar emitter signal samples are clustered by the rough k-means while the rough k-means classifier of the primary recognition is built, as described in the following section. The samples at the margins of the clusters are picked out to be used as the training data for the SVM in the advanced recognition. Unknown samples to be classified are first recognized by the rough k-means classifier; the uncertain sample set, which contains most of the linearly inseparable samples, is then classified by the SVM in the advanced recognition.
Based on the process of the recognition approach described above, the accuracy of recognition can be given by:

${A}_{\text{total}}={A}_{\text{primary}}+\left(1-{A}_{\text{primary}}\right)\frac{{N}_{\text{WIU}}}{{N}_{W}}{A}_{\text{advanced}}$
(18)

where A_total is the accuracy of the hybrid recognition, A_primary is the accuracy of the primary recognition, A_advanced is the accuracy of the advanced recognition, N_WIU is the number of samples falsely classified by the primary recognition that fall in the uncertain area, and N_W is the total number of wrongly classified samples.
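Plugging in the figures reported later in the simulation section (A_primary = 86%, A_advanced = 92%, N_WIU = 18, N_W = 28) reproduces the theoretical value quoted there:

```python
# Numeric check of the hybrid-accuracy expression with the simulation figures
a_primary, a_advanced = 0.86, 0.92
n_wiu, n_w = 18, 28          # wrongly classified samples: in uncertain area / in total
a_total = a_primary + (1 - a_primary) * (n_wiu / n_w) * a_advanced
print(round(a_total * 100, 2))   # 94.28
```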
Primary recognition based on improved rough k-means
As mentioned above, a classifier based on the rough k-means is proposed for the primary recognition. In the rough k-means algorithm, a cluster has two areas, i.e., the certain area and the rough area; in the rough k-means classifier proposed in this article, there are three. For example, a cluster in two dimensions is depicted in Figure 2. At the edge of the cluster, there is an empty area between the borderline and the midcourt line of the two cluster centers, which we name the uncertain area. During clustering, no sample lies in the uncertain area. When the clustering is completed, these clusters are used as the rough k-means classifiers: an unknown sample nearer to a cluster center than the midcourt line is classified into that cluster. Linearly inseparable samples are usually far from the cluster centers and probably outside the cluster, i.e., in the uncertain area. Thus, after being assigned to their nearest clusters, the unknown samples in the uncertain area are recognized by the advanced recognition; for the unknown samples in the certain area and the rough area, the primary recognition outputs the final results.
As shown in Figure 2, in the training process of the rough k-means classifier, we calculate the cluster center, the rough boundary R_ro, and the uncertain boundary R_un of every cluster. After clustering, the center of a cluster and the farthest sample from that center are determined. The area between the rough boundary and the uncertain boundary (R_ro < d_x < R_un) is defined as the rough area, where d_x denotes the distance from a sample to the center. In the training, if a training sample lies in the rough area, it is used to train the SVM in the advanced recognition. The uncertain boundary threshold R_un is defined as

${R}_{\text{un}}=max\left({d}_{x}\right)$
(19)

where max(d_x) is the distance from the farthest sample to the center.
In a cluster, the area beyond the uncertain boundary (d_x > R_un) is the uncertain area. When unknown samples are recognized, each is assigned to its nearest cluster. If d_x > R_un, the sample is further recognized by the advanced recognition; for all other unknown samples, the result of the primary recognition is final.
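The routing decision of the primary recognition can be sketched as follows (the function name and the per-cluster boundary representation are illustrative assumptions):

```python
import math

def primary_classify(sample, centers, r_un):
    """Primary-recognition decision: assign the sample to its nearest
    cluster, and flag it for the advanced (SVM) recognition when its
    distance exceeds that cluster's uncertain boundary R_un."""
    dists = [math.dist(sample, c) for c in centers]
    i = dists.index(min(dists))
    return i, dists[i] > r_un[i]

centers = [(0.0, 0.0), (10.0, 0.0)]
r_un = [2.0, 2.0]                  # per-cluster uncertain boundaries
print(primary_classify((1.0, 0.0), centers, r_un))   # (0, False): primary result is final
print(primary_classify((4.0, 0.0), centers, r_un))   # (0, True): route to the SVM
```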
As discussed in the previous section, the k-means algorithm has some problems. The rough k-means method can resolve the nondeterminacy in clustering and reduce the effect of isolated samples[20], but it still requires the initial centers and the number of centers as priors. In addition, the choice of initial centers is very important for rough k-means, so they are usually determined by least-mean-square computation. In this article, we propose to determine the initial centers based on rough set theory, i.e., to compute them from the classification rules of the rough sets. The process can be described as follows:

1. Classification rules are obtained based on the rough set theory.

2. The mean value of every class is obtained.

3. The mean values are defined as the initial cluster centers; the number of clusters equals the number of rules:

${t}_{p}=\frac{\sum _{x\in {X}_{p}}x}{\mathrm{card}\left({X}_{p}\right)}$
(20)

where X_p denotes the set of samples matching classification rule p of the rough set theory.
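Steps 1–3 amount to taking per-rule sample means as the initial centers. A minimal sketch (the rule-matching function `rule_of` is a hypothetical stand-in for the extracted rough-set rules):

```python
import numpy as np

def initial_centers(samples, rule_of):
    """Initial cluster centers as the mean of the samples matched by each
    rough-set classification rule; rule_of(i) returns the rule id of
    sample i (rule matching itself is application-specific)."""
    n = len(samples)
    rules = sorted({rule_of(i) for i in range(n)})
    return [np.mean([samples[i] for i in range(n) if rule_of(i) == p], axis=0)
            for p in rules]

samples = np.array([[1.0, 2.0], [3.0, 4.0], [10.0, 10.0]])
centers = initial_centers(samples, lambda i: 0 if i < 2 else 1)
# two rules -> two initial centers: [2, 3] and [10, 10]
```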
In (5), the parameter λ determines the lower and upper membership of a sample relative to the clusters. If the threshold λ is too large, the lower approximation sets will be empty, while if it is too small, the boundary areas will be empty. Usually, λ is set to a value that keeps both the lower approximations and the boundary areas of most clusters nonempty. The threshold λ can be determined as follows:

1. Compute the Euclidean distance of every object to the K cluster centers, obtaining the distance matrix D(i,j).

2. Compute the minimum value d_min(i) in every row of the matrix D(i,j).

3. Compute the gap between every object and each of the other class centers, d_t(i,j) = D(i,j) − d_min(i).

4. Obtain the minimum nonzero value d_s(i) in every row.

5. λ is chosen from the minimum values d_s(i).
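The five steps can be sketched as follows; since step 5 only says λ is "chosen from" the values d_s(i), taking their minimum is one possible reading (NumPy assumed):

```python
import numpy as np

def choose_lambda(X, centers):
    """Steps 1-5: distance matrix, per-row minimum, gaps to the other
    centers, smallest nonzero gap per row, then lambda from those values."""
    D = np.array([[np.linalg.norm(x - c) for c in centers] for x in X])  # step 1
    d_min = D.min(axis=1, keepdims=True)                                 # step 2
    d_t = D - d_min                                                      # step 3
    d_s = np.where(d_t > 0, d_t, np.inf).min(axis=1)                     # step 4
    return float(d_s.min())                                              # step 5

X = np.array([[0.0], [1.0], [4.0]])
centers = np.array([[0.0], [5.0]])
lam = choose_lambda(X, centers)   # per-sample gaps are 5, 3, 3 -> lambda = 3
```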
After that, the known samples are clustered by using (5), and the cluster centers C, the rough boundary R_ro, and the uncertain boundary R_un are determined.
In addition, the primary recognition result is greatly affected by the radii of the clusters, and rough k-means clustering can shrink these radii effectively. As shown in Figure 3, the radius of a k-means cluster is the distance from the cluster center to the farthest isolated sample. In rough k-means, the cluster center is a weighted average of the lower approximation center and the upper approximation center, and the upper approximation center is close to the farthest sample; so the rough k-means cluster radius R_r is obviously smaller than the k-means radius R. As the radius shrinks, the probability that an uncertain sample is recognized as a certain one when unknown samples are classified is reduced. Therefore, the accuracy of the primary recognition is increased.
The time complexity of the hybrid recognition approach
The time complexity of the approach proposed in this article consists of two parts, namely the time complexity of the primary recognition and the time complexity of the advanced recognition.
In the training of the primary recognition, samples are clustered by using the rough k-means. The time complexity of the rough k-means is $\mathcal{O}\left(\text{dmt}\right)$, where d, m, and t denote the dimensionality of the samples, the number of training samples, and the number of iterations, respectively. In this article, the optimal initial centers are determined by analyzing the knowledge rules of the training sample set based on rough set theory rather than by iteration; thus, the time complexity of the primary recognition is $\mathcal{O}\left(\text{dm}\right)$.
The SVM is used for the advanced recognition in our approach. The time complexity of SVM training is dominated by the number of training samples rather than by the sample dimensionality, and is usually discussed in terms of the underlying quadratic programming problem; standard SVM training has $\mathcal{O}\left({m}^{3}\right)$ time complexity[21].
In conclusion, the time complexity of our hybrid recognition is $\mathcal{O}\left(\text{dm}\right)+\mathcal{O}\left({m}^{\prime 3}\right)$, where m′ denotes the number of training samples for the SVM in the advanced recognition (after the primary recognition, the training set for the SVM is reduced). In general, $\mathcal{O}\left(\text{dm}\right)$ is far smaller than $\mathcal{O}\left({m}^{\prime 3}\right)$; therefore, the training time complexity of the hybrid recognition can be regarded as $\mathcal{O}\left({m}^{\prime 3}\right)$.
Simulation results
Data set description and experiment design
The validity and efficiency of the proposed approach are demonstrated by simulations. In the first simulation, radar emitter signals are recognized; the recognition result is the type of the radar emitter. The pulse description words of the radar emitter signal include the radio frequency (RF), the pulse repetition frequency (PRF), the antenna rotation rate (ARR), and the pulse width (PW). 240 groups of data are generated from the original radar information above for training, while 200 groups are generated for testing. This simulation is repeated 100 times, and the average recognition accuracy is obtained. A second simulation tests the efficiency of the hybrid recognition on the Iris data set, which contains 150 patterns belonging to three classes, with 50 exemplars per class and four-dimensional real-valued inputs[22]. The recognition accuracy and time complexity are compared between the SVM and our approach. This simulation has two parts. In the first part, all 150 samples are used for training, and the same 150 samples are used to measure the training accuracy. In the second part, 60 samples from the Iris data set are used to train the classifiers and the other 90 samples are used for testing, so as to assess the generalization of the proposed approach.
Results of experiment 1: classification of the radar emitter signals
An information sheet of radar emitter signals is built, as shown in Table 1. The data in the information table must be converted into discrete values, because continuous values cannot be processed by the rough set theory. Among the many discretization methods, the equal-width method[11] is adopted in this article: the range of each attribute is divided into intervals of identical size, and different attributes can have different numbers of intervals. Here, each attribute is divided into three intervals, and attribute values in the same interval receive the same discrete value. The discrete information is shown in Table 2, where A, B, C, and D denote the attributes RF, PRF, ARR, and PW, respectively. After that, the dependency degree of the radar type on each attribute is computed using (3): ${\gamma}_{A}=\frac{7}{8}$, ${\gamma}_{B}=\frac{7}{8}$, γ_C = 0, and ${\gamma}_{D}=\frac{7}{8}$. Since the dependency degree of the radar type on attribute C (ARR) is 0, attribute C is unnecessary for classification and is removed, and the knowledge rules are obtained. Table 3 shows these rules, where — denotes any value. Some radars have several operating modes, and the emitter signal parameters vary considerably between modes; if the samples of one radar emitter were clustered into a single cluster, they could gather in several subregions of that cluster, reducing the aggregation of the cluster. Thus, we cluster the samples based on the subregions determined by the rough sets: the samples of the three types of radar emitters are distributed into seven clusters.
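The equal-width binning used to build Table 2 can be sketched as below (the exact interval edges, and hence some of the resulting codes, may differ from those used in the article; NumPy assumed):

```python
import numpy as np

def equal_width_discretize(values, n_bins=3):
    """Equal-width discretization: split [min, max] into n_bins intervals
    of identical width and map each value to its interval index (1-based)."""
    values = np.asarray(values, dtype=float)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    # compare against the interior edges only, so the maximum lands in the last bin
    return np.digitize(values, edges[1:-1]) + 1

rf = [6558, 5436, 1984, 3787, 4406, 7745, 3214, 2460]   # RF column of Table 1
codes = equal_width_discretize(rf)   # smallest RF -> code 1, largest -> code 3
```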
Table 1
Information of known radar emitter signals

No. | RF (MHz) | PRF (Hz) | ARR (round/s) | PW (us) | Type
1 | 6558 | 1319 | 2 | 1.61 | 1
2 | 5436 | 2530 | 1 | 0.62 | 1
3 | 1984 | 1276 | 2 | 0.99 | 2
4 | 3787 | 145 | 3 | 0.38 | 2
5 | 4406 | 601 | 2 | 0.34 | 2
6 | 7745 | 1698 | 3 | 3.81 | 3
7 | 3214 | 2083 | 2 | 0.71 | 3
8 | 2460 | 1793 | 2 | 1.33 | 3
Table 2
Discrete information of known radar emitter signals

No. | A | B | C | D | Type
1 | 3 | 2 | 2 | 2 | 1
2 | 3 | 3 | 1 | 1 | 1
3 | 1 | 2 | 2 | 1 | 2
4 | 2 | 1 | 3 | 1 | 2
5 | 2 | 1 | 2 | 1 | 2
6 | 3 | 2 | 3 | 3 | 3
7 | 2 | 3 | 2 | 1 | 3
8 | 1 | 2 | 2 | 2 | 3
Table 3
Knowledge rules

No. | A | B | D | Type
1 | 3 | 2 | 2 | 1
2 | 3 | 3 | — | 1
3 | — | 2 | 1 | 2
4 | — | 1 | — | 2
5 | — | — | 3 | 3
6 | 2 | 3 | — | 3
7 | 1 | — | 2 | 3
Based on these knowledge rules, the initial cluster centers are obtained using (20). The known radar emitter samples are then clustered by the rough k-means from these initial centers. As shown in Table 4, the 240 training samples are clustered into seven clusters, and the cluster centers, rough boundaries, and uncertain boundaries of the primary recognition are computed; the rough k-means classifier is thus built.
Table 4
Parameters in primary recognition

Cluster | Center | R_ro | R_un
1 | (6567, 1324, 1.650) | 225 | 662
2 | (5643, 2569, 0.520) | 231 | 578
3 | (2196, 1534, 1.142) | 149 | 356
4 | (3987, 132, 0.430) | 130 | 407
5 | (7845, 1654, 3.940) | 465 | 913
6 | (3213, 2093, 0.695) | 200 | 466
7 | (2459, 1783, 1.331) | 128 | 401
The classification accuracy for each radar emitter is given by the confusion matrix shown in Table 5. For example, the first row indicates that, of the 34 samples of subclass 1, 32 were classified correctly and 2 into subclass 5. The primary recognition accuracy is 86% and the advanced recognition accuracy is 92%. The number of samples falsely classified in the uncertain area is 18, while the total number of wrongly classified samples is 28. As described by (18), the theoretical accuracy can be computed as ${A}_{\mathit{\text{total}}}=86\%+14\%\times \frac{18}{28}\times 92\%=94.28\%$.
Table 5
Confusion matrices of radar emitter recognition

True class | Subc. 1 | Subc. 2 | Subc. 3 | Subc. 4 | Subc. 5 | Subc. 6 | Subc. 7
Subclass 1 | 32 | 0 | 0 | 0 | 2 | 0 | 0
Subclass 2 | 0 | 36 | 0 | 0 | 0 | 0 | 0
Subclass 3 | 0 | 0 | 34 | 0 | 0 | 0 | 3
Subclass 4 | 0 | 0 | 0 | 33 | 0 | 0 | 0
Subclass 5 | 0 | 0 | 0 | 0 | 20 | 0 | 0
Subclass 6 | 1 | 0 | 0 | 0 | 0 | 18 | 0
Subclass 7 | 0 | 0 | 5 | 0 | 0 | 0 | 16
The proposed method is compared with the RBF neural network studied by Zhang et al.[6], the RBF-SVM, and the probabilistic SVM radar recognition approach studied by Lin et al.[8]. As shown in Table 6, the accuracy of the hybrid recognition proposed in this article is 94.5%, higher than that of the existing methods, i.e., 92.0, 92.5, and 94.0%. The accuracy of the hybrid recognition obtained in the simulation experiments is close to the theoretical value of 94.28%.
Table 6
Results of radar emitter recognition methods

Recognition method | Average recognition accuracy
RBF neural network | 92.0%
RBF-SVM | 92.5%
PSVM | 94.0%
Method in this article | 94.5%
Results of experiment 2: classification of the data set Iris
Table 7 shows that the proposed approach achieves not only a higher recognition accuracy than the SVM, but also a high training accuracy and good generalization. In the first part of this experiment, all 150 samples are used both to train and to test the two methods. The hybrid recognition proposed in this article achieves a training accuracy of 99.33%, higher than the SVM's 98.67%. In the second part, 60 samples are used for training and the other 90 samples are used to test the SVM and the hybrid recognition. The recognition accuracy of the proposed approach is 97.78%, indicating that the hybrid recognition generalizes well.
Table 7
Recognition results of Iris

Method | Recognition accuracy | Training samples for SVM | Time complexity
SVM in the first part | 98.67% | 150 | $\mathcal{O}\left(15{0}^{3}\right)$
Our method in the first part | 99.33% | 70 | $\mathcal{O}\left(7{0}^{3}\right)$
SVM in the second part | 93.33% | 60 | $\mathcal{O}\left(6{0}^{3}\right)$
Our method in the second part | 97.78% | 36 | $\mathcal{O}\left(3{6}^{3}\right)$
In addition, let us compare the time complexities of the SVM and the proposed approach. The time complexity of the SVM is $\mathcal{O}\left({m}^{3}\right)$, and that of the proposed approach is $\mathcal{O}\left({m}^{\prime 3}\right)$, where m and m′ are the number of training samples for the SVM and the number of training samples for the SVM in the advanced recognition of the hybrid approach, respectively.
When 150 samples are used as training samples, all of them are used to train the classical SVM: m = 150, and the time complexity of the classical SVM is $\mathcal{O}\left(15{0}^{3}\right)$. In our approach, the training samples are first clustered in the primary recognition, and only the rough samples are used to train the SVM in the advanced recognition. More specifically, there are 70 training samples for the SVM in the advanced recognition, i.e., m′ = 70, so the time complexity is $\mathcal{O}\left(7{0}^{3}\right)$. Similarly, when 60 samples are used as training samples, all of them are used to train the classical SVM, while only 36 train the SVM in the advanced recognition of the hybrid approach, i.e., m = 60 and m′ = 36. So in the second part, the time complexity of the classical SVM is $\mathcal{O}\left(6{0}^{3}\right)$, while that of the proposed approach is $\mathcal{O}\left(3{6}^{3}\right)$.
From the comparison above, it is clear that the time complexity of the hybrid recognition is obviously lower than that of the classical SVM.
Conclusions
In this article, a hybrid recognition method has been proposed to recognize radar emitter signals. The hybrid classifier consists of a rough k-means classifier (a linear classifier) and an SVM (a nonlinear classifier). Each sample is classified by the classifier suited to its linear separability; thus, for a radar emitter sample set containing both linearly separable and linearly inseparable samples, the approach achieves a higher accuracy.
A linear classifier based on rough sets and the rough k-means, i.e., the rough k-means classifier, has been proposed. The rough k-means clustering reduces the radii of the clusters and increases the accuracy of the primary recognition. The initial centers for the rough k-means are computed from the rough sets, which reduces the time complexity of the rough k-means clustering. The rough k-means classifier classifies linearly separable samples efficiently and picks out the linearly inseparable samples, which are then processed by the SVM in the advanced recognition; the training set for that SVM is thereby reduced. Simulation results have shown that the proposed approach achieves a higher accuracy and a lower time complexity than existing approaches.
The hybrid recognition approach in this article is suited to radar emitter signal sample sets containing both linearly separable and linearly inseparable samples. We acknowledge that it relies on the fact that the linearly inseparable samples, which reduce the accuracy of clustering, lie mostly at the edges of clusters. From (18), we know that if linearly inseparable samples appear frequently in the center region instead of at the edge, the recognition accuracy will be reduced. Solving this problem is the focus of our future study.
Declarations
Acknowledgements
The authors would like to thank the editors and reviewers for helpful comments and suggestions. This study was supported by a grant from National Natural Science Foundation of China (grant number: 61102084).
Authors’ Affiliations
(1)
School of Electronics and Information Technology, Harbin Institute of Technology
(2)
Department of Electronic Engineering, King’s College London
References
Latombe G, Granger E, Dilkes F: Fast learning of grammar production probabilities in radar electronic support. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(3):1262-1290.
Ren M, Cai J, Zhu Y, He M: Radar emitter signal classification based on mutual information and fuzzy support vector machines. Proceedings of the International Conference on Software Process 2008, 1641-1646.
Bezousek P, Schejbal V: Radar technology in the Czech Republic. IEEE Aerosp. Electron. Syst. Mag. 2004, 19(8):27-34. doi:10.1109/MAES.2004.1346896
Zhang G, Hu L, Jin W: Intrapulse feature analysis of radar emitter signals. J. Infrared Millimeter Waves 2004, 23(6):477-480.
Swiercz E: Automatic classification of LFM signals for radar emitter recognition using wavelet decomposition and LVQ classifier. Acta Phys. Polonica A 2011, 119(4):488-494.
Zhang Z, Guan X, He Y: Study on radar emitter recognition signal based on rough sets and RBF neural network. Proceedings of the Eighth International Conference on Machine Learning and Cybernetics 2009, 1225-1230.
Yin Z, Yang W, Yang Z, Zuo L, Gao H: A study on radar emitter recognition based on SPDS neural network. Inf. Technol. J. 2011, 10(4):883-888.
Li L, Ji H, Wang L: Specific radar emitter recognition based on wavelet packet transform and probabilistic SVM. IEEE International Conference on Information and Automation 2009, 1283-1288.
Lin L, Ji H: Combining multiple SVM classifiers for radar emitter recognition. 6th International Conference on Fuzzy Systems and Knowledge Discovery 2010, 140-144.
Chen Y, Yang J, Trappe W, Martin R: Detecting and localizing identity-based attacks in wireless and sensor networks. IEEE Trans. Veh. Technol. 2010, 59(5):2418-2434.
Kanungo T, Mount DM, Netanyahu N, Piatko C, Silverman R, Wu A: An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24(7):881-892. doi:10.1109/TPAMI.2002.1017616
Lingras P, West C: Interval set clustering of web users with rough k-means. J. Intell. Inf. Syst. 2004, 23:5-16.
Chapelle O, Haffner P, Vapnik V: Support vector machines for histogram-based image classification. IEEE Trans. Neural Netw. 1999, 10(5):1055-1064. doi:10.1109/72.788646
Sun G, Guo W: Robust mobile geo-location algorithm based on LS-SVM. IEEE Trans. Veh. Technol. 2005, 54(3):1037-1041. doi:10.1109/TVT.2005.844676
Yuan X, Wang Y, Wu L: SVM-based approximate model control for electronic throttle valve. IEEE Trans. Veh. Technol. 2008, 57(5):2747-2756.
Cai Z, Zhao H, Jia M: Spectrum sensing in cognitive radio based on adaptive optimal SVM. Inf. Technol. J. 2011, 10(7):1427-1431.
Lyszko D, Stepaniuk J: Rough entropy based k-means clustering. 2009.
Tsang I, Kwok J, Cheung P: Core vector machines: fast SVM training on very large data sets. J. Mach. Learn. Res. 2005, 6:363-392.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.