Article

Research on Recognition Method of Driving Fatigue State Based on Sample Entropy and Kernel Principal Component Analysis

Department of Computer, Nanchang University, Nanchang 330029, China
* Authors to whom correspondence should be addressed.
Entropy 2018, 20(9), 701; https://doi.org/10.3390/e20090701
Submission received: 29 July 2018 / Revised: 7 September 2018 / Accepted: 10 September 2018 / Published: 13 September 2018

Abstract

In view of the nonlinear characteristics of electroencephalography (EEG) signals collected in driving fatigue state recognition research, and because the recognition accuracy of EEG-based driving fatigue recognition methods is still unsatisfactory, this paper proposes a driving fatigue recognition method based on sample entropy (SE) and kernel principal component analysis (KPCA). The method combines the high recognition accuracy of sample entropy with KPCA's ability to reduce dimensionality over nonlinear principal components and its strong nonlinear processing capability. Using a support vector machine (SVM) classifier, the proposed method (called SE_KPCA) is tested on the EEG data and compared with methods based on fuzzy entropy (FE), combination entropy (CE), and each of the three entropies (SE, FE and CE) merged with KPCA. Experimental results show that the method is effective.

1. Introduction

Driving fatigue is a phenomenon in which, owing to continuous driving, a driver's abilities of perception, judgment and operation decline [1]. Drivers are prone to fatigue after long periods of driving; if they keep driving, their limbs become stiff, their attention decreases and their judgment declines. Driving fatigue may leave drivers dazed and prone to traffic accidents [2]. Therefore, an effective driving fatigue state recognition method is the key to constructing a dangerous driving state warning system.
At present, a number of studies on driving fatigue state recognition have been conducted in China and abroad. Guo et al. [3] explored the correlation between ECG indicators and the driving fatigue state based on ECG signals and constructed a driving fatigue state recognition model combined with an SVM classifier. Yang et al. [4] studied driving fatigue recognition on the basis of the fusion of eye movement and pulse information. Zhao et al. [5] applied functional brain networks to establish a fatigue recognition model based on EEG data and graph theory methods. Zhao et al. [6] constructed a driving fatigue recognition model based on human eye features using a cascaded convolutional neural network. To judge whether a driver feels fatigue, Zhang et al. [7] studied driving fatigue recognition based on the extraction of wavelet entropy features from EEG signals. Moreover, Chai, Naik et al. [8] used entropy rate bound minimization as the source separation technique, autoregressive (AR) modeling as the feature extraction algorithm and a Bayesian neural network as the classification algorithm for driving fatigue recognition; their combination of independent component analysis by entropy rate bound minimization (ICA-ERBM) with EEG feature extraction components had not previously been explored for fatigue classification. Zeng et al. [9] proposed using deep convolutional neural networks and deep residual learning to predict drivers' mental states from EEG signals, developing two mental state classification models called EEG-Conv and EEG-Conv-R. Chai, Ling et al. [10] combined the AR modeling feature extractor with a sparse-DBN classifier to construct a driving fatigue recognition model, a combination that had not previously been explored for EEG-based driving fatigue classification. Hu et al. [11] built a driving fatigue recognition model that combined a feature set consisting of sample entropy, fuzzy entropy, approximate entropy and spectral entropy with the gradient boosting decision tree (GBDT), evaluated using three different classifiers.
The literature mentioned above has enriched and extended research on fatigue recognition from different perspectives. EEG signals, which have the highest sensitivity in driving fatigue detection and recognition and are highly correlated with the driver's mental state, have been studied most deeply [12]. There are many ways to analyze EEG signals, such as time domain analysis, frequency domain analysis, multidimensional statistical analysis and nonlinear analysis [13]. However, the recognition accuracy of the driving fatigue state obtained with these methods after feature extraction of the EEG signals is still not satisfactory. PCA and SVM were used in [14] to obtain better recognition accuracy based on motor imagery EEG signals. However, EEG data are nonlinear, whereas the internal model of PCA, like the relationship among its principal components, is linear; PCA loses its effectiveness when the principal components of the object under study are nonlinear. Therefore, this paper proposes a driving fatigue recognition method based on sample entropy and kernel principal component analysis. The sample entropy kernel principal component analysis (SE_KPCA) method has two advantages. On the one hand, driving fatigue state recognition based on sample entropy (SE) [15] is more accurate. On the other hand, KPCA [16] yields better dimensionality reduction for nonlinear data and has a strong nonlinear processing capability. On this basis, a driving fatigue state recognition model is constructed in combination with the support vector machine (SVM) [17] algorithm to achieve effective recognition of the driver's fatigue state.

2. Sample Entropy

The sample entropy (SE) calculation process is as follows. Given the original signal of length N, denoted by $x(1), x(2), \ldots, x(N)$, define the m-dimensional vectors:
$X_m(i) = \{x(i), x(i+1), \ldots, x(i+m-1)\}, \quad 1 \le i \le N-m+1$  (1)
Calculate the distance between any two m-dimensional vectors:
$D[X_m(i), X_m(j)] = \max_{0 \le k \le m-1} |x(i+k) - x(j+k)|, \quad i \ne j; \ i, j \le N-m+1$  (2)
$D[X_m(i), X_m(j)]$ is the maximum difference between the corresponding elements of $X_m(i)$ and $X_m(j)$. Given a threshold r, count, for each i, the number of vectors whose maximum difference from $X_m(i)$ is less than the threshold:
$C_i = \sum_{j \ne i} \mathbf{1}\{D[X_m(i), X_m(j)] < r\}$  (3)
Define a ratio:
$B_i^m(r) = \dfrac{C_i}{N-m}$  (4)
$B_i^m(r)$ is the ratio of $C_i$ to the total number of vectors; calculate its mean:
$\bar{B}^m(r) = \dfrac{1}{N-m+1} \sum_{i=1}^{N-m+1} B_i^m(r)$  (5)
where $\bar{B}^m(r)$ is the proportion mean of the m-dimensional sequence. When the dimension increases to m+1, repeat Equations (1) to (4) and calculate the proportion mean of the (m+1)-dimensional sequence:
$A^{m+1}(r) = \dfrac{1}{N-m} \sum_{i=1}^{N-m} B_i^{m+1}(r)$  (6)
The sample entropy of the sequence is then:
$SampEn(m, r) = \lim_{N \to \infty} \left[ -\ln\left( A^{m+1}(r) / \bar{B}^m(r) \right) \right]$  (7)
When N is finite, Equation (7) can be expressed as:
$SampEn(m, r, N) = -\ln\left( A^{m+1}(r) / \bar{B}^m(r) \right)$  (8)
From Equation (8), the value of $SampEn$ depends on m and r. Pincus [18] pointed out that m is generally taken as two and r is set to 0.1- to 0.25-times the standard deviation (SD) of the original EEG signal time series. Thus, m is set to two and r is set to 0.25 SD in this paper.
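To make the procedure concrete, the following is a minimal NumPy sketch of Equations (1) to (8) with m = 2 and r = 0.25 SD. It is not the authors' code; the function name and the loop over templates are illustrative choices, and for simplicity all template vectors are compared rather than only the first N − m.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.25):
    """Sample entropy SampEn(m, r, N) of a 1-D signal.
    r = r_factor * std(x); Chebyshev distance; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def match_count(dim):
        # Overlapping dim-dimensional template vectors X_dim(i), i = 1 .. N - dim + 1.
        templates = np.array([x[i:i + dim] for i in range(N - dim + 1)])
        total = 0
        for i in range(len(templates)):
            # Maximum-coordinate (Chebyshev) distance from template i to all templates.
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            # Count matches below the tolerance, excluding the self-match (distance 0).
            total += np.count_nonzero(dist < r) - 1
        return total

    B = match_count(m)       # number of m-dimensional matches
    A = match_count(m + 1)   # number of (m + 1)-dimensional matches
    return -np.log(A / B)    # Equation (8): SampEn = -ln(A / B)

# Example: one second of simulated EEG sampled at 1000 Hz.
rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(1000)))
```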

3. Principal Component Analysis and Kernel Principal Component Analysis

3.1. Basic Principles of PCA

Suppose that the data matrix formed by m observations of the n variables $X_1, X_2, \ldots, X_n$ is $X = (X_{pq})_{m \times n}$. The main steps of PCA [19,20] are as follows (a numerical sketch follows the list):
  • Calculate the sample mean and standard deviation of each indicator:
    $\bar{X}_q = \dfrac{1}{m} \sum_{p=1}^{m} X_{pq}, \quad S_q = \sqrt{\dfrac{1}{m-1} \sum_{p=1}^{m} (X_{pq} - \bar{X}_q)^2}, \quad q = 1, 2, \ldots, n$  (9)
  • Normalize $X_{pq}$ to obtain the standardized matrix:
    $Y_{pq} = \dfrac{X_{pq} - \bar{X}_q}{S_q}, \quad p = 1, 2, \ldots, m, \quad q = 1, 2, \ldots, n$  (10)
  • Calculate the correlation coefficient matrix R from the standardized matrix $Y = (Y_{pq})_{m \times n}$:
    $r_{qk} = \dfrac{1}{m-1} \sum_{p=1}^{m} Y_{pq} Y_{pk}$  (11)
    $R = (r_{qk})_{n \times n}, \quad r_{qq} = 1, \quad r_{qk} = r_{kq}$  (12)
  • Obtain the eigenvalues of R, denoted $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$, with corresponding eigenvectors $l_1, l_2, \ldots, l_n$. Determine the number K of retained components from the cumulative variance contribution CVC > 90%, where:
    $CVC = \sum_{q=1}^{k} \lambda_q \Big/ \sum_{q=1}^{n} \lambda_q$  (13)
    The K principal components are then:
    $Z_q = l_q^{T} X, \quad q = 1, 2, \ldots, K$  (14)
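The steps above can be condensed into a short NumPy sketch that standardizes the data, eigendecomposes the correlation matrix R, and keeps the first K components whose cumulative variance contribution exceeds 90%. Function and variable names are illustrative, not part of the original method description.

```python
import numpy as np

def pca_by_cvc(X, cvc_threshold=0.90):
    """PCA via the correlation matrix; keep components until CVC > threshold.
    X has shape (m, n): m observations of n variables."""
    # Standardize each variable (zero mean, unit standard deviation), Equations (9)-(10).
    Y = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    m = X.shape[0]
    # Correlation coefficient matrix R (n x n), Equations (11)-(12).
    R = (Y.T @ Y) / (m - 1)
    # Eigenvalues in descending order with matching eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Cumulative variance contribution (CVC); smallest K exceeding the threshold, Equation (13).
    cvc = np.cumsum(eigvals) / np.sum(eigvals)
    K = int(np.searchsorted(cvc, cvc_threshold) + 1)
    # Project the standardized data onto the first K principal directions, Equation (14).
    Z = Y @ eigvecs[:, :K]
    return Z, cvc[:K]

# Example with a random 600 x 30 feature matrix, the shape used in the first experiment.
rng = np.random.default_rng(1)
Z, cvc = pca_by_cvc(rng.standard_normal((600, 30)))
print(Z.shape, cvc[-1])
```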

3.2. Basic Principles of KPCA

There are M samples in the input space, denoted $x_k\ (k = 1, 2, \ldots, M)$, $x_k \in R^N$, $\sum_{k=1}^{M} x_k = 0$. A nonlinear mapping $\Phi$ is introduced that transforms the sample points $x_1, x_2, \ldots, x_M$ of the input space into the sample points $\Phi(x_1), \Phi(x_2), \ldots, \Phi(x_M)$ of the feature space, under the assumption that they are centered:
$\sum_{k=1}^{M} \Phi(x_k) = 0$  (15)
Then, the covariance matrix in the feature space F is defined as:
$\bar{C} = \dfrac{1}{M} \sum_{j=1}^{M} \Phi(x_j) \Phi(x_j)^{T}$  (16)
Therefore, the PCA eigenvalue problem in the feature space is:
$\lambda v = \bar{C} v$  (17)
where $\lambda$ is the eigenvalue and $v \in F \setminus \{0\}$ is the eigenvector, so that:
$\lambda \left( \Phi(x_k) \cdot v \right) = \Phi(x_k) \cdot \bar{C} v, \quad k = 1, 2, \ldots, M$  (18)
Note that v can be expressed linearly in terms of $\Phi(x_i)\ (i = 1, 2, \ldots, M)$:
$v = \sum_{i=1}^{M} a_i \Phi(x_i)$  (19)
where $a_1, a_2, \ldots, a_M$ are coefficients. Define an $M \times M$ matrix K satisfying the Mercer condition:
$K_{ij} = \Phi(x_i) \cdot \Phi(x_j)$  (20)
K is called the kernel matrix. From Equations (16) to (19), one obtains:
$M \lambda a = K a$  (21)
The required eigenvalues and eigenvectors are obtained by solving Equation (21). The projection of a test sample x on the k-th eigenvector $V^k$ of the feature space F is:
$\left( V^k \cdot \Phi(x) \right) = \sum_{i=1}^{M} a_i^k \left( \Phi(x_i) \cdot \Phi(x) \right)$  (22)
Suppose now that Equation (15) does not hold. Then K in Equation (21) is replaced by $\tilde{K}$:
$\tilde{K}_{ij} = K_{ij} - \dfrac{1}{M} \sum_{m=1}^{M} l_{im} K_{mj} - \dfrac{1}{M} \sum_{n=1}^{M} K_{in} l_{nj} + \dfrac{1}{M^2} \sum_{m,n=1}^{M} l_{im} K_{mn} l_{nj}$  (23)
where $l_{ij} = 1$ for all i, j.
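A minimal numerical sketch of this procedure, assuming a generic kernel function: build K, center it as in Equation (23), solve the eigenproblem of Equation (21), and project onto the leading nonlinear components. This is a textbook-style KPCA sketch rather than the authors' implementation; the names and the example kernel are illustrative.

```python
import numpy as np

def kpca(X, kernel, n_components):
    """Kernel PCA on the rows of X (shape (M, d)). `kernel(A, B)` must return the
    matrix of kernel values between the rows of A and the rows of B."""
    M = X.shape[0]
    K = kernel(X, X)
    # Center the kernel matrix in feature space, Equation (23), with l_ij = 1.
    one_m = np.full((M, M), 1.0 / M)
    K_tilde = K - one_m @ K - K @ one_m + one_m @ K @ one_m
    # Solve the eigenproblem of Equation (21); eigh gives ascending eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(K_tilde)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Rescale each coefficient vector a so the feature-space direction v has unit length
    # (a . a = 1 / eigenvalue of K~).
    alphas = eigvecs[:, :n_components] / np.sqrt(np.maximum(eigvals[:n_components], 1e-12))
    # Projections of the training samples onto the leading nonlinear components, Equation (22).
    return K_tilde @ alphas

# Example: second-order polynomial kernel, as in the P = 2 setting of Section 5.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 30))
Z = kpca(X, lambda A, B: (A @ B.T + 1.0) ** 2, n_components=8)
print(Z.shape)   # (100, 8)
```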

3.3. Kernel Function Methods

At present, several forms of kernel function can be chosen, as follows:
  • Linear kernel function (special case):
    $K(x, x_i) = x \cdot x_i$  (24)
  • P-order polynomial kernel function:
    $K(x, x_i) = \left[ (x \cdot x_i) + 1 \right]^{p}$  (25)
  • Radial basis function (RBF):
    $K(x, x_i) = \exp\left( -\dfrac{\| x - x_i \|^2}{\delta^2} \right)$  (26)
  • Multilayer perceptron (MLP) kernel function:
    $K(x, x_i) = \tanh\left[ v (x \cdot x_i) + c \right]$  (27)
The P-order polynomial kernel, the radial basis function and the MLP kernel are used in the model of this paper; small implementations of the three are sketched below.
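For reference, the three kernels used in the model can be written as small functions. This is a sketch in which the default parameter values (p = 2, δ = 0.2, v = 0.001, c = 0.1) simply mirror settings tested in Section 5 and are otherwise arbitrary.

```python
import numpy as np

def poly_kernel(x, xi, p=2):
    """P-order polynomial kernel, Equation (25): [(x . xi) + 1]^p."""
    return (np.dot(x, xi) + 1.0) ** p

def rbf_kernel(x, xi, delta=0.2):
    """Radial basis function kernel, Equation (26): exp(-||x - xi||^2 / delta^2)."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(xi)) ** 2) / delta ** 2)

def mlp_kernel(x, xi, v=0.001, c=0.1):
    """Multilayer perceptron (sigmoid) kernel, Equation (27): tanh(v (x . xi) + c)."""
    return np.tanh(v * np.dot(x, xi) + c)
```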

4. EEG Data Processing Method Based on Sample Entropy and Principal Component Analysis/Kernel Principal Component Analysis

4.1. EEG Data Processing Method Based on Sample Entropy and Principal Component Analysis

Based on the above discussion of the method proposed in this paper, the algorithm that combines sample entropy and principal component analysis (SE_PCA) can be divided into the following three steps (a runnable sketch follows the list):
  • Collect and preprocess the EEG signal; then extract the sample entropy features of the data using Equations (1) to (8) to obtain a matrix $X_{m \times n}$;
  • Feed the matrix $X_{m \times n}$ into Equations (9) to (14) and calculate its principal components;
  • Construct the model, and use SVM to classify.
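A minimal scikit-learn sketch of the SE_PCA pipeline, assuming the sample-entropy feature matrix from step 1 is already available (random placeholder data are used here); PCA's fractional n_components plays the role of the CVC > 90% criterion. The pipeline is illustrative, not the authors' exact code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder sample-entropy feature matrix (600 one-second epochs x 30 electrodes)
# and fatigue / non-fatigue labels; in practice these come from step 1 above.
rng = np.random.default_rng(3)
X_se = rng.standard_normal((600, 30))
y = rng.integers(0, 2, 600)

se_pca_model = Pipeline([
    ("scale", StandardScaler()),        # standardize the entropy features
    ("pca", PCA(n_components=0.90)),    # keep components until cumulative variance > 90% (CVC)
    ("svm", SVC(kernel="rbf")),         # SVM classifier for the two states
])
se_pca_model.fit(X_se, y)
print(se_pca_model.named_steps["pca"].n_components_)
```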

4.2. EEG Data Processing Method Based on Sample Entropy and Kernel Principal Component Analysis

Based on the above discussion of the method proposed in this paper, the SE_KPCA algorithm can be divided into the following four steps (a runnable sketch follows the list):
  • Collect and preprocess the EEG signal; then extract the sample entropy features of the data using Equations (1) to (8) to obtain a matrix $X_{m \times n}$;
  • Select the kernel function $K(x, x_i)$, take the matrix $X_{m \times n}$ as the input of KPCA, and center it in the high dimensional space; then calculate the matrix $\tilde{K}$ according to Equation (23);
  • Calculate the eigenvalues and eigenvectors of the matrix $\tilde{K}$, as well as its nonlinear principal components;
  • Construct the model, and use SVM to classify.
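A corresponding SE_KPCA sketch using scikit-learn's KernelPCA. The number of components stands in for the CVC-based selection, and gamma corresponds to 1/δ² in Equation (26); both values, like the placeholder data, are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X_se = rng.standard_normal((600, 30))   # placeholder sample-entropy features
y = rng.integers(0, 2, 600)             # placeholder fatigue / non-fatigue labels

se_kpca_model = Pipeline([
    ("scale", StandardScaler()),
    # RBF-kernel PCA; scikit-learn's gamma corresponds to 1 / delta^2 in Equation (26),
    # so delta = 0.2 maps to gamma = 25. The 8 components stand in for the CVC-based choice.
    ("kpca", KernelPCA(n_components=8, kernel="rbf", gamma=25.0)),
    ("svm", SVC(kernel="rbf")),
])
se_kpca_model.fit(X_se, y)
print(se_kpca_model.score(X_se, y))
```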

5. Method Testing and Result Analysis

5.1. Test Environment and Test Data

Test environment: The platform used in the experiments includes a static driving simulator (ZY-31D vehicle driving simulator, Beijing-China Joint Teaching Equipment Co., Ltd., Beijing, China), which comprises three 24-inch monitors and a software teaching system for driving simulation (ZM-601 V9.2). A 32-electrode EEG collecting cap, a computer system (Windows 10 × 64), EEG collection and preprocessing software (Neuroscan 3.2) and EEG analysis software (MATLAB R2014b) were also used.
Test data description: The EEG signal data analyzed in this paper come from an EEG study of simulated car driving. Twenty-five healthy subjects were screened for factors affecting their current fatigue level, such as sleep quality on the previous night and diet on the day of the experiment; two sets of experimental data, fatigue state and non-fatigue state, were then recorded for every subject. According to previous experience with fatigue-related experiments, each subject was asked to drive for 40 min without a break and then to complete a questionnaire to check their state [21]. The EEG data are 32-electrode, 600 s time series at a sampling rate of 1000 Hz, consisting of 300 s of rest (non-fatigue) and 300 s of fatigue. After collection, each subject's EEG data were filtered and processed (artifact removal, removal of eye-movement interference, signal correction, etc. [21]). This paper conducted two sets of experiments. In the first, data from 10 individuals were taken, 60 s per person (the first 30 s in the non-fatigue state and the other 30 s in the fatigue state), forming a 600 × 30 data matrix (600 rows for the 600 s of data and 30 columns for the 30 electrodes), as shown in Figure 1. In the second, data from 15 individuals were taken, 60 s per person, forming a 900 × 30 data matrix, as shown in Figure 2. At present, this paper only compares experimental results for 30 s segments, since less data reduce the experiment time and 30 s of data are sufficient; subsequent experiments will extend the selection to different time bands.
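As an illustration of how such feature matrices can be assembled, the sketch below splits one subject's recording into 1 s windows at 1000 Hz, keeps 30 electrodes, and computes one entropy value per window and electrode. The one-value-per-second windowing is our reading of the "600 rows for 600 s" description and should be treated as an assumption; function and variable names are hypothetical.

```python
import numpy as np

def per_second_features(eeg, entropy_fn, fs=1000, n_electrodes=30):
    """eeg: array of shape (n_samples, n_channels) for one subject (e.g. 60 s at 1000 Hz).
    Returns a (n_seconds, n_electrodes) matrix with one entropy value per second and electrode."""
    eeg = eeg[:, :n_electrodes]                       # keep the 30 electrodes used in the matrices
    n_seconds = eeg.shape[0] // fs
    feats = np.empty((n_seconds, n_electrodes))
    for s in range(n_seconds):
        window = eeg[s * fs:(s + 1) * fs]             # one-second window of raw EEG
        for ch in range(n_electrodes):
            feats[s, ch] = entropy_fn(window[:, ch])  # e.g. sample_entropy from Section 2
    return feats

# Stacking 10 subjects x 60 s (30 s non-fatigue + 30 s fatigue each) gives the
# 600 x 30 matrix of Figure 1; 15 subjects give the 900 x 30 matrix of Figure 2.
# X = np.vstack([per_second_features(subject_eeg, sample_entropy) for subject_eeg in subjects])
```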

5.2. Driving Fatigue State Recognition Test Based on SE_PCA

Firstly, the SE_PCA method was used to analyze the contribution rates of the 30 electrode principal components. According to Equation (13) in Section 3.1, the contribution rates of the 30 principal components were calculated as shown in Table 1, in which i denotes the principal component (or principal element) and C_i its contribution rate. In Table 1, each principal component corresponds to two contribution rates; the former is from the first experiment and the latter from the second. In the first experiment, the cumulative contribution rate of the top 10 principal components reached 90.63%, so the number of principal components was reduced from 30 to 10; similarly, the cumulative contribution rate of the top 14 principal components reached 95.08%, and that of the top 23 reached 99.10%. In the second experiment, the cumulative contribution rate of the top eight principal components reached 90.14%, so the number of principal components was reduced from 30 to eight; similarly, the cumulative contribution rate of the top 13 principal components reached 95.17%, and that of the top 23 reached 99.12%.
Secondly, we selected the principal components whose cumulative contribution rate, computed with Equation (13), exceeded 90%. This paper mainly tests three cases in which the contribution rate reaches 90%, 95% and 99%, respectively. The corresponding numbers of characteristic variables in the three cases were 10, 14 and 23 in the first experiment and 8, 13 and 23 in the second.
Finally, for the three contribution rates, the driving fatigue recognition accuracy was tested with the SVM classifier. This paper used k-fold cross-validation with k = 3; seventy percent of the data were used as the training set and the other thirty percent as the test set (see the sketch after this paragraph). The test results are shown in Table 2 and Table 3. Compared with the driving fatigue recognition accuracy obtained using sample entropy alone, the SE_PCA method improved the recognition accuracy of the driving fatigue state when the contribution rate reached 0.99, and the computation time was also reduced.
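A sketch of this evaluation protocol (3-fold cross-validation and a 70%/30% split), again on placeholder features; the pipeline mirrors the 0.99 contribution-rate case and is illustrative rather than the authors' exact setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X_se = rng.standard_normal((600, 30))   # placeholder sample-entropy features
y = rng.integers(0, 2, 600)             # placeholder fatigue / non-fatigue labels

model = Pipeline([("scale", StandardScaler()),
                  ("pca", PCA(n_components=0.99)),   # 0.99 contribution-rate case
                  ("svm", SVC(kernel="rbf"))])

# k-fold cross-validation with k = 3.
print("3-fold CV accuracy:", cross_val_score(model, X_se, y, cv=3).mean())

# 70% training / 30% test split used for the reported accuracies and timings.
X_tr, X_te, y_tr, y_te = train_test_split(X_se, y, test_size=0.30, random_state=0)
print("Hold-out accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))
```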

5.3. Driving Fatigue State Recognition Test Based on SE_KPCA

First of all, we analyzed the principal component contribution rates with the SE_KPCA method. For example, when the P-order polynomial kernel function was chosen with parameter P = 2, the calculation results were as shown in Table 4. As before, each principal component corresponds to two contribution rates; the former is from the first experiment and the latter from the second, and the same holds for Table 5 and Table 6.
Then, we selected the principal components; the cumulative contribution rate calculation is consistent with the PCA method. For example, with the P-order polynomial kernel function, the three cases with contribution rates of 90%, 95% and 99% were tested. In the first experiment, Table 4 shows the result for P = 2, with 8, 12 and 26 characteristic variables in the three cases; Table 5 shows the result for P = 1, with 9, 14 and 23 characteristic variables; and Table 6 shows the result for P = 0.5, with 10, 14 and 22 characteristic variables. In the second experiment, Table 4 (P = 2) gives 7, 12 and 25 characteristic variables, Table 5 (P = 1) gives 8, 13 and 23, and Table 6 (P = 0.5) gives 9, 13 and 21.
Last but not least, the driving fatigue recognition accuracy for the three selected sets of principal components was tested with the SVM classifier. The tests were performed under the three principal component contribution rates using the KPCA method with the P-order polynomial kernel, the radial basis function and the multilayer perceptron kernel, and the optimal parameters were obtained through multiple experiments. The test results are shown in Table 7, Table 8 and Table 9; the test results for the 15-person data are shown in Table 10, Table 11 and Table 12. These tables also include the accuracy of driving fatigue state recognition based on SE_PCA under the same contribution rates.
The test results in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 show that the SE_KPCA method was better than the SE_PCA method at identifying and classifying driving fatigue. In particular, when the radial basis function was chosen as KPCA's kernel with parameter σ = 0.2 and a contribution rate of 0.9, the classification accuracy of the SE_KPCA method reached 98.33% and the time performance was good. The subsequent experiments in this paper therefore all used the radial basis function with σ = 0.2 and a contribution rate of 0.9.

5.4. Comparison Test between SE_KPCA and the Driving Fatigue Recognition Method Based on Fuzzy Entropy/Combination Entropy

In order to further verify the classification effect of the SE_KPCA method, traditional feature extraction methods were used for comparison: sample entropy, fuzzy entropy (FE) [13,22] and combination entropy (CE) [16]. Taking the EEG samples of 10 and 15 individuals as examples, fuzzy entropy and combination entropy were used for feature extraction, and SVM was then applied to identify the driving fatigue state; the test results are shown in Figure 3, Figure 4, Figure 5 and Figure 6. Comparison and analysis of the figures lead to the conclusion that SE_KPCA significantly improved the classification recognition rate compared with the traditional sample entropy, fuzzy entropy and combination entropy methods, while its time performance remained good.

5.5. Comparison of the SE_KPCA Method Based on KPCA and Fuzzy Entropy/Combination Entropy for Driving Fatigue Identification

5.5.1. Data Description

(1) The EEG data processing method based on KPCA and fuzzy entropy (FE_KPCA):
  • Extract the fuzzy entropy features from the collected EEG signals according to the fuzzy entropy formula in the literature [22];
  • Select the kernel function $K(x, x_i)$; centralize the fuzzy entropy data in the high dimensional space, and then calculate the matrix $\tilde{K}$ according to Equation (23);
  • Calculate the eigenvalues and eigenvectors of the matrix $\tilde{K}$;
  • Calculate its nonlinear principal components.
(2) The EEG data processing method based on KPCA and combination entropy (CE_KPCA):
  • Extract the combination entropy features from the collected EEG signals according to the combination entropy formula in the literature [21];
  • Select the kernel function $K(x, x_i)$; centralize the combination entropy data in the high dimensional space, and then calculate the matrix $\tilde{K}$ according to Equation (23);
  • Calculate the eigenvalues and eigenvectors of the matrix $\tilde{K}$;
  • Calculate its nonlinear principal components.
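Because FE_KPCA and CE_KPCA differ from SE_KPCA only in the entropy feature used in the first step, the comparison can be organized around one helper that accepts an arbitrary channel-wise entropy function. The sketch below does this with scikit-learn; `fuzzy_entropy` and `combination_entropy` are hypothetical placeholders for the formulas of [22] and [21], and the KPCA settings are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def entropy_kpca_model(epochs, labels, entropy_fn, n_components=8, gamma=25.0):
    """Fit an <entropy>_KPCA model. `epochs` is a list of (n_samples, n_electrodes) arrays,
    `entropy_fn` maps a 1-D signal to a scalar entropy value (SE, FE or CE)."""
    # One entropy value per (epoch, electrode) gives the feature matrix.
    X = np.array([[entropy_fn(epoch[:, ch]) for ch in range(epoch.shape[1])]
                  for epoch in epochs])
    model = Pipeline([
        ("scale", StandardScaler()),
        ("kpca", KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)),
        ("svm", SVC(kernel="rbf")),
    ])
    return model.fit(X, labels)

# Usage with hypothetical entropy functions implementing the formulas of [21,22]:
#   se_model = entropy_kpca_model(epochs, y, sample_entropy)
#   fe_model = entropy_kpca_model(epochs, y, fuzzy_entropy)
#   ce_model = entropy_kpca_model(epochs, y, combination_entropy)
```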

5.5.2. Experimental Results

After verifying the validity of the SE_KPCA method in Section 5.4, this paper compares it with KPCA combined with fuzzy entropy (FE_KPCA) and KPCA combined with combination entropy (CE_KPCA). As shown in Figure 7, Figure 8, Figure 9 and Figure 10, the same SVM-based procedure was adopted for classification and identification. Comparison and analysis of the figures show that the classification recognition rate of SE_KPCA was clearly higher than those of FE_KPCA and CE_KPCA, and its computation time was lower.
As shown in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10, it can also be seen that FE_KPCA and CE_KPCA do not achieve higher classification accuracy than traditional FE and CE, and their time performance is worse.

6. Conclusions

This paper studies the characteristics of EEG signals in two groups (fatigue state and non-fatigue state). First, feature extraction of the EEG signal was conducted by applying sample entropy; further feature extraction was then performed using kernel principal component analysis, and the SVM classifier was used to classify and identify the two states of fatigue and non-fatigue. Analysis and comparison of the experimental results indicate that when the radial basis function is selected as the kernel function, the classification recognition rate is excellent; compared with the traditional methods, the classification recognition rate is also significantly improved. This paper mainly investigated entropy, PCA and KPCA; subsequent experiments will introduce more methods for testing.

Author Contributions

T.Q. conceived of and designed the experiments. B.Y. performed the experiments. X.B. and P.L. analyzed the data. All the authors have read and approved the final manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Nos. 61070139, 81460769 and 61762045).

Acknowledgments

Thanks to Jianfeng Hu’s team for providing EEG experiment data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pei, Y.L.; Ma, Y.L. Effects of fatigue on drivers’ perceptual judgment and operating characteristics. J. Jilin Univ. 2009, 39, 1151–1156. (In Chinese) [Google Scholar]
  2. Li, X.F.; Ma, J.F. Theoretical classification and influencing factors of driving fatigue. J. Decis. Mak. 2017, 8, 87. (In Chinese) [Google Scholar]
  3. Niu, L.B. Research on the Method of Driving Fatigue Based on ECG. Master’s Thesis, Southwest Jiaotong University, Chengdu, China, 2017. Available online: http://www.wanfangdata.com.cn/details/detail.do?_type=degree&id=Y3206287 (accessed on 7 July 2018). (In Chinese).
  4. Li, F.Q. Research on Driving Fatigue Recognition Algorithm Based on Eye Movement and Pulse Information Fusion. Master’s Thesis, Shandong University, Jinan, China, 2015. Available online: http://www.wanfangdata.com.cn/details/detail.do?_type=degree&id=Y2792188 (accessed on 8 July 2018). (In Chinese).
  5. Zhao, C.; Zhao, M.; Yang, Y.; Gao, J.; Rao, N.; Lin, P. The reorganization of human brain networks modulated by driving mental fatigue. IEEE J. Biomed. Health Inf. 2017, 21, 743–755. [Google Scholar] [CrossRef] [PubMed]
  6. Zhao, X.P.; Meng, C.M.; Feng, M.K.; Chang, S.J. Fatigue detection based on cascade convolutional neural network. J. Optoelectron. Laser 2017, 28, 497–502. (In Chinese) [Google Scholar]
  7. Zhang, N.N.; Wang, H.; Fu, R.R. Feature extraction of driving fatigue EEG based on wavelet entropy. J. Autom. Eng. 2013, 35, 1139–1142. (In Chinese) [Google Scholar]
  8. Chai, R.; Naik, G.R.; Nguyen, T.N.; Ling, S.H.; Tran, Y.; Craig, A.; Nguyen, H.T. Driver fatigue classification with independent component by entropy rate bound minimization analysis in an EEG-based system. IEEE J. Biomed. Health Inform. 2017, 21, 715–727. [Google Scholar] [CrossRef] [PubMed]
  9. Zeng, H.; Yang, C.; Dai, G.; Qin, F.; Zhang, J.; Kong, W. EEG classification of driver mental states by deep learning. Cognit. Neurodyn. 2018, 10, 1–10. [Google Scholar] [CrossRef]
  10. Chai, R.; Ling, S.H.; San, P.P.; Naik, G.R.; Nguyen, T.N.; Tran, Y.; Craig, A.; Nguyen, H.T. Improving eeg-based driver fatigue classification using sparse-deep belief networks. Front. Neurosci. 2017, 11, 103. [Google Scholar] [CrossRef] [PubMed]
  11. Hu, J.; Min, J. Automated detection of driver fatigue based on EEG signals using gradient boosting decision tree model. Cognit. Neurodyn. 2018, 12, 1–10. [Google Scholar] [CrossRef] [PubMed]
  12. Peng, J.Q.; Wu, P.D. Exploring EEG characteristics of driving fatigue. J. Beijing Inst. Technol. 2007, 27, 585–589. (In Chinese) [Google Scholar]
  13. Ding, X.H.; Ma, Y.L. Research on Feature Extraction and Classification of EEG Signals in Sports Imaging. Master’s Thesis, Hangzhou Dianzi University, Hangzhou, China, 2016. Available online: http://www.wanfangdata.com.cn/details/detail.do?_type=degree&id=D824056 (accessed on 6 July 2018). (In Chinese).
  14. Guan, J.Q.; Yang, B.H.; Ma, S.W.; Yuan, L. Research on brain recognition of motor images based on PCA and SVM. J. Beijing Biomed. Eng. 2010, 29, 261–265. (In Chinese) [Google Scholar]
  15. Zhang, Y.; Luo, M.W.; Luo, Y. Wavelet transform and sample entropy feature extraction method for EEG signals. J. Intell. Syst. 2012, 7, 339–344. (In Chinese) [Google Scholar]
  16. Gao, X.W. Kernel PCA Feature Extraction Method and Its Application. Master’s Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing, China, 2009. Available online: http://www.wanfangdata.com.cn/details/detail.do?_type=degree&id=D077304 (accessed on 5 July 2018). (In Chinese).
  17. Liu, C.; Zhao, H.B. Motion imagined EEG signal classification based on CSP and SVM algorithms. J. Northeast. Univ. 2010, 31, 1098–1101. (In Chinese) [Google Scholar]
  18. Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef]
  19. Sun, Y.; Ye, N.; Xu, X.H. Feature extraction of EEG signals based on PCA and wavelet transform. In Proceedings of the 2007 Annual Conference of Chinese Control and Decision Science, Wuxi, China, 4–6 July 2007; Northeastern University Press: Shenyang, China, 2007; pp. 669–673. [Google Scholar]
  20. Li, D.M. Classification of epileptic EEG signals based on PCA and the positioning of epileptogenic focus. J. Biomed. Eng. Res. 2017, 36, 218–223. (In Chinese) [Google Scholar]
  21. Mu, Z.; Hu, J.; Min, J. Driver fatigue detection system using electroencephalography signals based on combined entropy features. Appl. Sci. 2017, 7, 150. [Google Scholar] [CrossRef]
  22. Tian, J.; Luo, Z.Z. Feature extraction of motor-imagined EEG signals based on fuzzy entropy. J. Huazhong Univ. Sci. Technol. 2013, 41, 92–95. (In Chinese) [Google Scholar]
Figure 1. Sample entropy data matrix (10 individuals).
Figure 2. Sample entropy data matrix (15 individuals).
Figure 3. Comparison of the recognition accuracy rates among the sample entropy, fuzzy entropy, combination entropy and SE_KPCA methods (10 individuals).
Figure 4. Comparison of the time among the sample entropy, fuzzy entropy, combination entropy and SE_KPCA methods (10 individuals).
Figure 5. Comparison of the recognition accuracy rates among the sample entropy, fuzzy entropy, combination entropy and SE_KPCA methods (15 individuals).
Figure 6. Comparison of the time among the sample entropy, fuzzy entropy, combination entropy and SE_KPCA methods (15 individuals).
Figure 7. Comparison of the recognition accuracy rates among the fuzzy entropy KPCA (FE_KPCA), combination entropy KPCA (CE_KPCA) and SE_KPCA methods (10 individuals).
Figure 8. Comparison of the time among the FE_KPCA, CE_KPCA and SE_KPCA methods (10 individuals).
Figure 9. Comparison of the recognition accuracy rates among the FE_KPCA, CE_KPCA and SE_KPCA methods (15 individuals).
Figure 10. Comparison of the time among the FE_KPCA, CE_KPCA and SE_KPCA methods (15 individuals).
Table 1. Contribution rates of each principal component (for each component i, the two values of C_i are from the first and the second experiment, respectively).

i   C_i                i   C_i                i   C_i                i   C_i                i   C_i
1   0.5664 / 0.6286    7   0.0275 / 0.0197    13  0.0097 / 0.0068    19  0.0043 / 0.0033    25  0.0016 / 0.0017
2   0.0721 / 0.0960    8   0.0209 / 0.0163    14  0.0086 / 0.0060    20  0.0038 / 0.0030    26  0.0013 / 0.0014
3   0.0633 / 0.0470    9   0.0191 / 0.0145    15  0.0079 / 0.0056    21  0.0031 / 0.0028    27  0.0012 / 0.0012
4   0.0486 / 0.0350    10  0.0154 / 0.0107    16  0.0065 / 0.0050    22  0.0027 / 0.0027    28  0.0011 / 0.0009
5   0.0425 / 0.0322    11  0.0138 / 0.0104    17  0.0051 / 0.0046    23  0.0023 / 0.0024    29  0.0011 / 0.0008
6   0.0306 / 0.0266    12  0.0125 / 0.0080    18  0.0044 / 0.0040    24  0.0018 / 0.0019    30  0.0008 / 0.0007
Table 2. Comparison of the sample entropy principal component analysis (SE_PCA) and SE methods (10 individuals).

Contribution Rate   SE_PCA (Acc / Time)    SE (Acc / Time)
0.90                80.50% / 7.68 s
0.95                81.83% / 8.25 s        86.60% / 14.87 s
0.99                88.00% / 10.34 s
Table 3. Comparison of the SE_PCA and SE methods (15 individuals).

Contribution Rate   SE_PCA (Acc / Time)    SE (Acc / Time)
0.90                59.00% / 20.18 s
0.95                66.33% / 27.64 s       71.44% / 39.66 s
0.99                73.78% / 35.95 s
Table 4. Contribution rates of each principal component of the P-order (P = 2) polynomial kernel function (first experiment / second experiment).

i   C_i                i   C_i                i   C_i                i   C_i                i   C_i
1   0.7026 / 0.7171    7   0.0190 / 0.0156    13  0.0059 / 0.0052    19  0.0027 / 0.0024    25  0.0013 / 0.0013
2   0.0534 / 0.0679    8   0.0157 / 0.0131    14  0.0057 / 0.0042    20  0.0023 / 0.0023    26  0.0013 / 0.0011
3   0.0412 / 0.0359    9   0.0141 / 0.0118    15  0.0048 / 0.0037    21  0.0019 / 0.0020    27  0.0011 / 0.0009
4   0.0318 / 0.0313    10  0.0093 / 0.0089    16  0.0037 / 0.0034    22  0.0018 / 0.0019    28  0.0010 / 0.0009
5   0.0269 / 0.0214    11  0.0077 / 0.0070    17  0.0034 / 0.0030    23  0.0016 / 0.0017    29  0.0009 / 0.0008
6   0.0213 / 0.0183    12  0.0073 / 0.0064    18  0.0030 / 0.0028    24  0.0015 / 0.0016    30  0.0008 / 0.0006
Table 5. Contribution rates of each principal component of the P-order (P = 1) polynomial kernel function (first experiment / second experiment).

i   C_i                i   C_i                i   C_i                i   C_i                i   C_i
1   0.5977 / 0.6255    7   0.0257 / 0.0205    13  0.0087 / 0.0066    19  0.0034 / 0.0033    25  0.0017 / 0.0016
2   0.0809 / 0.1035    8   0.0208 / 0.0177    14  0.0079 / 0.0055    20  0.0030 / 0.0031    26  0.0016 / 0.0013
3   0.0557 / 0.0456    9   0.0197 / 0.0139    15  0.0065 / 0.0046    21  0.0026 / 0.0026    27  0.0013 / 0.0012
4   0.0434 / 0.0390    10  0.0127 / 0.0112    16  0.0051 / 0.0046    22  0.0022 / 0.0025    28  0.0011 / 0.0009
5   0.0355 / 0.0288    11  0.0104 / 0.0091    17  0.0044 / 0.0042    23  0.0021 / 0.0023    29  0.0010 / 0.0008
6   0.0284 / 0.0255    12  0.0097 / 0.0076    18  0.0041 / 0.0042    24  0.0019 / 0.0021    30  0.0009 / 0.0006
Table 6. Contribution rates of each principal component of the P-order (P = 0.5) polynomial kernel function (first experiment / second experiment).

i   C_i                i   C_i                i   C_i                i   C_i                i   C_i
1   0.5055 / 0.5343    7   0.0314 / 0.0249    13  0.0107 / 0.0081    19  0.0041 / 0.0041    25  0.0020 / 0.0020
2   0.1051 / 0.1374    8   0.0250 / 0.0224    14  0.0097 / 0.0069    20  0.0036 / 0.0039    26  0.0019 / 0.0016
3   0.0685 / 0.0562    9   0.0238 / 0.0159    15  0.0079 / 0.0061    21  0.0032 / 0.0033    27  0.0015 / 0.0014
4   0.0552 / 0.0476    10  0.0157 / 0.0135    16  0.0062 / 0.0058    22  0.0027 / 0.0030    28  0.0013 / 0.0011
5   0.0424 / 0.0366    11  0.0128 / 0.0112    17  0.0054 / 0.0052    23  0.0025 / 0.0028    29  0.0012 / 0.0009
6   0.0353 / 0.0330    12  0.0128 / 0.0089    18  0.0051 / 0.0051    24  0.0023 / 0.0026    30  0.0010 / 0.0007
Table 7. Comparison between the sample entropy kernel principal component analysis (SE_KPCA) (P-order) method and the SE_PCA method (10 individuals).

Contribution Rate           SE_KPCA                               SE_PCA
                            P = 2       P = 1       P = 0.5
0.90   Acc                  74.50%      73.83%      73.33%        80.50%
       Time                 66.08 s     14.74 s     6.41 s        7.68 s
0.95   Acc                  82.5%       82.33%      75.83%        81.83%
       Time                 83.25 s     15.70 s     7.40 s        8.25 s
0.99   Acc                  93.17%      85.83%      75.83%        88.00%
       Time                 99.63 s     17.77 s     10.36 s       10.34 s
Table 8. Comparison between the SE_KPCA (RBF) method and the SE_PCA method (10 individuals).

Contribution Rate           SE_KPCA                                           SE_PCA
                            σ = 0.1     σ = 0.2     σ = 0.7     σ = 1
0.90   Acc                  93.80%      98.33%      89.50%      81.80%        80.50%
       Time                 91.60 s     36.46 s     7.58 s      6.47 s        7.68 s
0.95   Acc                  93.80%      98.33%      92.60%      85.80%        81.83%
       Time                 116.19 s    45.00 s     13.05 s     8.57 s        8.25 s
0.99   Acc                  93.80%      98.33%      92.80%      86.30%        88.00%
       Time                 138.92 s    54.01 s     28.50 s     21.51 s       10.34 s
Table 9. Comparison between the SE_KPCA (MLP) method and the SE_PCA method (10 individuals).

Contribution Rate           SE_KPCA                                                       SE_PCA
                            c = 0.1,      c = 0.2,      c = 0.7,      c = 1,
                            v = 0.001     v = 0.01      v = 0.001     v = 0.01
0.90   Acc                  70.60%        70.30%        70.67%        70.50%              80.50%
       Time                 4.03 s        3.83 s        3.97 s        3.81 s              7.68 s
0.95   Acc                  79.10%        79.60%        79.10%        73.80%              81.83%
       Time                 14.15 s       13.82 s       14.05 s       13.48 s             8.25 s
0.99   Acc                  88.00%        86.33%        88.00%        85.83%              88.00%
       Time                 7.09 s        7.10 s        7.07 s        6.80 s              10.34 s
Table 10. Comparison between the SE_KPCA (P-order) method and the SE_PCA method (15 individuals).

Contribution Rate           SE_KPCA                               SE_PCA
                            P = 2       P = 1       P = 0.5
0.90   Acc                  58.56%      56.89%      57.89%        59.00%
       Time                 235.78 s    40.49 s     19.09 s       20.18 s
0.95   Acc                  66.78%      66.22%      57.22%        66.33%
       Time                 268.56 s    43.84 s     22.24 s       27.64 s
0.99   Acc                  75.56%      72.11%      58.56%        73.78%
       Time                 325.98 s    65.78 s     35.64 s       35.95 s
Table 11. Comparison between the SE_KPCA (RBF) method and the SE_PCA method (15 individuals).

Contribution Rate           SE_KPCA                                           SE_PCA
                            σ = 0.1     σ = 0.2     σ = 0.7     σ = 1
0.90   Acc                  91.44%      91.78%      80.89%      66.11%        59.00%
       Time                 411.60 s    259.83 s    50.80 s     41.20 s       20.18 s
0.95   Acc                  91.44%      91.67%      83.89%      74.33%        66.33%
       Time                 416.19 s    361.75 s    72.98 s     43.39 s       27.64 s
0.99   Acc                  91.56%      91.67%      84.44%      75.89%        73.78%
       Time                 458.92 s    395.39 s    239.60 s    99.86 s       35.95 s
Table 12. Comparison between the SE_KPCA (MLP) method and the SE_PCA method (15 individuals).

Contribution Rate           SE_KPCA                                                       SE_PCA
                            c = 0.1,      c = 0.2,      c = 0.7,      c = 1,
                            v = 0.001     v = 0.01      v = 0.001     v = 0.01
0.90   Acc                  41.11%        40.78%        41.11%        40.33%              59.00%
       Time                 12.52 s       11.91 s       11.22 s       11.94 s             20.18 s
0.95   Acc                  62.78%        61.78%        62.78%        61.33%              66.33%
       Time                 14.15 s       13.82 s       14.05 s       13.48 s             27.64 s
0.99   Acc                  73.11%        71.78%        73.11%        71.56%              73.78%
       Time                 22.12 s       21.26 s       22.13 s       21.19 s             35.95 s
