Classifying English vowels in Imagined Speech using EEG https://www.researchgate.net/publication/309967859_Imagined_Speech_Classification_using_EEG
Abstract:
The objective of this work is to assess the possibility of using electroencephalogram (EEG) signals for communication across different subjects. EEG signals were recorded from 13 subjects while they were induced, through visual stimuli, to imagine the English vowels ‘a’, ‘e’, ‘i’, ‘o’ and ‘u’. The recorded signals were then processed to remove artifacts and noise. Four common features, namely average power, mean, variance and standard deviation, were computed and classified using a bipolar neural network. This method yields a maximum classification accuracy of 44%. The result indicates that EEG carries some distinctive information for across-subject classification.
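The feature-extraction step named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the channel count, epoch length, and function names are hypothetical, and the features are computed per channel as the abstract lists them (average power, mean, variance, standard deviation).

```python
import numpy as np

def extract_features(epoch):
    """Compute the four per-channel features named in the abstract
    from one EEG epoch of shape (channels, samples)."""
    avg_power = np.mean(epoch ** 2, axis=1)   # mean squared amplitude per channel
    mean = np.mean(epoch, axis=1)             # per-channel mean
    variance = np.var(epoch, axis=1)          # per-channel variance
    std = np.std(epoch, axis=1)               # per-channel standard deviation
    # Concatenate into one feature vector of length 4 * channels.
    return np.concatenate([avg_power, mean, variance, std])

def bipolar_sigmoid(x):
    """A "bipolar" neural network conventionally uses the bipolar
    sigmoid (tanh), with outputs in [-1, 1] and targets coded -1/+1."""
    return np.tanh(x)

# Hypothetical example: 8 channels x 256 samples of synthetic data.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 256))
features = extract_features(epoch)
print(features.shape)  # (32,)
```

The resulting feature vector would then be fed to the network, with one output unit per vowel class.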
Keywords:
EEG; Imagined Speech; Classification; Bipolar Neural Network; Brain Computer Interface
Cite this Article as:
Kamalakkannan Ravi, Rajkumar R., Madan Raj. M. and Shenbaga Devi. S., Imagined Speech Classification using EEG, Advances in Biomedical Science and Engineering, Volume 1, Number 2, pp.20-32, 2014.