Abstract
The Common Vector Approach (CVA) is a subspace classifier that has achieved considerable success in isolated word recognition. However, when a large amount of training data is available, the intra-class and inter-class subspaces can overlap, which reduces recognition rates. To overcome this problem, a method is proposed in which Mel-Frequency Cepstral Coefficient (MFCC), Gammatone Frequency Cepstral Coefficient (GTCC), and i-vector features extracted from the samples of each class in the database are divided into equal-sized clusters. Clustering reduces the number of feature vectors per class and thereby minimizes the overlap between the intra-class and inter-class subspaces. Recognition is then performed with K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Long Short-Term Memory (LSTM), and CVA classifiers, using the Multiple Common Vectors (MCV) obtained for each class as feature vectors. Using projection matrices, a test signal is first assigned to k candidate classes, and the most suitable class is then selected with the proposed Majority Vote Algorithm (MVA). In this way, a test signal that would be misclassified by classical CVA is more likely to be assigned to the correct class. The recognition rates obtained are higher than those of the classical CVA method, demonstrating the potential of the proposed approach for isolated word recognition. The developed method thus provides a more efficient and effective recognition system, increasing classification accuracy compared with classical CVA.
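The candidate-class selection followeded by a majority vote can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the function name `classify_mva`, the Euclidean distance to the common vectors, and the tie-breaking behavior of `Counter.most_common` are all illustrative assumptions; the actual method uses class-specific projection matrices to score candidates.

```python
import numpy as np
from collections import Counter

def classify_mva(test_vec, common_vectors, labels, k=5):
    """Illustrative majority-vote selection (not the paper's exact MVA).

    common_vectors: (n, d) array stacking the multiple common vectors
                    obtained from all classes.
    labels:         length-n sequence giving the class of each common vector.
    k:              number of nearest common vectors that cast a vote.
    """
    # Distance from the test feature vector to every common vector
    # (the paper scores candidates via projection matrices instead).
    dists = np.linalg.norm(common_vectors - test_vec, axis=1)
    # Indices of the k nearest common vectors
    nearest = np.argsort(dists)[:k]
    # Majority vote over the class labels of those k candidates
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Under this sketch, a test vector misclassified by a single nearest common vector can still be recovered when the remaining votes favor the correct class, which is the intuition behind the MVA step.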