Research Article  Open Access
Implementation of a Tour Guide Robot System Using RFID Technology and Viterbi Algorithm-Based HMM for Speech Recognition
Abstract
This paper applied speech recognition and RFID technologies to develop an omnidirectional mobile robot with voice control and guided-tour functions. For speech recognition, the speech signals were captured by short-time processing. The speaker first recorded the isolated words for the robot to create a speech database of specific speakers. After preprocessing of this speech database, the cepstrum and delta-cepstrum feature parameters were obtained using linear predictive coefficients (LPC). The Hidden Markov Model (HMM) was then used to train models on the speech database, and the Viterbi algorithm was used to find an optimal state sequence as the reference sample for speech recognition. The trained reference models were loaded into the industrial computer on the robot platform, and the user uttered the isolated words to be tested. After the same feature processing, the test utterance was compared with the reference models, and the path with the maximum total probability among the models, found using the Viterbi algorithm, gave the recognition result. Finally, the speech recognition and RFID systems were implemented on the omnidirectional mobile robot and tested in an actual environment to prove their feasibility and stability.
1. Introduction
For speech recognition, early systems calculated the dissimilarity between the signal's characteristic values and the characteristic values in the database, and took the minimum difference as the recognition result. However, this method suffers from poor recognition due to differences in talking speed. Later, scholars proposed dynamic time warping (DTW) to improve the recognition performance [1, 2]. In this method, given two speech segments to be compared, the short-time feature parameters of both segments are extracted; that is, each segment is separated into a string of frames and a group of parameters is determined from each frame. The comparison between two segments of speech is thus a comparison between two sequences of feature parameters. DTW can adjust the speech length to reduce errors caused by differences in time span. Following DTW, recognition systems based on the Artificial Neural Network (ANN) and HMM algorithms were proposed.
The ANN is a method often used in the artificial intelligence domain [3, 4]. The ANN does not need to know the mathematical model of the system when modeling experimental data or performing highly complex recognition of images, letters, or sound, so it can replace explicit system models. Once it has learned which outputs correspond to which inputs, it can achieve good recognition performance after repeated training. However, the ANN updates weights and biases iteratively, and the amount of calculation is large, so it consumes substantial computing resources. The HMM uses a probability model to describe pronunciation statistically [5–7]. A continuous state transition in a Markov model can be regarded as the phonation of a short speech segment, so a string of connected HMM states represents a segment of speech. The HMM has been the method most widely used in speech recognition in recent years, and this paper uses it as the speech recognition core.
2. Hardware Design
The direction of the voice-controlled guide-type omnidirectional mobile robot is controlled by voice, and the robot has an RFID guide system as well as infrared image tracking and ultrasonic obstacle avoidance functions [8]. The proposed robot system is configured with three subsystems, the omnidirectional mobile robot side, the computer side, and the user side, as shown in Figure 1. The Peripheral Interface Controller (PIC) microcontroller is the core of the omnidirectional mobile robot side; its main functions are signal processing for the peripheral devices and motor drive control for the three wheels. The computer side uses an industrial computer for the speech recognition calculations, the RFID guide system, and infrared image tracking. The user side uses a wireless headset and an RFID active tag as the voice control equipment.
3. HMM-Based Speech Recognition System
3.1. Preprocessing and Feature Parameter Extraction
The speech signals are preprocessed before speech recognition. The preprocessing consists of sampling, framing, endpoint detection, pre-emphasis, and windowing. After preprocessing, the feature parameters are extracted for the subsequent recognition calculation. In this paper, linear predictive coefficients (LPC) are used to deduce the cepstrum and delta-cepstrum as the main feature parameters.
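As an illustrative sketch (not the authors' implementation), the preprocessing chain can be written in Python with NumPy; the frame length of 256 samples, 50% frame overlap, and pre-emphasis coefficient of 0.95 are assumed values, since the paper does not state them:

```python
import numpy as np

def preprocess(signal, frame_len=256, frame_shift=128, alpha=0.95):
    """Pre-emphasis, framing, and Hamming windowing of a speech signal."""
    # Pre-emphasis boosts high frequencies: s'(n) = s(n) - alpha * s(n-1)
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Split into overlapping frames
    n_frames = 1 + (len(emphasized) - frame_len) // frame_shift
    frames = np.stack([emphasized[i * frame_shift : i * frame_shift + frame_len]
                       for i in range(n_frames)])
    # Apply a Hamming window to each frame to reduce spectral leakage
    return frames * np.hamming(frame_len)
```

Endpoint detection (locating the voiced part) is omitted here; it would typically be applied before framing using short-time energy and zero-crossing rate.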
The concept of linear prediction originates from the observation that the amplitude of a sampling point is correlated with the amplitudes of adjacent sampling points during pronunciation. Let $s(n)$ denote the sampled speech sequence, so that $s(n)$ is the sample value at time $n$. If $\tilde{s}(n)=\sum_{k=1}^{p}a_k\,s(n-k)$ is the predicted value of $s(n)$, then, since there must be an error between the predicted and actual values, the prediction error $e(n)$ can be expressed as

$$e(n) = s(n) - \tilde{s}(n) = s(n) - \sum_{k=1}^{p} a_k\, s(n-k), \tag{1}$$

where the $a_k$ are the linear predictive coefficients and $p$ is the order of linear prediction. The coefficients $a_k$ are adjusted so that the total squared error of (1) is minimized, yielding an optimal set of linear predictive coefficients. The autocorrelation sequence is computed first, and then the desired linear predictive coefficients are obtained from it using the Durbin algorithm.
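A minimal sketch of this procedure, computing the autocorrelation and solving for the coefficients with the Levinson-Durbin recursion; the default order p = 15 is an assumption chosen to match the 15 cepstral coefficients used later:

```python
import numpy as np

def lpc(frame, p=15):
    """Linear predictive coefficients of one frame via autocorrelation
    and the Levinson-Durbin recursion (prediction order p)."""
    n = len(frame)
    # Autocorrelation R(0..p)
    r = np.array([frame[:n - k] @ frame[k:] for k in range(p + 1)])
    a = np.zeros(p + 1)
    a[0] = 1.0          # polynomial form A(z) = 1 + a1 z^-1 + ... + ap z^-p
    e = r[0]            # prediction error energy
    for i in range(1, p + 1):
        # Reflection coefficient for order i
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / e
        a[1:i] = a[1:i] + k * a[1:i][::-1]
        a[i] = k
        e *= (1.0 - k * k)
    # Predictor coefficients a_k such that s~(n) = sum_k a_k s(n-k)
    return -a[1:]
```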
After determining the LPC, the cepstrum coefficients are deduced from it [9]. The cepstrum separates the vocal tract model from the excitation signal, so the vocal tract parameters can be calculated more precisely and the speech spectrum characteristics captured. The cepstrum coefficients $c_n$ are determined from the linear predictive coefficients $a_k$, where $p$ is the order of linear prediction, as follows:

$$c_1 = a_1, \qquad c_n = a_n + \sum_{k=1}^{n-1}\frac{k}{n}\,c_k\,a_{n-k}, \quad 1 < n \le p. \tag{2}$$

In a practical environment, external noise influences the received speech, so the tones in the spectrum are disturbed and distorted. The delta-cepstrum can reduce this noise effect. The delta-cepstrum parameter is shown in (3), where $K$ is the number of preceding ($-K$) or following ($+K$) frames considered:

$$\Delta c_m(t) = \frac{\sum_{k=-K}^{K} k\, c_m(t+k)}{\sum_{k=-K}^{K} k^2}. \tag{3}$$

The cepstrum and delta-cepstrum parameters are used as the feature parameters for recognition.
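The cepstrum recursion and the delta-cepstrum of (3) can be sketched as follows; the window half-width K = 2 and the edge padding are assumptions not stated in the paper:

```python
import numpy as np

def lpc_cepstrum(a, n_ceps=15):
    """Cepstral coefficients c_1..c_n_ceps from LPC coefficients a_1..a_p."""
    p = len(a)
    c = np.zeros(n_ceps + 1)        # c[0] unused; c[n] holds c_n
    for n in range(1, n_ceps + 1):
        c[n] = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:          # a_{n-k} exists only up to order p
                c[n] += (k / n) * c[k] * a[n - k - 1]
    return c[1:]

def delta_cepstrum(ceps, K=2):
    """Delta-cepstrum of a T x D sequence of cepstral frames, eq. (3)."""
    T, D = ceps.shape
    denom = sum(k * k for k in range(-K, K + 1))
    padded = np.pad(ceps, ((K, K), (0, 0)), mode='edge')  # replicate edges
    delta = np.zeros_like(ceps)
    for t in range(T):
        delta[t] = sum(k * padded[t + K + k] for k in range(-K, K + 1)) / denom
    return delta
```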
3.2. HMM and Training Reference Model
3.2.1. Build the Initial Model
The frames of the voiced part of a speech segment are divided evenly among the states according to the preset number of HMM states, and the feature vectors of each state's frames are used to calculate the mean $\mu_{j,d}$ and variance $\sigma_{j,d}^{2}$:

$$\mu_{j,d} = \frac{1}{N_j}\sum_{t \in j} o_{t,d}, \qquad \sigma_{j,d}^{2} = \frac{1}{N_j}\sum_{t \in j}\left(o_{t,d}-\mu_{j,d}\right)^{2}, \tag{4}$$

where $j$ is a state of the HMM, $t$ indexes the frames, $o_{t,d}$ is the $d$th feature parameter of frame $t$, $N_j$ is the number of frames in state $j$, and $d = 1, \dots, D$ ranges over the feature vector of cepstrum and delta-cepstrum coefficients. This paper uses 15 cepstrum and 15 delta-cepstrum coefficients as the characteristic values, so $D$ is 30.
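A minimal sketch of this uniform initialization, assuming the features of one utterance are given as a T x D matrix:

```python
import numpy as np

def init_model(features, n_states=5):
    """Uniformly segment a T x D feature sequence into n_states HMM states
    and compute each state's per-dimension mean and variance, eq. (4)."""
    T, D = features.shape
    bounds = np.linspace(0, T, n_states + 1).astype(int)  # equal segments
    means = np.zeros((n_states, D))
    variances = np.zeros((n_states, D))
    for j in range(n_states):
        seg = features[bounds[j]:bounds[j + 1]]
        means[j] = seg.mean(axis=0)
        variances[j] = seg.var(axis=0)
    return means, variances
```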
3.2.2. Viterbi Algorithm
In order to relate frames to HMM states more accurately, this paper uses a Gaussian probability density function [10] to determine the similarity between a state and a frame. A higher probability value indicates a higher similarity between the corresponding frame and the state:

$$b_j(o_t) = \frac{1}{(2\pi)^{D/2}\,\lvert\Sigma_j\rvert^{1/2}}\exp\!\left(-\frac{1}{2}\left(o_t-\mu_j\right)^{T}\Sigma_j^{-1}\left(o_t-\mu_j\right)\right), \tag{5}$$

where $b_j(o_t)$ is the probability value of state $j$ for its frame $t$, $D$ is the feature vector dimension, $o_t$ is the feature vector, $\mu_j$ is the mean vector of state $j$, and $\Sigma_j$ is the covariance matrix of the density function.
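In practice, (5) is usually evaluated in the log domain to avoid numerical underflow when many frame probabilities are multiplied; a minimal sketch assuming a diagonal covariance matrix (an assumption consistent with the per-dimension variances of (4)):

```python
import numpy as np

def log_gaussian(o, mean, var):
    """Log of the diagonal-covariance Gaussian density b_j(o) of eq. (5)."""
    D = len(o)
    # log b = -1/2 [ D log(2*pi) + sum log var_d + sum (o_d - mu_d)^2 / var_d ]
    return -0.5 * (D * np.log(2 * np.pi) + np.sum(np.log(var))
                   + np.sum((o - mean) ** 2 / var))
```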
The HMM can be represented by $\lambda = (A, B, \pi)$, where $Q = \{q_1, \dots, q_N\}$ is the state sequence, $N$ is the state number, $\pi = \{\pi_i\}$ is the initial state probability, $A = \{a_{ij}\}$ is the state transition probability, $B = \{b_j(o_t)\}$ is the state observation probability, $O = \{o_1, \dots, o_T\}$ is the observation sequence, and $T$ is the sequence length.
The Gaussian probability density function determines the probability value between each frame and state. The HMM has many possible paths of state transitions, and the path with the maximum total probability value among all possible paths must be found. This paper uses the Viterbi algorithm [11, 12], as shown in (7)–(10), where $\delta_t(i)$ is the probability of staying in state $i$ at time $t$, $\psi_t(j)$ is the backtracking pointer recording the state that most probably leads to state $j$ at time $t$, $P^{*}$ is the final probability value of the Viterbi algorithm, and $q_t^{*}$ is the optimal state sequence.
Step 1. Initialization:
$$\delta_1(i) = \pi_i\, b_i(o_1), \qquad \psi_1(i) = 0. \tag{7}$$

Step 2. Recursion:
$$\delta_t(j) = \max_{1 \le i \le N}\left[\delta_{t-1}(i)\, a_{ij}\right] b_j(o_t), \qquad \psi_t(j) = \arg\max_{1 \le i \le N}\left[\delta_{t-1}(i)\, a_{ij}\right]. \tag{8}$$

Step 3. Termination:
$$P^{*} = \max_{1 \le i \le N} \delta_T(i), \qquad q_T^{*} = \arg\max_{1 \le i \le N} \delta_T(i). \tag{9}$$

Step 4. Path backtracking:
$$q_t^{*} = \psi_{t+1}\!\left(q_{t+1}^{*}\right), \qquad t = T-1, T-2, \dots, 1. \tag{10}$$
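The four steps can be sketched in the log domain, where the products of (7)–(9) become sums (a standard reformulation to avoid underflow, not stated in the paper):

```python
import numpy as np

def viterbi(log_b, log_pi, log_A):
    """Viterbi decoding in the log domain.
    log_b:  T x N log state-observation probabilities log b_j(o_t)
    log_pi: length-N initial log probabilities
    log_A:  N x N log transition probabilities
    Returns the best total log probability and the optimal state path."""
    T, N = log_b.shape
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_b[0]                       # initialization (7)
    for t in range(1, T):                              # recursion (8)
        scores = delta[t - 1][:, None] + log_A         # scores[i, j]
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(N)] + log_b[t]
    best_last = int(np.argmax(delta[-1]))              # termination (9)
    log_p = delta[-1, best_last]
    path = [best_last]                                 # backtracking (10)
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return log_p, path[::-1]
```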
3.2.3. Re-estimation
After the new relationship between states and frames is obtained using the Viterbi algorithm, the mean and variance of each state are updated, and the Gaussian density function is used to recompute the probability between each state and frame. A new total probability value is then obtained using the Viterbi algorithm. This update is repeated until the maximum total probability value converges; the result is the trained reference model.
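This loop, often called segmental k-means or Viterbi training, can be sketched as follows. The left-to-right transition structure with 0.5/0.5 self/next probabilities and the forced entry and exit states are assumptions, since the paper does not specify its transition matrix:

```python
import numpy as np

def train_hmm(features, n_states=5, max_iter=20, tol=1e-4):
    """Viterbi training sketch: alternate a Viterbi alignment of frames to
    states with re-estimation of each state's mean/variance until the
    total log probability converges. features is a T x D matrix."""
    T, D = features.shape
    # Uniform initial alignment of frames to states
    seg = np.repeat(np.arange(n_states),
                    np.diff(np.linspace(0, T, n_states + 1).astype(int)))
    # Left-to-right transitions: stay or advance one state
    log_A = np.log(np.full((n_states, n_states), 1e-10))
    for j in range(n_states - 1):
        log_A[j, j] = log_A[j, j + 1] = np.log(0.5)
    log_A[-1, -1] = 0.0                       # absorb in the last state
    prev_lp = -np.inf
    for _ in range(max_iter):
        # Re-estimate Gaussian parameters from the current alignment
        means = np.stack([features[seg == j].mean(axis=0) for j in range(n_states)])
        var = np.stack([features[seg == j].var(axis=0) + 1e-6 for j in range(n_states)])
        # Frame/state log likelihoods (diagonal-covariance Gaussian)
        log_b = -0.5 * (D * np.log(2 * np.pi) + np.log(var).sum(axis=1)
                        + (((features[:, None, :] - means) ** 2) / var).sum(axis=2))
        # Viterbi alignment, forced to start in state 0 and end in the last
        delta = np.full((T, n_states), -np.inf)
        psi = np.zeros((T, n_states), dtype=int)
        delta[0, 0] = log_b[0, 0]
        for t in range(1, T):
            s = delta[t - 1][:, None] + log_A
            psi[t] = s.argmax(axis=0)
            delta[t] = s[psi[t], np.arange(n_states)] + log_b[t]
        lp = delta[-1, -1]
        seg = np.empty(T, dtype=int)
        seg[-1] = n_states - 1
        for t in range(T - 1, 0, -1):
            seg[t - 1] = psi[t, seg[t]]
        if lp - prev_lp < tol:                # total probability converged
            break
        prev_lp = lp
    return means, var, lp
```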
3.3. Speech Recognition
The required commands are trained into models, which serve as the reference database for speech recognition. During recognition, the feature parameters are determined by the same procedure as in training. The test utterance is compared against the reference models using the Viterbi algorithm to determine the probability value of each model and find the optimal state sequence. The time warping of the speech signal is handled automatically when the sequence of frames is mapped onto the state sequence. The key point in the training procedure is to identify the correspondence between frames and states: the relationship is updated by repeated Viterbi path backtracking until the path with the maximum total probability is determined. The most important step in the recognition procedure is to compare against the trained reference models and select the one with the maximum total probability.
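The recognition step then reduces to scoring the test utterance against every trained model and picking the maximum. The score_fn interface below, returning a Viterbi log probability for a (features, model) pair, is a hypothetical abstraction for illustration:

```python
def recognize(features, models, score_fn):
    """Isolated-word recognition sketch: score the test utterance against
    every trained reference model (score_fn would run the Viterbi
    algorithm) and return the word with the maximum total probability."""
    scores = {word: score_fn(features, model) for word, model in models.items()}
    return max(scores, key=scores.get)
```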
4. Experiment Results
Figure 2 shows the system operation flow of the voice controlled guide type omnidirectional mobile robot. In the RFID guide system, the Reader captures Tag data and then attaches environmental information to the Tags of different ID codes or starts up the speech function. Figure 3 shows the picture of the proposed omnidirectional mobile robot.
We place the robot in the actual environment and test the various moving actions (forward, backward, turn left, turn right, stop, and turn back). Speaker-dependent and speaker-independent voice control are each tested by five users, and the resulting speech recognition rates are shown in Table 1. Figure 4 shows the experiment in which the user uses speech to control the robot to move forward and turn left. Figure 5 shows the user using speech to control the robot to move forward; when the robot passes the classroom, it receives the classroom's Tag, and the user can answer Yes or No to choose whether to access detailed information about the site. The site is introduced in video format, so that the user can become acquainted with the environment quickly.

5. Conclusions
This paper used the HMM-based speech recognition method to complete a voice-controlled guide-type omnidirectional mobile robot. The first benefit of voice control is that no manual operation is required, which makes the robot more user-friendly. The guide system based on RFID technology enables users to obtain information about an unfamiliar environment quickly. Finally, the robot movement experiment and the robot guide system experiment proved the feasibility and stability of this voice-controlled guide-type omnidirectional mobile robot.
Conflict of Interests
The authors declare no conflict of interests.
Acknowledgment
The financial support of this research by the National Science Council of Taiwan, under Grant no. NSC 100-2221-E-167-004, is greatly appreciated.
References
[1] H. Sakoe and S. Chiba, "Dynamic programming algorithm optimization for spoken word recognition," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 26, no. 1, pp. 43–49, 1978.
[2] C. Kim and K.-D. Seo, "Robust DTW-based recognition algorithm for hand-held consumer devices," IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 699–709, 2005.
[3] D. P. Morgan and C. L. Scofield, Eds., Neural Networks and Speech Processing, Kluwer Academic Publishers, 1991.
[4] C.-F. Juang, C.-T. Chiou, and C.-L. Lai, "Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition," IEEE Transactions on Neural Networks, vol. 18, no. 3, pp. 833–843, 2007.
[5] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
[6] S. Yoshizawa, N. Wada, N. Hayasaka, and Y. Miyanaga, "Scalable architecture for word HMM-based speech recognition and VLSI implementation in complete system," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 53, no. 1, pp. 70–77, 2006.
[7] J.-H. Im and S.-Y. Lee, "Unified training of feature extractor and HMM classifier for speech recognition," IEEE Signal Processing Letters, vol. 19, no. 2, pp. 111–114, 2012.
[8] S. F. Huang, Design and Implementation of an Autonomous Following Omni-Directional Mobile Robot, National Digital Library of Theses and Dissertations, Taipei, Taiwan, 2008.
[9] Y. Yuan, P. Zhao, and Q. Zhou, "Research of speaker recognition based on combination of LPCC and MFCC," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 765–767, Xiamen, China, October 2010.
[10] L. Liu and J. He, "On the use of orthogonal GMM in speaker recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '99), pp. 845–848, Phoenix, Ariz, USA, March 1999.
[11] C. C. Wen, Ed., Multimedia Applications for Speech Recognition System, National Digital Library of Theses and Dissertations, Taipei, Taiwan, 2008.
[12] D.-F. Tseng, "Robust decoding for convolutionally coded systems impaired by memoryless impulsive noise," IEEE Transactions on Communications, vol. 61, pp. 4640–4652, 2013.
Copyright
Copyright © 2014 Neng-Sheng Pai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.