Research Article  Open Access
Deyu Zhou, Yulan He, "Semi-Supervised Learning of Statistical Models for Natural Language Understanding", The Scientific World Journal, vol. 2014, Article ID 121650, 11 pages, 2014. https://doi.org/10.1155/2014/121650
Semi-Supervised Learning of Statistical Models for Natural Language Understanding
Abstract
Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework to train statistical models without using expensive fully annotated data. In particular, the input to our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic-tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HMSVMs). Our experimental results on the DARPA communicator data show that both CRFs and HMSVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework shows superior performance over two other baseline approaches, a hybrid framework combining HVS and HMSVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15%, respectively, achieved in F-measure.
1. Introduction
Given a sentence such as “I want to fly from Denver to Chicago,” its semantic meaning can be represented as FROMLOC(CITY(Denver)) TOLOC(CITY(Chicago)).
Natural language understanding can be considered as a mapping problem where the aim is to map a sentence to its semantic meaning representation (or abstract semantic annotation) as shown above. It is a structured classification task which predicts output labels (semantic tag or concept sequences) from input sentences where the output labels have rich internal structures.
Early approaches rely on handcrafted semantic grammar rules to fill slots in semantic frames using word patterns and semantic tokens [1, 2]. Such rule-based approaches are typically domain-specific and often fragile. In contrast, statistical approaches are able to accommodate the variations found in real data and hence can in principle be more robust. They can be categorized into three types: generative approaches, discriminative approaches, and a hybrid of the two.
Generative approaches learn the joint probability model \( P(W, C) \) of an input sentence \( W \) and its semantic tag sequence \( C \), then compute \( P(C \mid W) \) using Bayes' rule, and finally take the most probable semantic tag sequence \( \hat{C} \). The hidden Markov model (HMM), a generative model, has been predominantly employed in statistical semantic parsing. It models sequential dependencies by treating a semantic parse sequence as a Markov chain, which leads to an efficient dynamic programming formulation for inference and learning. Discriminative approaches directly model the posterior probability \( P(C \mid W) \) and learn mappings from \( W \) to \( C \). Conditional random fields (CRFs), as one representative example, define a conditional probability distribution over label sequences given an observation sequence, rather than a joint distribution over both label and observation sequences [3]. Another example is the hidden Markov support vector machines (HMSVMs) [4], which combine the flexibility of kernel methods with the idea of HMMs to predict a label sequence given an input sequence.
Nevertheless, the statistical models mentioned above require fully annotated corpora for training, which are difficult to obtain in practical applications. This motivates the investigation of training statistical models on abstract semantic annotations without the use of expensive token-style annotations. It is a highly challenging problem because the derivation from each sentence to its abstract semantic annotation is not annotated in the training data and is considered hidden.
A hierarchical hidden state structure could be used to model embedded structural context in sentences, such as the hidden vector state (HVS) model [5], which learns a probabilistic pushdown automaton. However, it cannot incorporate a large number of correlated lexical or syntactic features in input sentences and cannot handle arbitrary embedded relations, since it only supports right-branching semantic structures.
In this paper, we propose a novel learning framework to train statistical models from unaligned data. Firstly, it generates semantic parses by computing expectations using the initial model parameters. Secondly, the parsing results are filtered based on a measure describing their level of agreement with the sentences' abstract semantic annotations. Thirdly, the filtered parsing results are fed into model learning. With the reestimated parameters, the learning of statistical models proceeds to the next iteration until no further improvement can be achieved. The proposed framework has two advantages: one is that only abstract semantic annotations are required for training, without explicit word/semantic-tag alignment; the other is that the framework can easily be extended to train any discriminative model on abstract semantic annotations.
We apply the proposed learning framework to two statistical models, CRFs and HMSVMs. Experimental results on the DARPA communicator data show that the framework on both CRFs and HMSVMs outperforms the baseline approach, the previously proposed HVS model. In addition, the proposed framework shows superior performance over two other approaches, a hybrid framework combining HVS and HMSVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15%, respectively, achieved in F-measure.
The rest of this paper is organized as follows. Section 2 gives a brief introduction of CRFs and HMSVMs, followed by a review on the existing approaches for training semantic parsers on abstract annotations. The proposed framework is presented in Section 3. Experimental setup and results are discussed in Section 4. Finally, Section 5 concludes the paper.
2. Related Work
In this section, we first briefly introduce CRFs and HMSVMs. Then, we review the existing approaches for training semantic parsers on abstract semantic annotations.
2.1. Statistical Models
Given a set of training data \( \{(W_i, C_i)\}_{i=1}^{N} \), the goal is to learn a function that assigns to a sequence of words \( W = w_1, w_2, \ldots, w_T \) a sequence of semantic concepts or tags \( C = c_1, c_2, \ldots, c_T \). A common approach is to find a discriminant function \( F(W, C) \) that assigns a score to every input \( W \) and every semantic tag sequence \( C \). In order to obtain a prediction \( \hat{C} \), the function \( F(W, C) \) is maximized with respect to \( C \).
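Prediction as maximization of the discriminant function can be illustrated by brute-force enumeration over tag sequences (real models use dynamic programming instead); the scoring function below is a toy assumption of ours, not a trained model.

```python
from itertools import product

def predict(words, tagset, F):
    """Brute-force argmax of F over all tag sequences of length len(words)."""
    return max(product(tagset, repeat=len(words)),
               key=lambda tags: F(words, tags))

def toy_score(words, tags):
    """Toy discriminant: reward CITY on capitalized words, DUMMY elsewhere."""
    return sum((t == "CITY") == w[0].isupper() for w, t in zip(words, tags))
```

For the sentence `["to", "Denver"]` with tag set `["CITY", "DUMMY"]`, `predict` returns `("DUMMY", "CITY")` under this toy score.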
2.1.1. Conditional Random Fields (CRFs)
Linear-chain CRFs, discriminative probabilistic models over sequences of feature vectors and label sequences, have been widely used to model sequential data. The model is analogous to maximum entropy models for structured outputs. By making a first-order Markov assumption on states, a linear-chain CRF defines a distribution over a state sequence \( C = c_1, \ldots, c_T \) given an input sequence \( W \) (\( T \) is the length of the sequence) as
\[ p(C \mid W) = \frac{1}{Z(W)} \prod_{t=1}^{T} \Psi_t(c_{t-1}, c_t, W), \]
where the partition function \( Z(W) \) is the normalization constant that makes the probability of all state sequences sum to one and is defined as \( Z(W) = \sum_{C} \prod_{t=1}^{T} \Psi_t(c_{t-1}, c_t, W) \).
By exploiting the Markov assumption, \( Z(W) \) can be calculated efficiently by variants of the standard dynamic programming algorithms used in HMMs, instead of summing over the exponentially many possible state sequences \( C \). Each potential \( \Psi_t \) can be factorized as
\[ \Psi_t(c_{t-1}, c_t, W) = \exp\Big(\sum_{k} \lambda_k f_k(c_{t-1}, c_t, W, t)\Big), \]
where \( \lambda_k \) is the real-valued weight for feature function \( f_k \). The feature functions describe some aspect of the transition from \( c_{t-1} \) to \( c_t \) as well as \( c_t \) and the global characteristics of \( W \). For example, a feature may have value 1 when the previous word has the POS tag "DT" (determiner) and the current word has the POS tag "NN" (noun, singular common). The final model parameters for CRFs are the set of real-valued weights \( \Lambda = \{\lambda_k\} \), one for each feature.
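The POS-tag feature just described might look as follows in code; the function name and signature are illustrative, not drawn from any particular CRF toolkit.

```python
def f_dt_nn(prev_tag, cur_tag, pos_tags, t):
    """Fires (value 1) when the previous word's POS tag is DT and the
    current word's POS tag is NN. The tag arguments are unused because
    this particular feature looks only at the observation sequence."""
    return 1 if t > 0 and pos_tags[t - 1] == "DT" and pos_tags[t] == "NN" else 0
```

A trained CRF would associate a single real weight with this feature, summed into the potential wherever it fires.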
2.1.2. Hidden Markov Support Vector Machines (HMSVMs)
For HMSVMs [4], the function \( F \) is assumed to be linear in some combined feature representation \( \Phi(W, C) \) of \( W \) and \( C \): \( F(W, C; \theta) = \langle \theta, \Phi(W, C) \rangle \). The parameters \( \theta \) are adjusted so that the true semantic tag sequence \( C_i \) scores higher than all other tag sequences with a large margin. To achieve this goal, the following optimization problem is solved:
\[ \min_{\theta, \xi} \; \tfrac{1}{2}\|\theta\|^2 + \kappa \sum_i \xi_i \quad \text{s.t.} \quad F(W_i, C_i; \theta) - F(W_i, C; \theta) \ge 1 - \xi_i \;\; \forall C \ne C_i, \;\; \xi_i \ge 0, \tag{3} \]
where the \( \xi_i \) are nonnegative slack variables allowing one to increase the global margin by paying a local penalty on some outlying examples, and \( \kappa \) dictates the desired trade-off between margin size and outliers. To solve (3), the dual of the problem is solved instead. The solution can be written as
\[ \theta = \sum_i \sum_{C \ne C_i} \alpha_i(C)\,\big(\Phi(W_i, C_i) - \Phi(W_i, C)\big), \tag{4} \]
where \( \alpha_i(C) \) is the Lagrange multiplier of the constraint associated with example \( i \) and tag sequence \( C \).
2.2. Training Statistical Models from Lightly Annotated Data
Semantic parsing can be viewed as a pattern recognition problem in which statistical decoding is used to find the most likely semantic representation. The majority of statistical approaches to semantic parsing rely on fully annotated corpora. There has been some prior work on learning semantic parsers that map natural language sentences into a formal meaning representation such as first-order logic [6–10]. However, these systems either require a hand-built, ambiguous combinatory categorial grammar template to learn a probabilistic semantic parser [11] or assume the existence of an unambiguous, context-free grammar of the target meaning representations [6, 7, 9, 12, 13]. Furthermore, they have only been studied on two relatively simple tasks: GEOQUERY [14], for US geography queries, and ROBOCUP (http://www.robocup.org/), where coaching instructions are given to soccer agents on a simulated soccer field.
He and Young [5] proposed the hidden vector state (HVS) model based on the hypothesis that a suitably constrained hierarchical model may be trainable without treebank data whilst simultaneously retaining sufficient ability to capture the hierarchical structure needed to robustly extract task domain semantics. Such a constrained hierarchical model can be conveniently implemented using the HVS model, which extends the flat-concept HMM by expanding each state to encode the stack of a pushdown automaton. This allows the model to encode hierarchical context efficiently, but because stack operations are highly constrained it avoids the tractability issues associated with full context-free stochastic models such as the hierarchical HMM. Such a model is trainable using only lightly annotated data, and it offers considerable performance gains compared to the flat-concept model.
Conditional random fields (CRFs) have been extensively studied for sequence labeling. Most applications require fully annotated data, that is, an explicit alignment of sentences and word-level labels. There have been some attempts to train CRFs from a small set of labeled data and a large set of unlabeled data. In these approaches, the training objective is redefined to combine the conditional likelihood of labeled and unlabeled data. Jiao et al. [15] extended the minimum entropy regularization framework to the structured prediction case, yielding a training objective that combines unlabeled conditional entropy with labeled conditional likelihood. Mann and McCallum [16] augmented the traditional conditional likelihood objective function with an additional term that aims to minimize the predicted label entropy on unlabeled data; entropy regularization was thus employed for semi-supervised learning. In [17], a training objective combining the conditional likelihood on labeled data and the mutual information on unlabeled data was proposed, based on rate distortion theory from information theory. Mann and McCallum [18] used labeled features instead of fully labeled instances to train linear-chain CRFs. Generalized expectation criteria are used to express a preference for parameter settings in which the model distribution on unlabeled data matches a target distribution. They tested their approach on the classified advertisements data set (CLASSIFIED) [19], consisting of classified advertisements for apartment rentals in the San Francisco Bay Area, with 12 fields labeled for each advertisement, including size, rent, neighborhood, and features. With only labeled features, their approach gave a mediocre result of 68.3% accuracy. With an additional 100 labeled instances, the accuracy increased to 80%.
The DARPA communicator data used in our experiment appear to be more complex than the CLASSIFIED data since semantic annotations in the DARPA communicator data describe embedded structural context in sentences while semantic labels in the CLASSIFIED data do not represent any hierarchical relations.
3. The Proposed Framework
Given the training data \( \{(W_i, A_i)\}_{i=1}^{N} \), where \( A_i \) is the abstract annotation for sentence \( W_i \), the parameters \( \theta \) will be estimated through a maximum likelihood procedure. The log-likelihood of \( \theta \), with expectation over the abstract annotations, is calculated as
\[ L(\theta) = \sum_{i=1}^{N} \log \sum_{C_i \in G(A_i)} p(C_i \mid W_i; \theta), \]
where \( C_i \) is the unknown semantic tag sequence of the \( i \)th word sequence and \( G(A_i) \) denotes the set of tag sequences consistent with the annotation \( A_i \). To learn statistical models, we extend the expectation-maximization (EM) algorithm to estimate model parameters. The EM algorithm [20] is widely employed in statistical models for parameter estimation when the model depends on unobserved latent variables. Given a set of observed data and a set of unobserved latent data or missing values, the EM algorithm seeks the maximum likelihood estimate of the marginal likelihood by alternating between an expectation step and a maximization step.
(i) E-step: given the current estimate of the parameters, calculate the expected values of the unobserved latent variables or data.
(ii) M-step: find the parameters that maximize this quantity. These parameter estimates are then used to determine the distribution of the latent variables in the next E-step.
We propose a learning framework based on EM to train statistical models from abstract semantic annotations, as illustrated in Figure 1. The whole procedure works as follows. Given a set of sentences and their corresponding semantic annotations, each annotation is expanded to a flattened semantic tag sequence at the initialization step. Based on the flattened semantic tag sequences, the initial model parameters are estimated. After that, a semantic tag sequence is generated for each sentence using the current model. The generated sequences are then filtered based on a score function which measures their agreement with the actual flattened semantic tag sequences. In the maximization step, the model parameters are reestimated using the filtered sequences. The iteration continues until convergence. The details of each step are discussed in the following subsections.
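The whole procedure can be sketched as a small, runnable loop. The following Python sketch substitutes a toy unigram tagger for CRFs/HMSVMs; every function name, the flattening scheme, and the initialization are illustrative choices of ours, not the paper's implementation.

```python
from collections import Counter, defaultdict

def flatten(annotation):
    """Toy stand-in for annotation flattening: treat the abstract
    annotation as the set of semantic tags it licenses."""
    return set(annotation)

def train(pairs):
    """M-step: re-estimate the toy model by counting word/tag pairs."""
    counts = defaultdict(Counter)
    for words, tags in pairs:
        for w, t in zip(words, tags):
            counts[w][t] += 1
    return counts

def decode(model, words, valid_tags):
    """E-step: per word, pick the most frequent tag among the valid ones."""
    out = []
    for w in words:
        allowed = [(n, t) for t, n in model[w].items() if t in valid_tags]
        out.append(max(allowed)[1] if allowed else "DUMMY")
    return out

def agreement(tags, valid_tags):
    """F-measure-like agreement of a decoded sequence with its annotation."""
    hits = sum(1 for t in tags if t in valid_tags)
    return 2.0 * hits / (len(tags) + len(valid_tags))

def learn(sentences, annotations, threshold=0.1, iters=3):
    # Initialization: let every word count every licensed tag once.
    model = defaultdict(Counter)
    for words, ann in zip(sentences, annotations):
        for w in words:
            for t in flatten(ann):
                model[w][t] += 1
    for _ in range(iters):
        parses = [decode(model, w, flatten(a))
                  for w, a in zip(sentences, annotations)]
        kept = [(w, c) for w, a, c in zip(sentences, annotations, parses)
                if agreement(c, flatten(a)) >= threshold]   # filtering step
        if kept:
            model = train(kept)                             # M-step
    return model
```

The loop mirrors the framework's structure (initialize, decode, filter, re-estimate) while leaving the actual statistical model abstract.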
3.1. Preprocessing
Given a sentence labeled with an abstract semantic annotation as shown in Table 1, we first expand the annotation to the flattened semantic tag sequence as in Table 1(a). The provision of abstract annotations implies that the semantics encoded in each sentence need not be provided in expensive token style. Obviously, some input words, such as articles, have no specific semantic meaning. To cater for these irrelevant input words, a DUMMY tag is introduced in the pre-terminal position. Hence, the flattened semantic tag sequence is finally expanded to the semantic tag sequence as in Table 1(b).
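The expansion of a nested annotation into flattened semantic tags can be sketched as follows, assuming (purely for illustration) that the annotation is given as a nested (name, children) tuple; the paper does not prescribe this input format.

```python
def flatten_annotation(node, prefix=""):
    """Expand a nested abstract annotation into flattened semantic tags,
    e.g. ('TOLOC', [('CITY', [])]) -> ['TOLOC', 'TOLOC+CITY']."""
    name, children = node
    tag = name if not prefix else prefix + "+" + name
    tags = [tag]
    for child in children:
        tags.extend(flatten_annotation(child, tag))
    return tags
```

The DUMMY tag for semantically empty words would then be interleaved with these flattened tags when aligning them with the sentence.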

3.2. Expectation with Constraints
During the expectation step, that is, when calculating the most likely semantic tag sequence for a sentence, we impose the following two constraints, which are implied by the abstract semantic annotations.
(1) Considering the calculated semantic tag sequence as a hidden state sequence, state transitions are only allowed if both the current and the next state are listed in the semantic annotation defined for the sentence.
(2) If a lexical item is attached to a pre-terminal tag of a flattened semantic tag, that semantic tag must appear bound to that lexical item in the training annotation.
To illustrate how these two constraints are applied, take the sentence "I want to return on Thursday to Dallas" with its annotation "RETURN(TOLOC(CITY(Dallas)) ON(DATE(Thursday)))" as an example. The transition from RETURN+TOLOC+CITY to RETURN is allowed since both states can be found in the semantic annotation, satisfying constraint 1. However, the transition from RETURN to FLIGHT is not allowed, as FLIGHT is not listed in the semantic annotation and the transition therefore violates constraint 1. Also, for the lexical item Dallas in the training sentence, the only valid semantic tag is RETURN+TOLOC+CITY because, by constraint 2, Dallas has to be bound with the pre-terminal tag CITY.
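The two constraints in this example can be sketched as simple predicates; the "+"-joined flattened-tag strings follow the example above, while the `bound_lexicon` mapping is our assumption for illustration.

```python
def transition_allowed(prev_tag, next_tag, annotation_tags):
    """Constraint 1: both states must appear in the sentence's annotation."""
    return prev_tag in annotation_tags and next_tag in annotation_tags

def emission_allowed(word, tag, bound_lexicon):
    """Constraint 2: a lexical item bound to a pre-terminal tag in the
    annotation may only take tags ending in that pre-terminal."""
    if word not in bound_lexicon:
        return True
    return tag.split("+")[-1] == bound_lexicon[word]
```

For the Dallas example, `transition_allowed` accepts RETURN+TOLOC+CITY → RETURN but rejects RETURN → FLIGHT, and `emission_allowed` restricts Dallas to tags ending in CITY.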
We further describe how these two constraints can be imposed into two different models, CRFs and HMSVMs:
3.2.1. Expectation in CRFs
The most probable labeling sequence in CRFs can be efficiently calculated using the Viterbi algorithm. Similar to the forward-backward procedure for HMMs, the marginal probability of each state \( s \) at position \( t \) in the sequence can be computed as
\[ p(c_t = s \mid W) = \frac{\alpha_t(s)\,\beta_t(s)}{Z(W)}, \tag{6} \]
where \( Z(W) = \sum_{s} \alpha_T(s) \).
The forward values \( \alpha_t(s) \) and backward values \( \beta_t(s) \) are defined in iterative form as
\[ \alpha_t(s) = \sum_{s'} \alpha_{t-1}(s')\,\Psi_t(s', s, W), \qquad \beta_t(s) = \sum_{s'} \Psi_{t+1}(s, s', W)\,\beta_{t+1}(s'), \tag{7} \]
where \( \Psi_t(s', s, W) = \exp(\sum_k \lambda_k f_k(s', s, W, t)) \) is the local potential at position \( t \).
Given the training data \( \{(W_i, A_i)\} \), the parameters \( \Lambda \) can be estimated through a maximum likelihood procedure by calculating the log-likelihood of \( \Lambda \) with expectation over the abstract annotations,
\[ L(\Lambda) = \sum_{i} \log \sum_{C_i \in G(A_i)} p(C_i \mid W_i; \Lambda), \]
where \( C_i \) is the unknown semantic tag sequence of the \( i \)th word sequence and \( G(A_i) \) is the set of tag sequences consistent with annotation \( A_i \). It can be optimized using the same optimization methods as in standard CRF training.
To infer the word-level semantic tag sequences based on abstract annotations, the recursions in (7) are modified as shown in (8),
\[ \alpha_t(s) = \sum_{s'} \alpha_{t-1}(s')\,\Psi_t(s', s, W)\,g(s', s)\,h(w_t, s), \tag{8} \]
where \( g(s', s) = 1 \) if the transition from \( s' \) to \( s \) satisfies constraint 1 and 0 otherwise, and \( h(w_t, s) = 1 \) if tagging word \( w_t \) with \( s \) satisfies constraint 2 and 0 otherwise.
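Constrained decoding of this kind can be sketched as a max-sum Viterbi recursion in which disallowed transitions and emissions are masked out; the scoring tables below are toy inputs, not trained CRF parameters, and the function is our own illustrative sketch.

```python
def constrained_viterbi(words, tags, trans, emit, allowed_trans, allowed_emit):
    """Max-sum Viterbi with annotation constraints applied as hard masks:
    disallowed transitions/emissions score minus infinity."""
    NEG = float("-inf")
    def e(i, t):
        return emit.get((words[i], t), NEG) if allowed_emit(words[i], t) else NEG
    delta = [{t: e(0, t) for t in tags}]   # best score ending in t at position 0
    back = []
    for i in range(1, len(words)):
        row, ptr = {}, {}
        for t2 in tags:
            best, arg = NEG, None
            for t1 in tags:
                if not allowed_trans(t1, t2):
                    continue                # constraint 1: mask transition
                s = delta[-1][t1] + trans.get((t1, t2), NEG)
                if s > best:
                    best, arg = s, t1
            row[t2] = best + e(i, t2)       # constraint 2: mask emission
            ptr[t2] = arg
        delta.append(row)
        back.append(ptr)
    last = max(delta[-1], key=delta[-1].get)  # backtrace from best end state
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With an emission constraint binding "Dallas" to CITY, the decoder is forced onto a CITY tag at that position regardless of the unconstrained scores.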
3.2.2. Expectation in HMSVM
To calculate the most likely semantic tag sequence for each sentence \( W \), \( \hat{C} = \arg\max_C F(W, C) \), we can decompose the discriminant function into two components,
\[ F(W, C) = \sum_{t} \big( T(c_{t-1}, c_t) + E(w_t, c_t) \big). \]
Here, \( T(c_{t-1}, c_t) \) is the coefficient for the transition from state (or semantic tag) \( c_{t-1} \) to state \( c_t \), while \( E(w_t, c_t) \) can be treated as the coefficient for the emission of word \( w_t \) from state \( c_t \). Both are derived from the dual solution in (4); the emission coefficient involves a kernel \( k(w_t, w^i_s) \), which describes the similarity of the input patterns between word \( w_t \) and word \( w^i_s \), the \( s \)th word in training example \( i \), weighted by \( \alpha_i(C) \), the dual parameters or Lagrange multipliers of the constraints associated with example \( i \) and semantic tag sequence \( C \) as in (4). Using these coefficients, Viterbi decoding can be performed to generate the best semantic tag sequence.
To incorporate the constraints defined by the abstract semantic annotations, the values of \( T \) and \( E \) are modified for each sentence: \( T(c_{t-1}, c_t) \) is set to \( -\infty \) whenever the transition from \( c_{t-1} \) to \( c_t \) violates constraint 1, and \( E(w_t, c_t) \) is set to \( -\infty \) whenever emitting word \( w_t \) from state \( c_t \) violates constraint 2. The modified coefficients thus encode the two constraints implied by the abstract annotations.
3.3. Filtering
For each sentence, the semantic tag sequences generated in the expectation step are further processed based on a measure of the agreement of the semantic tag sequence \( C \) with its corresponding abstract semantic annotation \( A \). The score of \( C \) is defined as
\[ \mathrm{score}(C) = \frac{2 N_m}{N_c + N_a}, \]
where \( N_m \) is the number of semantic tags in \( C \) which also occur in \( A \), \( N_c \) is the number of semantic tags in \( C \), and \( N_a \) is the number of semantic tags in the flattened semantic tag sequence for \( A \). The score is similar to the F-measure, the harmonic mean of precision and recall. It essentially measures the agreement of the generated semantic tag sequence with the abstract semantic annotation. We filter out sentences whose score falls below a predefined threshold; the remaining sentences, together with their generated semantic tag sequences, are fed into the next maximization step. In our experiments, we empirically set the threshold to 0.1.
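The filtering score can be written directly from its definition as the harmonic mean of precision and recall; the variable names here are ours, not the paper's notation.

```python
def agreement_score(parsed_tags, annotation_tags):
    """Harmonic-mean agreement between a decoded semantic tag sequence and
    the set of flattened tags from its abstract annotation."""
    matched = sum(1 for t in parsed_tags if t in annotation_tags)
    if matched == 0:
        return 0.0
    precision = matched / len(parsed_tags)
    recall = matched / len(annotation_tags)
    return 2 * precision * recall / (precision + recall)
```

A parse is discarded when this score falls below the chosen threshold (0.1 in the paper's experiments).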
3.4. Maximization
Given the filtered training examples from the filtering step, the parameters are adjusted using the standard training algorithms.
For CRFs, the parameters \( \Lambda \) can be estimated through a maximum likelihood procedure. The model is traditionally trained by maximizing the conditional log-likelihood of the labeled sequences, which is defined as
\[ L(\Lambda) = \sum_{i=1}^{N} \log p(C_i \mid W_i; \Lambda), \]
where \( N \) is the number of sequences.
The maximization can be achieved by gradient ascent, where the gradient of the likelihood with respect to each weight \( \lambda_k \) is the difference between the empirical count of feature \( f_k \) and its expected count under the current model:
\[ \frac{\partial L}{\partial \lambda_k} = \sum_{i=1}^{N} \sum_{t} f_k(c_{t-1}, c_t, W_i, t) - \sum_{i=1}^{N} \sum_{t} \sum_{c', c} p(c_{t-1} = c', c_t = c \mid W_i)\, f_k(c', c, W_i, t). \]
For HMSVMs, the parameters are adjusted so that the true semantic tag sequence scores higher than all other tag sequences with a large margin. To achieve this goal, the optimization problem stated in (3) is solved using an online learning approach as described in [4]. In short, it works as follows: a pattern sequence is presented, and the optimal semantic tag sequence is computed by Viterbi decoding. If this sequence is correct, no update is performed. Otherwise, the weight vector is updated based on the difference from the true semantic tag sequence.
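The "update only on mistakes" skeleton of this online procedure can be sketched as a perceptron-style update; the actual HMSVM update also involves margins and dual variables, which this simplification omits, and all names below are illustrative.

```python
def online_update(weights, features, words, true_tags, predicted_tags, lr=1.0):
    """Move the weight vector toward the features of the true sequence and
    away from those of the wrongly predicted one; no-op when correct."""
    if predicted_tags == true_tags:
        return weights                     # correct prediction: no update
    for k, v in features(words, true_tags).items():
        weights[k] = weights.get(k, 0.0) + lr * v
    for k, v in features(words, predicted_tags).items():
        weights[k] = weights.get(k, 0.0) - lr * v
    return weights
```

Here `features` maps a (sentence, tag sequence) pair to a sparse feature-count dictionary; Viterbi decoding under the current weights would supply `predicted_tags`.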
4. Experimental Results
Experiments have been conducted on the DARPA communicator data (http://www.bltek.com/spokendialogsystems/cucommunicator.html/), which were collected over 461 days. From these, 46 days were randomly selected for use as the test set, and the remainder was used for training. After cleaning up the data, the training set consists of 12702 utterances while the test set contains 1178 utterances.
The abstract semantic annotations used for training only list a set of valid semantic tags and the dominance relationships between them, without considering the actual realized semantic tag sequence or attempting to identify explicit word/concept pairs. Thus, they avoid the need for expensive treebank-style annotations. For example, for the sentence "I wanna go from Denver to Orlando Florida on December tenth," the abstract annotation would be FROMLOC(CITY) TOLOC(CITY(STATE)) MONTH(DAY).
To evaluate the performance of the model, a reference frame structure was derived for every test set sentence consisting of slot/value pairs. An example of a reference frame is shown in Table 2.

Performance was then measured in terms of the F-measure on slot/value pairs, which combines the precision \( P \) and recall \( R \) with equal weight and is defined as \( F = 2PR/(P + R) \).
We modified the open-source implementations of CRFsuite (http://www.chokkan.org/software/crfsuite/) and SVM-HMM (http://www.cs.cornell.edu/people/tj/svm_light/svm_hmm.html/) to implement our proposed learning framework. We employed two algorithms to estimate the parameters of CRFs: the stochastic gradient descent (SGD) iterative algorithm [21] and the limited-memory BFGS (L-BFGS) method [22]. For both algorithms, the regularization parameter was set empirically in the following experiments.
4.1. Overall Comparison
We first compare the time consumed per iteration using HMSVMs or CRFs, as shown in Figure 2. The experiments were conducted on an Intel(R) Xeon(TM) Linux server equipped with a 3.00 GHz processor and 4 GB of RAM. It can be observed that, for CRFs, the time consumed by SGD in each iteration is almost double that of L-BFGS. However, since SGD converges much faster than L-BFGS, the total time required for training is almost the same. As SGD gives balanced precision and recall values, it is preferred over L-BFGS in our proposed learning procedure. On the other hand, as opposed to CRFs, which consume much less time after iteration 1, HMSVMs take almost the same run time for all iterations. Nevertheless, the total run time until convergence is almost the same for CRFs and HMSVMs.
Figure 2: (a) CRFs with L-BFGS; (b) CRFs with SGD; (c) HMSVMs.
Figure 3 shows the performance of our proposed framework for CRFs and HMSVMs at each iteration. At each word position, the feature set used for both statistical models consists of the current word and the current part-of-speech (POS) tag. It can be observed that both models achieve their best performance at iteration 8, with F-measures of 92.95% and 93.18% achieved using CRFs and HMSVMs, respectively.
Figure 3: (a) CRFs with L-BFGS; (b) CRFs with SGD; (c) HMSVMs.
4.2. Results with Varied Features Set
We employed word features (such as the current, previous, and next word) and POS features (such as the current, previous, and next POS tag) for training. To explore the impact of the choice of features, we experimented with feature sets comprising the words or POS tags occurring before or after the current word within some predefined window size.
Figure 4 shows the performance of our proposed approach with the window size varied between 0 and 3. Surprisingly, the model learned with the feature set obtained by setting the window size to 0 gives the best overall performance. Varying the window size between 1 and 3 only affects the convergence rate and does not lead to any performance difference at the end of the learning procedure.
Figure 4: (a) CRFs with L-BFGS; (b) CRFs with SGD; (c) HMSVMs.
4.3. Performance with or without Filtering Step
In a second set of experiments, we compare the performance with and without the filtering step discussed in Section 3.3. Figure 5 shows that the filtering step is indeed crucial, as it boosts performance by nearly 4% for CRFs with L-BFGS and by about 3% for CRFs with SGD and for HMSVMs.
Figure 5: (a) CRFs with L-BFGS; (b) CRFs with SGD; (c) HMSVMs.
4.4. Comparison with Existing Approaches
We compare the performance of CRFs and HMSVMs with that of HVS, all trained on abstract semantic annotations. While it is hard to incorporate arbitrary input features into HVS learning, both CRFs and HMSVMs can handle overlapping features. Table 3 shows that they outperform HVS, with relative error reductions of 36.6% and 43.3%, respectively. In addition, the superior performance of HMSVMs over CRFs shows the advantage of HMSVMs in learning nonlinear discriminant functions via kernel functions.

We further compare our proposed learning approach with two other methods. One is a hybrid generative/discriminative framework (HF) [23] which combines HVS with HMSVMs so as to allow the incorporation of arbitrary features as in CRFs. The other is a discriminative approach (DT) based on parse error measure to train the HVS model [24]. The generalized probabilistic descent (GPD) algorithm [25] was employed to adjust the HVS model to achieve the minimum parse error rate.
Table 3 shows that our proposed learning approach outperforms both HF and DT. Training statistical models on abstract annotations allows the calculation of the conditional likelihood and hence results in direct optimization of an objective function that reduces the error rate of semantic labeling. By contrast, the hybrid framework first uses the HVS parser to generate full annotations for training HMSVMs, a process that involves optimizing two different objective functions (one for HVS and another for HMSVMs). Although DT also uses an objective function that aims to reduce the semantic parsing error rate, it is in fact employed for supervised reranking, where the input consists of the best parse results generated by the HVS model.
5. Conclusions
In this paper, we have proposed an effective learning approach which can train statistical models such as CRFs and HMSVMs without using expensive treebank-style annotated data. Instead, it trains the statistical models from only abstract annotations in a constrained way. Experimental results show that, using the proposed learning approach, both CRFs and HMSVMs outperform the previously proposed HVS model on the DARPA communicator data. Furthermore, they also show superior performance over two other methods, the hybrid framework (HF) combining HVS and HMSVMs and discriminative training (DT) of the HVS model, with relative error reduction rates of about 25% and 15% in F-measure achieved when compared with HF and DT, respectively.
In future work, we will explore other score functions in the filtering step to describe the precision of the parsing results. We also plan to apply the proposed framework to other domains such as information extraction and opinion mining.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The submitted paper is an extended version of the CIKM 2011 conference paper titled "A novel framework of training hidden Markov support vector machines from lightly annotated data." The authors thank the anonymous reviewers for their insightful comments. This work was funded by the National Natural Science Foundation of China (61103077), the Ph.D. Programs Foundation of the Ministry of Education of China for Young Faculties (20100092120031), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, and the Fundamental Research Funds for the Central Universities (the Cultivation Program for Young Faculties of Southeast University).
References
 J. Dowding, R. Moore, F. Andry, and D. Moran, “Interleaving syntax and semantics in an efficient bottom-up parser,” in Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pp. 110–116, Las Cruces, NM, USA, 1994.
 W. Ward and S. Issar, “Recent improvements in the CMU spoken language understanding system,” in Proceedings of the Workshop on Human Language Technology, pp. 213–216, Plainsboro, NJ, USA, 1994.
 J. D. Lafferty, A. McCallum, and F. C. N. Pereira, “Conditional random fields: probabilistic models for segmenting and labeling sequence data,” in Proceedings of the 18th International Conference on Machine Learning (ICML ’01), pp. 282–289, 2001.
 Y. Altun, I. Tsochantaridis, and T. Hofmann, “Hidden Markov support vector machines,” in Proceedings of the International Conference on Machine Learning, pp. 3–10, 2003.
 Y. He and S. Young, “Semantic processing using the hidden vector state model,” Computer Speech and Language, vol. 19, no. 1, pp. 85–106, 2005.
 R. J. Kate and R. J. Mooney, “Using string-kernels for learning semantic parsers,” in Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (ACL ’06), pp. 913–920, 2006.
 Y. W. Wong and R. J. Mooney, “Learning synchronous grammars for semantic parsing with lambda calculus,” in Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL ’07), pp. 960–967, June 2007.
 W. Lu, H. Ng, W. Lee, and L. Zettlemoyer, “A generative model for parsing natural language to meaning representations,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP ’08), pp. 783–792, Stroudsburg, PA, USA, October 2008.
 R. Ge and R. Mooney, “Learning a compositional semantic parser using an existing syntactic parser,” in Proceedings of the 47th Annual Meeting of the ACL, pp. 611–619, 2009.
 M. Dinarelli, A. Moschitti, and G. Riccardi, “Discriminative reranking for spoken language understanding,” IEEE Transactions on Audio, Speech and Language Processing, vol. 20, no. 2, pp. 526–539, 2012.
 L. S. Zettlemoyer and M. Collins, “Learning to map sentences to logical form: structured classification with probabilistic categorial grammars,” in Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI ’05), pp. 658–666, July 2005.
 A. Giordani and A. Moschitti, “Syntactic structural kernels for natural language interfaces to databases,” in Machine Learning and Knowledge Discovery in Databases, W. Buntine, M. Grobelnik, D. Mladenić, and J. Shawe-Taylor, Eds., vol. 5781 of Lecture Notes in Computer Science, pp. 391–406, Springer, Berlin, Germany, 2009.
 A. Giordani and A. Moschitti, “Translating questions to SQL queries with generative parsers discriminatively reranked,” in Proceedings of the 24th International Conference on Computational Linguistics, pp. 401–410, 2012.
 J. Zelle and R. Mooney, “Learning to parse database queries using inductive logic programming,” in Proceedings of the AAAI, pp. 1050–1055, 1996.
 F. Jiao, S. Wang, C.-H. Lee, R. Greiner, and D. Schuurmans, “Semi-supervised conditional random fields for improved sequence segmentation and labeling,” in Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL ’06), pp. 209–216, July 2006.
 G. S. Mann and A. McCallum, “Efficient computation of entropy gradient for semi-supervised conditional random fields,” in Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL ’07), pp. 109–112, 2007.
 Y. Wang, G. Haffari, S. Wang, and G. Mori, “A rate distortion approach for semi-supervised conditional random fields,” in Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS ’09), pp. 2008–2016, December 2009.
 G. S. Mann and A. McCallum, “Generalized expectation criteria for semi-supervised learning of conditional random fields,” in Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pp. 870–878, June 2008.
 T. Grenager, D. Klein, and C. D. Manning, “Unsupervised learning of field segmentation models for information extraction,” in Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL ’05), pp. 371–378, Ann Arbor, Mich, USA, June 2005.
 J. A. Bilmes, “A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models,” in Proceedings of the International Conference on Systems Integration, 1997.
 S. Shalev-Shwartz, Y. Singer, and N. Srebro, “Pegasos: primal estimated sub-gradient solver for SVM,” in Proceedings of the 24th International Conference on Machine Learning (ICML ’07), pp. 807–814, June 2007.
 J. Nocedal, “Updating quasi-Newton matrices with limited storage,” Mathematics of Computation, vol. 35, no. 151, pp. 773–782, 1980.
 D. Zhou and Y. He, “A hybrid generative/discriminative framework to train a semantic parser from an unannotated corpus,” in Proceedings of the 22nd International Conference on Computational Linguistics (COLING ’08), pp. 1113–1120, Manchester, UK, August 2008.
 D. Zhou and Y. He, “Discriminative training of the hidden vector state model for semantic parsing,” IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 1, pp. 66–77, 2009.
 H. K. J. Kuo, E. Fosler-Lussier, H. Jiang, and C.-H. Lee, “Discriminative training of language models for speech recognition,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’02), vol. 1, pp. 325–328, IEEE, May 2002.
Copyright
Copyright © 2014 Deyu Zhou and Yulan He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.