Fei Zhu, Quan Liu, Hui Wang, Xiaoke Zhou, Yuchen Fu, "Unregistered Biological Words Recognition by Q-Learning with Transfer Learning", The Scientific World Journal, vol. 2014, Article ID 173290, 9 pages, 2014. https://doi.org/10.1155/2014/173290
Unregistered Biological Words Recognition by Q-Learning with Transfer Learning
Unregistered biological word recognition is the process of identifying terms that are out of vocabulary. Although many approaches have been developed, their performance is not yet satisfactory. As the identification process can be viewed as a Markov process, we put forward a Q-learning with transfer learning algorithm to detect unregistered biological words in texts. With Q-learning, the recognizer can attain the optimal identification solution through interaction with the texts and contexts. During processing, a transfer learning approach is utilized to take full advantage of knowledge gained in a source task to speed up learning in a different but related target task. A mapping, required by many transfer learning methods, which relates features of the source task to those of the target task, is carried out automatically under the reinforcement learning framework. We examined the performance of three approaches on the GENIA corpus and JNLPBA04 data. The proposed approach improved performance in both experiments: its precision, recall rate, and F-score surpassed those of a conventional unregistered word recognizer as well as those of the Q-learning approach without transfer learning.
From the perspective of computational linguistics, unregistered words are those that are out of vocabulary. They may be terms not documented in the vocabulary or newly coined ones. Studies of unregistered words mainly focus on their automatic recognition. Approaches to recognizing unregistered words are divided into rule-based approaches, statistics-based approaches, and rule-statistics hybrid approaches. Many unregistered word recognition systems have worked fairly well so far, attaining high precision in identifying general unregistered words. However, there are few recognition systems for dedicated domains, such as recognizers for biological terms.
Recognition of biological terms is the most important step in the extraction of biological knowledge, with the overall aim of identifying specific terms, such as genes, proteins, diseases, and drugs. Numerous computing technologies have already been employed. However, it is difficult to correctly identify biological terms in texts because they often mix alphabetic characters, digits, hyphens, and other characters [2–6]. The arbitrary way in which biological terms are coined makes automatic recognition even harder. In biological text, biological named entities are usually multiword phrases, and some have prefixes and/or suffixes, which makes it harder to determine term boundaries. Biological terms are also affected by their context, and in some cases a biological term has different meanings across species. As a result, it is difficult for computers to recognize biological terms automatically, and general term recognition systems do not work well when applied to detecting biological terms.
Considering the importance of the task and the inability of current approaches to identify unregistered words, we propose a novel approach to recognize such words based on transfer learning, by which we turn the process of recognizing terms into a property-marking process, redefining the property of a term according to its features and the corresponding context. The approach combines features of extracted candidate terms with transfer-based error-driven learning to identify terms, which makes it easier to recognize terms with composite structure. Moreover, since the learning of the rules and the feature extraction of terms rely completely on machine learning methods, the subjectivity of manual extraction can be avoided effectively, and the approach can fit a new application well given a new training sample set.
2. Unregistered Words Recognition
The approaches to recognizing words are divided into rule-based approaches, statistics-based approaches, and rule-statistics hybrid approaches. At present, statistics-based approaches rely on frequency information of words, while rule-based approaches depend on features of the context.
The rule-based approach generates a rule set or pattern base from morphological features of newly encountered words and identifies unregistered words by matching the rules or patterns. The statistics-based approach uses statistical measures to extract candidate strings and then either applies linguistic knowledge to exclude false unregistered words or employs statistical models, such as SVM [7–11], the t-test, the n-gram model, HMM [14–16], CRF [17–19], neural network models, maximum entropy models [21–23], and other hybrid approaches, to find the most relevant substring.
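As an illustration of the statistical extraction step, the following is a minimal frequency-based candidate extractor; this is only a sketch, and real systems would pass the candidates through one of the filtering models listed above.

```python
from collections import Counter

def candidate_ngrams(tokens, n=2, min_freq=2):
    """Extract frequent n-grams as candidate unregistered-term strings.
    A hypothetical sketch: frequency alone draws candidates, which would
    then be filtered by linguistic rules or a statistical model."""
    counts = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return [" ".join(gram) for gram, c in counts.items() if c >= min_freq]
```

For example, in a token stream where the bigram "interleukin 2" recurs, it would surface as a candidate while one-off bigrams are dropped.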
Generally, a rule-based unregistered word recognition system can achieve high recognition precision thanks to the high-quality knowledge encoded in expert-made rules, and it has the advantages of small system overhead and fast running speed. However, establishing the rules depends largely on manual effort, making it difficult to ensure their consistency. As the rule set grows, regular maintenance becomes harder and harder. What is more, when no exactly matching rule is found, the system has trouble making an appropriate decision.
The statistics-based approaches use mathematical statistics as well as the confidence of word composition to extract different kinds of knowledge for recognizing unregistered words. This kind of approach is easy to implement. Combining the confidence of word composition allows context and experience to be considered to a larger extent, turning the binary rule, true or false, into a quantitative index. However, acquiring the statistical information depends on a training corpus that requires much manual effort. Moreover, the computation cost is higher than that of rule-based approaches and the recognition precision is lower.
In practice, many applications use the combination of the two approaches.
We use precision rate, recall rate, and F-score to evaluate recognition performance. With $TP$, $FP$, and $FN$ denoting the numbers of true positives, false positives, and false negatives, they are defined as
$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F = \frac{2PR}{P + R}.$$
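A minimal Python helper computing these three evaluation measures from raw counts (names are illustrative):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F-score from counts of true positives,
    false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    f_score = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_score
```

For instance, a recognizer that finds 8 true terms while emitting 2 spurious ones and missing 2 gets precision, recall, and F-score of 0.8 each.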
3. Introduction to Transfer Learning
Under the conventional machine learning framework, the objective is to fit a model for prediction on the basis of abundant given training data. However, machine learning algorithms require great amounts of training data, which costs vast manual effort and material resources.
What is more, traditional machine learning assumes that training data and testing data obey identical distributions, an assumption that cannot be satisfied in many circumstances. Training data are also likely to become outdated, requiring us to re-annotate plenty of training data to meet the training need, which is very expensive in manual effort and material resources.
Therefore it is very important to take full advantage of old training data. Transfer learning, which aims at helping a learning task in a new circumstance with knowledge learned in another circumstance, can transfer knowledge from existing data to aid future learning. Transfer learning does not rely on the identical-distribution assumption of traditional machine learning.
At present, the work on transfer learning can be divided into three parts: instance-based isomorphic space transfer learning, feature-based isomorphic space transfer learning, and heterogeneous space transfer learning , among which instance-based isomorphic space transfer learning turns out to have stronger knowledge transfer ability, feature-based isomorphic space transfer learning has broader knowledge transfer ability, and heterogeneous space transfer learning has stronger study and extension ability. Each has its own merits.
3.1. Instance-Based Isomorphic Space Transfer Learning
The basic idea of instance-based isomorphic space transfer learning is that, despite the difference between assistant training data and source training data, part of the assistant training data should be suitable for training an efficient classification model that fits the test data.
Consequently, the aim of instance-based isomorphic space transfer learning is to find instances suitable for the test data within the assistant training data and to transfer those instances into the learning on the source training data. Some researchers extended the traditional AdaBoost algorithm and proposed TrAdaBoost, a boosting algorithm with transfer ability, which takes full advantage of assistant training data to help classify the target data in instance-based transfer learning.
Instance-based isomorphic space transfer learning works only when the source data is extremely similar to the auxiliary data; it is very difficult for it to find transferable knowledge when there are great differences between the two. However, source data and auxiliary data may still overlap at the feature level even when they share no common knowledge at the instance level.
3.2. Feature-Based Isomorphic Space Transfer Learning
There is much research on feature-based isomorphic space transfer learning, such as the COCC algorithm, the TPLSA algorithm, the spectrum algorithm, and the self-learning algorithm, which use clustering to produce common features for the learning algorithm.
The basic idea of feature-based isomorphic space transfer learning is to use a mutual clustering algorithm to cluster the source data and auxiliary data together, obtaining common features that are better than features based on the source data alone, and to realize transfer learning through the source data in the new feature space. Following this idea, feature-based supervised transfer learning and feature-based unsupervised transfer learning have been proposed.
The work on supervised feature-based transfer learning depends on mutual-clustering-based interdisciplinary classification; that is, it asks how to use existing annotated data in the original field to conduct transfer learning when only a few sparse annotated data exist in the new, different field. A unified information-theoretic formulation is defined for the interdisciplinary classification problem, in which the mutual clustering problem turns into the optimization of an objective function. In general, the objective function is defined as the loss of mutual information among the source data, the common feature space, and the auxiliary data.
The work on the self-learning clustering algorithm can be categorized as feature-based unsupervised transfer learning, which fits the case where no annotated data is available in either field. What we then have to deal with is how to utilize plenty of unannotated auxiliary data for transfer learning. The basic idea of self-learning clustering is to obtain common features through clustering on the source data and auxiliary data together. As the new features also draw on the auxiliary data, they tend to be better than those derived from the source data alone.
The two learning strategies introduced above solve transfer learning problem that is based on features of source data and auxiliary data in identical feature space. There is also another kind of transfer learning that is based on features across feature spaces, solving the case that source data and auxiliary data are in different spaces.
3.3. Heterogeneous Space Transfer Learning
Heterogeneous space transfer learning aims at solving problems where the source data and auxiliary data exist in different spaces. Plenty of easily obtained annotated data are then utilized to solve a problem with few annotated data.
Some work used data with two views as a bridge to connect the feature spaces of the two data spaces, in effect acting as a translator between them. Through the translator, the nearest neighbor algorithm is combined with translated features to map auxiliary data into the source data feature space, thus generating a uniform model for learning.
3.4. Application of Transfer Learning in Natural Language Processing
Instance-based methods in natural language processing were first proposed for machine translation: the example sentence most similar to the input sentence is found in a large-scale bilingual corpus, and its target-language counterpart, after appropriate adjustment, is used as the translation of the input sentence.
Recently, instance-based approaches for natural language processing have shown some flaws despite their fairly good performance. The main reason is that the longest-match principle used to resolve rule conflicts cannot guarantee full applicability. The second reason is that the set of patterns chosen after corpus pruning, despite having some generality, cannot assure the correctness of the results annotated by those patterns in all cases.
The transfer-learning approach has been applied to part-of-speech tagging with performance as good as that of statistics-based approaches. Its advantage is the ability to make tagging decisions over a richer event set. Moreover, some research showed that it is easier to understand and revise.
The advantages of the transfer learning based approach can exactly make up for the disadvantages of the instance-based approach. Therefore, our approach builds on the fundamental idea of the instance-based approach while making use of transfer learning. The proposed approach obtains proper nouns from the corpus and extracts relevant elements, defined as feature information on the composition and structure of proper nouns, from the basic proper noun strings. A transfer learning based component then draws rules from the basic proper noun strings. Finally, annotations are tagged onto the candidate proper nouns.
Reinforcement learning provides a framework to learn directly from interaction and achieve goals [35, 36]. The reinforcement learning framework is abstract and flexible and can be applied in many different applications.
In the artificial intelligence field, an agent is defined as an entity that has cognitive skills, the ability to solve problems, and the ability to communicate with the outside environment. With agents, we can establish systems for control modeling. An agent-based model is in fact an anthropomorphic model; as a result, we can model the behavior of people in the system and unify other control units, providing a unified description method. Agents connected through a network act as intelligent nodes on the network, constructing a distributed multiagent system.
In the reinforcement learning framework, an agent, named the controller, is the learner and decision-maker, interacting with the environment outside the agent. The controller chooses an action; the environment responds to the action, generates a new scene for the agent, and returns a reward. The framework [35, 37, 38] of reinforcement learning is shown in Figure 1.
The controller interacts with the environment at each step of a discrete-time sequence $t = 0, 1, 2, \ldots$. At each time step $t$, the agent receives a representation of the environment, the state $s_t \in S$, where $S$ is the set of all possible states; the controller then chooses an action $a_t \in A(s_t)$ according to its policy, where $A(s_t)$ is the set of actions available in $s_t$. By taking the action, the agent receives a reward $r_{t+1}$ and moves to a new state $s_{t+1}$. The ultimate goal of the controller is to maximize the sum of the rewards in the long term. The mapping from states to action selection is the agent's policy, denoted by $\pi$. Reinforcement learning studies how the agent improves its policy through experience.
Temporal difference (TD) learning is capable of learning directly from raw experience without determining a dynamic model of the environment in advance [36, 38]. Moreover, the model learned by temporal difference is updated by estimates based on partial learning rather than on the final result of learning. These two characteristics make temporal difference particularly suitable for prediction and control problems in real-time control applications. Given some experience following a policy $\pi$, temporal difference learning updates its estimate $V$ of $v_\pi$ as
$$V(s_t) \leftarrow V(s_t) + \alpha \left[ G_t - V(s_t) \right],$$
where $G_t$ is the actual return after time step $t$ and $\alpha$ is a step-size parameter. Temporal difference learning updates $V$ at step $t+1$ using the observed reward $r_{t+1}$ and the estimate $V(s_{t+1})$:
$$V(s_t) \leftarrow V(s_t) + \alpha \left[ r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \right].$$
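A minimal sketch of one TD(0) update step as just described (the table `V` and the state names are illustrative):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(s) toward the bootstrapped target
    r + gamma * V(s_next), without waiting for the final return."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
```

Starting from V(s) = 0 with V(s') = 1, a reward of 1 moves V(s) to 0.1 * (1 + 0.9 * 1) = 0.19, illustrating the partial, step-by-step update.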
Let $Q^\pi(s, a)$ be the value of taking action $a$ in state $s$ under a policy $\pi$. $Q^\pi(s, a)$ is defined as
$$Q^\pi(s, a) = E_\pi \left[ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\middle|\, s_t = s,\ a_t = a \right].$$
Q-learning is an off-policy version of TD control, defined by
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right].$$
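The Q-learning update can be sketched as follows; the tabular Q-function and state/action names are illustrative:

```python
from collections import defaultdict

def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Off-policy Q-learning update: the target uses the greedy max over
    next-state actions, regardless of the action the policy actually takes."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```

With all Q-values initialized to zero, a single reward of 1 for tagging in state s raises Q(s, tag) to 0.1 under the default step size.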
The term identification process is in fact the process of deciding which label should be tagged to a word, and it can thus be viewed as a Markov decision process, denoted by $(S, A, R)$, where $S$ represents the tagging states, $A$ stands for the actions of the controller, and $R$ indicates the return attained.
4.1. Definition of State
Definition 1. Proper noun feature word (F) is the word that reflects the categorization character of the unregistered word. There are prefix feature word (PF), intermediate feature word (IF), and suffix feature word (SF) according to the different position of the feature word in the proper noun.
Definition 2. Conjunctive word (J) is the conjunction part of a proper noun word to connect the words.
Definition 3. Word boundary (B) denotes the boundary word of proper noun word and its contexts. The left word boundary (LB) represents the previous context of the word and the right word boundary (RB) is the following context of the word.
Definition 4. Other word (O) is the word that is not any part of proper noun.
Hereby we define seven states for a candidate term, as listed in Table 1.
4.2. Definition of Action
In the reinforcement learning framework, the policy defines the learning agent's behavior at a given time; it is in fact a mapping from perceived states to available actions. A reinforcement learning model obtains rewards by mapping scenes to actions, which affect not only the immediate reward but also the next scene, so that all subsequent rewards are influenced. The specific states and actions differ greatly across applications.
Definition 6. Positive rules are those by which features are determined as proper nouns.
Definition 7. Negative rules are those by which features are not determined as proper nouns.
Definition 8. Neuter rules are those by which features can be determined neither as proper nouns nor as non-proper nouns.
We refer to a feature whose valid information value is less than the average valid value as a low-information-value feature and to one greater than the average as a high-information-value feature.
We extract positive rules from features with low information value, supplemented by negative rules, and draw negative rules from features with high information value, supplemented by positive rules. The advantage of this policy is that we can control the total number of rules and thus save search space and storage space.
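The feature-splitting step behind this policy can be sketched as follows (a hypothetical helper; the dictionary of per-feature valid values is an assumed input):

```python
def split_by_valid_value(feature_values):
    """Partition features into low- and high-information groups around
    the mean valid value, as in the rule-extraction policy above.
    `feature_values` maps feature name -> valid information value."""
    mean = sum(feature_values.values()) / len(feature_values)
    low = [f for f, v in feature_values.items() if v < mean]
    high = [f for f, v in feature_values.items() if v >= mean]
    return low, high
```

Positive-rule extraction would then run primarily over `low` and negative-rule extraction primarily over `high`.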
We define three actions that a controller can choose in a certain state, as in Table 2.
4.3. Definitions of Reward and Return
Reward function in reinforcement learning defines the goal of the problem. The perceived state of the environment is mapped to a value, reward, representing internal needs of the state. The ultimate goal of reinforcement learning agent is to maximize the total reward in long term.
In our work, the controller makes decisions under different combinations of word-annotation pairs, so that by its actions we can maximize the number of correctly tagged unregistered words. Here, we use an annotation quality indicator to evaluate the behavior. Given a feature, we use the valid annotation value to score the quality of the feature as
The reward of tagging is given by:
4.4. Transfer Learning
Transfer learning involves reusing knowledge learned from earlier tasks to learn new problems more effectively. The task learned previously is called the source task and the new task is called the target task. Figure 2 shows how the action-value function (Q-value) reuses the empirical knowledge gained from the source corpus.
We use Q-value reuse for the transfer: the action-value function $Q_{\text{source}}$ learned from the source corpus is used as a starting point for the new problem, and a new action-value function $Q_{\text{target}}$ is learned to correct errors in the source action-value function. However, the source state and action spaces may not coincide with the target state and action spaces, so the controller must be given a mapping between the source and target tasks. The controller's combined function is then given by
$$Q(s, a) = Q_{\text{target}}(s, a) + Q_{\text{source}}\bigl(\chi_S(s), \chi_A(a)\bigr),$$
where $\chi_S$ and $\chi_A$ map target states and actions to source states and actions.
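The Q-value reuse scheme can be sketched as follows; the mapping functions and the tabular Q-functions are illustrative assumptions:

```python
def transferred_q(q_target, q_source, map_state, map_action):
    """Q-value reuse: the source Q-function, looked up through inter-task
    mappings, is added to the target Q-function as a starting bias.
    `map_state`/`map_action` translate target states/actions to source ones."""
    def Q(s, a):
        return q_target.get((s, a), 0.0) + q_source.get(
            (map_state(s), map_action(a)), 0.0)
    return Q
```

Early in target-task learning, `q_target` is near zero and behavior is driven by the transferred source values; as learning proceeds, `q_target` accumulates corrections to the source estimates.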
The goal of transfer learning algorithms is to utilize knowledge gained in a source task to speed up learning. Algorithm 1 generates a transfer function for reinforcement learning.
4.5. Unregistered Words Identification by Q-Learning with Transfer Learning
The processing flow for identifying unregistered words by Q-learning with transfer learning is as follows.
Step 1 (Tagging initially). Use an initial tagger to annotate the training corpus.
Step 2 (Generating a set of candidate rules). For each incorrectly tagged term, the rule template is used to generate candidate rules. The state in the rule condition is the context of the word, and the action is to amend the incorrect tag.
Step 3 (Attaining rules). Apply each rule in the candidate rule set to the annotated corpus to obtain a tagging result, compare the result with the standard answer, and select the rule with the highest evaluation score. Use the result returned by that rule as the basis for the next iteration, and assign the rule the highest priority.
Step 4. Repeat the above steps until the evaluation score falls below a predefined threshold.
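The iterative rule learning of Steps 1–4 can be sketched as a greedy loop; `score` and `apply_rule` are assumed helper functions and the rule representation is illustrative:

```python
def learn_rules(corpus, gold, candidate_rules, score, apply_rule, threshold=0.0):
    """Hypothetical sketch of Steps 1-4: greedily pick the candidate rule
    whose application scores best against the gold annotation, re-annotate,
    and repeat until the best score drops to the threshold.
    `score(tagged, gold)` and `apply_rule(tagged, rule)` are assumed helpers."""
    learned = []
    while candidate_rules:
        best = max(candidate_rules,
                   key=lambda r: score(apply_rule(corpus, r), gold))
        if score(apply_rule(corpus, best), gold) <= threshold:
            break  # Step 4: stop once the evaluation score is too low
        corpus = apply_rule(corpus, best)  # Step 3: basis of next iteration
        learned.append(best)               # learned order encodes priority
        candidate_rules = [r for r in candidate_rules if r != best]
    return learned
```

The returned list is the ordered rule set: rules learned earlier have higher priority when tagging new text.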
An ordered rule set is generated through the above automatic learning process. With this approach, we can use more syntactic and semantic rules over a wider range. In particular, the tagging can be built on the basis of a word and its corresponding context. Transfer-based tagging requires much less computation than most Markov-based models. What attracts us most is that the transfer-based approach is free from the overtraining that most hidden Markov models suffer from.
5. Experiment and Results
There are numerous benchmark corpora for biological term identification, such as the GENIA data set and the JNLPBA04 shared task data set. The GENIA corpus contains 2,000 MEDLINE abstracts with more than 400,000 words and almost 100,000 annotations of biological terms. JNLPBA04 comprises several shared tasks on natural language processing in biomedicine and its applications. Both data sets are often used as benchmarks for evaluation.
We carried out two rounds of testing. In the first round, we randomly selected terms from the GENIA corpus and divided them into two parts, one for training and the other for testing. In this experiment, we identified five kinds of unregistered biological terms: DNA, RNA, cell line, protein, and cell type. We used precision, recall rate, and F-score to evaluate the results, which are shown in Figure 3.
In the second round of testing, we randomly selected terms from JNLPBA04 and divided them into two parts, one for training and the other for testing. The results are shown in Figure 4.
We can see from both results that the Q-learning based recognizer is generally better than the general unregistered word recognizer, though not in all cases. This is because Q-learning pursues the policy that yields the best identification effect to the controller's knowledge; however, this does not always succeed, as the controller may fall into a local optimum. When we add knowledge through transfer learning, the recognizer gains a remarkable improvement on all three evaluation measures. We can therefore say that the Q-learning with transfer learning approach is the best of the three.
Huge numbers of biological texts provide a highly reliable information source for biological research. How to mine information and discover new knowledge efficiently and effectively is a very important issue for researchers. Recognizing unregistered biological words in texts is essential to biological text mining.
In this work, we proposed an approach to recognize unregistered biological words. The approach uses the Q-learning algorithm to attain the optimal solution for choosing a term's tag and takes advantage of transfer learning to make full use of existing knowledge (Algorithm 2).
We carried out two rounds of testing on the three approaches. In the first round, the three recognizers identified unregistered words in the GENIA corpus; in the second round, they identified unregistered words in the JNLPBA04 corpus. Both sets of results showed that the Q-learning with transfer learning approach was the best of the three, genuinely improving the performance of unregistered biological word identification.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61303108, 61272005, and 61373094) and the High School Natural Foundation of Jiangsu (13KJB520020). Fei Zhu conceived and designed the experiments. Xiaoke Zhou performed the experiments. Fei Zhu analyzed the data. Xiaoke Zhou contributed reagents/materials/analysis tools. Fei Zhu wrote the paper.
- F. Zhu, P. Patumcharoenpol, C. Zhang et al., “Biomedical text mining and its applications in cancer research,” Journal of Biomedical Informatics, vol. 46, no. 2, pp. 197–376, 2013.
- T. Rocktäschel, M. Weidlich, and U. Leser, “ChemSpot: a hybrid system for chemical named entity recognition,” Bioinformatics, vol. 28, no. 12, pp. 1633–1640, 2012.
- K. B. Cohen and L. Hunter, “Getting started in text mining,” PLoS Computational Biology, vol. 4, no. 1, article e20, pp. 1–3, 2008.
- B. Kolluru, L. Hawizy, P. Murray-Rust, J. Tsujii, and S. Ananiadou, “Using workflows to explore and optimise named entity recognition for chemistry,” PLoS ONE, vol. 6, no. 5, Article ID e20181, 2011.
- Y. Sasaki, Y. Tsuruoka, J. McNaught, and S. Ananiadou, “How to make the most of NE dictionaries in statistical NER,” BMC Bioinformatics, vol. 9, supplement 11, article S5, 2008.
- A. Ritter, C. Sam, M. Mausam, and O. Etzioni, “Named entity recognition in tweets: an experimental study,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11), pp. 1524–1534, July 2011.
- T. Joachims, “Training linear SVMs in linear time,” in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '06), pp. 217–226, Philadelphia, Pa, USA, August 2006.
- C.-C. Chang and C.-J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.
- C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2007.
- Z. Ju, J. Wang, and F. Zhu, “Named entity recognition from biomedical text using SVM,” in Proceedings of the 5th International Conference on Bioinformatics and Biomedical Engineering (iCBBE '11), pp. 1–4, May 2011.
- G. D. Zhou, “Recognizing names in biomedical texts using mutual information independence model and SVM plus sigmoid,” International Journal of Medical Informatics, vol. 75, no. 6, pp. 456–467, 2006.
- E. Rosten, R. Porter, and T. Drummond, “Faster and better: a machine learning approach to corner detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 105–119, 2010.
- S. Green, M.-C. De Marneffe, J. Bauer, and C. D. Manning, “Multiword expression identification with tree substitution grammars: a parsing tour de force with french,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11), pp. 725–735, July 2011.
- J. Zhang, D. Shen, G. Zhou, J. Su, and C.-L. Tan, “Enhancing HMM-based biomedical named entity recognition by studying special phenomena,” Journal of Biomedical Informatics, vol. 37, no. 6, pp. 411–422, 2004.
- L. Yeganova, L. Smith, and W. J. Wilbur, “Identification of related gene/protein names based on an HMM of name variations,” Computational Biology and Chemistry, vol. 28, no. 2, pp. 97–107, 2004.
- G. Zhou, J. Zhang, J. Su, D. Shen, and C. Tan, “Recognizing names in biomedical texts: a machine learning approach,” Bioinformatics, vol. 20, no. 7, pp. 1178–1190, 2004.
- Y. He and M. Kayaalp, “Biological entity recognition with conditional random fields,” AMIA Annual Symposium Proceedings, pp. 293–297, 2008.
- A. Usié, R. Alves, F. Solsona, M. Vázquez, and A. Valencia, “CheNER: chemical named entity recognizer,” Bioinformatics, vol. 28, no. 12, pp. 1633–1640, 2012.
- L. Li, R. Zhou, and D. Huang, “Two-phase biomedical named entity recognition using CRFs,” Computational Biology and Chemistry, vol. 33, no. 4, pp. 334–338, 2009.
- F. Zamora-Martínez, V. Frinken, S. España-Boquera et al., “Neural network language models for off-line handwriting recognition,” Pattern Recognition, vol. 47, no. 4, pp. 1642–1652, 2014.
- S. Raychaudhuri, J. T. Chang, P. D. Sutphin, and R. B. Altman, “Associating genes with gene ontology codes using a maximum entropy analysis of biomedical literature,” Genome Research, vol. 12, no. 1, pp. 203–214, 2002.
- J. Wang, W. Shao, and F. Zhu, “Biological terms boundary identification by maximum entropy model,” in Proceedings of the 6th IEEE Conference on Industrial Electronics and Applications (ICIEA '11), pp. 2446–2448, June 2011.
- S. K. Saha, S. Sarkar, and P. Mitra, “Feature selection techniques for maximum entropy based biomedical named entity recognition,” Journal of Biomedical Informatics, vol. 42, no. 5, pp. 905–911, 2009.
- F. Zhu and B. Shen, “Combined SVM-CRFs for biological named entity recognition with maximal bidirectional squeezing,” PLoS ONE, vol. 7, no. 6, pp. 1–9, 2012.
- S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.
- R. Klette, Object Detection, Concise Computer Vision, Springer, London, UK, 2014.
- W. Dai, Q. Yang, G.-R. Xue, and Y. Yu, “Boosting for transfer learning,” in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 193–200, June 2007.
- Y. Yao and D. Gianfranco, “Boosting for transfer learning with multiple sources,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1855–1862, June 2010.
- W. Dai, G.-R. Xue, Q. Yang, and Y. Yu, “Co-clustering based classification for out-of-domain documents,” in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '07), pp. 210–219, ACM, August 2007.
- G.-R. Xue, W. Dai, Q. Yang, and Y. Yu, “Topic-bridged PLSA for cross-domain text classification,” in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '08), pp. 627–634, ACM, July 2008.
- X. Ling, W. Dai, G.-R. Xue, Q. Yang, and Y. Yu, “Spectral domain-transfer learning,” in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08), pp. 488–496, ACM, August 2008.
- R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng, “Self-taught learning: transfer learning from unlabeled data,” in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 759–766, June 2007.
- J. Chen and X. Liu, “Transfer learning with one-class data,” Pattern Recognition Letters, vol. 37, pp. 32–40, 2014.
- D. T. Larose, “k-nearest neighbor algorithm,” in Discovering Knowledge in Data: An Introduction To Data Mining, pp. 90–106, John Wiley & Sons, 2005.
- H. Van Hasselt, Reinforcement Learning: State of the Art, Springer, Berlin, Germany, 2007.
- R. S. Sutton and A. G. Barto, Reinforcement Learning, MIT Press, 1998.
- C. Ye, N. H. C. Yung, and D. Wang, “A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 33, no. 1, pp. 17–27, 2003.
- K. L. Du and M. N. S. Swamy, “Reinforcement learning,” in Neural Networks and Statistical Learning, pp. 547–561, Springer, London, UK, 2014.
- L. Busoniu, R. Babuska, B. De Schutter, and D. Ernst, Reinforcement Learning and Dynamic Programming Using Function Approximators, Automation and Control Engineering Series, CRC Press, 2010.
- X. Xu, L. Zuo, and Z. Huang, “Reinforcement learning algorithms with function approximation: recent advances and applications,” Information Sciences, vol. 261, pp. 1–31, 2014.
- B. Baddeley, “Reinforcement learning in continuous time and space: interference and not Ill conditioning is the main problem when using distributed function approximators,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 4, pp. 950–956, 2008.
- J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii, “GENIA corpus—a semantically annotated corpus for bio-textmining,” Bioinformatics, vol. 19, no. 1, pp. i180–i182, 2003.
- J. D. Kim, T. Ohta, Y. Tsuruoka et al., “Introduction to the bio-entity recognition task at JNLPBA,” in Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications, pp. 70–75, 2004.
Copyright © 2014 Fei Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.