Journal of Robotics
Volume 2012 (2012), Article ID 505191, 15 pages
Robots Learn Writing
1Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37240, USA
2Institute of Robotics and Automatic Information System, Nankai University, Tianjin 300071, China
3Graduate School of Decision and Technology, Tokyo Institute of Technology, Tokyo 152-8552, Japan
Received 19 March 2012; Accepted 19 June 2012
Academic Editor: Huosheng Hu
Copyright © 2012 Huan Tan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.