Abstract and Applied Analysis
Volume 2013, Article ID 715275, 7 pages
http://dx.doi.org/10.1155/2013/715275
Research Article

Least Square Regularized Regression for Multitask Learning

1Department of Mathematics, Beijing University of Chemical Technology, Beijing 100029, China
2Department of Mathematics, Beijing University of Aeronautics and Astronautics, Beijing 100091, China
3Department of Systems Engineering and Engineering Management, City University of Hong Kong, Hong Kong

Received 11 October 2013; Accepted 13 November 2013

Academic Editor: Yiming Ying

Copyright © 2013 Yong-Li Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
