Journal of Biomedicine and Biotechnology
Volume 2008, Article ID 218097, 7 pages
http://dx.doi.org/10.1155/2008/218097
Research Article

Classification Models for Early Detection of Prostate Cancer

1Institute of Medical Informatics, Charité - Universitätsmedizin, Hindenburgdamm 30, 12200 Berlin, Germany
2Molecular Modelling Group, Institut für Molekulare Pharmakologie, Robert Rössle Straße 10, 13125 Berlin, Germany
3Department of Urology, Charité - Universitätsmedizin, Charitéplatz 1, 10098 Berlin, Germany

Received 12 September 2007; Accepted 2 January 2008

Academic Editor: Daniel Howard

Copyright © 2008 Joerg D. Wichard et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
