DOI: 10.1145/1143844.1143957
Article

Full Bayesian network classifiers

Published: 25 June 2006

ABSTRACT

The structure of a Bayesian network (BN) encodes variable independence. Learning the structure of a BN, however, typically has high computational complexity. In this paper, we explore and represent variable independence in learning conditional probability tables (CPTs) instead of in learning structure. A full Bayesian network is used as the structure, and a decision tree is learned for each CPT. The resulting model is called a full Bayesian network classifier (FBC). In learning an FBC, learning the decision trees for the CPTs essentially captures both variable independence and context-specific independence. We present a novel, efficient decision tree learning algorithm, which is also effective in the context of FBC learning. In our experiments, the FBC learning algorithm demonstrates better performance in both classification and ranking compared with other state-of-the-art learning algorithms. In addition, its reduced effort on structure learning keeps its time complexity low.
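The abstract's central idea — storing each CPT as a decision tree so that context-specific independence collapses redundant table entries — can be illustrated with a minimal sketch. This is not the paper's learning algorithm; the class names (`Leaf`, `Split`) and the toy CPT are illustrative assumptions, showing only how a tree-structured CPT answers the same queries as a full table with fewer parameters.

```python
# Illustrative sketch (not the paper's algorithm): a CPT stored as a
# decision tree exploits context-specific independence -- when some
# parent values render the remaining parents irrelevant, an entire
# sub-table collapses into a single leaf.

class Leaf:
    """Leaf holding P(X = 1 | context reaching this leaf)."""
    def __init__(self, p):
        self.p = p

class Split:
    """Internal node testing one parent variable (binary-valued here)."""
    def __init__(self, parent, children):
        self.parent = parent        # name of the parent variable tested
        self.children = children    # dict: parent value -> subtree

def lookup(node, assignment):
    """Walk the tree with a full parent assignment; return P(X = 1 | parents)."""
    while isinstance(node, Split):
        node = node.children[assignment[node.parent]]
    return node.p

# Toy CPT for X with parents A and B: when A = 0, X is independent of B
# (context-specific independence), so one leaf covers both values of B.
# A tabular CPT would need 4 entries; the tree needs 3 leaves.
cpt = Split("A", {
    0: Leaf(0.2),                                   # P(X=1 | A=0), B irrelevant
    1: Split("B", {0: Leaf(0.7), 1: Leaf(0.9)}),    # B matters only when A=1
})

print(lookup(cpt, {"A": 0, "B": 1}))   # 0.2 -- B ignored in this context
print(lookup(cpt, {"A": 1, "B": 1}))   # 0.9
```

The saving grows with the number of parents: in a full BN the class node and many attributes can be parents of each CPT, so a table is exponential in the parent count, while a tree only grows where the data shows genuine dependence.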


Published in

ICML '06: Proceedings of the 23rd International Conference on Machine Learning
June 2006, 1154 pages
ISBN: 1595933832
DOI: 10.1145/1143844

Copyright © 2006 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ICML '06 paper acceptance rate: 140 of 548 submissions, 26%. Overall acceptance rate: 140 of 548 submissions, 26%.
