ABSTRACT
Two-dimensional contingency or co-occurrence tables arise frequently in important applications such as text, web-log and market-basket data analysis. A basic problem in contingency table analysis is co-clustering: simultaneous clustering of the rows and columns. A novel theoretical formulation views the contingency table as an empirical joint probability distribution of two discrete random variables and poses the co-clustering problem as an optimization problem in information theory: the optimal co-clustering maximizes the mutual information between the clustered random variables subject to constraints on the number of row and column clusters. We present an innovative co-clustering algorithm that monotonically increases the preserved mutual information by intertwining both the row and column clusterings at all stages. Using the practical example of simultaneous word-document clustering, we demonstrate that our algorithm works well in practice, especially in the presence of sparsity and high dimensionality.
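The objective described above can be sketched directly: given the empirical joint distribution p(x, y) obtained by normalizing the contingency table, a co-clustering is judged by how much of the mutual information I(X;Y) survives aggregation into the clustered joint distribution. The snippet below is a toy illustration of that criterion, not the authors' iterative algorithm; the helper names `mutual_info` and `clustered_joint` are introduced here for illustration.

```python
import numpy as np

def mutual_info(p):
    """Mutual information I(X;Y) of a joint distribution p(x, y), in nats."""
    px = p.sum(axis=1, keepdims=True)   # marginal p(x)
    py = p.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = p > 0                        # skip zero cells (0 * log 0 := 0)
    return float(np.sum(p[mask] * np.log(p[mask] / (px @ py)[mask])))

def clustered_joint(p, row_labels, col_labels, k, l):
    """Aggregate p(x, y) into the k-by-l clustered joint p(x_hat, y_hat)."""
    q = np.zeros((k, l))
    for i, r in enumerate(row_labels):
        for j, c in enumerate(col_labels):
            q[r, c] += p[i, j]
    return q

# Toy 4x4 word-document co-occurrence table, normalized to a joint distribution.
t = np.array([[5., 5., 0., 0.],
              [4., 4., 0., 0.],
              [0., 0., 3., 3.],
              [0., 0., 6., 6.]])
p = t / t.sum()

# A co-clustering aligned with the block structure vs. a misaligned one.
good = clustered_joint(p, [0, 0, 1, 1], [0, 0, 1, 1], k=2, l=2)
bad  = clustered_joint(p, [0, 1, 0, 1], [0, 1, 0, 1], k=2, l=2)

# The aligned clustering preserves all of I(X;Y); the misaligned one loses it.
print(mutual_info(p), mutual_info(good), mutual_info(bad))
```

The algorithm in the paper searches over row and column assignments to minimize the loss `mutual_info(p) - mutual_info(q)`; here the block-diagonal table makes the optimal assignment obvious by inspection.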
Index Terms
- Information-theoretic co-clustering