Abstract
Randomization offers new benefits for large-scale linear algebra computations.
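As a concrete illustration of the kind of benefit randomization brings, the following is a minimal sketch-and-solve least-squares example (my own illustrative sketch, not code from the article): a tall overdetermined system is compressed with a random Gaussian sketch matrix S, and the much smaller sketched problem min ||SAx - Sb|| is solved in its place.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 20  # tall, skinny least-squares problem
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Exact solution for comparison.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Gaussian sketch: s rows with s >> d typically suffice for a good
# subspace embedding; the sketched problem has only s equations.
s = 200
S = rng.standard_normal((s, n)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# The sketched solution is close to the exact one at a fraction of the cost.
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

A Gaussian sketch is the simplest choice conceptually; structured sketches (subsampled randomized Hadamard transforms, sparse embeddings) give the same guarantees with faster application, which is the focus of several of the references below.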
References
- Achlioptas, D., Karnin, Z., Liberty, E. Near-optimal entrywise sampling for data matrices. In Advances in Neural Information Processing Systems 26: Proceedings of the 2013 Conference, 2013.
- Achlioptas, D., McSherry, F. Fast computation of low-rank matrix approximations. J. ACM 54, 2 (2007), Article 9.
- Ailon, N., Chazelle, B. Faster dimension reduction. Commun. ACM 53, 2 (2010), 97--104.
- Avron, H., Maymounkov, P., Toledo, S. Blendenpik: Supercharging LAPACK's least-squares solver. SIAM J. Sci. Comput. 32 (2010), 1217--1236.
- Batson, J., Spielman, D.A., Srivastava, N., Teng, S.-H. Spectral sparsification of graphs: Theory and algorithms. Commun. ACM 56, 8 (2013), 87--94.
- Belkin, M., Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15, 6 (2003), 1373--1396.
- Boutsidis, C., Mahoney, M.W., Drineas, P. An improved approximation algorithm for the column subset selection problem. In Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (2009), 968--977.
- Candes, E.J., Recht, B. Exact matrix completion via convex optimization. Commun. ACM 55, 6 (2012), 111--119.
- Chatterjee, S., Hadi, A.S. Influential observations, high leverage points, and outliers in linear regression. Stat. Sci. 1, 3 (1986), 379--393.
- Chen, Y., Bhojanapalli, S., Sanghavi, S., Ward, R. Coherent matrix completion. In Proceedings of the 31st International Conference on Machine Learning (2014), 674--682.
- Clarkson, K. Subgradient and sampling algorithms for l1 regression. In Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms (2005), 257--266.
- Clarkson, K.L., Woodruff, D.P. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (2013), 81--90.
- Drineas, P., Kannan, R., Mahoney, M.W. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM J. Comput. 36 (2006), 132--157.
- Drineas, P., Magdon-Ismail, M., Mahoney, M.W., Woodruff, D.P. Fast approximation of matrix coherence and statistical leverage. J. Mach. Learn. Res. 13 (2012), 3475--3506.
- Drineas, P., Mahoney, M.W., Muthukrishnan, S. Sampling algorithms for l2 regression and applications. In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (2006), 1127--1136.
- Drineas, P., Mahoney, M.W., Muthukrishnan, S. Relative-error CUR matrix decompositions. SIAM J. Matrix Anal. Appl. 30 (2008), 844--881.
- Drineas, P., Mahoney, M.W., Muthukrishnan, S., Sarlós, T. Faster least squares approximation. Numer. Math. 117, 2 (2010), 219--249.
- Drineas, P., Zouzias, A. A note on element-wise matrix sparsification via a matrix-valued Bernstein inequality. Inform. Process. Lett. 111 (2011), 385--389.
- Frieze, A., Kannan, R., Vempala, S. Fast Monte-Carlo algorithms for finding low-rank approximations. J. ACM 51, 6 (2004), 1025--1041.
- Füredi, Z., Komlós, J. The eigenvalues of random symmetric matrices. Combinatorica 1, 3 (1981), 233--241.
- Gittens, A., Mahoney, M.W. Revisiting the Nyström method for improved large-scale machine learning. J. Mach. Learn. Res. In press.
- Golub, G.H., Van Loan, C.F. Matrix Computations. Johns Hopkins University Press, Baltimore, 1996.
- Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory 57, 3 (2011), 1548--1566.
- Gu, M. Subspace iteration randomization and singular value problems. Technical report, 2014. Preprint: arXiv:1408.2208.
- Halko, N., Martinsson, P.-G., Tropp, J.A. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53, 2 (2011), 217--288.
- Koren, Y., Bell, R., Volinsky, C. Matrix factorization techniques for recommender systems. IEEE Comp. 42, 8 (2009), 30--37.
- Koutis, I., Miller, G.L., Peng, R. A fast solver for a class of linear systems. Commun. ACM 55, 10 (2012), 99--107.
- Kundu, A., Drineas, P. A note on randomized elementwise matrix sparsification. Technical report, 2014. Preprint: arXiv:1404.0320.
- Le, Q.V., Sarlós, T., Smola, A.J. Fastfood---approximating kernel expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning, 2013.
- Ma, P., Mahoney, M.W., Yu, B. A statistical perspective on algorithmic leveraging. J. Mach. Learn. Res. 16 (2015), 861--911.
- Mackey, L., Talwalkar, A., Jordan, M.I. Distributed matrix completion and robust factorization. J. Mach. Learn. Res. 16 (2015), 913--960.
- Mahoney, M.W. Randomized Algorithms for Matrices and Data. Foundations and Trends in Machine Learning. NOW Publishers, Boston, 2011.
- Mahoney, M.W., Drineas, P. CUR matrix decompositions for improved data analysis. Proc. Natl. Acad. Sci. USA 106 (2009), 697--702.
- Meng, X., Mahoney, M.W. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (2013), 91--100.
- Meng, X., Saunders, M.A., Mahoney, M.W. LSRN: A parallel iterative solver for strongly over- or underdetermined systems. SIAM J. Sci. Comput. 36, 2 (2014), C95--C118.
- Nelson, J., Nguyen, H.L. OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings. In Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science (2013), 117--126.
- Oliveira, R.I. Sums of random Hermitian matrices and an inequality by Rudelson. Electron. Commun. Prob. 15 (2010), 203--212.
- Paschou, P., Ziv, E., Burchard, E.G., Choudhry, S., Rodriguez-Cintron, W., Mahoney, M.W., Drineas, P. PCA-correlated SNPs for structure identification in worldwide human populations. PLoS Genet. 3 (2007), 1672--1686.
- Rahimi, A., Recht, B. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20: Proceedings of the 2007 Conference, 2008.
- Recht, B. A simpler approach to matrix completion. J. Mach. Learn. Res. 12 (2011), 3413--3430.
- Rokhlin, V., Szlam, A., Tygert, M. A randomized algorithm for principal component analysis. SIAM J. Matrix Anal. Appl. 31, 3 (2009), 1100--1124.
- Rudelson, M., Vershynin, R. Sampling from large matrices: An approach through geometric functional analysis. J. ACM 54, 4 (2007), Article 21.
- Sarlós, T. Improved approximation algorithms for large matrices via random projections. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (2006), 143--152.
- Smale, S. Some remarks on the foundations of numerical analysis. SIAM Rev. 32, 2 (1990), 211--220.
- Spielman, D.A., Srivastava, N. Graph sparsification by effective resistances. SIAM J. Comput. 40, 6 (2011), 1913--1926.
- Stigler, S.M. The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press, Cambridge, 1986.
- Tropp, J.A. User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12, 4 (2012), 389--434.
- Turing, A.M. Rounding-off errors in matrix processes. Quart. J. Mech. Appl. Math. 1 (1948), 287--308.
- von Neumann, J., Goldstine, H.H. Numerical inverting of matrices of high order. Bull. Am. Math. Soc. 53 (1947), 1021--1099.
- Wigner, E.P. Random matrices in physics. SIAM Rev. 9, 1 (1967), 1--23.
- Woodruff, D.P. Sketching as a Tool for Numerical Linear Algebra. Foundations and Trends in Theoretical Computer Science. NOW Publishers, Boston, 2014.
- Yang, J., Meng, X., Mahoney, M.W. Implementing randomized matrix algorithms in parallel and distributed environments. Proc. IEEE 104, 1 (2016), 58--92.
- Yang, J., Rübel, O., Prabhat, Mahoney, M.W., Bowen, B.P. Identifying important ions and positions in mass spectrometry imaging data using CUR matrix decompositions. Anal. Chem. 87, 9 (2015), 4658--4666.
- Yip, C.-W., Mahoney, M.W., Szalay, A.S., Csabai, I., Budavari, T., Wyse, R.F.G., Dobos, L. Objective identification of informative wavelength regions in galaxy spectra. Astron. J. 147, 110 (2014), 15.
Index Terms
- RandNLA: randomized numerical linear algebra