Abstract
KBLAS is an open-source, high-performance library providing optimized kernels for a subset of Level 2 BLAS functionality on CUDA-enabled GPUs. Because the performance of dense matrix-vector multiplication is bound by the cost of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS runs efficiently on various GPU architectures without code rewriting, while remaining compliant with the standard BLAS API. A further optimization ensures coalesced memory access when operating on submatrices, which is especially important for higher-level dense linear algebra algorithms. All KBLAS kernels have been extended to multi-GPU environments, which required the introduction of new APIs. For general matrices, KBLAS is very competitive with existing state-of-the-art kernels and delivers smoother performance across a wide range of matrix dimensions. For symmetric and Hermitian matrices, KBLAS outperforms existing state-of-the-art implementations at all matrix sizes, achieving asymptotic speedups of up to 50% and 60% over the best competitor on single-GPU and multi-GPU systems, respectively. The performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS), starting from version 6.0, for wider dissemination.
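The double-buffering idea mentioned above can be illustrated with a minimal sketch, not taken from the KBLAS source: a blocked matrix-vector product `y = A*x` in which the next block column is loaded into an alternate buffer while the current one is consumed. All names (`gemv_double_buffered`, `load_block`, `BLK`) are illustrative; on a GPU the "prefetch" would be an asynchronous load into registers or shared memory that genuinely overlaps computation, whereas this plain-C version only demonstrates the buffer-swapping pattern.

```c
#include <stddef.h>

/* Illustrative sketch (not KBLAS source): software double-buffering
 * for a blocked matrix-vector product y = A*x, A column-major with
 * leading dimension lda. Two buffers alternate roles: while block
 * j0 is used for computation, block j0+BLK is staged into the other. */
enum { BLK = 4, MAX_M = 64 };   /* sketch assumes m <= MAX_M */

static void load_block(const double *A, size_t lda, size_t n,
                       size_t j0, size_t rows, double *buf)
{
    /* Stage up to BLK columns, starting at column j0, into buf. */
    for (size_t j = 0; j < BLK && j0 + j < n; ++j)
        for (size_t i = 0; i < rows; ++i)
            buf[j * rows + i] = A[(j0 + j) * lda + i];
}

void gemv_double_buffered(size_t m, size_t n, const double *A, size_t lda,
                          const double *x, double *y)
{
    double buf[2][BLK * MAX_M];
    int cur = 0;

    for (size_t i = 0; i < m; ++i) y[i] = 0.0;

    load_block(A, lda, n, 0, m, buf[cur]);           /* prologue load */
    for (size_t j0 = 0; j0 < n; j0 += BLK) {
        if (j0 + BLK < n)                            /* prefetch next block */
            load_block(A, lda, n, j0 + BLK, m, buf[1 - cur]);
        for (size_t j = 0; j < BLK && j0 + j < n; ++j)   /* consume current */
            for (size_t i = 0; i < m; ++i)
                y[i] += buf[cur][j * m + i] * x[j0 + j];
        cur = 1 - cur;                               /* swap buffers */
    }
}
```

In the actual GPU setting the prologue load and the per-iteration prefetch map onto memory instructions issued early enough that their latency is hidden behind the multiply-accumulate work on the resident block, which is the overlap of data motion and computation the abstract refers to.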
KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators