A continuum of matrix multiplications: from scientific computing to deep learning

Publication date

2023-03-21



Abstract

Matrix multiplication (GEMM) is a key, pervasive computational kernel that spans multiple domains. On the one hand, many applications in scientific computing require the solution of linear systems of equations, least-squares problems, and eigenvalue problems. For portability, these applications often rely on routines from LAPACK (Linear Algebra PACKage), which, to deliver high performance, itself relies heavily on GEMM and the other Basic Linear Algebra Subprograms (BLAS). On the other hand, the computational cost of the convolutional neural networks (CNNs) that dominate machine learning algorithms for signal processing and computer vision tasks, as well as of the transformers behind recent deep learning (DL) applications such as ChatGPT, is largely determined by the performance of GEMM. In this talk we first expose caveats of current GEMM implementations in linear algebra libraries for conventional multicore architectures: suboptimal performance and missing support for DL-oriented data types. We then demonstrate how these problems can be overcome with tools for the (semi-)automatic generation of the only architecture-specific piece of GEMM, known as the micro-kernel, together with an analytical model that captures the configuration of the cache hierarchy. Finally, we show that this approach carries over to more "exotic" architectures, from high-end vector accelerators and the Xilinx Artificial Intelligence Engine (AIE) to low-power designs such as RISC-V processors and Arm-based (Arduino) microcontrollers.

Document type

Conference report

Language

English

Published by

Barcelona Supercomputing Center

Rights

http://creativecommons.org/licenses/by-nc-nd/4.0/

Open Access

Attribution-NonCommercial-NoDerivatives 4.0 International

This item appears in the following collection(s)

Congressos [11156]