Graph neural networks on CPUs: enabling affordable and distributed training and inference

Publication date

2023-04-17



Abstract

Graph Neural Networks (GNNs) have gained significant popularity in computer vision and natural language processing (NLP) for their ability to model complex relationships and dependencies among entities in data. However, the high cost of GPUs and TPUs has made it difficult to deploy GNNs at large scale. CPUs, by contrast, are widely available and more affordable, making them an attractive alternative. In this talk, we will discuss the potential of using CPUs for GNN training and inference. We will explore different techniques for optimizing GNNs on CPUs, including parallelization and vectorization. Furthermore, we will discuss the potential advantages of using multiple CPUs for GNN training, including lower costs, improved scalability, and faster computation times. We will also examine several applications of GNNs in computer vision and NLP, highlighting their potential for real-world solutions. Finally, we will discuss future research directions in this field, including the development of new techniques for optimizing GNNs on CPUs and the integration of GNNs with other machine learning models.
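To give a concrete flavor of the vectorization idea mentioned above: the neighbor-aggregation step at the heart of most GNN layers can be written as a single sparse-dense matrix product, which NumPy/SciPy dispatch to optimized, multi-threaded CPU kernels instead of a slow per-node Python loop. This is a minimal sketch; the function name `aggregate_mean` and the toy graph are illustrative assumptions, not material from the talk.

```python
import numpy as np
from scipy.sparse import csr_matrix

def aggregate_mean(adj, features):
    """One GNN message-passing step: mean of each node's neighbor features.

    Expressed as a sparse-dense matmul so the heavy lifting runs in
    vectorized, multi-threaded BLAS/sparse kernels on the CPU.
    """
    # Degree of each node, guarding against division by zero for isolated nodes.
    deg = np.asarray(adj.sum(axis=1)).ravel()
    deg[deg == 0] = 1.0
    # Sparse adjacency @ dense features = summed neighbor features per node.
    return adj.dot(features) / deg[:, None]

# Toy undirected path graph 0 - 1 - 2, stored as a CSR adjacency matrix.
rows = [0, 1, 1, 2]
cols = [1, 0, 2, 1]
adj = csr_matrix((np.ones(4), (rows, cols)), shape=(3, 3))

# Two features per node.
X = np.arange(6, dtype=np.float64).reshape(3, 2)
H = aggregate_mean(adj, X)  # one aggregation step over the whole graph
```

The same formulation scales to large graphs: because the adjacency matrix is stored in CSR format, memory and compute cost grow with the number of edges rather than the square of the number of nodes.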

Document Type

Conference report

Language

English

Rights

http://creativecommons.org/licenses/by-nc-nd/4.0/

Open Access

Attribution-NonCommercial-NoDerivatives 4.0 International
