The rapid adoption of Artificial Intelligence (AI) is reshaping the design and operation of cloud and edge infrastructures. In this evolving landscape, containerization has become essential for managing distributed microservice-based applications, enabling greater flexibility and efficiency. Simultaneously, the emergence of the Cloud Continuum—from centralized data centers to highly distributed edge environments—introduces new challenges for the seamless orchestration of both computational and networking resources. This talk will explore the pressing challenges and opportunities in achieving efficient network and service management across the cloud continuum in the AI era. It will highlight recent advances in container orchestration and examine how AI-driven techniques, particularly Reinforcement Learning (RL), can enable proactive and adaptive orchestration strategies. In addition, this session will emphasize how AI workloads, such as inference and distributed training, introduce diverse network demands—ranging from low-latency requirements to high-throughput communication patterns—that can strain existing cloud infrastructures. It will discuss how orchestration strategies and topology-aware deployment can drastically affect network congestion, throughput, and completion time, especially in modern data center networks. The talk will conclude by identifying key open research questions and emerging directions for building sustainable, scalable, and intelligent next-generation cloud and edge systems in the age of AI.
Conference report
English
UPC subject areas::Computer science::Computer architecture; High performance computing; High performance computing
Barcelona Supercomputing Center
http://creativecommons.org/licenses/by-nc-nd/4.0/
Open Access
Conferences [11156]