Universitat Politècnica de Catalunya. Doctorat en Intel·ligència Artificial
Universitat Politècnica de Catalunya. Departament de Ciències de la Computació
Barcelona Supercomputing Center
Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group
2025
Q-learning played a foundational role in the field of reinforcement learning (RL). However, TD algorithms with off-policy data, such as Q-learning, or with nonlinear function approximation like deep neural networks, require several additional tricks to stabilise training, primarily a large replay buffer and target networks. Unfortunately, the delayed updating of frozen network parameters in the target network harms sample efficiency and, similarly, the large replay buffer introduces memory and implementation overheads. In this paper, we investigate whether it is possible to accelerate and simplify off-policy TD training while maintaining its stability. Our key theoretical result demonstrates for the first time that regularisation techniques such as LayerNorm can yield provably convergent TD algorithms without the need for a target network or replay buffer, even with off-policy data. Empirically, we find that online, parallelised sampling enabled by vectorised environments stabilises training without the need for a large replay buffer. Motivated by these findings, we propose PQN, our simplified deep online Q-learning algorithm. Surprisingly, this simple algorithm is competitive with more complex methods such as Rainbow in Atari, PPO-RNN in Craftax, and QMix in Smax, and can be up to 50x faster than traditional DQN without sacrificing sample efficiency. In an era where PPO has become the go-to RL algorithm, PQN reestablishes off-policy Q-learning as a viable alternative.
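The core recipe the abstract describes can be sketched in a few lines: a Q-network with LayerNorm applied to its hidden layer, updated online from a single batch of parallel-environment transitions, bootstrapping from the same network (no target copy, no replay buffer). This is a minimal NumPy illustration, not the paper's implementation; all sizes, names, and the random transition data are illustrative assumptions, and only the output layer receives a semi-gradient step for brevity.

```python
import numpy as np

# Hypothetical toy sizes; all values here are illustrative, not from the paper.
rng = np.random.default_rng(0)
obs_dim, hidden, n_actions, n_envs = 4, 32, 3, 8
gamma, lr = 0.99, 1e-2

W1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
W2 = rng.normal(0.0, 0.1, (hidden, n_actions))

def layer_norm(x, eps=1e-5):
    # Normalise each hidden vector to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def q_values(obs):
    # LayerNorm before the ReLU nonlinearity; returns Q-values and features.
    h = np.maximum(layer_norm(obs @ W1), 0.0)
    return h @ W2, h

# One online update from the current step of n_envs vectorised environments
# (randomly generated here); no replay buffer is involved.
obs = rng.normal(size=(n_envs, obs_dim))
actions = rng.integers(0, n_actions, n_envs)
rewards = rng.normal(size=n_envs)
next_obs = rng.normal(size=(n_envs, obs_dim))
dones = np.zeros(n_envs)

q, h = q_values(obs)
q_next, _ = q_values(next_obs)  # bootstrap from the SAME network: no target net
target = rewards + gamma * (1.0 - dones) * q_next.max(axis=1)
td_error = target - q[np.arange(n_envs), actions]

# Semi-gradient step on the output weights (target treated as a constant).
grad_W2 = np.zeros_like(W2)
for i, a in enumerate(actions):
    grad_W2[:, a] -= td_error[i] * h[i] / n_envs
W2 -= lr * grad_W2
```

The design choice the paper's theory motivates is visible here: normalising the features bounds their scale, which is what allows the bootstrapped update to remain stable without freezing a separate target network.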
Mattie Fellows is funded by a generous grant from the UKRI Engineering and Physical Sciences Research Council EP/Y028481/1. Jakob Nicolaus Foerster is partially funded by the UKRI grant EP/Y028481/1 (originally selected for funding by the ERC). Jakob Nicolaus Foerster is also supported by the JPMC Research Award and the Amazon Research Award. Matteo Gallici was partially funded by the FPI-UPC Santander Scholarship FPI-UPC_93. Ivan Masmitja is partially funded by the European Union's Horizon Europe programme under grant agreement No 101112883, as part of DIGI4ECO. This work also acknowledges the Spanish Ministerio de Ciencia, Innovación y Universidades (BITER-ECO: PID2020-114732RB-C31), the Spanish National Program Ramón y Cajal RYC2022-038056-I (IM), and the "Severo Ochoa Centre of Excellence" accreditation (CEX2019-000928-S).
Peer Reviewed
Postprint (published version)
Conference report
English
UPC subject areas::Computer science::Theoretical computer science::Algorithmics and complexity theory; Reinforcement learning; TD; Theory; Q-learning; Parallelisation; Network normalisation
OpenReview.net
https://openreview.net/forum?id=7IzeL0kflu
info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2020-114732RB-C31/ES/ESFUERZO CONJUNTO ENTRE BIOLOGIA Y TECNOLOGIA PARA MONITOREAR Y RECUPERAR ESPECIES Y ECOSISTEMAS IMPACTADOS POR LA PESCA: CONECTIVIDAD ESPACIAL E INDICADORES ECOLOGICOS/
http://creativecommons.org/licenses/by/4.0/
Open Access
Attribution 4.0 International