dc.contributor.author
Pérez Carrasco, David
dc.date.accessioned
2026-01-08T20:14:02Z
dc.date.available
2026-01-08T20:14:02Z
dc.date.issued
2025-04-08T13:26:32Z
dc.date.issued
2024-10-16T13:49:37Z
dc.identifier
http://hdl.handle.net/10230/61500.2
dc.identifier.uri
http://hdl.handle.net/2072/489059
dc.description.abstract
Tutor: Anders Johnson
dc.description.abstract
Bachelor's thesis in Mathematical Engineering in Data Science
dc.description.abstract
In recent years, Reinforcement Learning (RL) has emerged as a powerful paradigm for sequential decision-making under uncertainty. Within this framework, Markov Decision Processes (MDPs) serve as a fundamental model, defining the dynamics of state transitions and rewards. However, traditional RL algorithms, such as Q-Learning, often struggle with large or continuous state spaces due to computational complexity. Linearly-solvable Markov Decision Processes (LMDPs) offer a promising alternative, exploiting a transformation that renders the Bellman equation linear and thereby enables efficient planning and value function approximation.
This work evaluates and benchmarks state-of-the-art RL models against algorithms for continuous MDPs that leverage LMDPs, such as Z-Learning, with the aim of improving the performance and scalability of these algorithms in larger and more intricate domains. We investigate efficient methods for optimal action selection and value function approximation within the linear framework. To enable a fair comparison with traditional MDP-based RL, we develop methods for embedding MDPs into LMDPs and vice versa.
Furthermore, our research rigorously examines the factors that influence the learning behavior of algorithms in the context of Linearly-solvable MDPs. In particular, we analyze the impact of different exploration strategies, assessing their effectiveness across diverse scenarios. Through this analysis, our study contributes valuable insights into the optimization and enhancement of reinforcement learning algorithms.
dc.format
application/pdf
dc.rights
CC Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0)
dc.rights
https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights
info:eu-repo/semantics/openAccess
dc.subject
Reinforcement learning
dc.title
Efficient algorithms for linearly solvable Markov decision processes
dc.type
info:eu-repo/semantics/bachelorThesis