Efficient algorithms for linearly solvable Markov decision processes

Publication date

2025-04-08T13:26:32Z




Abstract

Tutor: Anders Johnson


Bachelor's thesis in Mathematical Engineering in Data Science


In recent years, Reinforcement Learning (RL) has emerged as a powerful paradigm for sequential decision making under uncertainty. Within this framework, Markov Decision Processes (MDPs) serve as a fundamental model, defining the dynamics of state transitions and rewards. However, traditional RL algorithms, such as Q-Learning, often struggle with large or continuous state spaces due to computational complexity. Linearly-solvable Markov Decision Processes (LMDPs) offer a promising alternative, as their structure allows efficient planning and value function approximation through linear techniques. The focus of this work is on evaluating and benchmarking state-of-the-art RL models against algorithms that exploit LMDPs, such as Z-Learning. The aim is to improve the performance and scalability of these algorithms in larger and more intricate domains. We investigate efficient methods for optimal action selection and value function approximation within the linear framework. To enable a fair comparison with traditional MDP-based RL, we develop methods for embedding MDPs into LMDPs and vice versa, which makes the two families of algorithms directly comparable on the same problems. Furthermore, our research rigorously explores various factors that influence the learning behavior of algorithms in the context of Linearly-solvable MDPs. In particular, we analyze the impact of different exploration strategies, aiming to uncover their effectiveness across diverse scenarios. Through this analysis, our study contributes valuable insights into the optimization and enhancement of reinforcement learning algorithms.
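To illustrate the linear structure the abstract refers to: in an LMDP the exponentiated negative value function, the desirability z(s) = exp(-v(s)), satisfies a *linear* Bellman equation z = diag(exp(-q)) P z, where q are state costs and P is the passive (uncontrolled) transition matrix. A minimal sketch of solving it by fixed-point iteration on a hypothetical 4-state chain (the problem instance and all numbers below are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

# Hypothetical 4-state chain with an absorbing goal at state 3.
# P: passive (uncontrolled) transition matrix; q: per-state costs.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.00, 1.00],  # absorbing goal state
])
q = np.array([1.0, 1.0, 1.0, 0.0])  # zero cost at the goal

# Linear Bellman equation for LMDPs: z = diag(exp(-q)) @ P @ z.
# Solved here by simple fixed-point (power) iteration with the
# boundary condition z = 1 at the absorbing goal.
G = np.diag(np.exp(-q)) @ P
z = np.ones(4)
for _ in range(10_000):
    z_new = G @ z
    z_new[3] = 1.0  # enforce boundary condition
    if np.max(np.abs(z_new - z)) < 1e-12:
        z = z_new
        break
    z = z_new

v = -np.log(z)  # optimal cost-to-go, decreasing toward the goal
```

The optimal controlled transitions then follow in closed form as u*(s'|s) ∝ P(s'|s) z(s'), which is what makes planning in LMDPs linear rather than requiring a max over actions; Z-Learning is a stochastic, sample-based approximation of this same fixed point.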

Document Type

Bachelor's thesis / final degree project

Language

English

Subjects and keywords

Reinforcement learning


Rights

CC Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0)

https://creativecommons.org/licenses/by-nc-nd/4.0/
