Title:
|
Distributed Q-Learning for energy harvesting heterogeneous networks
|
Author:
|
Miozzo, Marco; Giupponi, Lorenza; Rossi, Michele; Dini, Paolo
|
Other authors:
|
Centre Tecnològic de Telecomunicacions de Catalunya |
Rights:
|
© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |
Abstract:
|
We consider a two-tier urban Heterogeneous Network where small cells powered with renewable energy are deployed in order to provide capacity extension and to offload macro base stations. We use reinforcement learning techniques to design an algorithm that autonomously learns energy inflow and traffic demand patterns. This algorithm is based on a decentralized multi-agent Q-learning technique that, by interacting with the environment, obtains optimal policies aimed at improving the system performance in terms of drop rate, throughput and energy efficiency. Simulation results show that our solution effectively adapts to changing environmental conditions and meets most of our performance objectives. At the end of the paper we identify areas for improvement. |
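The abstract's core mechanism, tabular Q-learning, can be illustrated with a minimal sketch. The state and action spaces, reward signal, and `env_step` environment function below are toy placeholders for illustration only, not the paper's actual small-cell model:

```python
import random

# Minimal tabular Q-learning sketch for a single agent.
# States, actions, and rewards are illustrative placeholders,
# not the paper's actual energy-harvesting small-cell model.
def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection: explore with prob. epsilon,
        # otherwise pick the currently best-valued action.
        if rng.random() < epsilon:
            action = rng.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = env_step(state, action)
        # Standard Q-learning temporal-difference update.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
    return Q
```

In the paper's decentralized multi-agent setting, each small cell would run an update of this form independently, with the environment's response shaped by the joint behavior of all agents.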
Peer review status:
|
Peer Reviewed |
Subject(s):
|
-UPC subject areas::Telecommunication engineering::Telematics and computer networks -Renewable energy -Energy conservation -Computer networks -Sustainability -Mobile networks -HetNet -Energy efficiency -Q-Learning -Sustainable development |
Document type:
|
Article (submitted version); Conference object |
Published by:
|
Institute of Electrical and Electronics Engineers
|