AG Kommunikationstheorie


Topic:

Deep Reinforcement Learning based Resource Allocation in Low Latency Edge Computing Networks

Abstract:

In this paper, we investigate strategies for allocating computational resources via deep reinforcement learning in mobile edge computing (MEC) networks that operate with finite-blocklength codes to support low-latency communications. We address the end-to-end (E2E) reliability of the service, taking into account both the delay violation probability and the decoding error probability. Employing a deep reinforcement learning method, namely deep Q-learning, we design an intelligent agent at the edge computing node that learns a real-time adaptive policy for allocating computational resources to the offloaded tasks of multiple users, with the goal of improving the average E2E reliability. Via simulations, we show that under different task arrival rates, the learned policy increases the number of tasks served within the delay bound, thereby decreasing the delay violation rate, while keeping the decoding error probability at an acceptable level. Moreover, we show that the proposed deep reinforcement learning approach outperforms the random and equal scheduling benchmarks.
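
As a rough illustration of the deep Q-learning component described above, the following Python sketch shows an epsilon-greedy agent that maps an observed edge-node state (here assumed to be per-user queue backlogs) to a discrete computational-resource allocation and is trained with a temporal-difference loss over a replay buffer. The state, action set, reward, network sizes, and hyperparameters are illustrative assumptions and are not taken from the paper; a target network and the MEC environment dynamics (task arrivals, finite-blocklength decoding errors) are omitted for brevity.

# Minimal deep Q-learning sketch for per-slot CPU allocation at an edge node.
# All quantities (state = per-user queue backlogs, actions = discrete CPU-share
# splits, reward = e.g. fraction of tasks meeting the E2E deadline) are
# illustrative assumptions, not the paper's exact formulation.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

NUM_USERS = 3          # users offloading tasks (assumed)
NUM_ACTIONS = 10       # discrete CPU-allocation choices (assumed)
GAMMA = 0.9            # discount factor

class QNetwork(nn.Module):
    """Maps the observed queue state to one Q-value per allocation action."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork(NUM_USERS, NUM_ACTIONS)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

def select_action(state: np.ndarray, epsilon: float) -> int:
    """Epsilon-greedy choice among the discrete CPU-allocation actions."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        q = q_net(torch.tensor(state, dtype=torch.float32))
    return int(q.argmax().item())

def train_step(batch_size: int = 32) -> None:
    """One gradient step on the temporal-difference (Bellman) error."""
    if len(replay) < batch_size:
        return
    batch = random.sample(list(replay), batch_size)
    s, a, r, s_next = map(np.array, zip(*batch))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    s_next = torch.tensor(s_next, dtype=torch.float32)

    q_sa = q_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * q_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example interaction step (with a hypothetical environment `env`):
#   action = select_action(state, epsilon=0.1)
#   next_state, reward = env.step(action)  # reward tied to E2E reliability
#   replay.append((state, action, reward, next_state))
#   train_step()

In the paper's setting, the reward signal would be tied to the average E2E reliability, which jointly reflects the delay violation probability and the finite-blocklength decoding error probability.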