Tutorial 1


Reinforcement Learning for 5G and beyond radio access networks: from design to implementation

Tuesday, 6 June 2023, 14:00-15:30/16:00-17:30, Room R22-R23
  • Irene Vilà Muñoz (Universitat Politècnica de Catalunya, ES)

Motivation and Context

5G systems offer increased flexibility and efficiency through the introduction of new features, but their growing complexity requires automation tools. Artificial Intelligence (AI), and more specifically Machine Learning (ML) mechanisms, have been identified as key enablers for 5G networks and beyond [1]. Standardization initiatives such as the O-RAN Alliance [2], 3GPP [3] and ITU [4] have already considered incorporating AI tools into the mobile network architecture. For the Radio Access Network (RAN), Reinforcement Learning (RL) techniques are of special interest due to their capability to optimally solve decision-making problems [5]. The applicability of RL solutions in the RAN embraces radio resource management, self-organizing functions and radio network management.

Designing and implementing RL solutions for the RAN requires several tools and technologies and entails notable complexity. Programming these solutions requires RL software libraries (e.g., TensorFlow Agents, Keras RL). Furthermore, training and validation must be performed on simulated environments of the RAN, either available online (e.g., Gym environments) or self-developed. In addition, the use of network digital twins for training, performance evaluation and benchmarking is gaining momentum. Further challenges arise when a given RL solution is to be implemented: RL solutions need to be packaged appropriately to run on real systems (e.g., as Docker containers), and the interfaces and protocols specified by standards need to be implemented (e.g., NETCONF for parameter configuration). Therefore, the pathway from the conception of an RL solution to its design, evaluation and, eventually, implementation is challenging and can pose entry barriers to researchers in the field of beyond-5G networks. This tutorial aims to ease the introduction to the implementation aspects of such solutions, presenting the main concepts, tools and technologies involved in the stages from design to implementation, and using a specific use case to provide some hands-on experience.
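The simulated RAN environments mentioned above typically expose the reset/step interface popularized by Gym. The following minimal, self-contained sketch illustrates that interface; the cell model, action space and reward shaping are illustrative assumptions, not part of any tutorial software:

```python
import random

class ToyRanEnv:
    """Minimal Gym-style environment sketching a single-cell RAN.

    State: current cell load in [0, 1]. Action: a discretized capacity
    allocation level (0..9). Dynamics and reward are illustrative
    assumptions only.
    """

    N_ACTIONS = 10

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.load = 0.5

    def reset(self):
        # Start each episode from a random cell load.
        self.load = self.rng.random()
        return self.load

    def step(self, action):
        # Traffic fluctuates randomly; the chosen allocation serves it.
        demand = min(1.0, max(0.0, self.load + self.rng.uniform(-0.1, 0.1)))
        capacity = action / (self.N_ACTIONS - 1)
        # Reward penalizes both unserved demand and over-provisioning.
        reward = -abs(demand - capacity)
        self.load = demand
        return self.load, reward, False, {}
```

An agent interacts with it exactly as with an online Gym environment: `state = env.reset()`, then repeated `state, reward, done, info = env.step(action)` calls, which is what makes self-developed RAN simulators interchangeable with off-the-shelf ones for training purposes.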

Structure and Content

In this context, the tutorial will cover the following contents:

1 – Role of Artificial Intelligence (AI) in 5G and beyond (10 minutes):

The tutorial will start by introducing and motivating the need to integrate AI in 5G and beyond networks, followed by describing the vision and work of different standardization bodies on the integration of AI in networks.

Speakers: Dr. Valerio Frascolla (Intel Deutschland GmbH), Dr. Irene Vilà (UPC)

2 – Machine Learning (ML) algorithms (20 minutes):

An overview of ML algorithms will be given, introducing the principles of supervised, unsupervised and reinforcement learning (RL) subtypes. A special focus will be given to RL due to its relevance to the RAN, and different RL solution types and algorithms will be presented, such as the Deep Q-Network (DQN) algorithm.
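Since DQN is essentially Q-learning with the Q-table replaced by a neural network trained on replayed experience, the underlying update rule can be sketched in a few lines of tabular Python; the toy MDP in the usage note is an illustrative assumption:

```python
import random

def q_learning(transitions, rewards, terminal, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a small deterministic MDP.

    `transitions[s][a]` gives the next state and `rewards[s][a]` the
    reward. DQN applies the same temporal-difference target, but fits
    a neural network to it instead of updating a table entry.
    """
    rng = random.Random(seed)
    n_states = len(transitions)
    n_actions = len(transitions[0])
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s not in terminal:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r = transitions[s][a], rewards[s][a]
            # Temporal-difference target: r + gamma * max_a' Q(s', a').
            target = r + (0.0 if s2 in terminal else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

For example, on a two-state chain where state 0 can jump straight to the terminal state for reward 1 or pass through state 1 to collect reward 2, the learned values converge to Q[1][0] ≈ 2 and Q[0][0] ≈ 0.9 × 2 = 1.8, so the greedy policy correctly prefers the longer, higher-return path.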

Speakers: Dr. Yansha Deng (King’s College London), Dr. Irene Vilà (UPC)

3 – Applicability of Reinforcement Learning algorithms for the Radio Access Network (25 minutes):

A discussion of RL applicability in the different layers of the next-generation RAN architecture and the associated Operations Support Systems (OSS) for network management will follow. Edge computing will also be introduced as a key technology for applying such solutions in the RAN, discussing its enabling role in distributed AI solutions that allow reduced latency, increased privacy, high accuracy, etc. The explanation will be supported by various illustrative application examples.

Speakers: Dr. Valerio Frascolla (Intel Deutschland GmbH), Dr. Irene Vilà (UPC)

4 – Road from design to production for RL solutions (80 minutes):

The design, programming, evaluation and implementation stages of RL solutions for the RAN will be described from a practical perspective. Regarding the design stage, some considerations will be given on modelling such solutions to be compatible with the standards (3GPP, O-RAN). Next, an overview of available software tools for the development of RL solutions (e.g., TensorFlow Agents, Keras RL) will be given, together with their operating principles and requirements. This will be followed by guidelines and considerations for training these solutions, covering aspects such as the development of simulated RAN environments for training and evaluation, the requirements on training data according to the expected inference data, the need to incorporate retraining capabilities in the solution, and the role of network digital twins in this context. Finally, the tools and technologies needed to integrate RL-based solutions with the platform where they will be executed in the real network will be described. This will include details on the implementation of the interfaces according to the technologies specified by O-RAN (e.g., NETCONF) and on the containerization of RL-based solutions.
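As a hedged illustration of the parameter-configuration side, the sketch below builds a NETCONF `<edit-config>` RPC payload with Python's standard library. The RAN module namespace and leaf names are hypothetical placeholders; a real deployment would follow the YANG models specified by O-RAN for the managed element:

```python
import xml.etree.ElementTree as ET

# NETCONF base namespace, per RFC 6241.
NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(param, value, module_ns="urn:example:ran-config"):
    """Build a NETCONF <edit-config> payload as an XML string.

    `module_ns`, `cell-config` and the leaf name passed in `param`
    are hypothetical placeholders for a device's YANG model.
    """
    rpc = ET.Element(f"{{{NC_NS}}}rpc", attrib={"message-id": "1"})
    edit = ET.SubElement(rpc, f"{{{NC_NS}}}edit-config")
    # Apply the change to the running configuration datastore.
    target = ET.SubElement(edit, f"{{{NC_NS}}}target")
    ET.SubElement(target, f"{{{NC_NS}}}running")
    # The <config> subtree carries the model-specific parameter.
    config = ET.SubElement(edit, f"{{{NC_NS}}}config")
    cell = ET.SubElement(config, f"{{{module_ns}}}cell-config")
    leaf = ET.SubElement(cell, f"{{{module_ns}}}{param}")
    leaf.text = str(value)
    return ET.tostring(rpc, encoding="unicode")
```

For instance, `build_edit_config("max-prb", 80)` yields an RPC that an RL-based application could send over a NETCONF session (e.g., via an SSH-based NETCONF client) to reconfigure the hypothetical `max-prb` parameter of a cell.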

Speaker: Dr. Irene Vilà (UPC)

5 – Use case example: Capacity sharing solution for RAN slicing (45 minutes):

A specific deep RL-based solution to the capacity sharing problem for RAN slicing will be presented [7]. This will include both the algorithmic definition and the implementation description [8], supported by a demonstration of the software developed for the solution [9], its containerization using Docker and the implemented O-RAN interfaces.
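To make the problem concrete: a capacity-sharing agent typically outputs a split of the cell capacity across slices and is rewarded for meeting each slice's demand without over-provisioning. The reward shaping below is an illustrative assumption for exposition, not the reward function of the published solution [7]:

```python
def capacity_sharing_reward(allocations, demands, capacity=100.0):
    """Illustrative reward for a RAN-slicing capacity-sharing agent.

    `allocations`: fraction of total cell capacity granted to each
    slice (summing to at most 1). `demands`: offered load per slice,
    in the same units as `capacity`. The shaping (SLA-satisfaction
    term minus an over-provisioning penalty) is an assumption.
    """
    assert sum(allocations) <= 1.0 + 1e-9
    reward = 0.0
    for frac, demand in zip(allocations, demands):
        granted = frac * capacity
        served = min(granted, demand)
        # SLA satisfaction in [0, 1]: fraction of demand served.
        sla = served / demand if demand > 0 else 1.0
        # Penalty for capacity granted beyond what the slice needs.
        overprov = max(0.0, granted - demand) / capacity
        reward += sla - overprov
    return reward / len(allocations)
```

Under this shaping, an allocation that exactly matches the demands scores higher than one that over-provisions a slice, which is the behavior a DQN-style agent would be trained to discover.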

Speaker: Dr. Irene Vilà (UPC)

The tutorial will mainly consist of an oral presentation of the contents proposed above, supported by slides. In addition, it will include a demonstration of the software developed for the presented use case and its implementation, both available in the GitHub repository [9]. Attendees will be able to follow the demonstration in detail through this repository, which contains hands-on materials on the concepts covered throughout the tutorial.
