{"id":1088,"date":"2021-09-08T16:32:59","date_gmt":"2021-09-08T15:32:59","guid":{"rendered":"https:\/\/new.eucnc.eu\/?page_id=1088"},"modified":"2023-05-09T11:41:23","modified_gmt":"2023-05-09T11:41:23","slug":"tutorial-1","status":"publish","type":"page","link":"https:\/\/www.eucnc.eu\/programme\/tutorials\/tutorial-1\/","title":{"rendered":"Tutorial 1"},"content":{"rendered":"<div class=\"fusion-fullwidth fullwidth-box fusion-builder-row-1 fusion-flex-container nonhundred-percent-fullwidth non-hundred-percent-height-scrolling\" style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-flex-wrap:wrap;\" ><div class=\"fusion-builder-row fusion-row fusion-flex-align-items-flex-start fusion-flex-content-wrap\" style=\"max-width:1248px;margin-left: calc(-4% \/ 2 );margin-right: calc(-4% \/ 2 );\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-0 fusion_builder_column_1_1 1_1 fusion-flex-column\" style=\"--awb-bg-size:cover;--awb-width-large:100%;--awb-margin-top-large:0px;--awb-spacing-right-large:1.92%;--awb-margin-bottom-large:20px;--awb-spacing-left-large:1.92%;--awb-width-medium:100%;--awb-order-medium:0;--awb-spacing-right-medium:1.92%;--awb-spacing-left-medium:1.92%;--awb-width-small:100%;--awb-order-small:0;--awb-spacing-right-small:1.92%;--awb-spacing-left-small:1.92%;\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-justify-content-flex-start fusion-content-layout-column\"><div class=\"fusion-text fusion-text-1\" style=\"--awb-text-transform:none;\"><div class=\"vc_row wpb_row vc_row-fluid\">\n<div class=\"wpb_column vc_column_container vc_col-sm-12\">\n<div class=\"vc_column-inner\">\n<div class=\"wpb_wrapper\">\n<div class=\"wpb_text_column wpb_content_element \">\n<div class=\"wpb_wrapper\">\n<div class=\"content node-page\">\n<div class=\"field field-name-body field-type-text-with-summary 
field-label-hidden\">\n<div class=\"field-items\">\n<div class=\"field-item even\">\n<h2><strong>Reinforcement Learning for 5G and beyond radio access networks: from design to implementation<\/strong><\/h2>\n<h6>Tuesday, 6 June 2023, 14:00-15:30\/16:00-17:30, Room R22-R23<\/h6>\n<h5>Speaker:<\/h5>\n<ul>\n<li>Irene Vil\u00e0 Mu\u00f1oz (Universitat Polit\u00e8cnica de Catalunya, ES)<\/li>\n<\/ul>\n<h4><strong>Motivation and Context<\/strong><\/h4>\n<p style=\"text-align: justify;\">5G systems offer increased flexibility and efficiency through the introduction of new features, but their growing complexity requires automation tools. Artificial Intelligence (AI) and, more specifically, Machine Learning (ML) mechanisms have been identified as key enablers for 5G networks and beyond [1]. Standardization initiatives like the O-RAN Alliance [2], 3GPP [3] or ITU [4] have already considered incorporating AI tools into the mobile network architecture. For the Radio Access Network (RAN), Reinforcement Learning (RL) techniques are of special interest due to their capability to solve complex decision-making problems [5]. The applicability of RL solutions in the RAN embraces radio resource management, self-organizing functions and radio network management. Designing and implementing RL solutions for the RAN requires the use of several tools and technologies and entails notable complexity. Programming these solutions requires software libraries that implement RL algorithms (e.g., TensorFlow Agents, Keras RL). Furthermore, training and validation must be performed on simulated environments of the RAN, either available online (e.g., Gym environments) or self-developed. In addition, the use of network digital twins for training, performance evaluation and benchmarking is gaining momentum. 
Further challenges arise when a given RL solution is to be implemented: RL solutions need to be packaged appropriately to run on real systems (e.g., Docker containers), and the interfaces and protocols specified by standards need to be implemented (e.g., NETCONF for parameter configuration). Therefore, the pathway from the conception of an RL solution to its design, evaluation and, eventually, implementation is challenging and can pose entry barriers to researchers in the field of beyond 5G networks. This tutorial aims to ease the introduction to the implementation aspects of such solutions, presenting the main concepts, tools and technologies involved in the stages from design to implementation, and using a specific use case to provide some hands-on experience.<\/p>\n<h4><strong>Structure and Content<\/strong><\/h4>\n<p>In this context, the tutorial will cover the following contents:<\/p>\n<p><strong>1 &#8211; Role of Artificial Intelligence (AI) in 5G and beyond (10 minutes):<\/strong><\/p>\n<p>The tutorial will start by introducing and motivating the need to integrate AI in 5G and beyond networks, followed by a description of the vision and work of different standardization bodies on the integration of AI in networks.<\/p>\n<p><em>Speakers: Dr. Valerio Frascolla (Intel Deutschland GmbH), Dr. Irene Vil\u00e0 (UPC)<\/em><\/p>\n<p><strong>2 &#8211; Machine Learning (ML) algorithms (20 minutes): <\/strong><\/p>\n<p>An overview of ML algorithms will be given, introducing the principles of the supervised, unsupervised and reinforcement learning (RL) subtypes. A special focus will be given to RL due to its relevance to the RAN, and different RL solution types and algorithms will be presented, such as the Deep Q-Network (DQN) algorithm.<\/p>\n<p><em>Speakers: Dr. Yansha Deng (King\u2019s College London), Dr. 
Irene Vil\u00e0 (UPC)<\/em><\/p>\n<p><strong>3 &#8211; Applicability of Reinforcement Learning algorithms for the Radio Access Network (25 minutes): <\/strong><\/p>\n<p>A discussion of RL applicability in the different layers of the next-generation RAN architecture and the associated Operations Support Systems (OSS) for network management will follow. Also, edge computing will be introduced as a key technology for the applicability of such solutions in the RAN, discussing its enabling role in distributed AI solutions that provide reduced latency, increased privacy, high accuracy, etc. The explanation will be supported by various illustrative application examples.<\/p>\n<p><em>Speakers: Dr. Valerio Frascolla (Intel Deutschland GmbH), Dr. Irene Vil\u00e0 (UPC)<\/em><\/p>\n<p><strong>4 &#8211; Road from design to production for RL solutions (80 minutes): <\/strong><\/p>\n<p>The process that includes the design, programming, evaluation and implementation stages of RL solutions for the RAN will be described from a practical perspective. Regarding the design stage, some considerations will be given on modelling such solutions in compliance with the standards (3GPP, O-RAN). Next, an overview of available software tools for the development of RL solutions (e.g., TensorFlow Agents, Keras RL) will be given, as well as their operation principles and requirements. This will be followed by some guidelines and considerations for the training of these solutions, covering aspects such as the development of simulated environments of the RAN for training and evaluation, the requirements on training data according to the expected inference data, the need to incorporate retraining capabilities in the solution, and the role of network digital twins in this context. Finally, the tools and technologies needed to integrate RL-based solutions with the platform where they will be executed in the real network will be described. 
This will include details on the implementation of the interfaces according to the technologies specified in O-RAN (e.g., NETCONF) and the containerization of RL-based solutions.<\/p>\n<p><em>Speaker: Dr. Irene Vil\u00e0 (UPC)<\/em><\/p>\n<p><strong>5 &#8211; Use case example: Capacity sharing solution for RAN slicing (45 minutes): <\/strong><\/p>\n<p>A specific deep RL-based solution to the capacity sharing problem for RAN slicing will be presented [7]. This will include both the algorithmic definition and the implementation description [8], supported by a demonstration of the software developed for the solution [9], its containerization using Docker and the implemented O-RAN interfaces.<\/p>\n<p><em>Speaker: Dr. Irene Vil\u00e0 (UPC)<\/em><\/p>\n<p>The format of the tutorial will mainly consist of an oral presentation of the above contents, supported by slides. In addition, the tutorial will include a demonstration of the developed software for the presented use case and its implementation. Both are available in the GitHub repository [9]. 
Note that attendees will be able to follow the demonstration in detail through the GitHub repository, which contains hands-on materials on the concepts covered throughout the tutorial.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":4,"featured_media":0,"parent":517,"menu_order":5,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/pages\/1088"}],"collection":[{"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/comments?post=1088"}],"version-history":[{"count":20,"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/pages\/1088\/revisions"}],"predecessor-version":[{"id":6677,"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/pages\/1088\/revisions\/6677"}],"up":[{"embeddable":true,"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/pages\/517"}],"wp:attachment":[{"href":"https:\/\/www.eucnc.eu\/wp-json\/wp\/v2\/media?parent=1088"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}