
NET3 – AI/ML in service provisioning (2)

Thursday, 8 June 2023, 11:00-12:30, Room R2

Session Chair: Brigitte Jaumard (Concordia University, Canada)

AI-Powered Edge Computing Evolution for Beyond 5G Communication Networks
Elli Kartsakli (Barcelona Supercomputing Center, Spain); Jordi Pérez-Romero (Universitat Politècnica de Catalunya (UPC), Spain); Oriol Sallent (Universitat Politècnica de Catalunya, Spain); Nikolaos Bartzoudis (CTTC, Spain); Valerio Frascolla (Intel Deutschland GmbH, Germany); Swarup Kumar Mohalik (Ericsson Research, India); Thijs Metsch (Intel, Ireland); Angelos Antonopoulos (Nearby Computing, Spain); Ömer Faruk Tuna (Ericsson Research, Turkey); Yansha Deng (King’s College London, United Kingdom); Xin Tao (Ericsson Research, Sweden); Maria A. Serrano (Nearby Computing, Spain); Eduardo Quiñones (Barcelona Supercomputing Center, Spain)
Edge computing is a key enabling technology that is expected to play a crucial role in beyond 5G (B5G) and 6G communication networks. By bringing computation closer to where the data is generated, and leveraging Artificial Intelligence (AI) capabilities for advanced automation and orchestration, edge computing can enable a wide range of emerging applications with extreme requirements in terms of latency and computation, across multiple vertical domains. In this context, this paper first discusses the key technological challenges for the seamless integration of edge computing within B5G/6G and then presents a roadmap for the edge computing evolution, proposing a novel design approach for an open, intelligent, trustworthy, and distributed edge architecture.

Cooperative Action Branching Deep Reinforcement Learning for Uplink Power Control
Petteri Kela and Teemu Veijalainen (Nokia, Finland)
Wireless networks are expected to move towards self-sustaining networks in 6G, where machine learning plays a critical role in maintaining high performance in dynamically changing environments. In such environments, machine learning algorithms are expected to optimize multiple correlated network parameters simultaneously. One example of such correlated parameters, which already exist in 5G, are the open-loop power control (OLPC) parameters P0 and α_PL. In this paper, we show how a single double deep Q-network (DDQN) with two output branch dimensions can optimize both OLPC parameters simultaneously. To tackle the multi-agent problems caused by interference, we describe how multiple machine learning agents must cooperate to optimize the performance of the whole network. Extensive simulation studies confirm that the proposed reinforcement learning methods can outperform the traditional way of setting OLPC parameters, in which a globally suboptimal parametrization is hand-picked. Because the cooperating agents learned to disadvantage some cells in order to maximize overall throughput, we additionally experiment with methods that equalize performance across the learning agents. It can thus be envisioned that the introduced methods can also be harnessed to learn other combinations of multiple simultaneous optimization actions in future 6G networks.
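
The paper itself includes no code, but the action-branching idea can be illustrated with a minimal sketch: a Q-network with a shared trunk and one output branch per OLPC parameter. This is an assumption-laden PyTorch sketch, not the authors' implementation; the state features, layer sizes, and the discretization of P0 and α_PL are invented for illustration, and the target network and replay buffer of a full DDQN are omitted.

```python
# Minimal sketch of an action-branching Q-network for uplink OLPC:
# one shared trunk, one Q-value branch per action dimension (P0, alpha_PL).
# All dimensions below are assumptions, not taken from the paper.
import torch
import torch.nn as nn

P0_LEVELS = 16     # assumed discretization of the P0 range
ALPHA_LEVELS = 8   # assumed discretization of the alpha_PL range

class BranchingDDQN(nn.Module):
    def __init__(self, state_dim: int):
        super().__init__()
        # Shared trunk over the per-cell state (e.g. SINR/load statistics).
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # One Q-value head per OLPC parameter.
        self.q_p0 = nn.Linear(128, P0_LEVELS)
        self.q_alpha = nn.Linear(128, ALPHA_LEVELS)

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        return self.q_p0(h), self.q_alpha(h)

def select_actions(net: BranchingDDQN, state: torch.Tensor):
    # Greedy action per branch; epsilon-greedy exploration omitted.
    q_p0, q_alpha = net(state)
    return q_p0.argmax(dim=-1), q_alpha.argmax(dim=-1)
```

The point of the branching layout is that both parameters are chosen from one shared state representation, so their correlation is captured without enumerating the full P0 × α_PL joint action space.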

Distributed Learning-Based Intrusion Detection in 5G and Beyond Networks
Cheolhee Park (ETRI, Korea (South)); Kyungmin Park (Electronics and Telecommunications Research Institute, Korea (South)); Jihyeon Song (Electronics and Telecommunications Research Institute, Korea (South)); Jonghyun Kim (ETRI, Korea (South))
As mobile technology has evolved over generations, communication systems have advanced along with it, and the 6th generation (6G) of mobile networks is expected to evolve into a more decentralized and open environment. With these advancements in network systems, however, the attack surface exposed to adversaries has expanded, and potential threats have become more sophisticated. To secure network systems against these potential attacks, various studies have focused on intrusion detection systems. In particular, studies on artificial intelligence-based network intrusion detection systems have been actively conducted and have shown remarkable results. However, most of these studies concentrate on centralized environments and may not be suitable for deployment in distributed systems. In this paper, we propose a distributed learning-based intrusion detection system that can efficiently train predictive models in a decentralized environment and enable learning on systems with varying computing capabilities. We leverage a state-of-the-art split learning approach, which allows models to be trained in distributed systems with different computing resources. In our experiments, we evaluate the models using data collected in a 5G mobile network environment and demonstrate that the proposed system can be applied for network security in the next-generation mobile environment.
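
For readers unfamiliar with split learning, the following minimal sketch simulates one training step with an intrusion-detection classifier cut between a client (lower layers) and a server (upper layers), so that only cut-layer activations and gradients cross the boundary. The architecture, feature count, and class count are illustrative assumptions, not the authors' system.

```python
# Split-learning sketch: client runs the lower layers, server the upper
# layers; only "smashed" activations and their gradients are exchanged.
# Sizes below are assumptions for illustration.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, CUT_DIM = 64, 2, 32  # assumed sizes

client_net = nn.Sequential(nn.Linear(N_FEATURES, CUT_DIM), nn.ReLU())
server_net = nn.Sequential(nn.Linear(CUT_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_CLASSES))
opt_c = torch.optim.Adam(client_net.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(server_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def split_train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    opt_c.zero_grad(); opt_s.zero_grad()
    smashed = client_net(x)                              # client-side forward
    smashed_srv = smashed.detach().requires_grad_(True)  # "sent" to server
    loss = loss_fn(server_net(smashed_srv), y)           # server-side forward
    loss.backward()                          # server backward to cut layer
    smashed.backward(smashed_srv.grad)       # gradient "returned" to client
    opt_s.step(); opt_c.step()
    return loss.item()
```

The appeal in the paper's setting is that a resource-constrained node only trains the small client half, while the heavy upper layers live on a better-provisioned host, and raw traffic features never leave the client in model-output form.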

MLOps Meets Edge Computing: An Edge Platform with Embedded Intelligence Towards 6G Systems
Nikos Psaromanolakis and Vasileios Theodorou (Intracom S.A. Telecom Solutions, Greece); Dimitrios Laskaratos (Intracom SA Telecom Solutions, Greece); Ioannis Kalogeropoulos, Maria Eleftheria Vlontzou and Eleni Zarogianni (Intracom S.A. Telecom Solutions, Greece); Georgios Samaras (Intracom Telecom, Greece)
The evolution towards more human-centered 6G networks requires extending network functionalities with advanced, pervasive automation features. In this direction, cloud-native, softwarized network functions and the integration of extreme/far-edge devices shall be supported by ever more distributed and decomposable systems, such as Edge Cloud environments, while building on AI/ML data-driven mechanisms to improve their performance and resilience under the stringent requirements of next-generation applications. In this work, we propose an intelligence-native Edge Management Platform coupled with MLOps functionalities, the π-Edge Platform, which encompasses automation features for cloud-native lifecycle management of Edge Services. Our architecture incorporates MLOps services and processes that operate as micro-services integrated with the rest of the π-Edge architectural components, ensuring the reliable operation and QoS of Edge network and application services. We experimentally validate our approach with a prototypical implementation of key π-Edge features, including state-of-the-art ML models for performance prediction and anomaly detection, on a multimedia streaming use case based on scenarios from a real production environment.
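
The abstract does not name the specific models used. Purely as an illustration of the kind of embedded anomaly-detection step such a platform might run as a micro-service, here is a minimal sketch using scikit-learn's IsolationForest over assumed edge-service KPI samples; the feature set, thresholds, and triggered action are all invented for illustration.

```python
# Illustrative anomaly detection over edge-service KPI samples, standing in
# for an embedded ML micro-service; features and contamination level are
# assumptions, not the paper's actual models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Assumed KPI features per sample: [cpu_util, latency_ms, throughput_mbps]
normal = rng.normal([0.4, 20.0, 800.0], [0.1, 5.0, 50.0], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

sample = np.array([[0.95, 140.0, 120.0]])   # a degraded streaming session
if model.predict(sample)[0] == -1:          # -1 flags an outlier
    print("anomaly: trigger a lifecycle action (e.g. scale or migrate)")
```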

ML KPI Prediction in 5G and B5G Networks
Phuc N Tran, Oscar Delgado and Brigitte Jaumard (Concordia University, Canada); Fadi Bishay (CIENA, Canada)
Network operators face new challenges in meeting the needs of their customers. These challenges arise from the emergence of new services, such as HD video streaming, IoT, and autonomous driving, and from the exponential growth of network traffic. In this context, 5G and B5G networks have been evolving to accommodate a wide range of applications and use cases. This evolution also brings new features, such as the ability to create multiple end-to-end isolated virtual networks using network slicing. Nevertheless, to ensure quality of service, operators must maintain and optimize their networks in accordance with the key performance indicators (KPIs) and the slice service-level agreements (SLAs). In this paper, we introduce a machine learning (ML) model to estimate throughput in 5G and B5G networks with end-to-end (E2E) network slices. We then combine the predicted throughput with the current network state to derive estimates of other network KPIs, which can be used to further improve service assurance. To assess the efficiency of our solution, we propose a performance metric. Numerical evaluations demonstrate that our KPI prediction model outperforms those derived from other methods with the same or nearly the same computational time.
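
As a rough illustration of the two-stage idea (predict throughput with an ML model, then derive further KPIs from the prediction plus current network state), consider the following sketch. The per-slice features, model choice, synthetic training data, and the naive delay formula are all assumptions made for illustration, not the paper's method.

```python
# Hypothetical two-stage KPI sketch: (1) a regressor estimates per-slice
# throughput, (2) a derived KPI (here, a naive delay estimate) is computed
# from the prediction plus current network state. Everything below is an
# illustrative assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Assumed per-slice features: [PRB_load, active_UEs, avg_CQI, slice_priority]
X = rng.random((1000, 4))
# Synthetic throughput labels loosely tied to channel quality and load.
tput_mbps = 100.0 * X[:, 2] * (1.0 - 0.5 * X[:, 0]) + rng.normal(0, 2, 1000)

model = GradientBoostingRegressor().fit(X, tput_mbps)

state = np.array([[0.7, 0.3, 0.9, 0.5]])            # current slice state
pred_tput = float(model.predict(state)[0])          # stage 1: ML prediction
buffered_mbit = 12.0                                # assumed queued traffic
est_delay_s = buffered_mbit / max(pred_tput, 1e-6)  # stage 2: derived KPI
print(f"predicted {pred_tput:.1f} Mb/s, est. delay {est_delay_s:.3f} s")
```

The design point being illustrated is that only throughput needs a learned model; secondary KPIs can then be computed analytically from it and the observed state, keeping inference cheap.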
