
NET2 – AI/ML in service provisioning (1)

Wednesday, 7 June 2023, 16:00-17:30, Room R2

Session Chair: Panagiotis Demestichas (University of Piraeus, Greece)

Quantum Classifiers for Video Quality Delivery 
Tautvydas Lisas and Ruairí de Fréin (Technological University Dublin, Ireland)
Classical classifiers such as the Support Vector Classifier (SVC) struggle to accurately classify video Quality of Delivery (QoD) time-series due to the challenge of constructing suitable decision boundaries from small amounts of training data. We develop a technique that takes advantage of a quantum-classical hybrid infrastructure called Quantum-Enhanced Codecs (QEC). We evaluate a (1) purely classical, (2) hybrid kernel, and (3) purely quantum classifier for video QoD congestion classification, where congestion is either low, medium or high, using QoD measurements from a real networking test-bed. Findings show that the SVC performs the classification task 4% better in the low congestion state, while the kernel method performs 6.1% and 10.1% better for the medium and high congestion states. Empirical evidence suggests that when the SVC is trained on a very small amount of data, the classification accuracy varies significantly depending on the quality of the training data; the variance in classification accuracy of the quantum models is significantly lower. Classical video QoD classifiers benefit from quantum data embedding techniques: they learn better decision regions using less training data.
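The hybrid kernel approach described above can be illustrated with a minimal sketch: a fidelity ("quantum") kernel is computed by classically simulating a single-qubit angle embedding per feature, and the resulting Gram matrix is fed to a classical SVC. The data, the embedding, and all parameters are illustrative assumptions, not the paper's QEC pipeline:

```python
import numpy as np
from sklearn.svm import SVC

def embed(x):
    # Angle-embed each feature into one qubit: |phi(x)> = tensor_j R_y(x_j)|0>
    # A single qubit after R_y(theta)|0> is [cos(theta/2), sin(theta/2)]
    states = [np.array([np.cos(v / 2), np.sin(v / 2)]) for v in x]
    out = states[0]
    for s in states[1:]:
        out = np.kron(out, s)
    return out

def quantum_kernel(A, B):
    # Fidelity kernel K(x, y) = |<phi(x)|phi(y)>|^2, simulated classically
    SA = np.array([embed(x) for x in A])
    SB = np.array([embed(y) for y in B])
    return np.abs(SA @ SB.T) ** 2

rng = np.random.default_rng(0)
# Toy stand-in for QoD features with three congestion classes
X = rng.uniform(0.0, 1.0, size=(60, 4))
y = np.repeat([0, 1, 2], 20)  # low / medium / high congestion
X[y == 1] += 1.2              # shift the classes apart in angle space
X[y == 2] += 2.4

clf = SVC(kernel="precomputed").fit(quantum_kernel(X, X), y)
preds = clf.predict(quantum_kernel(X, X))
print((preds == y).mean())
```

The key point is that the embedding reshapes the geometry of the data before a conventional maximum-margin classifier is applied, which is where the reduced variance on small training sets would come from.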

A Service-Aware Autoscaling Strategy for Container Orchestration Platforms with Soft Resource Isolation 
Federico Tonini and Carlos Natalino (Chalmers University of Technology, Sweden); Dagnachew Azene Temesgene (Ericsson, Sweden); Zere Ghebretensae (Ericsson, Sweden); Lena Wosinska and Paolo Monti (Chalmers University of Technology, Sweden)
Container orchestration platforms like Kubernetes (K8s) allow easy deployment and management of cloud native services. When deploying their services, service providers need to specify a proper amount of resources to K8s, so that the desired Quality of Service (QoS) to their users can be maintained. To cope with the varying traffic demand coming from users, they can rely on the K8s Horizontal Pod Autoscaling (HPA) mechanism. To ensure that enough resources are available when needed, the standard HPA mechanism relies on resource overprovisioning. In this way, the required QoS is achieved most of (or all) the time but at the expense of additional resources that are allocated (and charged for), while they may stay idle for significant periods of time. A way to reduce overprovisioning is provided by the soft resource isolation of K8s, which allows services to compensate for a temporary lack of resources with shared resources, i.e., idle resources of the machines where services are running. However, during traffic spikes, these idle resources may not be enough to serve the whole demand, degrading the QoS. The HPA, which is not aware of how much demand could not be served, is not always able to correctly estimate the required additional resources, further degrading the QoS. To overcome this, service providers need to leverage overprovisioning, limiting the use of shared resources. In this paper, we propose a novel mechanism for autoscaling resources in K8s that relies on service-related data to avoid the additional degradation introduced by the HPA. The proposed strategy also offers a way to tune overprovisioning and shared resources. Simulation results show that our approach can reduce idle resources by up to 60% compared with the HPA mechanism.
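The contrast between the standard HPA rule and a demand-aware rule can be sketched as follows. The function names, request rates, per-pod capacity, and the 10% overprovisioning margin are illustrative assumptions, not the paper's algorithm; the point is that HPA scales on observed utilization, which saturates during a spike, while a service-aware rule sizes for the total demand including unserved requests:

```python
import math

def hpa_replicas(current_replicas, current_util, target_util):
    # Standard K8s HPA rule: scale by the ratio of observed to target utilization
    return math.ceil(current_replicas * current_util / target_util)

def service_aware_replicas(served_rps, dropped_rps, capacity_rps, overprovision=0.1):
    # Hypothetical service-aware rule: size for the *total* demand,
    # including requests the pods could not serve, plus a tunable margin
    total_demand = served_rps + dropped_rps
    return math.ceil(total_demand * (1 + overprovision) / capacity_rps)

# During a spike, 300 rps are served and 200 rps are dropped; each pod
# handles 100 rps, and CPU utilization saturates at 100% against an 80% target.
print(hpa_replicas(current_replicas=3, current_util=100, target_util=80))          # 4
print(service_aware_replicas(served_rps=300, dropped_rps=200, capacity_rps=100))   # 6
```

Because utilization cannot exceed 100%, the HPA underestimates the spike (4 pods for 500 rps of demand), whereas the demand-aware rule provisions 6 and exposes overprovisioning as an explicit tuning knob.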

DQN-Based Intelligent Application Placement with Delay-Priority in Multi MEC Systems 
Juan Sebastian Camargo (i2CAT Foundation, Spain); Estefania Coronado (Fundació i2CAT, Internet i Innovació Digital a Catalunya, Spain); Claudia Torres-Pérez (i2CAT Foundation, Spain); Javier Palomares, Jr. (Fundació i2CAT, Internet i Innovació Digital a Catalunya, Spain); Muhammad Shuaib Siddiqui (Fundació i2CAT, Internet i Innovació Digital a Catalunya, Spain)
In 5G, Multi-access Edge Computing (MEC) is critical to bring computing and processing closer to users and to enable ultra-low latency communications. When instantiating an application, selecting the MEC host that minimizes latency while still fulfilling the application's requirements is critical. However, as future 6G networks are expected to become even more geo-distributed and handled by multiple levels of management entities, this task becomes extremely difficult, and Machine Learning (ML) is meant to be a native part of this process. In this context, we propose a Reinforcement Learning model that selects the best possible host on which to instantiate a MEC application, aiming to minimize the end-to-end delay while fulfilling the application requirements. The proposed ML method uses Deep Q-Learning over successive environment states, taking an action and rewarding the model when it chooses correctly and penalizing it otherwise. By modifying the reward incentives, we have successfully trained a model that chooses the best host possible delay-wise in a multi-level orchestration scenario, while meeting the applications' requirements. The results obtained via simulation over a series of MEC scenarios show a success rate of up to 96%, optimizing the delay in the long term.
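The reward shaping described above (penalize requirement violations, otherwise prefer low delay) can be sketched with a tabular Q-learning toy over a handful of hosts. The host parameters, penalty value, and learning rate are illustrative assumptions; a DQN generalizes this single-state table to rich environment states:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical MEC hosts: (end-to-end delay in ms, free CPU cores)
hosts = [(5.0, 1), (12.0, 8), (30.0, 16)]
app_cpu_need = 2  # the application's resource requirement

Q = np.zeros(len(hosts))  # single-state toy; one Q-value per placement action
alpha = 0.1               # learning rate

def reward(host):
    delay, cpu = host
    if cpu < app_cpu_need:
        return -100.0  # strongly penalize hosts that violate requirements
    return -delay      # otherwise, lower delay means higher reward

for step in range(2000):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore
    a = rng.integers(len(hosts)) if rng.random() < 0.1 else int(np.argmax(Q))
    Q[a] += alpha * (reward(hosts[a]) - Q[a])

print(int(np.argmax(Q)))  # host 1: lowest delay among hosts meeting requirements
```

Host 0 has the lowest delay but too few cores, so the penalty steers the learned policy to host 1, mirroring the "fulfil requirements first, then minimize delay" priority in the abstract.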

Fully Homomorphic Encryption: Precision Loss in Wireless Mobile Communication 
Sogo Pierre Sanon (DFKI, Germany); Christoph Lipps (German Research Center for Artificial Intelligence, Germany); Hans D. Schotten (University of Kaiserslautern, Germany)
Fully Homomorphic Encryption (FHE) is a cryptographic technique that enables secure computation over encrypted data. It has been considered a promising solution to provide secure and privacy-preserving Fifth Generation (5G) wireless network traffic prediction. However, one of the main challenges of using FHE is the precision loss occurring during homomorphic computations, which can have an impact on network planning and optimization, Quality of Service (QoS) management, and security monitoring. Therefore, this paper discusses the effect of precision loss in 5G wireless network traffic prediction. The results of the underlying study provide experimental upper and lower bounds on the precision loss, as well as the selection of an appropriate precision parameter to balance the trade-off between performance and computational cost. All practical FHE schemes are based on a mathematical problem that appears to be resistant to quantum computers, meaning that the work in this paper will remain valid for future wireless generations even in the quantum era.
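The precision/cost trade-off can be illustrated without an FHE library by mimicking CKKS-style fixed-point encoding: values are scaled to integers, a homomorphic multiply adds rounding noise, and decoding reveals an error that shrinks as the precision parameter grows. The scale choices and noise model are illustrative assumptions, not the paper's measured bounds:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, scale_bits):
    # CKKS-style fixed-point encoding: round x * 2^scale_bits to an integer
    return np.round(x * 2.0 ** scale_bits).astype(np.int64)

def noisy_multiply(a, b, scale_bits, noise_bits=4):
    # A homomorphic multiply doubles the scale; rescaling drops scale_bits
    # again and leaves rounding noise of roughly 2^noise_bits integer units
    prod = a * b
    noise = rng.integers(-2**noise_bits, 2**noise_bits, size=prod.shape)
    return (prod + noise) >> scale_bits

def decode(c, scale_bits):
    return c / 2.0 ** scale_bits

x = rng.uniform(0, 1, 1000)  # toy stand-in for normalized traffic samples
errs = []
for bits in (15, 20, 25):
    ct = noisy_multiply(encode(x, bits), encode(x, bits), bits)
    errs.append(np.max(np.abs(decode(ct, bits) - x * x)))
    print(bits, errs[-1])  # error shrinks as the precision parameter grows
```

Larger scales reduce the relative error but enlarge ciphertexts and slow computation in a real scheme, which is exactly the trade-off the precision parameter selection addresses.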

The 3GPP Common API Framework: Open-Source Release and Application Use Cases 
Anastasios-Stavros Charismiadis (National And Kapodistrian University of Athens, Greece); Jorge Moratinos Salcines (Telefonica, Spain); Dimitris Tsolkas (National and Kapodistrian University of Athens & Fogus Innovations and Services, Greece); David Artuñedo Guillen (Telefónica, Spain); Javier Garcia Rodrigo (Telefónica I+D, Spain)
The 3GPP Common API Framework (CAPIF) has been an integral part of the 3GPP SA6 specifications. It has been defined to facilitate network core exposure towards new application enablers of various vertical industries (including unmanned aerial systems, edge data networks, factories of the future, and vehicular communication systems). Beyond its initial target, we believe that CAPIF can be used as a key standardized API-management framework for secure and interoperable interaction between any API providers and API consumers. In this direction, we developed the CAPIF services and provide them as open-source code. Beyond full compliance with the specifications, our implementation is accompanied by test plans and ready-to-use templates. Finally, as a proof-of-concept evaluation, we describe how the CAPIF services have been applied successfully to an event management system.
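As a rough illustration of the consumer-side interaction, the sketch below builds an onboarding request for the CAPIF API Invoker Management service. The endpoint path and payload fields loosely follow 3GPP TS 29.222, but the URL, field names, and values here are assumptions for illustration, not the exact schema of the open-source release described above:

```python
import json

# Hypothetical CAPIF core function URL (illustrative only)
CAPIF_CORE = "https://capif.example.org"

def build_onboarding_request(invoker_info, public_key_pem):
    # Assemble the (assumed) onboarding payload: the invoker describes
    # itself and supplies a public key for later certificate issuance
    payload = {
        "apiInvokerInformation": invoker_info,
        "onboardingInformation": {"apiInvokerPublicKey": public_key_pem},
    }
    return ("POST",
            f"{CAPIF_CORE}/api-invoker-management/v1/onboardedInvokers",
            json.dumps(payload))

method, url, body = build_onboarding_request(
    "event-mgmt-client", "-----BEGIN PUBLIC KEY-----...")
print(method, url)
```

After onboarding, an API invoker would typically discover published service APIs and obtain security credentials before invoking them, which is the flow the event management proof-of-concept exercises.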
