AI4C5 – Security and Federated Learning
Friday, 6 June 2025, 9:00-10:30, room 1.G
Session Chair: Manuel Gil Pérez (University of Murcia, ES)
GenFV: AIGC-Assisted Federated Learning for Vehicular Edge Intelligence
Xianke Qiang (UESTC, China); Zheng Chang (University of Jyväskylä, Finland); Adrian Kliks (Poznan University of Technology, Poland)
Federated Learning (FL) has emerged as a promising technology for privacy-preserving vehicular applications. However, its performance is frequently limited by challenges such as vehicle mobility, unstable wireless channels, and heterogeneous data distributions. To address these challenges, we propose GenFV, a novel Artificial Intelligence-Generated Content (AIGC)-assisted Federated Learning framework for Vehicular Edge Intelligence (VEI). By leveraging AIGC for data synthesis, GenFV enhances the performance of FL models in dynamic vehicular environments. We introduce a weighting policy based on the Earth Mover’s Distance (EMD) to quantify data heterogeneity, and formulate system time minimization as a mixed-integer non-linear programming (MINLP) problem. To solve it, we first tackle vehicle selection, and then transform and decompose the problem to optimize the allocation of bandwidth, transmission power, and generated data. Experimental results show that GenFV significantly outperforms existing approaches, improving both the performance and robustness of FL in resource-constrained vehicular networks.
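For readers unfamiliar with EMD-based weighting, the sketch below shows one plausible form such a policy could take; it is not the paper's implementation. For categorical class labels, the EMD between a vehicle's local label distribution and a reference (global) distribution reduces to the L1 distance between the two probability vectors; the helper names (`label_emd`, `emd_weights`) and the softmax-style weighting are illustrative assumptions.

```python
import numpy as np

def label_emd(local_counts, global_dist):
    """EMD between a vehicle's label distribution and a reference (global)
    distribution; for categorical labels this is the L1 distance between
    the two probability vectors."""
    local_dist = local_counts / local_counts.sum()
    return np.abs(local_dist - global_dist).sum()

def emd_weights(all_counts, global_dist, temperature=1.0):
    """Hypothetical weighting policy: vehicles whose local data is closer to
    the global distribution (smaller EMD) receive larger aggregation weights."""
    emds = np.array([label_emd(c, global_dist) for c in all_counts])
    scores = np.exp(-emds / temperature)      # smaller EMD -> larger score
    return scores / scores.sum()

# Toy example with 3 vehicles and 4 classes.
global_dist = np.array([0.25, 0.25, 0.25, 0.25])
counts = [np.array([50, 50, 50, 50]),         # balanced vehicle
          np.array([90, 5, 3, 2]),            # highly skewed vehicle
          np.array([40, 30, 20, 10])]         # mildly skewed vehicle
print(emd_weights(counts, global_dist))
```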
TinyML NLP Scheme for Semantic Wireless Sentiment Classification with Privacy Preservation
Ahmed Y. Radwan (York University, Canada); Mohammad Shehab (KAUST, Saudi Arabia); Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST), Saudi Arabia)
Natural Language Processing (NLP) operations, such as semantic sentiment analysis and text synthesis, often raise privacy concerns and demand significant on-device computational resources. Centralized Learning (CL) on the edge provides an energy-efficient alternative but requires collecting raw data, compromising user privacy. While Federated Learning (FL) enhances privacy, it imposes high computational energy demands on resource-constrained devices. We introduce Split Learning (SL) as an energy-efficient, privacy-preserving Tiny Machine Learning (TinyML) framework and compare it to FL and CL in the presence of Rayleigh fading and additive noise. Our results show that SL significantly reduces computational power consumption and CO2 emissions while enhancing privacy, as evidenced by a reconstruction error four times higher than with FL and nearly eighteen times higher than with CL. In contrast, FL offers a balanced trade-off between privacy and efficiency. This study provides insights into deploying privacy-preserving, energy-efficient NLP models on edge devices.
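The sketch below illustrates the split-learning idea the abstract compares against FL and CL: the device runs only the front of the model and transmits intermediate ("smashed") activations rather than raw text. It is a minimal PyTorch illustration under assumed layer sizes and a simple additive-noise stand-in for the Rayleigh-fading channel, not the authors' TinyML implementation.

```python
import torch
import torch.nn as nn

# Client keeps a lightweight embedding front-end on-device; the server keeps
# the classifier head. Only activations cross the wireless link.
client_net = nn.EmbeddingBag(num_embeddings=5000, embedding_dim=32)           # on-device
server_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))    # at the edge server

opt = torch.optim.SGD(list(client_net.parameters()) + list(server_net.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 5000, (8, 20))   # a batch of 8 token-id sequences
labels = torch.randint(0, 2, (8,))         # binary sentiment labels

smashed = client_net(tokens)                            # client-side forward pass
received = smashed + 0.01 * torch.randn_like(smashed)   # crude stand-in for channel noise
logits = server_net(received)                           # server-side forward pass
loss = loss_fn(logits, labels)

opt.zero_grad()
loss.backward()   # gradients flow back through the link to the client's layers
opt.step()
print(float(loss))
```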
Fully-Decentralized Consensus-Based Federated Learning for Cell Outage Detection in Cellular Networks
Andrea Wrona and Simone Gentile (Sapienza University of Rome, Italy); Emanuele De Santis (University of Rome “La Sapienza”, Italy)
Cell Outage Detection (COD) mechanisms in 5G and beyond cellular networks play an increasingly important role in ensuring uninterrupted services to end users by promptly identifying possible outages at the radio and cell levels. Traditionally, COD algorithms have detected anomalies using data aggregated at the core network level, an arrangement that raises scalability and data confidentiality issues. This work proposes a novel fully-decentralized, consensus-based Federated Learning approach that utilizes Random Trees and federated feature removals to identify anomalies at the cell level. It relies only on data available locally at each Base Station (BS), yet benefits from the knowledge acquired by all BSs participating in the federation. The approach is fully decentralized in the sense that no central entity is responsible for aggregating the knowledge of the learning agents. A set of simulations based on a dataset with real cell data demonstrates the effectiveness of the proposed approach in comparison to other baseline approaches, even in the presence of malicious agents attempting to disrupt the learning process.
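To make the "fully decentralized" aspect concrete, the sketch below shows a standard consensus-averaging step over a neighbour graph of base stations: each BS repeatedly averages parameters with its neighbours, and all agents converge to the network-wide mean without any central aggregator. The ring topology, Metropolis weights, and plain parameter vectors are illustrative assumptions; the paper's Random Trees and federated feature removals are not reproduced here.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly-stochastic mixing matrix for an undirected neighbour graph."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

n_bs, dim = 5, 3
rng = np.random.default_rng(0)
theta = rng.normal(size=(n_bs, dim))          # each row: one BS's local parameters

adj = np.zeros((n_bs, n_bs), dtype=int)       # ring graph of base stations
for i in range(n_bs):
    adj[i, (i + 1) % n_bs] = adj[(i + 1) % n_bs, i] = 1

W = metropolis_weights(adj)
for _ in range(50):                           # gossip/consensus iterations
    theta = W @ theta

print(np.allclose(theta, theta.mean(axis=0)))  # True: all BSs agree on the average
```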
Robust Intrusion Detection System with Explainable Artificial Intelligence
Betül Güvenç Paltun (Ericsson Research, Turkey); Ramin Fuladi (Ericsson & Boğaziçi, Turkey); Rim El Malki (Ericsson, R&D, France)
Machine learning (ML) models serve as powerful tools for threat detection and mitigation; however, they also introduce new risks. Adversarial inputs can exploit these models through standard interfaces, creating attack pathways that threaten critical network operations. As ML advances, adversarial strategies become more sophisticated, while conventional defenses such as adversarial training are computationally costly and often fail to provide real-time detection. These methods also typically trade robustness against model performance, which is problematic for applications that demand an instant response. To address this vulnerability, we propose a novel strategy for detecting and mitigating adversarial attacks using eXplainable Artificial Intelligence (XAI). The approach is evaluated in real time within intrusion detection systems (IDS) and leads to a zero-touch mitigation strategy. We further explore attack scenarios in the Radio Resource Control (RRC) layer of the Open Radio Access Network (O-RAN) framework. Extensive testing across these RRC-layer scenarios validates the ability of the framework to detect and counteract RRC-layer attacks combined with adversarial strategies, underscoring the need for robust defensive mechanisms that strengthen IDS against complex threats.
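As a rough illustration of how explainability signals can flag adversarial inputs, the sketch below profiles per-feature occlusion attributions on clean traffic and flags samples whose explanation deviates strongly from that profile. The occlusion attribution, synthetic data, random-forest classifier, and z-score threshold are all illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_clean = rng.normal(size=(500, 10))                        # stand-in "clean" traffic features
y_clean = (X_clean[:, 0] + X_clean[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_clean, y_clean)

def occlusion_attribution(model, x, baseline):
    """Attribution of each feature: drop in predicted probability when that
    feature is replaced by its baseline (mean) value."""
    p_ref = model.predict_proba(x[None, :])[0, 1]
    attr = np.zeros_like(x)
    for j in range(x.size):
        x_occ = x.copy()
        x_occ[j] = baseline[j]
        attr[j] = p_ref - model.predict_proba(x_occ[None, :])[0, 1]
    return attr

# Profile attributions on clean samples, then flag inputs whose explanation
# pattern lies far outside that profile.
baseline = X_clean.mean(axis=0)
clean_attr = np.array([occlusion_attribution(clf, x, baseline) for x in X_clean[:100]])
mu, sigma = clean_attr.mean(axis=0), clean_attr.std(axis=0) + 1e-8

def looks_adversarial(x, z_threshold=6.0):
    z = (occlusion_attribution(clf, x, baseline) - mu) / sigma
    return np.abs(z).max() > z_threshold

print(looks_adversarial(X_clean[200]))
```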
Enhanced Protection of 5G-IoT and Beyond Infrastructures: Evolving Intelligent Strategies for DDoS Attack Multiclass Classification
Pablo Benlloch-Caballero (University of the West of Scotland, United Kingdom (Great Britain)); Jose Maria Alcaraz Calero (University of the West of Scotland & School of Engineering and Computing, United Kingdom (Great Britain)); Qi Wang (University of the West of Scotland, United Kingdom (Great Britain))
In the evolving landscape of next-generation networks beyond the 5th Generation (5G), the persistent threat of cyber-attacks remains a significant concern. 5G-IoT networks facilitate the deployment of numerous constrained and vulnerable IoT devices, increasing the attack surface and making these networks attractive targets for attackers who exploit them in Distributed Denial of Service (DDoS) attacks (e.g., through botnets). As a result, 5G infrastructures and service providers must develop robust systems for detecting and mitigating these threats.
This research paper addresses these challenges by introducing a novel dataset collected from monitoring 5G-IoT multi-tenant traffic with multiple nested encapsulation headers. The dataset features six distinct network traffic classes tailored for Machine Learning (ML) model classification, offering a comprehensive understanding of network behaviour through aggregated features and metrics of 5G-IoT network flows across various topological scenarios. Among the ML models evaluated, the HistGradBoost Classifier (HGBC) performed best, demonstrating resilience across the different network topology scenarios and effectively classifying network flows, thereby enhancing defence mechanisms against potential attacks. The HGBC achieved F1 scores of 99.42% and 98.62% in the two scenarios presented in this study.
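The evaluation style described above can be sketched with scikit-learn's HistGradientBoostingClassifier. The snippet below uses synthetic stand-in data (the paper's 5G-IoT flow dataset is not reproduced here) with six traffic classes and reports a weighted F1 score on a held-out split.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic per-flow features for six traffic classes (illustrative only).
rng = np.random.default_rng(0)
n_flows, n_features, n_classes = 3000, 20, 6
X = rng.normal(size=(n_flows, n_features))
y = rng.integers(0, n_classes, size=n_flows)
X += y[:, None] * 0.5                          # give each class a separable shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = HistGradientBoostingClassifier(max_iter=200, random_state=0).fit(X_tr, y_tr)
print("weighted F1:", f1_score(y_te, clf.predict(X_te), average="weighted"))
```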