
SAQ2 – Security threats and AI/ML

Friday, 6 June 2025, 9:00-10:30, room 1.A

Session Chair: Krzysztof Wesolowski (Poznan Univ. Technology, PL)

DDFL: a Robust Clustering-Based Defense Against Poisoning Attacks in Decentralized Federated Learning
Asanka Amarasinghe (University of Oulu, Finland & Centre for Wireless Communication (CWC), Finland); Yushan Siriwardhana, Tharaka Mawanane Hewa and Mika E Ylianttila (University of Oulu, Finland)
Federated Learning (FL) is a privacy-preserving decentralized Machine Learning (ML) paradigm in which massively distributed users are coordinated by a central server during the training process. The FL server is a single point of failure; therefore, the absence of or a delay at the server severely affects the training process. This has motivated the extension of FL into Decentralized Federated Learning (DFL), where learning happens in a peer-to-peer manner without the involvement of a central server. The peer-to-peer sharing of model updates further increases the threat of poisoning attacks, an inherent vulnerability of FL systems. Defenses against poisoning attacks in DFL systems are not extensively discussed in state-of-the-art research, and the existing algorithms perform poorly when the client data distribution is non-IID. In this work, we extensively evaluate the performance of existing defenses under non-IID client data distributions and propose DDFL, a novel clustering-based defense mechanism for DFL systems. DDFL does not rely on empirical parameters but instead considers the model update history for the defense. Moreover, we provide a comprehensive analysis of DDFL's effectiveness with non-IID data under various poisoning attack scenarios. Our results show that DDFL removes poisoners more effectively than state-of-the-art techniques in non-IID scenarios.
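
The abstract does not detail DDFL's exact algorithm, so the following Python sketch only illustrates the general idea of a clustering-based filter for peer updates in DFL: flattened model updates are clustered, and only the cluster anchored to a history-based reference (rather than an empirical threshold) is accepted. The function names and the use of scikit-learn's KMeans are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def filter_updates(update_history, current_updates, n_clusters=2):
    """Cluster peer updates and keep the cluster anchored to past behaviour.

    update_history : list of per-round mean updates (history-based reference
                     instead of a fixed empirical threshold)
    current_updates: dict mapping peer id -> flattened model update (1-D array)
    """
    peer_ids = list(current_updates.keys())
    X = np.stack([current_updates[p] for p in peer_ids])

    # Append a history-based reference point so the accepted cluster is the
    # one consistent with previous (assumed benign) rounds.
    ref = np.mean(update_history, axis=0) if update_history else None
    X_fit = np.vstack([X, ref]) if ref is not None else X

    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_fit)
    peer_labels = labels[: len(peer_ids)]

    # Keep the cluster containing the reference; fall back to the majority cluster.
    keep_label = labels[-1] if ref is not None else np.bincount(peer_labels).argmax()

    return [p for p, l in zip(peer_ids, peer_labels) if l == keep_label]
```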

SecFLH: Defending Federated Learning-Based IoT Health Prediction Systems Against Poisoning Attacks
Sanoj Bhanuka Liyanage and Venuranga Weerawardhane (University of Ruhuna, Sri Lanka); Jalitha Pramod Kheminda (University of Ruhuna Galle, Sri Lanka & University of Ruhuna, Sri Lanka); Yushan Siriwardhana (University of Oulu, Finland); Thilina Weerasinghe (University of Ruhuna, Sri Lanka); Madhusanka Liyanage (University College Dublin, Ireland)
Poisoning attacks in Federated Learning (FL) train the model to learn towards a malicious objective. While existing defenses against poisoning attacks are effective, their performance degrades substantially in the presence of non-IID data. This paper introduces SecFLH, a novel defense mechanism for FL systems designed to counter targeted model poisoning attacks, particularly in the non-IID data environments often encountered in healthcare IoT applications. Unlike traditional aggregation defenses, SecFLH employs a multi-step approach, incorporating cosine distance analysis, HDBSCAN clustering, centroid selection, and adaptive clipping to effectively isolate and exclude malicious client updates. Experimental results on benchmark datasets, including MNIST, CIFAR-10, and real-world healthcare data, validate SecFLH's robustness in maintaining model accuracy even with a high percentage of malicious clients. The proposed algorithm demonstrates resilience across varying non-IID scenarios, highlighting its practical potential for secure FL applications in dynamic, distributed environments.
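
As a rough illustration of the multi-step pipeline named in the abstract (cosine distance analysis, HDBSCAN clustering, and adaptive clipping), the hedged Python sketch below aggregates only the largest non-noise cluster of client updates and clips update norms to the median benign norm. It assumes the standalone hdbscan package and scikit-learn; the function name, fallback behaviour, and clipping rule are illustrative assumptions, not SecFLH's published algorithm.

```python
import numpy as np
import hdbscan  # assumption: the standalone hdbscan package is available
from sklearn.metrics.pairwise import cosine_distances

def secflh_style_aggregate(client_updates, min_cluster_size=3):
    """Illustrative aggregation: cosine-distance clustering + adaptive clipping.

    client_updates: (n_clients, n_params) array of flattened model updates.
    Returns the aggregated update computed from clients judged benign.
    """
    # Pairwise cosine distances between client updates.
    dist = cosine_distances(client_updates).astype(np.float64)

    clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size,
                                metric="precomputed")
    labels = clusterer.fit_predict(dist)

    # Keep the largest non-noise cluster as the benign group.
    valid = labels[labels >= 0]
    if valid.size == 0:
        benign_idx = np.arange(len(client_updates))   # fallback: keep everyone
    else:
        benign_label = np.bincount(valid).argmax()
        benign_idx = np.where(labels == benign_label)[0]

    benign = client_updates[benign_idx]

    # Adaptive clipping: bound each accepted update by the median benign norm.
    norms = np.linalg.norm(benign, axis=1)
    clip = np.median(norms)
    scale = np.minimum(1.0, clip / (norms + 1e-12))
    clipped = benign * scale[:, None]

    return clipped.mean(axis=0)
```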

AI Model Signing for Integrity Verification
Adrian Brodzik (Warsaw University of Technology & Orange Innovation Poland, Poland); Wojciech Mazurczyk (Warsaw University of Technology, Poland)
The availability of open-source resources, datasets, and pre-trained Artificial Intelligence (AI) models has increased the overall adoption of AI systems across various sectors. However, this growth has introduced significant security challenges similar to those faced in software development. Threat actors can infiltrate public and private AI model repositories and introduce malicious models, leading to arbitrary code execution upon deserialization. To address this issue, this paper highlights the importance of model signing, a method that uses digital signatures to confirm the provenance of AI models and to ensure that they have not been tampered with. Taking advantage of Public Key Infrastructure (PKI), model signing builds trust in the development and use of AI, especially in critical areas where malicious alterations could cause serious problems. In addition, we investigate four novel cryptographic signing approaches for AI models: Single Hash Single Signature (SHSS), Multiple Hashes Multiple Signatures (MHMS), Multiple Hashes Single Signature (MHSS), and Multiple Hashes Single Signature Chain (MHSSC), assessing their security, performance, and usability. The suggested methods aim to enhance the security of AI model repositories by ensuring that distributed models are traceable and verifiably safe for deployment.
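
To make the simplest of the four variants concrete, the sketch below shows an SHSS-style flow under the assumption that a single SHA-256 digest of the serialized model file is signed with an Ed25519 key via the cryptography library; certificate handling and the PKI chain of trust mentioned above are omitted, and the function names are illustrative rather than the paper's API.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def _hash_model(model_path: str) -> bytes:
    """Stream the serialized model file and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.digest()

def sign_model(model_path: str,
               private_key: ed25519.Ed25519PrivateKey) -> bytes:
    """SHSS-style signing: one hash of the whole model, one signature."""
    return private_key.sign(_hash_model(model_path))

def verify_model(model_path: str, signature: bytes,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    """Recompute the hash and verify the signature before deserializing."""
    try:
        public_key.verify(signature, _hash_model(model_path))
        return True
    except InvalidSignature:
        return False
```

Verification would run before any deserialization step, so a tampered artifact is rejected before its payload can execute.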

A Zero-Knowledge-Based Approach for Secure Inter-Slice Communication in 6G Networks
Alfonso Egio (Fundació i2CAT, Internet i Innovació Digital a Catalunya, Spain); Alvaro Le Monnier (i2CAT, Spain); Muhammad Asad (i2CAT Foundation Barcelona & Universitat Pompeu Fabra Barcelona, Spain); Maxime Compastie (i2CAT Foundation, Spain); Muhammad Shuaib Siddiqui (Fundació i2CAT, Internet i Innovació Digital a Catalunya, Spain)
As the sixth generation (6G) of cellular networks is set to be deployed around 2030, the need for ubiquitous connectivity and increased resource demands will press for further collaboration between different stakeholders to constitute an efficient and resilient network fabric. In practice, the deployment of multiple network slices across multiple domains is one of the most promising approaches to realize this vision. However, from a privacy standpoint, this introduces additional risks for customers, as a malicious slice may impersonate a legitimate one to exfiltrate network traffic. It is therefore necessary to perform slice identification and authentication to prevent any collaboration with a non-legitimate network domain, while avoiding the exchange of sensitive data before authentication is validated. In response to this challenge, this paper presents a privacy-preserving authentication framework for inter-slice communication. The framework integrates Zero-Knowledge Proofs (ZKPs) for privacy-preserving authentication and Public Key Cryptography (PKC) for secure identity management, ensuring that no sensitive information is jeopardized before a slice can be trusted. We present an implementation prototype and evaluate it in a controlled slicing environment, demonstrating its ability to maintain performance under varying operational constraints. Quantitative results highlight the efficiency and limited resource consumption of the authentication model, its scalability in distributed environments, and its robustness against security threats.
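
The abstract does not specify which ZKP construction the framework uses, so the toy Python sketch below only illustrates the underlying idea with a Schnorr-style, Fiat-Shamir non-interactive proof: a slice proves knowledge of its private identity key without revealing it, and the verifier checks the proof against the registered public key. The group parameters are deliberately small and not secure, and all names are illustrative; a real deployment would rely on standardized groups or a dedicated ZKP library.

```python
import hashlib
import secrets

# Toy discrete-log group for demonstration only (NOT secure for production).
P = 2**127 - 1          # toy prime modulus
Q = P - 1               # exponents are reduced modulo P - 1
G = 3                   # illustrative generator

def keygen():
    """Slice identity: private key x, public key y = g^x mod p (registered via PKI)."""
    x = secrets.randbelow(Q - 1) + 1
    y = pow(G, x, P)
    return x, y

def prove(x: int, context: bytes):
    """Prover (slice) side, made non-interactive with the Fiat-Shamir heuristic."""
    r = secrets.randbelow(Q - 1) + 1
    t = pow(G, r, P)                                   # commitment
    c = int.from_bytes(
        hashlib.sha256(t.to_bytes(32, "big") + context).digest(), "big") % Q
    s = (r + c * x) % Q                                # response
    return t, s

def verify(y: int, context: bytes, t: int, s: int) -> bool:
    """Verifier side: checks g^s == t * y^c without ever learning x."""
    c = int.from_bytes(
        hashlib.sha256(t.to_bytes(32, "big") + context).digest(), "big") % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```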

Mitigating Evasion Attacks in Fog Computing Resource Provisioning Through Proactive Hardening
Younes Salmi and Hanna Bogucka (Poznan University of Technology, Poland)
This paper investigates the susceptibility to model integrity attacks that overload virtual machines assigned by the k-means algorithm used for resource provisioning in fog networks. The considered k-means algorithm runs two phases iteratively: offline clustering to form clusters of requested workload, and online classification of new incoming requests into the offline-created clusters. First, we consider an evasion attack against the classifier in the online phase. A threat actor launches an exploratory attack using query-based reverse engineering to discover the Machine Learning (ML) model (the clustering scheme). Then, a passive causative (evasion) attack is triggered in the offline phase. To defend the model, we suggest a proactive method using adversarial training to introduce attack robustness into the classifier. Our results show that our mitigation technique effectively maintains the stability of the resource provisioning system against attacks.
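
The two-phase scheme and the adversarial-training defense can be sketched as follows, assuming scikit-learn's KMeans. The Gaussian perturbation used here is a simplified stand-in for the paper's hardening procedure, and all function and parameter names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def offline_cluster(workloads, k=4, adversarial_noise=0.0, seed=0):
    """Offline phase: cluster historical workload requests into k VM classes.

    If adversarial_noise > 0, perturbed copies of the samples are added before
    clustering, a simple adversarial-training-style hardening so the learned
    centroids are less sensitive to crafted boundary requests.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(workloads, dtype=float)
    if adversarial_noise > 0:
        perturbed = X + rng.normal(scale=adversarial_noise, size=X.shape)
        X = np.vstack([X, perturbed])
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)

def online_classify(km, request):
    """Online phase: assign an incoming request to the nearest offline cluster."""
    request = np.asarray(request, dtype=float).reshape(1, -1)
    return int(km.predict(request)[0])
```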
