PHY3 – AI-based PHY design

Wednesday, 5 June 2024, 11:00-13:00, room Galapagos

Session Chair: Sofie Pollin (KU Leuven, BE)

Multi-Objective Deep Reinforcement Learning for 5G Base Station Placement to Support Localisation for Future Sustainable Traffic
Ahmed Al-Tahmeesschi (University of York, UK); Jukka Talvitie (Tampere University, Finland); Miguel López-Benítez (University of Liverpool, UK); Hamed Ahmadi (University of York, UK); Laura Ruotsalainen (University of Helsinki, Finland)
Millimeter-wave (mmWave) communication is a key enabler for next-generation transportation systems. However, in urban scenarios, mmWave links are highly susceptible to blockage and shadowing. Base station (BS) placement is therefore a crucial infrastructure design task, in which coverage requirements must be met while simultaneously supporting localisation. This work assumes one pre-deployed BS and seeks the position of a second BS that supports both localisation accuracy and coverage rate in an urban scenario. To solve this complex multi-objective optimisation problem, we use deep reinforcement learning (DRL). Concretely, this work proposes: 1) a three-layered grid for state representation as the input of the DRL agent, which enables it to adapt to changes in the wireless environment, represented by varying the position of the pre-deployed BS, and 2) a reward function designed to let the DRL agent solve the multi-objective problem. Numerical analysis shows that the proposed deep Q-network (DQN) model can learn from the complex radio environment represented by the terrain map and finds solutions equal or close to those of an exhaustive search, which is used as a benchmark. In addition, we show that optimising the coverage rate alone does not improve localisation accuracy, so there is a trade-off between the two objectives.
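The reward design mentioned in point 2) can be illustrated with a minimal sketch. The weighted combination, the weights, and the error normalisation below are assumptions for illustration, not the authors' actual reward function:

```python
# Illustrative sketch (not the paper's code): a scalar reward that trades off
# coverage rate against localisation accuracy for a candidate BS placement.
# The weights w_cov/w_loc and the error normalisation are assumptions.

def placement_reward(coverage_rate, loc_error_m,
                     max_loc_error_m=10.0, w_cov=0.5, w_loc=0.5):
    """Return a reward in [0, 1] for a DRL agent evaluating a BS placement.

    coverage_rate -- fraction of grid points meeting the coverage requirement
    loc_error_m   -- mean localisation error (metres) under this placement
    """
    # Map the localisation error to a [0, 1] score (lower error -> higher score).
    loc_score = max(0.0, 1.0 - loc_error_m / max_loc_error_m)
    return w_cov * coverage_rate + w_loc * loc_score

# A placement with 90% coverage and a 2 m mean localisation error:
r = placement_reward(0.9, 2.0)  # -> 0.5*0.9 + 0.5*0.8 = 0.85
```

Shaping both objectives into one bounded scalar like this lets a standard DQN be applied unchanged; the trade-off reported in the abstract corresponds to moving the weights.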

Enhancement of Transient-Based Radio Frequency Fingerprinting with Smoothing and Gradient Functions
Gianmarco Baldini (Joint Research Centre – European Commission, Italy)
Radio frequency fingerprinting identification (RFFI) is a promising physical-layer classification and authentication technique based on the intrinsic hardware imperfections of electronic systems in general, and of wireless emitters in particular, as investigated in this paper. Hardware differences in the electronic components of the radio frequency (RF) front end of a wireless emitter propagate into the structure and shape of the transmitted signal (the fingerprints). After digitization, analysis of the signal can be used to distinguish the source emitters. Noise or signal attenuation may obfuscate the RF fingerprints (RFF), and significant research effort in the literature has focused on removing disturbances from the signal or enhancing the fingerprints. One potential issue is that de-noising techniques may remove the very fingerprints needed for emitter identification. This paper proposes a combination of denoising and enhancing functions, whose contributions to the signal analysis are weighted using a feature-selection algorithm applied to the spectral representation of the signal. A machine learning algorithm is used to implement the RFFI. The approach is applied to a recently published data set of 10 ZigBee devices, where only the transient portion of the signal is used for identification. The results show that the proposed approach outperforms the direct application of the machine learning algorithm to the spectral representation of the signal across a range of signal-to-noise ratio (SNR) values.
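The denoise/enhance combination can be sketched as follows. This is a hypothetical illustration: the moving-average smoother, the gradient as the enhancing function, and the variance-ratio weighting (standing in for the paper's feature-selection step) are all assumptions:

```python
# Hypothetical sketch of the idea above: apply a smoothing (denoising) function
# and a gradient (enhancing) function to a transient, then weight the two
# spectral representations before classification. The window size and the
# variance-based weighting are stand-ins for the paper's actual choices.
import numpy as np

def smooth(x, w=5):
    return np.convolve(x, np.ones(w) / w, mode="same")  # moving average

def spectral_features(x):
    return np.abs(np.fft.rfft(x))  # magnitude spectrum of the transient

def combined_features(transient):
    f_smooth = spectral_features(smooth(transient))
    f_grad = spectral_features(np.gradient(transient))
    # Weight each representation by its share of feature variance
    # (a simple proxy for a feature-selection score).
    v1, v2 = f_smooth.var(), f_grad.var()
    w1, w2 = v1 / (v1 + v2), v2 / (v1 + v2)
    return np.concatenate([w1 * f_smooth, w2 * f_grad])

rng = np.random.default_rng(0)
feats = combined_features(rng.standard_normal(256))  # feed these to a classifier
```

Keeping both the smoothed and the gradient-enhanced views, rather than denoising alone, is what addresses the risk of scrubbing away the fingerprints themselves.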

Goal-Oriented and Semantic Communication in 6G AI-Native Networks: The 6G-GOALS Approach
Emilio Calvanese Strinati (CEA-LETI, France); Paolo Di Lorenzo (Sapienza University of Rome, Italy); Vincenzo Sciancalepore (NEC Laboratories Europe GmbH, Germany); Adnan Aijaz (Toshiba Europe Ltd, UK); Marios Kountouris (University of Granada, Spain & EURECOM, France); Deniz Gündüz (Imperial College London, UK); Petar Popovski (Aalborg University, Denmark); Mohamed Sana (CEA-LETI Grenoble, France); Photios A. Stavrou (EURECOM, France); Beatriz Soret (Universidad de Malaga, Spain); Nicola Cordeschi (University of Surrey, UK); Simone Scardapane (Sapienza University of Rome, Italy); Mattia Merluzzi (CEA-LETI, France); Lanfranco Zanzi (NEC Laboratories Europe, Germany); Mauro Boldi (Telecom Italia, Italy); Tony Q. S. Quek (Singapore University of Technology and Design, Singapore); Nicola di Pietro (Hewlett Packard Enterprise, Italy); Olivier Forceville (HPE France, France); Francesca Costanzo (CEA-LETI, France); Peizheng Li (Toshiba Europe Ltd, UK)
Recent advances in AI technologies have notably expanded device intelligence, fostering federation and cooperation among distributed AI agents. These advancements impose new requirements on future 6G mobile network architectures. To meet these demands, it is essential to transcend classical boundaries and integrate communication, computation, control, and intelligence. This paper presents the 6G-GOALS approach to goal-oriented and semantic communications for AI-native 6G networks. The proposed approach incorporates semantic, pragmatic, and goal-oriented communication into AI-native technologies, aiming to facilitate information exchange between intelligent agents in a more relevant, effective, and timely manner, thereby optimizing bandwidth, latency, energy, and electromagnetic field (EMF) radiation. The focus is on distilling data to its most relevant form and tersest representation, aligned with the source's intent or the destination's objectives and context, or serving a specific goal. 6G-GOALS builds on three fundamental pillars: i) AI-enhanced semantic data representation, sensing, compression, and communication, ii) foundational AI reasoning and causal semantic data representation, contextual relevance, and value for goal-oriented effectiveness, and iii) sustainability enabled by more efficient wireless services. Finally, we illustrate two proofs of concept that implement semantic, goal-oriented, and pragmatic communication principles in near-future use cases. Our study covers the project's vision, methodologies, and potential impact.

Indoor Positioning with Probabilistic Graphical Models in RIS-Enhanced mmWave MIMO Systems
Leonardo Terças and Rafaela Schroeder (University of Oulu, Finland); Jiguang He (Technology Innovation Institute, United Arab Emirates); Markku Juntti (University of Oulu, Finland)
We apply a Bayesian framework to indoor positioning in a reconfigurable intelligent surface (RIS)-assisted MIMO communication system. Using a two-stage estimation approach, we first estimate the channel parameters with atomic norm minimization (ANM) via downlink training. In the second stage, we model the system as a probabilistic graphical model (PGM) and use the No-U-Turn Sampler (NUTS) to approximate the posterior distribution of the user coordinates from the estimated channel parameters. Unlike existing algorithms, our framework does not require additional pilot acquisition in the second stage. Numerical results are presented using kernel density estimation, the cumulative distribution function (CDF), and the root mean square error (RMSE). We compare our method with the non-linear least squares (NLS) estimator and the Cramér-Rao lower bound to evaluate its performance. In particular, when the signal-to-noise ratio (SNR) exceeds -15 dB, the proposed method significantly outperforms the NLS estimator, yielding an accuracy improvement of over 50% in the considered set-up.
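The second stage (posterior sampling over coordinates given estimated channel parameters) can be illustrated with a toy example. The paper uses NUTS; the random-walk Metropolis sampler below is only a stand-in to show the posterior-sampling structure, and the anchor geometry, range-based likelihood, and noise level are all assumptions:

```python
# Stand-in sketch of the second stage: approximate the posterior over a 2-D
# position from estimated channel parameters (here simplified to noisy ranges
# to known anchors). NUTS would be used in practice; a random-walk Metropolis
# sampler is used below purely for illustration.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # assumed BS/RIS positions
true_pos = np.array([4.0, 3.0])
sigma = 0.3  # assumed range-estimate noise (metres)

rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, 3)

def log_post(p):
    # Gaussian likelihood of the observed ranges; flat prior over the room.
    pred = np.linalg.norm(anchors - p, axis=1)
    return -np.sum((ranges - pred) ** 2) / (2 * sigma**2)

pos, samples = np.array([5.0, 5.0]), []
for _ in range(5000):
    prop = pos + rng.normal(0, 0.3, 2)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(pos):
        pos = prop                               # accept
    samples.append(pos)

estimate = np.mean(samples[1000:], axis=0)       # posterior mean after burn-in
```

The point of sampling the full posterior, rather than solving a point estimate as NLS does, is that the CDF and density plots reported in the abstract fall out of the same set of samples.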

Machine Vision Aided Adaptive Beamforming Decision for IRS-Assisted Wireless Networks
Muteen Munawar (Ghent University & IMEC, Belgium); Mamoun Guenach (Imec, Leuven, Belgium); Ingrid Moerman (Ghent University – IMEC, Belgium)
This study leverages machine vision to assist communication in wireless networks, with a specific focus on intelligent reflecting surface (IRS)-assisted wireless networks. Instead of depending on traditional schemes such as alternating optimization or semidefinite relaxation to maximize signal strength in an IRS-assisted network, which are computationally expensive and often impractical, we use visual data to make low-complexity beamforming decisions for users. Our approach employs a ceiling camera with a fish-eye view covering a wide communication area. The user within the network is first detected using the YOLOv2 object detection method. We then propose closed-form analytical expressions to determine the distances between the access point (AP), the user, and the IRS. Accounting for the non-uniform nature of the fish-eye image, we introduce a novel method to determine non-uniform pixel weights using trigonometric techniques. Based on the calculated distances, we make beamforming decisions depending on the user's proximity to the AP or the IRS. The proposed method significantly reduces computational complexity, making it nearly independent of the number of reflecting elements at the IRS. Simulation results indicate that the proposed approach incurs a substantially lower computational cost than not only conventional schemes such as alternating optimization and semidefinite-relaxation-based convex solvers but also low-complexity heuristic schemes.
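The geometric step can be sketched under a common fish-eye assumption. The equidistant projection model r = f·θ, the numeric parameters, and the nearest-node decision rule below are illustrative assumptions, not the paper's exact closed-form expressions:

```python
# Illustrative geometry sketch (assumed equidistant fish-eye model r = f*theta):
# map a detected user's radial pixel distance to a ground distance under the
# ceiling camera, then pick the nearer of AP and IRS for the beam decision.
import math

def pixel_to_ground_distance(r_px, f_px, cam_height_m):
    """Ground distance from the point directly below the ceiling camera.

    r_px         -- radial pixel distance of the user from the image centre
    f_px         -- fish-eye focal length in pixels (equidistant model)
    cam_height_m -- camera mounting height above the floor plane
    """
    theta = r_px / f_px              # incidence angle under r = f * theta
    return cam_height_m * math.tan(theta)

def beam_decision(d_user_ap, d_user_irs):
    # Low-complexity rule: steer toward whichever node the user is closer to.
    return "AP" if d_user_ap <= d_user_irs else "IRS"

d = pixel_to_ground_distance(r_px=300, f_px=400, cam_height_m=3.0)
choice = beam_decision(d_user_ap=d, d_user_irs=5.0)
```

Because the decision reduces to comparing two scalar distances, its cost does not grow with the number of IRS reflecting elements, which is the complexity advantage the abstract claims.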

A Deep Learning Approach in RIS-Based Indoor Localization
Rafael A. P. Aguiar (INESC TEC & Faculty of Engineering, University of Porto, Portugal); Nuno M. Paulino and Luis M. Pessoa (INESC TEC & Faculty of Engineering, University of Porto, Portugal)
In the domain of RIS-based indoor localization, our work introduces two distinct approaches to real-world challenges. The first is based on deep learning, employing a Long Short-Term Memory (LSTM) network. The second, a novel LSTM-PSO hybrid, combines deep learning with particle swarm optimization (PSO). Our simulations cover practical scenarios, including variations in RIS placement and the intricate dynamics of multipath effects, all under Non-Line-of-Sight conditions. Our methods achieve very high reliability, obtaining centimeter-level accuracy at the 98th percentile (near worst case) across a range of conditions, including in the presence of multipath. Furthermore, our hybrid approach showcases remarkable resolution, achieving sub-millimeter-level accuracy in numerous scenarios.
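The hybrid idea, a coarse learned estimate refined by PSO, can be sketched as below. The quadratic fitness standing in for the RIS measurement mismatch, the coarse estimate, and the PSO hyper-parameters are all assumptions for illustration:

```python
# Sketch of the hybrid idea: a coarse position (standing in for the LSTM
# output) is refined by particle swarm optimisation against a fitness built
# from the measurements. The fitness below is a placeholder assumption.
import numpy as np

rng = np.random.default_rng(2)
true_pos = np.array([2.0, 1.5])
coarse = np.array([2.4, 1.1])  # stands in for the LSTM's estimate

def fitness(p):
    # Placeholder for the mismatch between measured and predicted RIS
    # responses; here simply the squared distance to the true position.
    return np.sum((p - true_pos) ** 2)

# Standard PSO refinement, with the swarm seeded around the coarse estimate.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = coarse + rng.normal(0, 0.5, (n, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

refined = gbest
```

Seeding the swarm from the network's output is what makes the hybrid converge quickly: PSO only has to search a small neighbourhood rather than the whole room.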

Turbo-AI, Part VI: Achieving Robust Downlink AI-Based Channel Prediction
Yejian Chen (Bell Laboratories, Nokia, Germany); Thorsten Wild (Nokia Bell Labs, Germany); Christophe Henry (Nokia Networks, France)
In the previous papers of this series, we proposed a Machine Learning (ML) based channel estimator and investigated it from different aspects. Owing to its iterative architecture, it was named Turbo-AI; it fully exploits the high-dimensional granularity of the wireless channel to approach the channel estimation bound with low complexity. Turbo-AI establishes a beneficial performance-complexity trade-off for enabling challenging use cases toward future 6th Generation (6G) wireless communications. In this paper, we target the challenging channel prediction problem in a Time Division Duplex (TDD) system. We propose a concept named Turbo-Predictor, which concatenates two Neural Networks (NNs): the first provides an initial channel prediction, and the second stage passes this approximation as a virtual observation to Turbo-AI, in which the channel estimation and prediction parts are jointly and iteratively processed. Furthermore, we show a potential enhancement of channel prediction obtained by selecting independent prediction paths through the multi-dimensional Sounding Reference Signal (SRS) structures. 5G New Radio (NR) compliant link-level simulations demonstrate that Turbo-Predictor supports different SRS configurations and reaches high prediction quality for terminals with medium to high mobility, up to 120 km/h, which can be regarded as a solid prerequisite for robust downlink performance in a TDD system.
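The two-stage structure can be sketched on a toy channel track. Both stages below are crude stand-ins (linear extrapolation for the first NN, multi-lag fusion for the iterative Turbo-AI stage), and the sinusoidal channel and noise level are assumptions; only the pipeline shape mirrors the description above:

```python
# Structure-only sketch of Turbo-Predictor's two stages (all models are
# stand-ins): stage 1 extrapolates the next channel sample from past noisy
# estimates; stage 2 refines that "virtual observation" by fusing predictions
# from several lags, echoing the independent prediction paths through the
# SRS structure.
import numpy as np

def stage1_predict(y):
    # Stand-in for the first NN: one-step linear extrapolation.
    return 2 * y[-1] - y[-2]

def stage2_refine(virtual_obs, y, lags=(2, 3, 4)):
    # Stand-in for the iterative Turbo-AI stage: average the virtual
    # observation with extrapolations over several independent lags.
    preds = [virtual_obs]
    for k in lags:
        preds.append(y[-1] + (y[-1] - y[-1 - k]) / k)
    return np.mean(preds)

rng = np.random.default_rng(3)

def trial():
    t = np.arange(12)
    h = np.cos(0.2 * t)                      # toy Doppler-like channel track
    y = h + rng.normal(0.0, 0.1, t.size)     # noisy uplink channel estimates
    truth = np.cos(0.2 * 12)                 # channel at the next instant
    virtual = stage1_predict(y)
    refined = stage2_refine(virtual, y)
    return abs(virtual - truth), abs(refined - truth)

errs = np.array([trial() for _ in range(300)])  # columns: stage-1, refined
```

Even with these crude stand-ins, averaging over independent prediction paths reduces the noise amplified by single-path extrapolation, which is the intuition behind the SRS-path selection described above.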
