
PHY2 – AI/ML in the PHY layer

Wednesday, 7 June 2023, 16:00-17:30, Room G2

Session Chair: Yejian Chen (Bell Laboratories, Nokia, Germany)

Unsupervised ANN-Based Equalizer and Its Trainable FPGA Implementation
Jonas Ney (RPTU Kaiserslautern-Landau, Germany); Vincent Lauinger (Karlsruhe Institute of Technology, Germany); Laurent Schmalen (Karlsruhe Institute of Technology (KIT), Germany); Norbert Wehn (RPTU Kaiserslautern-Landau, Germany)
In recent years, communication engineers have placed strong emphasis on artificial neural network (ANN)-based algorithms with the aim of increasing the flexibility and autonomy of the system and its components. In this context, unsupervised training is of special interest, as it enables adaptation without the overhead of transmitting pilot symbols. In this work, we present a novel ANN-based, unsupervised equalizer and its trainable field programmable gate array (FPGA) implementation. We demonstrate that our custom loss function allows the ANN to adapt to varying channel conditions, approaching the performance of a supervised baseline. Furthermore, as a first step towards a practical communication system, we design an efficient FPGA implementation of our proposed algorithm, which achieves a throughput on the order of Gbit/s, outperforming a high-performance GPU by a large margin.
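
A minimal sketch of the idea of pilot-free (unsupervised) equalizer adaptation, assuming PyTorch. The paper's custom loss function is not reproduced here; a constant-modulus-style criterion is used purely as an illustrative stand-in, and all layer sizes are assumptions.

import torch
import torch.nn as nn

class AnnEqualizer(nn.Module):
    def __init__(self, taps=16, hidden=32):
        super().__init__()
        # Real-valued network operating on a sliding window of I/Q samples.
        self.net = nn.Sequential(
            nn.Linear(2 * taps, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # equalized I/Q output
        )

    def forward(self, rx_window):          # rx_window: (batch, 2*taps)
        return self.net(rx_window)

def blind_loss(eq_out, target_modulus=1.0):
    # Constant-modulus criterion: penalize deviation of |y|^2 from a constant,
    # so no transmitted pilot symbols are needed for the gradient.
    power = eq_out.pow(2).sum(dim=-1)
    return ((power - target_modulus) ** 2).mean()

eq = AnnEqualizer()
opt = torch.optim.Adam(eq.parameters(), lr=1e-3)
rx = torch.randn(256, 32)                  # placeholder received samples
for _ in range(100):                       # adapt without pilot symbols
    opt.zero_grad()
    loss = blind_loss(eq(rx))
    loss.backward()
    opt.step()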

Temporal Self-Organizing Maps for Prediction of Feature Evolution
Prayag Gowgi and Vijaya Parampalli Yajnanarayana (Ericsson Research, India)
Future wireless network deployments in 5G and beyond are dense, operate at multiple frequency bands, and support higher capacity by opportunistically selecting among those bands. This results in frequent measurements across different frequency bands, increased battery drain at the user equipment (UE), excessive traffic in the control plane, and higher latency. In this study, we propose spatio-temporal self-organizing maps for predicting the time evolution of multiple downlink and uplink features by computing the Markov order of the multivariate time series. We develop an algorithm to estimate the Markov order and use it in conjunction with spatio-temporal self-organizing maps to predict signal dynamics. The proposed algorithm is validated against publicly available datasets and Ericsson's 5G test-bed datasets, and is able to predict signals up to 13 and 28 seconds into the future for fast- and slow-moving UEs, respectively.
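
The following NumPy sketch illustrates the general recipe (not the authors' algorithm): choose a Markov order by AIC over autoregressive fits, delay-embed the series with that order, and train a small self-organizing map whose codebook also stores the next sample, so the best-matching unit yields a one-step prediction. All parameters and the toy signal are assumptions.

import numpy as np

def estimate_markov_order(x, max_order=10):
    # Fit AR(p) by least squares and pick the order p minimizing AIC.
    best_p, best_aic = 1, np.inf
    n = len(x)
    for p in range(1, max_order + 1):
        X = np.column_stack([x[i:n - p + i] for i in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ coef) ** 2)
        aic = len(y) * np.log(rss / len(y) + 1e-12) + 2 * p
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p

def train_som(patterns, n_nodes=25, epochs=200, lr=0.3, sigma=2.0):
    # Plain 1-D self-organizing map trained on the delay-embedded vectors.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((n_nodes, patterns.shape[1]))
    for t in range(epochs):
        for v in patterns:
            bmu = np.argmin(np.linalg.norm(w - v, axis=1))
            d = np.abs(np.arange(n_nodes) - bmu)          # grid distance
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))
            w += lr * (1 - t / epochs) * h[:, None] * (v - w)
    return w

x = np.sin(np.linspace(0, 40, 800)) + 0.1 * np.random.randn(800)  # toy series
p = estimate_markov_order(x)
emb = np.array([np.r_[x[i:i + p], x[i + p]] for i in range(len(x) - p)])
w = train_som(emb)
history = x[-p:]                             # most recent observed window
bmu = np.argmin(np.linalg.norm(w[:, :p] - history, axis=1))
prediction = w[bmu, p]                       # predicted next sample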

Prediction-Based Physical Layer Base Station Switching Using Imaging Data
Khanh Nam Nguyen (National Institute of Information and Communications Technology (NICT) & Resilient ICT Research Center, Japan); Kenichi Takizawa (National Institute of Information and Communications Technology, Japan)
Deep learning is applied to implement base station switching in the physical layer using imaging data for 60 GHz millimeter-wave communications, where the received signal is susceptible to blockage. In particular, a predictive model is trained from video frames and received signal data. The video frames are used to predict the received power two seconds ahead using three-dimensional convolutional neural networks and long short-term memory networks, followed by proactive switching decisions. The model can predict the future received power with root-mean-square errors under 2 dB. The proposed prediction-based proactive switching method surpasses the reactive approach in terms of connected duration, maintaining a stable connection across various moving-blockage trajectories.
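
A minimal PyTorch sketch of the kind of frame-to-power predictor described above: a 3-D convolutional front end over a short video clip feeding an LSTM that regresses the future received power, with a simple threshold rule for proactive switching. Layer sizes, the clip length, and the threshold are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class PowerPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((4, 4, 4)),
        )
        self.lstm = nn.LSTM(input_size=16 * 4 * 4, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)      # predicted power [dBm]

    def forward(self, clips):                 # clips: (batch, 3, T, H, W)
        f = self.cnn3d(clips)                 # (batch, 16, 4, 4, 4)
        f = f.permute(0, 2, 1, 3, 4).flatten(2)   # (batch, 4, 256)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])          # one prediction per clip

model = PowerPredictor()
pred_dbm = model(torch.randn(2, 3, 8, 64, 64))   # two dummy 8-frame clips

def switch_decision(pred_dbm, threshold_dbm=-70.0):
    # Proactively hand over if the predicted power falls below a margin.
    return pred_dbm < threshold_dbm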

Turbo AI, Part V: Verifying AI-Enhanced Channel Estimation for RAN from System Level
Yejian Chen (Bell Laboratories, Nokia, Germany); Stefan Wesemann (Nokia Bell Labs & Nokia, Germany); Thorsten Wild (Nokia Bell Labs, Germany)
Turbo-AI is an iterative Machine-Learning (ML) based channel estimator that offers an additional option for balancing the performance-complexity trade-off and realizing challenging scenarios towards future 6th Generation (6G) wireless communications in a complementary manner. In this paper, we move the focus from the algorithmic aspects and pure link-level investigations addressed in the previous papers of the Turbo-AI series to system-level exploitation and hardware-related implementation. We integrate Turbo-AI into a 5G-compliant system-level simulator, through which the performance gap between the 5G legacy channel estimator and the ML-based channel estimator can be described and quantified via the distributions of channel estimation Mean Squared Error (MSE). Finally, Turbo-AI, an important component of a future AI/ML-enhanced Physical Layer (PHY), is realized on a hardware platform implementing New Radio (NR) compliant gNB functionality, which satisfies the L1 processing latency of the whole Physical Uplink Shared Channel (PUSCH) chain. The performance advantage of Turbo-AI over the 5G legacy channel estimator is not only reflected in improved coded Block Error Rate (BLER) curves, but also exhibited in real-time Radio Frequency (RF) experimentation, paving the way towards an early adoption of AI/ML-enhanced RAN with seamless model management and a performance-latency trade-off governed by the available computational resources and system constraints.
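
The sketch below illustrates the system-level metric mentioned above, not Turbo-AI itself: the estimation quality of two channel estimators is compared through the empirical distribution of per-slot MSE. The legacy estimator is mimicked by a noisy LS-style estimate and the ML-based estimator by simple frequency-domain smoothing; both are stand-in assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_slots, n_sc, n_taps = 1000, 64, 4
taps = (rng.standard_normal((n_slots, n_taps)) +
        1j * rng.standard_normal((n_slots, n_taps))) / np.sqrt(2 * n_taps)
h_true = np.fft.fft(taps, n=n_sc, axis=1)     # smooth frequency response
noise = 0.3 * (rng.standard_normal((n_slots, n_sc)) +
               1j * rng.standard_normal((n_slots, n_sc)))

h_ls = h_true + noise                         # legacy LS-style estimate
kernel = np.ones(5) / 5                       # stand-in for an ML refinement
h_ml = np.stack([np.convolve(row, kernel, mode="same") for row in h_ls])

mse_ls = np.mean(np.abs(h_ls - h_true) ** 2, axis=1)
mse_ml = np.mean(np.abs(h_ml - h_true) ** 2, axis=1)

def empirical_cdf(x):
    xs = np.sort(x)
    return xs, np.arange(1, len(xs) + 1) / len(xs)

cdf_ls, cdf_ml = empirical_cdf(mse_ls), empirical_cdf(mse_ml)
print("median MSE  legacy: %.3f  ML-like: %.3f"
      % (np.median(mse_ls), np.median(mse_ml)))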

Meta-Learning Based Few Pilots Demodulation and Interference Cancellation for NOMA Uplink
Hebatalla Issa, Mohammad Shehab and Hirley Alves (University of Oulu, Finland)
Non-Orthogonal Multiple Access (NOMA) is at the heart of a paradigm shift towards non-orthogonal communication due to its potential to scale well in massive deployments. Nevertheless, the overhead of channel estimation remains a key challenge in such scenarios. This paper introduces a data-driven, meta-learning-aided NOMA uplink model that minimizes the channel estimation overhead and does not require perfect channel knowledge. Unlike conventional deep-learning-based successive interference cancellation (SICNet), meta-learning-aided SIC (meta-SICNet) is able to share experience across different devices, facilitating learning for new incoming devices while reducing training overhead. Our results confirm that meta-SICNet outperforms classical SIC and conventional SICNet, as it can achieve a lower symbol error rate with fewer pilots.
