“Semantic Communication for the Internet of Things: Frameworks, Optimization, and Open Challenges”
Date, time, and room to be announced
Speaker:
- Samer Lahoud (Dalhousie University, Halifax, Canada)
Motivation and Context
Research on deep joint source–channel coding and task-oriented communication has demonstrated that learning-based transceivers can significantly improve robustness and spectral efficiency when the objective is task performance rather than bit-perfect reconstruction. As a result, semantic communication (SemCom) now appears in 6G roadmaps and white papers as a candidate technology for future wireless systems, especially in scenarios involving rich sensing data and intelligent edge processing.
At the same time, most existing SemCom prototypes are developed in centralized, resource-rich settings: models are trained on global datasets, executed on powerful hardware, and evaluated under idealized channel assumptions. This stands in sharp contrast with wireless Internet of Things (IoT) deployments, where data is distributed across many devices, energy and computation budgets are limited, connectivity is intermittent, and radio resources must serve a very large number of nodes. Moving SemCom from centralized laboratories to distributed, resource-constrained IoT networks therefore raises fundamental challenges in model training, runtime adaptation, and resource allocation.
This tutorial focuses on these challenges and is organized around the following guiding questions:
- Which architectural and training choices enable SemCom under wireless IoT constraints such as limited energy, bandwidth, and intermittent connectivity?
- How can federated and split learning be adapted to train semantic encoders and decoders over wireless networks while respecting data locality and heterogeneity?
- How can SemCom systems adapt at runtime to varying channel conditions, device capabilities, and traffic patterns using multi-configuration models and semantic-aware control?
- How should radio resources be allocated when semantic quality, latency, and energy consumption are primary objectives, rather than bit throughput alone?
By explicitly tracing the path from centralized SemCom models to distributed IoT deployments, the tutorial aims to provide attendees with a coherent view of the field, a set of reusable building blocks, and a structured perspective on the most pressing open research directions.
Structure and Content
The tutorial is organized in four parts that move from centralized semantic communication (SemCom) models to distributed, resource-constrained IoT deployments. Each part combines a compact survey of existing work with design insights on distributed training, adaptive runtime operation, and semantic-aware resource allocation.
Part 1 – Semantic communication foundations and Deep JSCC for IoT.
This part introduces SemCom from a system perspective, starting from centralized architectures and then motivating IoT deployments.
- From classical source–channel separation to semantic and task-oriented communication, and when semantic approaches provide advantages over bit-level transmission
- Conceptual definition of SemCom and distinction between reconstruction-oriented Deep Joint Source–Channel Coding (Deep JSCC) and task-oriented SemCom
- Survey and classification of SemCom approaches by objective (distortion, task accuracy, hybrid) and modality (images, text, speech, sensor data)
- Deep JSCC architectures: encoder–channel–decoder autoencoders, typical loss functions, and illustrative examples for image and sensor data
- Assumptions of centralized SemCom models (global datasets, powerful hardware, idealized channels) and why IoT, with its task orientation, sporadic traffic, and tight constraints on bandwidth, latency, and energy, requires new frameworks
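To make the encoder–channel–decoder pipeline concrete, the sketch below illustrates how Deep JSCC evaluates distortion end to end. It is a minimal toy, not a tutorial artifact: the linear encode/decode stands in for trained neural networks, and the AWGN model stands in for a real wireless channel; all names and parameters are illustrative assumptions.

```python
import math
import random

def encode(x, scale=0.5):
    # Stand-in for a trained semantic encoder: a real Deep JSCC encoder
    # is a neural network mapping the source to channel symbols.
    return [scale * v for v in x]

def awgn_channel(z, snr_db, rng):
    # Add white Gaussian noise at the requested SNR (in dB).
    signal_power = sum(v * v for v in z) / len(z)
    noise_power = signal_power / (10 ** (snr_db / 10))
    sigma = math.sqrt(noise_power)
    return [v + rng.gauss(0.0, sigma) for v in z]

def decode(z_hat, scale=0.5):
    # Stand-in for the trained decoder: invert the encoder's scaling.
    return [v / scale for v in z_hat]

def mse(x, x_hat):
    # Reconstruction distortion -- the typical Deep JSCC training loss.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

rng = random.Random(0)
x = [rng.uniform(-1.0, 1.0) for _ in range(256)]
x_hat_good = decode(awgn_channel(encode(x), snr_db=20, rng=rng))
x_hat_bad = decode(awgn_channel(encode(x), snr_db=0, rng=rng))
# Distortion grows smoothly as SNR drops (graceful degradation),
# unlike the cliff effect of separate source and channel coding.
assert mse(x, x_hat_good) < mse(x, x_hat_bad)
```

In a trained system, `encode` and `decode` are optimized jointly through the channel model, which is what gives Deep JSCC its graceful degradation.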
Part 2 – Training and operating semantic encoders in IoT: federated, split, and adaptive models.
This part focuses on distributed training and adaptive runtime operation of SemCom models in heterogeneous IoT environments.
- IoT constraints for learning and inference: data locality, statistical and systems heterogeneity, intermittent connectivity, and strict energy budgets
- Federated learning for SemCom: central ideas, heterogeneity-aware clustering and client selection, and trade-offs between semantic fidelity, communication overhead, and fairness across devices
- Split learning and split inference: partitioning semantic encoders between devices and edge servers, static versus adaptive cut selection, and impact on latency, energy consumption, and privacy
- Adaptive runtime mechanisms: multi-configuration or slimmable Deep JSCC models, SNR-conditioned encoders, and semantic-aware control that selects configurations based on channel state and device capabilities without full retraining
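As one concrete building block from the list above, the central idea of federated learning for SemCom can be sketched as server-side averaging of locally trained semantic-encoder parameters. The flat float lists standing in for model tensors and the size-weighted aggregation rule are illustrative simplifications, not a specific framework's API.

```python
def fed_avg(client_weights, client_sizes):
    # Server-side aggregation: average locally trained encoder
    # parameters, weighted by each client's dataset size, so raw
    # IoT data never leaves the devices (data locality).
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three IoT devices with heterogeneous amounts of local data
# (hypothetical two-parameter "encoders" for illustration).
clients = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sizes = [10, 10, 20]
global_model = fed_avg(clients, sizes)  # size-weighted average
```

Heterogeneity-aware variants replace the plain size weighting with clustering or client selection, trading semantic fidelity against communication overhead and fairness.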
Part 3 – Semantic-aware resource allocation for SemCom IoT.
This part addresses radio resource management when semantic quality, latency, and energy are primary objectives.
- Limitations of bit-centric schedulers and link adaptation when applied directly to SemCom-enabled IoT systems
- Definition of semantic utility that combines semantic fidelity, latency, and energy consumption, and its role in decision making
- Design patterns for semantic-aware scheduling: prioritizing high-value semantic updates under congestion, mapping feature importance to time–frequency resources, and coordinating multiple semantic flows
- High-level use of optimization and reinforcement learning for semantic-aware radio resource management, and relation to existing work on learning-based scheduling and control in wireless networks
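The design pattern above can be illustrated with a hypothetical semantic utility and a greedy semantic-aware scheduler. The weights, flow names, and per-resource-block greedy rule are assumptions chosen for illustration, not a prescribed design.

```python
def semantic_utility(fidelity, latency_ms, energy_mj,
                     alpha=1.0, beta=0.01, gamma=0.05):
    # Scalar utility rewarding semantic fidelity and penalizing
    # latency and energy; the weights are illustrative assumptions
    # that would come from application requirements in practice.
    return alpha * fidelity - beta * latency_ms - gamma * energy_mj

def schedule(updates, rb_budget):
    # Greedy semantic-aware scheduler: serve updates with the best
    # utility per resource block until the budget runs out; defer
    # updates whose utility is non-positive.
    ranked = sorted(updates, key=lambda u: u[1] / u[2], reverse=True)
    served, used = [], 0
    for name, util, rbs in ranked:
        if util > 0 and used + rbs <= rb_budget:
            served.append(name)
            used += rbs
    return served

# Hypothetical semantic flows competing for 6 resource blocks:
# (name, utility, resource blocks needed).
updates = [
    ("anomaly_alert", semantic_utility(0.95, 10, 2), 2),
    ("routine_frame", semantic_utility(0.60, 50, 5), 4),
    ("sensor_batch", semantic_utility(0.80, 20, 3), 3),
]
picked = schedule(updates, rb_budget=6)
```

Under congestion the high-value semantic updates are served first, which is precisely where a bit-centric scheduler, blind to semantic value, would behave differently.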
Part 4 – Open challenges and research directions in SemCom for IoT.
The final part synthesizes the tutorial and highlights open problems that structure future research.
- Reliability and guarantees for semantic channels, including semantic outage definitions, performance bounds, and interaction with classical quality of service metrics
- Multi-modal SemCom, privacy and security issues, and robustness to adversarial behaviour and poisoned or unreliable devices
- Fairness and inclusion in distributed semantic training and scheduling across heterogeneous IoT nodes, including bias in data and resource allocation
- Experimental challenges and standardization aspects on the path from laboratory prototypes to AI-native wireless IoT deployments