VAP4: IoT for Industrial and Business Applications

Thursday, 10 June 2021, 16:00-17:30, Zoom Room

Session Chair: Kamran Sayrafian (NIST, USA)

Scalable Storage Scheme for Blockchain-Enabled IoT Equipped Food Supply Chains

Janitha Pranath Rupasena (University of Moratuwa, Sri Lanka); Tharaka Mawanane Hewa (University of Oulu, Finland); Kasun T. Hemachandra (University of Moratuwa, Sri Lanka); Madhusanka Liyanage (University College Dublin, Ireland & University of Oulu, Finland)
Blockchain is an innovative technology that enables new applications for solving numerous problems in distributed environments, such as Internet of Things (IoT)-equipped food supply chains (FSCs). In FSCs, a large volume of IoT data, such as audio, video, images, and sensor readings, is transferred to ensure the traceability of food to its source. When blockchain technology is used in an FSC, the storage requirements at the nodes grow over time, since blockchain only allows appending information, without deleting what is already stored. Offchain storage therefore offers more flexibility than onchain storage in IoT-equipped supply chains. This paper proposes a scalable storage scheme using offchain storage, where the data from IoT devices in the supply chain is stored offchain. To reduce the growth of offchain storage, we exploit the fact that some information regarding a particular food item may no longer be required after its expiration date. The scalability of the proposed scheme is validated through numerical and experimental results.
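The core idea of the abstract, keeping mutable IoT payloads offchain with only their digests anchored onchain, and pruning offchain records after the food item's expiration date, can be illustrated with a minimal sketch. This is an illustrative toy, not the paper's actual scheme; the class and method names are hypothetical.

```python
import hashlib
import time

class OffchainStore:
    """Toy sketch: IoT payloads live offchain; only their hashes go onchain."""

    def __init__(self):
        self.data = {}     # item_id -> (payload, expiry timestamp), mutable offchain
        self.onchain = []  # append-only list of payload hashes (immutable record)

    def add(self, item_id, payload, expiry_ts):
        # Anchor an integrity proof onchain; keep the bulky payload offchain.
        digest = hashlib.sha256(payload).hexdigest()
        self.onchain.append(digest)
        self.data[item_id] = (payload, expiry_ts)

    def prune(self, now=None):
        """Drop offchain payloads past their expiration date.
        The onchain hashes remain, preserving the append-only property."""
        now = time.time() if now is None else now
        expired = [k for k, (_, exp) in self.data.items() if exp <= now]
        for k in expired:
            del self.data[k]
        return len(expired)
```

Note how pruning shrinks only the offchain store: the chain itself never loses entries, which is consistent with the append-only constraint the abstract describes.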


Measured Distributed vs. Co-Located Massive MIMO in Industry 4.0 Environments

Maximilian Arnold (Nokia Stuttgart, Germany); Paolo Baracca, Thorsten Wild and Frank Schaich (Nokia Bell Labs, Germany); Stephan ten Brink (University of Stuttgart, Germany)
Massive multiple-input multiple-output (MIMO), a cornerstone of 5G, is well understood in theory, but many practical questions remain open, e.g., which antenna configuration is suitable for which scenario. Thanks to its over-provisioning of antennas, and thus the possibility of creating an ultra-reliable wireless link, massive MIMO is also intended for industrial scenarios. In these environments, a large number of reflectors, movements, and distortions are to be expected, resulting in high variation in fading and propagation delay. We show that with distributed antennas, channel outages caused by fading and blockage are minimized. Compared to the standard co-located approach, link reliability is increased by more than 3 dB for the same number of antennas. This is verified in two different, yet typical, future factory environments using standard channel parameters. Through these measurements we show that the requirement of perfectly synchronizing the distributed antenna arrays can be relaxed while still achieving reasonable gains.


Empirical Investigation of Offloading Decision Making in Industrial Edge Computing Scenarios

Alexander Artemenko (Robert Bosch GmbH, Germany); Ismail Mehrez (University of Stuttgart, Stuttgart, Germany); Keerthana Govindaraj (Robert Bosch GmbH & COMSYS, RWTH Aachen, Germany); Andreas Kirstaedter (University of Stuttgart, Germany); Mykola Kuznietsov (Odessa National Polytechnic University, Ukraine)
Edge Computing (EC) is a paradigm introduced to support end devices with the execution of computation-intensive tasks while maintaining their intended Quality of Service (QoS). To achieve this, one or more Edge Servers (ESs) with powerful capabilities are placed in close proximity to the edge devices to provide the needed assistance. To this end, EC relies on application offloading, i.e., moving computation-intensive tasks to a more powerful server for execution. Offloading is not always beneficial, however, due to the availability of the involved resources and the cost of data communication between devices. Therefore, to achieve a successful offloading process, offloading decision making needs to answer four questions: when, what, where, and how to offload. In this paper, we investigate the “When to offload?” question, which is concerned with whether the offloading process results in a positive performance gain. To strengthen our conclusions, we use empirical observations in a real setup running a set of emulated applications.
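The “When to offload?” question is commonly modeled as a latency comparison: offload only if the transfer delay plus remote compute time beats local compute time. The following is a minimal sketch of that textbook criterion, not the decision model evaluated in the paper; all parameter names are illustrative assumptions.

```python
def should_offload(cycles, local_freq, edge_freq, data_bits, bandwidth_bps, rtt_s=0.0):
    """Decide whether offloading yields a positive latency gain.

    cycles        -- CPU cycles the task requires
    local_freq    -- device CPU frequency (cycles/s)
    edge_freq     -- edge server CPU frequency (cycles/s)
    data_bits     -- input data to transfer to the edge server
    bandwidth_bps -- uplink bandwidth (bits/s)
    rtt_s         -- network round-trip time (s)
    """
    t_local = cycles / local_freq
    t_remote = rtt_s + data_bits / bandwidth_bps + cycles / edge_freq
    return t_remote < t_local
```

For example, a heavy task (1 Gcycle) with a small input over a fast link favors offloading, while a light task with a large input over a slow link is better computed locally; this is exactly the kind of break-even boundary an empirical study probes.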


Weathering the Reallocation Storm: Large-Scale Analysis of Edge Server Workload

Lauri Lovén, Ella Peltonen and Erkki Harjula (University of Oulu, Finland); Susanna Pirttikangas (University of Oulu, Finland)
Efficient service placement and workload allocation methods are necessary enablers for the actively studied topic of edge computing. In this paper, we show that under certain circumstances, the number of superfluous workload reallocations from one edge server to another may grow to a significant proportion of all user tasks – a phenomenon we call a reallocation storm. We showcase this phenomenon on a city-scale edge server deployment by simulating the allocation of user task workloads in a number of scenarios capturing likely edge computing deployments and usage patterns. The simulations are based on a large real-world data set of city-wide Wi-Fi network connections in 2013-2014, with more than 47M connections over ca. 800 access points. We identify the conditions for avoiding the reallocation storm for three common edge-based reallocation strategies, and study the latency-capacity trade-off related to each strategy. We find that the superfluous reallocations vanish when edge server capacity is increased above a certain threshold, unique to each reallocation strategy and peaking at ca. 35% of the top ES workload. Furthermore, while a reallocation strategy aiming to minimize reallocation distance consistently resulted in the worst reallocation storms, the other two strategies – a random reallocation strategy, and a bottom-up strategy that always chooses the edge server with the lowest workload as the reallocation target – behave nearly identically in terms of both latency and the reallocation storm in dense edge deployments. Since the random strategy requires much less coordination, we recommend it over the bottom-up one in dense ES deployments.
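The three reallocation strategies compared in the abstract (minimum-distance, random, and bottom-up) can be sketched as simple target-selection rules. This is an illustrative simplification of the simulated strategies, not the paper's simulator; the data structures and the `capacity` model are assumptions.

```python
import random

def pick_target(home, servers, loads, capacity, strategy, distance):
    """Choose a reallocation target for a task whose home edge server is full.

    home     -- id of the overloaded home server
    servers  -- list of server ids
    loads    -- dict: server id -> current workload
    capacity -- per-server workload capacity
    strategy -- "min-distance", "random", or "bottom-up"
    distance -- dict: (home, server) -> distance
    """
    candidates = [s for s in servers if s != home and loads[s] < capacity]
    if not candidates:
        return None  # no spare capacity anywhere: the storm cannot be resolved
    if strategy == "random":
        return random.choice(candidates)          # no coordination needed
    if strategy == "bottom-up":
        return min(candidates, key=lambda s: loads[s])          # least-loaded
    if strategy == "min-distance":
        return min(candidates, key=lambda s: distance[(home, s)])  # nearest
    raise ValueError(f"unknown strategy: {strategy}")
```

The random rule needs no global load or distance information, which is the coordination advantage the abstract cites when recommending it over bottom-up in dense deployments.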


A 5G Health Use Case Calling for Ecosystem Strategies: Resolving Technology and Business Dependencies Necessary to Kick off the Market

Ewout Brandsma (Philips, The Netherlands); Hanne Kristine Hallingby (Telenor, Norway); Per H. Lehne (Telenor Research, Norway)
5G-HEART, as one of the 5G PPP Phase 3 (ICT-19) projects, deploys innovative digital use cases involving healthcare, transport, and aquaculture. This article focuses on one healthcare use case: Remote Patient Monitoring for transitional care, a compelling use case that could be enabled by 5G connectivity. In general, connectivity is imperative to realizing economic and welfare benefits, and 5G-HEART studies whether 5G can provide capabilities that current connectivity solutions cannot. We describe and analyze this use case in terms of technology and business status and gaps, revealing dependencies and challenges between technologies and business partners in their value creation and capture. This approach corresponds to perceiving the 5G market as an ecosystem. Based on our analyses, we outline scenarios for resolution in line with ecosystem strategies.