NET4 – Fog, edge and cloud computing

Wednesday, 20 June 2018, 16:00-17:30, E2 hall
Session chair: Leonardo Goratti (Zodiac Aerospace, Germany)

 

16:00 – Modelling of Computational Resources for 5G RAN

Sina Khatibi, Kunjan Shah and Mustafa Roshdi (Nomor Research GmbH, Germany)
Future mobile networks have to be flexible and dynamic to address the exponentially increasing demand with the scarce available radio resources. 5G networks are going to be virtualised and implemented over data clouds. While elastic computational resource management is a well-studied concept in the IT domain, it is a fairly new topic in the telco-cloud environment. Studying the computational complexity of mobile networks is the first step toward enabling computational resource management in the telco environment. This paper presents a brief overview of the latency requirements of Radio Access Networks (RANs) and of virtualisation techniques, in addition to experimental results for a fully virtual physical layer in a container-based virtual environment. The novelty of this paper is a complexity study of a virtual RAN based on experimental results, together with a model for estimating the processing time of each functional block. The measured processing times show that the computational complexity of the PHY layer increases as the Modulation and Coding Scheme (MCS) index increases. Uplink processes such as decoding take almost twice as long as the corresponding downlink functions. The proposed computational complexity model is the missing link for joint radio resource and computational resource management. Using the presented complexity model, one can estimate the computational requirements for provisioning a virtual RAN as well as design elastic computational resource management.
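
A minimal sketch of how such a per-block processing-time model might be used is given below. The functional block names, the coefficients and the linear dependence on MCS index and allocated PRBs are illustrative assumptions for demonstration only, not the model or the measurements reported in the paper.

# Illustrative sketch: a toy per-block processing-time model for a virtualised
# PHY. Block names, coefficients and the linear dependence on MCS index and
# PRBs are assumptions for demonstration, not values measured in the paper.

# cost_us = base + per_mcs * mcs + per_prb * n_prb
DOWNLINK_BLOCKS = {
    "encoding":   (20.0, 1.5, 0.8),
    "modulation": (10.0, 0.5, 0.4),
    "precoding":  ( 5.0, 0.0, 0.3),
}
# Uplink decoding is assumed roughly twice as expensive as downlink encoding,
# in line with the paper's observation about uplink vs. downlink processing.
UPLINK_BLOCKS = {
    "demodulation": (12.0, 0.7, 0.5),
    "decoding":     (40.0, 3.0, 1.6),
}

def subframe_processing_us(blocks, mcs, n_prb):
    """Estimate the total PHY processing time of one subframe in microseconds."""
    return sum(base + a * mcs + b * n_prb for base, a, b in blocks.values())

if __name__ == "__main__":
    budget_us = 2000.0  # assumed per-subframe processing budget
    for mcs in (5, 15, 27):
        dl = subframe_processing_us(DOWNLINK_BLOCKS, mcs, n_prb=100)
        ul = subframe_processing_us(UPLINK_BLOCKS, mcs, n_prb=100)
        print(f"MCS {mcs:2d}: DL {dl:7.1f} us, UL {ul:7.1f} us, "
              f"within budget: {dl + ul <= budget_us}")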

 

 

16:18 – Analyzing the Deployment Challenges of Beacon Stuffing as a Discovery Enabler in Fog-to-Cloud Systems

Zeineb Rejiba and Xavier Masip-Bruin (Universitat Politècnica de Catalunya (UPC) & Advanced Network Architectures Lab (CRAAX), Spain); Eva Marín-Tordera (Technical University of Catalonia UPC, Spain)
In order to meet the needs of emerging IoT applications with tight QoS constraints, new computing paradigms have been proposed that bring computation resources closer to the edge of the network, where the IoT resides. One such paradigm is Fog-to-Cloud (F2C), defined as a framework in which the combined use of fog and cloud resources is coordinated and managed in an optimized manner to achieve the desired service requirements. Unlike cloud computing, the fog provides a heterogeneous set of resources, possibly within fixed deployments provided by city managers, or even contributed by end users. This brings in many challenges yet to be addressed, such as resource discovery, which is the focus of this paper. The paper digs into the use of 802.11 beacon stuffing as a possible solution for discovering nearby devices in an F2C system, dealing in particular with specific design and implementation details of the proposed solution and, more importantly, with its real applicability to an F2C system through the analysis of several experiments carried out on a real-world testbed.
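
As a rough illustration of the beacon-stuffing idea (not the authors' implementation), the sketch below uses Scapy to append a vendor-specific information element carrying discovery data to an 802.11 beacon frame. The interface name, MAC address, OUI and payload layout are assumptions; a monitor-mode interface and root privileges are required.

# Illustrative sketch of 802.11 beacon stuffing with Scapy (not the paper's
# implementation). Interface name, MAC address, OUI and payload are assumptions.
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt, RadioTap
from scapy.sendrecv import sendp

IFACE = "wlan0mon"                      # hypothetical monitor-mode interface
AP_MAC = "02:00:00:00:01:00"            # locally administered test address
OUI = b"\xaa\xbb\xcc"                   # placeholder vendor OUI

def build_stuffed_beacon(ssid: str, discovery_payload: bytes):
    """Build a beacon whose vendor-specific IE carries F2C discovery data."""
    dot11 = Dot11(type=0, subtype=8,            # management frame / beacon
                  addr1="ff:ff:ff:ff:ff:ff",    # broadcast destination
                  addr2=AP_MAC, addr3=AP_MAC)
    beacon = Dot11Beacon(cap="ESS")
    ssid_ie = Dot11Elt(ID=0, info=ssid.encode())                 # SSID element
    vendor_ie = Dot11Elt(ID=221, info=OUI + discovery_payload)   # vendor-specific IE
    return RadioTap() / dot11 / beacon / ssid_ie / vendor_ie

if __name__ == "__main__":
    frame = build_stuffed_beacon("F2C-Fog-Node", b"cpu=4;mem=2048;svc=discovery")
    sendp(frame, iface=IFACE, inter=0.1, loop=1, verbose=False)  # beacon every 100 ms

A client scanning for beacons would then filter frames on the placeholder OUI and parse the stuffed payload to learn about nearby fog resources without associating to the network.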

 

 

16:36 – Multi-access Edge Computing: A 5G Technology

Carlos Parada, Francisco Fontes and Carlos Marques (Altice Labs, Portugal); Cristina Leitão (Altice PT, Portugal); Vitor A Cunha (Instituto de Telecomunicações, Portugal)
One of the most challenging KPIs defined by the ITU-T (IMT-2020) for 5G is “latency below 1 ms”. This is a key requirement for an emerging era of new applications, such as virtual and augmented reality, video analytics, or Industry 4.0, which require low latency to enable the so-called tactile Internet. However, as long as applications rely on large centralized datacenters as they do today, this requirement defies the laws of physics. The solution is edge computing, a technology that aims to deploy applications at the edge of the network, closer to end users. Altice Labs has been contributing to the standardization of this technology from the beginning and is currently prototyping a fully functional, ETSI standards-compliant solution. The first version of this prototype has already been released and a proof-of-concept (PoC) has been deployed at the Altice PT Labs. This paper describes the edge computing technology, the prototype under development, the PoC integration and the results of the preliminary evaluation already performed.

 

16:54 – V-PMP: A VLIW Packet Manipulator Processor

Marco Spaziani Brunella (University of Rome “Tor Vergata” & CNIT, Italy); Salvatore Pontarelli (National Inter-University Consortium for Telecommunications (CNIT), Italy); Marco Bonola and Giuseppe Bianchi (University of Rome “Tor Vergata”, Italy)
5G networks are called upon to efficiently and flexibly support an ever-growing variety of heterogeneous middlebox-type functions such as network address translation, tunneling, load balancing, traffic engineering, monitoring, intrusion detection, and so on. This flexibility, together with the increasing demands in terms of packet processing throughput, is extremely difficult to achieve. It is even more difficult if we also need a user-friendly environment that allows the network programmer to efficiently reconfigure the device when new functionalities are required. One of the best candidates to provide such flexibility is a programmable dataplane based on a pipeline of match/action stages, similar to those proposed for OpenFlow. Designing a high-speed programmable dataplane requires both flexible and efficient match tables and high-speed programmable blocks for action execution. In particular, the most demanding actions in terms of processing power are those related to the manipulation of the data inside the packets, i.e. the set of operations (such as encapsulation or header manipulation) performed on packets after the forwarding decision. These encapsulation and header rewriting actions are key elements for connecting heterogeneous networks and for enabling flexible slicing, as envisioned by the forthcoming 5G network architecture. In this paper we focus on the design of a Very Long Instruction Word (VLIW) processor, based on a custom instruction set, able to perform efficient packet processing operations at multi-Gbps throughput. In particular, we provide details of the Packet Manipulator Processor (PMP) architecture and its I/O interfaces, which have been designed to accomplish packet manipulation tasks; we discuss the throughput analysis of three use cases (tunneling, NAT, and ARP reply generation) and present FPGA synthesis results.
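
To make the kind of post-forwarding action concrete, the sketch below is a software model of a NAT-style source-address rewrite with checksum recomputation. The byte offsets assume an untagged Ethernet + IPv4 header; this is an illustrative model only, not the PMP instruction set or its hardware pipeline.

# Illustrative software model of a header-rewrite action of the kind a
# match/action pipeline executes after the forwarding decision. Offsets assume
# an untagged Ethernet + IPv4 header; this is not the PMP instruction set.
import ipaddress
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement checksum over the IPv4 header."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header)//2}H", header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def rewrite_source_ip(frame: bytes, new_src: str) -> bytes:
    """Rewrite the IPv4 source address and refresh the header checksum."""
    eth_len = 14                                # untagged Ethernet header
    ihl = (frame[eth_len] & 0x0F) * 4           # IPv4 header length in bytes
    ip = bytearray(frame[eth_len:eth_len + ihl])
    ip[12:16] = ipaddress.IPv4Address(new_src).packed   # source address field
    ip[10:12] = b"\x00\x00"                     # zero checksum before recomputing
    ip[10:12] = struct.pack("!H", ipv4_checksum(bytes(ip)))
    return frame[:eth_len] + bytes(ip) + frame[eth_len + ihl:]

A full NAT action would also rewrite the transport-layer port and update the TCP/UDP checksum; encapsulation actions (e.g. tunneling) prepend an outer header in a similar per-packet, post-match step.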

 

 

17:12 – A Performance Benchmarking Analysis of Hypervisors, Containers and Unikernels on ARMv8 and x86 CPUs

Ashijeet Acharya, Jérémy Fanguède, Michele Paolino and Daniel Raho (Virtual Open Systems SAS, France)
The Network Functions Virtualization (NFV) paradigm has emerged as a new concept in networking which aims at cost reduction and ease of network scalability by leveraging virtualization technologies and commercial-off-the-shelf hardware to decouple the software implementation of network functions from the underlying hardware. Recently, lightweight virtualization techniques have emerged as efficient alternatives to traditional Virtual Network Functions (VNFs) developed as VMs. At the same time, ARMv8 servers are gaining traction in the server world, mostly because of their interesting performance-per-watt characteristics. In this paper, the CPU, memory and Input/Output (I/O) performance of such lightweight techniques is compared with that of classic virtual machines on both x86 and ARMv8 platforms. In particular, we selected KVM as the hypervisor solution, Docker and rkt as container engines, and Rumprun and OSv as unikernels. On x86, our results for CPU- and memory-related workloads highlight slightly better performance for containers and unikernels, while both perform almost twice as well as KVM for network I/O operations. This points to performance issues of the Linux tap bridge used with KVM, which can easily be overcome by using a user-space virtual switch such as VOSYSwitch or OVS/DPDK. On ARM, KVM and containers produce similar results for CPU and memory workloads, with the exception of network I/O operations, where KVM proves to be the fastest. We also discuss several shortcomings of unikernels on ARM, which account for their lack of stable support for this architecture.
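
A minimal sketch of the kind of harness behind such a comparison is shown below: it drives the same sysbench CPU workload natively and inside container images and records the reported events per second. The image name and the restriction to Docker are assumptions; the paper's setup additionally covers KVM guests, rkt and the Rumprun and OSv unikernels, as well as memory and network I/O workloads.

# Minimal, illustrative benchmarking harness: run the same sysbench CPU
# workload natively and in container images, and record events/s. The image
# name is a placeholder; the paper also benchmarks KVM, rkt and unikernels.
import re
import subprocess

ENVIRONMENTS = {
    "native":           None,                      # run directly on the host
    "docker-sysbench":  "severalnines/sysbench",   # hypothetical sysbench image
}

SYSBENCH = ["sysbench", "cpu", "--cpu-max-prime=20000", "--time=10", "run"]

def run_cpu_benchmark(image):
    """Run sysbench natively (image=None) or inside a Docker container."""
    cmd = SYSBENCH if image is None else ["docker", "run", "--rm", image] + SYSBENCH
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    match = re.search(r"events per second:\s+([\d.]+)", out)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    for name, image in ENVIRONMENTS.items():
        print(f"{name}: {run_cpu_benchmark(image)} events/s")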