NET3 – Software-based and Self-driving Networks
Wednesday, 17 June 2020, 12:15-14:30 CEST, Recommended viewing, https://www.youtube.com/playlist?list=PLjQu6nB1DfNBOrnpJ9fJmXRM0acK6q0QT
Wednesday, 17 June 2020, 12:15-16:00 CEST, Non-live interaction (chat), link sent only to registered participants
Enrico Reticcioli, Giovanni Domenico Di Girolamo, Francesco Smarra, Alessio Carmenini and Alessandro D’Innocenzo (University of L’Aquila, Italy); Fabio Graziosi (University of L’Aquila, Italy)
Software Defined Network (SDN) architectures decouple control and forwarding functionalities by enabling network devices to be remotely configured and programmed at runtime by a controller. As a direct consequence, identifying an accurate model of the network and its forwarding devices is crucial for applying advanced control techniques that optimize network performance. An enabling factor in this direction is given by recent results that appropriately combine System Identification and Machine Learning techniques to obtain predictive models from historical network data. In this paper we propose a novel methodology that, starting from historical data and appropriately combining ARX identification with Regression Trees and Random Forests, learns an accurate model of the dynamical input-output behavior of a network device; this model can be directly and efficiently used to optimally and dynamically control the bandwidth of the queues of switch ports within the SDN paradigm. We compare our predictive model with Neural Network predictors and demonstrate the benefits in terms of packet-loss reduction and bandwidth savings in the Mininet network emulator environment.
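The ARX part of the methodology above amounts to predicting a device's next output from lagged past outputs and inputs; the resulting feature rows can then be fitted with a Regression Tree or Random Forest. A minimal sketch of that lag-matrix construction, where the lag orders and the toy queue-occupancy trace are illustrative assumptions, not values from the paper:

```python
def build_arx_dataset(y, u, na=2, nb=2):
    """Return (X, Y): rows [y[t-1..t-na], u[t-1..t-nb]] -> target y[t].

    y: past outputs (e.g. queue occupancy), u: past inputs (e.g. allocated
    bandwidth). Any regressor can then be trained on (X, Y).
    """
    start = max(na, nb)
    X, Y = [], []
    for t in range(start, len(y)):
        row = [y[t - i] for i in range(1, na + 1)]   # autoregressive lags
        row += [u[t - i] for i in range(1, nb + 1)]  # exogenous-input lags
        X.append(row)
        Y.append(y[t])
    return X, Y

# Hypothetical trace: output y (queue occupancy), control input u.
y = [0.0, 1.0, 1.5, 1.75, 1.9, 2.0]
u = [1.0, 1.0, 0.5, 0.5, 0.2, 0.2]
X, Y = build_arx_dataset(y, u, na=2, nb=2)
# First sample: X[0] == [1.0, 0.0, 1.0, 1.0] predicts Y[0] == 1.5
```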
Mojgan Barahman and Luis M. Correia (INESC-ID / INOV / IST, University of Lisbon); Lúcio Studer Ferreira (ISTEC / ULHT COPELABS / INESC-ID, Lisbon)
This paper presents a dynamic resource sharing approach aimed at optimizing the computational-resource performance of a baseband unit (BBU) pool in a cloud radio access network. Based on the bargaining concept in game theory, resource sharing is formulated as an optimization problem that considers quality of service, real-time demand, and the minimum resources required to prevent BBU crashes. The performance of the proposed model is evaluated in terms of BBU fulfilment level, resource usage, and efficiency over time. Simulation results for heterogeneous services in a tidal traffic environment demonstrate that the proposed model allocates computational resources in proportion to the instantaneous demand of the BBUs and the priority of the ongoing services. Results also show at least a 97% improvement in resource-allocation efficiency during off-peak hours, compared to fixed allocation strategies dimensioned for peak-hour traffic demand.
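As a rough illustration of a bargaining-style allocation with minimum guarantees, the sketch below gives every BBU its crash-prevention minimum and splits the remaining capacity in proportion to priority-weighted excess demand. The rule and all numbers are simplified assumptions for illustration, not the paper's actual game-theoretic formulation:

```python
def bargain_allocate(demands, minima, weights, capacity):
    """Give each BBU its minimum share, then split the surplus capacity
    in proportion to priority-weighted excess demand."""
    surplus = capacity - sum(minima)
    excess = [w * max(d - m, 0.0) for d, m, w in zip(demands, minima, weights)]
    total = sum(excess)
    if total == 0.0:
        return list(minima)  # no BBU wants more than its minimum
    return [m + surplus * e / total for m, e in zip(minima, excess)]

# Hypothetical pool: three BBUs, the first carrying a high-priority service.
demands = [40.0, 30.0, 10.0]   # instantaneous demand per BBU
minima = [5.0, 5.0, 5.0]       # minimum to prevent a BBU crash
weights = [2.0, 1.0, 1.0]      # service priority
alloc = bargain_allocate(demands, minima, weights, capacity=60.0)
# alloc sums to the pool capacity; higher demand/priority gets more
```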
Maxime Labonne (CEA LIST & Institut Polytechnique de Paris, France); Charalampos Chatzinakis (Communicating Systems Laboratory CEA, France); Alexis Olivereau (CEA, LIST, France)
Predicting the bandwidth utilization on network links can be extremely useful for detecting congestion and correcting it before it occurs. In this paper, we present a solution to predict the bandwidth utilization between different network links with very high accuracy. A simulated network is created to collect data on the performance of the network links at every interface. These data are processed and expanded with feature engineering to create a training set. We evaluate and compare three types of machine learning algorithms, namely ARIMA (AutoRegressive Integrated Moving Average), MLP (Multilayer Perceptron), and LSTM (Long Short-Term Memory), for predicting future bandwidth consumption. The LSTM outperforms ARIMA and MLP with very accurate predictions, rarely exceeding a 3% error (compared to 40% for ARIMA and 20% for the MLP). We then show that the proposed solution can be used in real time, with reactions managed by a Software-Defined Networking (SDN) platform.
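The training-set construction described above (sliding windows over link measurements, expanded with engineered features) can be sketched as follows. The window size, the rolling-mean feature, and the toy utilization samples are illustrative assumptions, not details from the paper:

```python
def make_training_set(util, window=3):
    """Build supervised samples from a utilization time series: each sample
    is the last `window` readings plus their mean (an engineered feature),
    labelled with the next reading. Suitable input for an MLP or, reshaped
    into sequences, an LSTM."""
    X, y = [], []
    for t in range(window, len(util)):
        past = util[t - window:t]
        X.append(past + [sum(past) / window])  # engineered feature: rolling mean
        y.append(util[t])
    return X, y

util = [10, 12, 11, 15, 18, 17]   # hypothetical link utilization samples (Mbit/s)
X, y = make_training_set(util)
# X[0] == [10, 12, 11, 11.0] is labelled with the next reading, y[0] == 15
```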
Nitin Varyani and Zhi-Li Zhang (University of Minnesota, USA)
Inter-data-center backbone networks initially carried bandwidth-intensive traffic without stringent latency service-level objectives (SLOs), and fair allocation policies were used in such networks to distribute bandwidth equitably among flows. However, these networks now carry traffic that is closely tied to the end-user experience and therefore has stringent latency SLOs. The literature lacks routing algorithms for inter-data-center backbone networks that impose latency SLOs on their traffic while also achieving fair allocation of bandwidth. We therefore introduce a concept called “fair share of latency”, which routes traffic for different flows so that violations of latency SLOs are minimized. We propose a linear-programming-based routing algorithm for inter-data-center backbone networks that incorporates both “fair share of latency” and fair allocation of bandwidth. We also introduce latency utility curves that depict the perceived worth of different latencies to an application. Simulation results on the inter-data-center network topologies of Google, Microsoft, Amazon, and IBM reveal that our routing algorithm achieves a significant improvement in meeting the latency SLOs of different traffic classes, with only a slight reduction in the fairness of bandwidth allocation.
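A latency utility curve of the kind described above can be illustrated as a simple piecewise-linear mapping: full perceived worth at or below the SLO, decaying to zero beyond a cutoff. The shape and parameters here are assumptions for illustration; the paper's actual curves may differ:

```python
def latency_utility(latency_ms, slo_ms, max_ms):
    """Perceived worth of a path latency to an application: 1.0 at or below
    the SLO, linearly decaying to 0.0 at max_ms and beyond."""
    if latency_ms <= slo_ms:
        return 1.0
    if latency_ms >= max_ms:
        return 0.0
    return (max_ms - latency_ms) / (max_ms - slo_ms)

# Hypothetical interactive traffic class: 20 ms SLO, worthless past 60 ms.
u_good = latency_utility(10, slo_ms=20, max_ms=60)   # within SLO -> 1.0
u_mid = latency_utility(40, slo_ms=20, max_ms=60)    # halfway -> 0.5
u_bad = latency_utility(80, slo_ms=20, max_ms=60)    # past cutoff -> 0.0
```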
Gino Carrozzo (Nextworks, Italy); Muhammad Shuaib Siddiqui (Fundació i2CAT, Internet i Innovació Digital a Catalunya, Spain); August Betzler (i2CAT Foundation, Spain); Jose Bonnet (Altice Labs, Portugal); Gregorio Martinez Perez (University of Murcia, Spain); Aurora Ramos (Atos, Spain); Tejas Subramanya (University of Trento & FBK CREATE-NET, Italy)
The 5G network solutions currently standardised and deployed do not yet realise the full potential of the pervasive networking and computing envisioned for 5G: network services and slices with different QoS profiles do not span multiple operators, and security, trust and automation are limited. The evolution of 5G towards a truly production-level stage needs to rely heavily on automated end-to-end network operations, on distributed Artificial Intelligence (AI) for cognitive network orchestration and management, and on minimal manual intervention (zero-touch automation). All these elements are key to implementing highly pervasive network infrastructures.
Moreover, Distributed Ledger Technologies (DLT) can be adopted to implement distributed security and trust through Smart Contracts among multiple non-trusted parties.
In this paper, we propose an initial concept of a zero-touch security and trust architecture for ubiquitous computing and connectivity in 5G networks. Our architecture targets cross-domain security and trust orchestration mechanisms by coupling DLTs with AI-driven operations and service lifecycle automation in multi-tenant and multi-stakeholder environments. Three representative use cases are identified, through which the work will be validated in the test facilities at 5GBarcelona and 5TONIC/Madrid.