eScholarship
Open Access Publications from the University of California

UC Davis Previously Published Works

A reinforcement learning based network scheduler for deadline-driven data transfers

Published Web Location

https://thakur.cs.ucdavis.edu/assets/pubs/globecom2019.pdf
No data is associated with this publication.
Abstract

We consider a science network that runs applications requiring data transfers to be completed within a given deadline. The underlying network is a software-defined network (SDN) that supports fine-grained real-time network telemetry. Deadline-aware data transfer requests are made to a centralized network controller, which schedules the flows by setting pacing rates for the deadline flows and metering the background traffic at the ingress routers. The goal of the scheduling algorithm is to maximize the number of flows that meet their deadlines while maximizing network utilization. In this paper, we develop a network controller based on a Reinforcement Learning (RL) agent and compare its performance with well-known heuristics. For a network consisting of a single bottleneck link, we show that the RL-agent-based network controller performs as well as Earliest Deadline First (EDF), which is known to be optimal in this setting. We also show that the RL agent performs significantly better than an idealized TCP protocol in which the bottleneck link capacity is shared equally among the competing flows. Finally, we study the sensitivity of the RL-agent controller to different parameter settings and reward functions.
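To make the EDF baseline concrete, the following is a minimal illustrative sketch (not the paper's implementation) of Earliest Deadline First pacing on a single bottleneck link: each flow requests the minimum rate needed to finish by its deadline, and the controller serves the most urgent flows first. All names and the capacity units here are hypothetical.

```python
# Hedged sketch of an EDF pacing-rate allocator for one bottleneck link.
# This is an assumption-laden illustration, not the authors' controller.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    bytes_left: float   # remaining data to transfer (arbitrary units)
    deadline: float     # time remaining until the deadline (seconds)

def edf_pacing_rates(flows, link_capacity):
    """Assign pacing rates earliest-deadline-first, capping each flow at
    the minimum rate that just meets its deadline."""
    rates = {}
    remaining = link_capacity
    for f in sorted(flows, key=lambda f: f.deadline):
        needed = f.bytes_left / f.deadline  # min rate to meet the deadline
        rates[f.name] = min(needed, remaining)
        remaining -= rates[f.name]
    return rates

flows = [Flow("a", 100.0, 10.0), Flow("b", 50.0, 5.0)]
print(edf_pacing_rates(flows, link_capacity=15.0))
# → {'b': 10.0, 'a': 5.0}
```

Here flow "b" is more urgent (needs 10 units/s to finish 50 units in 5 s) and is allocated first; flow "a" receives the leftover 5 units/s and would miss its deadline, which is the kind of outcome the scheduler's reward function penalizes.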

