Datacenters are a critical part of the Internet infrastructure as key enablers of cloud computing and of web services such as search, social networking, and advertising. Responding quickly to users is essential, as a delay of a mere hundred milliseconds may lead to substantial revenue losses and a drop in customer traffic. However, bursty traffic with high fan-in, oversubscribed links, and constrained switch buffers make minimizing response times challenging. Current approaches range from arbiter-based schemes, where senders collectively obey global scheduling decisions, to self-adjusting end-host based schemes, where senders independently adjust their transmission rates based on network congestion. The former incurs greater overhead than the latter, which trades off optimality for lower complexity. Our work seeks a middle ground: the optimality of arbiter-based approaches with the simplicity of self-adjusting end-host based approaches. Our thesis is that rather than having a centralized arbiter schedule flows, or the senders making independent scheduling decisions, the receiver can coordinate the various flows destined for it to achieve quick response times. We observe that since the receiver has complete knowledge of the flows destined for it, it can coordinate these flows without incurring flow-switching costs or the overhead of accessing an arbiter. We demonstrate the advantage of receiver-driven flow scheduling by addressing two important problems in datacenter networks. First, we address TCP incast, the throughput collapse associated with several parallel flows overwhelming the switch buffers. State-of-the-art solutions work when user requests are served by rack-local machines; nowadays, however, inter-rack communication is also important. We propose RecFlow, which works in both scenarios. A RecFlow receiver spaces ACKs so as to maintain levels of inflight traffic that prevent switch buffer overflows.
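The ACK-pacing idea above can be illustrated with a minimal sketch. This is not the thesis implementation; the function name, parameters, and default segment size are illustrative assumptions. It shows the core arithmetic: a receiver that knows the bottleneck capacity and the number of flows destined for it can space ACKs so that each flow's sending rate stays within its share of that capacity.

```python
def ack_interval(bottleneck_bps, num_flows, segment_bytes=1460):
    """Illustrative sketch (not the RecFlow implementation): seconds to wait
    between consecutive ACKs for one flow so that the aggregate rate of all
    flows destined for this receiver does not exceed the bottleneck capacity.
    """
    share_bps = bottleneck_bps / num_flows   # per-flow share of the bottleneck
    # Pacing one segment per interval caps the flow's rate at its share.
    return segment_bytes * 8 / share_bps

# Example: a 1 Gbps bottleneck shared by 40 synchronized senders (an incast
# pattern) gives an ACK spacing of roughly 467 microseconds per flow.
interval = ack_interval(1e9, 40)
```

Spacing ACKs at this interval keeps each sender's window from inflating the total inflight traffic beyond what the bottleneck, and hence the switch buffer, can absorb.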
It calculates its share of the capacity based on bottleneck information received via OpenFlow. Using packet-level simulations, we show that, compared to the state of the art, RecFlow achieves up to 6x and 1.5x goodput improvement in the inter-rack and intra-rack scenarios, respectively. Next, we seek to minimize the number of missed deadlines. Preemptive Earliest Deadline First (EDF) scheduling is considered suitable for achieving this objective. However, state-of-the-art approaches that emulate preemptive EDF rely on global flow scheduling, which requires customized hardware or a separate control plane. We propose WARDS, a receiver-side scheduler that implements preemptive EDF among its flows through the use of switch priority queues. Using a priori workload information, we probabilistically promote a small number of nearest-deadline flows to the top switch queue, thereby reducing the level of multiplexing between nearest-deadline flows and those with deadlines further away. Using packet-level simulations, we show that WARDS achieves up to 5x performance improvement over state-of-the-art end-host based solutions.
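The promotion step can be sketched as follows. This is a simplified, deterministic stand-in for the probabilistic promotion described above, with illustrative names throughout: it selects the k flows with the earliest deadlines, which would then be mapped to the top switch priority queue while the remaining flows use a lower queue.

```python
import heapq

def promote_nearest_deadlines(flows, k):
    """Illustrative sketch (not the WARDS implementation): pick the k flows
    with the earliest deadlines for the top switch priority queue. `flows`
    is a list of (flow_id, deadline) pairs; smaller deadline = more urgent.
    """
    nearest = heapq.nsmallest(k, flows, key=lambda f: f[1])
    return {flow_id for flow_id, _ in nearest}

# Example: four flows with deadlines in milliseconds; only the two most
# urgent are promoted, so far-deadline traffic cannot delay them.
flows = [("a", 30), ("b", 5), ("c", 12), ("d", 50)]
top_queue = promote_nearest_deadlines(flows, 2)  # {"b", "c"}
```

Keeping only a few nearest-deadline flows in the top queue is what reduces multiplexing: urgent flows no longer compete with flows whose deadlines are further away.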