Affiliation:
1. University of Southern California
Abstract
It has recently been suggested that uncongested links can be completely ignored when evaluating the Internet's performance. In particular, based on the observation that only the congested links along the path of each flow introduce sizable queueing delays and dependencies among flows, it has been shown that one can infer the performance of the larger Internet by creating and observing a suitably scaled-down replica consisting of the congested links only. Given that the majority of Internet links are uncongested, this approach has been demonstrated to greatly simplify and expedite performance prediction.
However, an important open problem, directly related to the practicality of such an approach, is whether there exist efficient and scalable ways to identify uncongested links in large and complex Internet-like networks. This question is important not only for scaling down the Internet's topology, but also in many other contexts, such as traffic engineering and capacity planning.
In this paper we present simple rules that can be used to efficiently identify uncongested Internet links. In particular, we first identify scenarios under which one can easily deduce whether a link is uncongested by inspecting the network topology. Then, we identify scenarios in which this is not possible, and propose an efficient methodology, based on large deviations theory and flow-level statistics, to approximate the queue length distribution and, in turn, deduce the congestion level of a link. We also demonstrate how simple, commonly used metrics, such as link utilization, can be quite misleading in classifying an Internet link.
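The abstract does not give the paper's exact procedure, but the general idea of approximating a link's queue length distribution from flow-level statistics via large deviations theory can be illustrated with a standard Chernoff-type overflow estimate. The sketch below is an illustrative assumption, not the authors' method: it estimates P(Q > b) ≈ exp(-inf_t sup_θ [θ(b + Ct) − N·Λ_t(θ)]), where Λ_t(θ) is the per-flow log moment generating function estimated empirically from sampled flow traces. The trace format, grid ranges, and classification threshold are all hypothetical.

```python
# Illustrative large-deviations estimate of queue overflow probability for a link
# multiplexing N statistically similar flows at total capacity C (a sketch under
# assumed inputs, not the paper's exact methodology).
import numpy as np
from scipy.special import logsumexp

def empirical_log_mgf(window_arrivals: np.ndarray, theta: float) -> float:
    """Empirical per-flow log-MGF  Lambda_t(theta) = log E[exp(theta * A_t)]."""
    return logsumexp(theta * window_arrivals) - np.log(window_arrivals.size)

def overflow_probability(flow_traces: np.ndarray, capacity: float, buffer_b: float,
                         n_flows: int, thetas=None, windows=None) -> float:
    """Chernoff/large-deviations estimate of P(queue > buffer_b).

    flow_traces: array of shape (num_sampled_flows, num_slots) with per-slot arrivals.
    capacity:    total link service rate per slot; n_flows: flows multiplexed on the link.
    """
    thetas = thetas if thetas is not None else np.linspace(0.01, 5.0, 100)
    windows = windows if windows is not None else range(1, min(64, flow_traces.shape[1]) + 1)

    rate = np.inf
    for t in windows:
        # Per-flow arrivals aggregated over sliding windows of t slots.
        a_t = np.concatenate([np.convolve(tr, np.ones(t), mode="valid") for tr in flow_traces])
        # sup over theta of  theta*(b + C*t) - N * Lambda_t(theta)
        sup_theta = max(th * (buffer_b + capacity * t) - n_flows * empirical_log_mgf(a_t, th)
                        for th in thetas)
        rate = min(rate, sup_theta)          # inf over window lengths t
    return float(np.exp(-max(rate, 0.0)))    # clamp: negative rate means an overloaded link

# Hypothetical usage: 50 sampled on-off flows standing in for 500 flows on a link
# provisioned at 40% of the aggregate peak rate.
rng = np.random.default_rng(0)
traces = rng.binomial(1, 0.3, size=(50, 200)) * rng.exponential(1.0, size=(50, 200))
p = overflow_probability(traces, capacity=0.4 * 500, buffer_b=100.0, n_flows=500)
print(f"Estimated P(queue > b): {p:.2e} -> "
      f"{'congested' if p > 1e-3 else 'uncongested'} (illustrative threshold)")
```

Note that a classification of this kind depends on the full queue length distribution rather than on average utilization alone, which is consistent with the abstract's point that link utilization by itself can be a misleading indicator of congestion.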
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Software
Cited by
3 articles.