Affiliation:
1. Univ. of California, Berkeley
Abstract
In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was “yes”.
Since that time, we have put seven new algorithms into the 4BSD TCP:
(i) round-trip-time variance estimation
(ii) exponential retransmit timer backoff
(iii) slow-start
(iv) more aggressive receiver ack policy
(v) dynamic window sizing on congestion
(vi) Karn's clamped retransmit backoff
(vii) fast retransmit
Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.
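As an aside on the first two items, the following is a minimal sketch (in C, not the 4BSD code itself) of how a round-trip-time mean/deviation estimator and an exponentially backed-off retransmit timer might fit together. The gains of 1/8 and 1/4, the 4x-deviation term, the clamp bounds, and the initial state are assumptions made for the example, not values taken from this abstract.

```c
/* Hedged sketch of (i) and (ii): an exponentially weighted estimator of
 * the round-trip time and its mean deviation, plus exponential backoff
 * of the retransmit timer.  The gains (1/8, 1/4), the 4*deviation term
 * and the clamp bounds are illustrative assumptions. */
#include <math.h>
#include <stdio.h>

struct rtt_state {
    double srtt;    /* smoothed round-trip time, seconds   */
    double rttvar;  /* smoothed mean deviation of the RTT  */
    double rto;     /* current retransmit timeout, seconds */
};

/* Fold one RTT measurement m into the estimator and recompute the RTO. */
static void rtt_measurement(struct rtt_state *s, double m)
{
    double err = m - s->srtt;
    s->srtt   += err / 8.0;                      /* gain 1/8 on the mean */
    s->rttvar += (fabs(err) - s->rttvar) / 4.0;  /* gain 1/4 on the dev  */
    s->rto     = s->srtt + 4.0 * s->rttvar;
    if (s->rto < 1.0)
        s->rto = 1.0;                            /* assumed lower clamp  */
}

/* On a retransmit timeout, back the timer off exponentially. */
static void rtt_timeout(struct rtt_state *s)
{
    s->rto *= 2.0;
    if (s->rto > 64.0)
        s->rto = 64.0;                           /* assumed upper clamp  */
}

int main(void)
{
    struct rtt_state s = { 0.5, 0.25, 1.0 };     /* assumed initial state */
    rtt_measurement(&s, 0.8);
    rtt_measurement(&s, 1.2);
    rtt_timeout(&s);
    printf("srtt=%.3f rttvar=%.3f rto=%.3f\n", s.srtt, s.rttvar, s.rto);
    return 0;
}
```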
This paper is a brief description of (i) - (v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.
Algorithms (i) - (v) spring from one observation: The flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.
By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': A new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?
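To make the principle concrete, here is a minimal sketch (not taken from 4.3BSD) of an ack-clocked sender: once the window is filled, each arriving ack stands for a packet that has left the network and releases exactly one new packet into it. The window size and the simulated-ack driver are assumptions for illustration.

```c
/* Minimal sketch of the 'conservation of packets' idea: once the window
 * is full, a new packet enters the network only when an ack says an old
 * one has left, so the acks 'clock' the sender. */
#include <stdio.h>

#define WINDOW 8            /* assumed window size, in packets   */

static int in_flight = 0;   /* packets currently in the network  */
static int next_seq  = 0;   /* next sequence number to send      */

static void send_packet(int seq)
{
    printf("send %d\n", seq);
    in_flight++;
}

/* Opening the connection: fill the window once. */
static void start(void)
{
    while (in_flight < WINDOW)
        send_packet(next_seq++);
}

/* Each arriving ack means one packet has left the network, so exactly
 * one new packet may enter -- the flow stays conservative. */
static void on_ack(void)
{
    in_flight--;
    send_packet(next_seq++);
}

int main(void)
{
    start();
    for (int i = 0; i < 4; i++)
        on_ack();           /* simulated acks keep the pipe full */
    return 0;
}
```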
There are only three ways for packet conservation to fail:
1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.
In the following sections, we treat each of these in turn.
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Software
Cited by
1179 articles.