[pilc] A few comments on LINK version 11

Spencer Dawkins <spencer_dawkins@yahoo.com> Tue, 11 June 2002 18:06 UTC


Dear All,

I've been through LINK again. When PILC was formed,
the expectation was that
this document would be our primary output, with the
(already-approved) BCPs 
helping to bandaid TCP over already-deployed
subnetwork technologies. I
believe that Phil and Mark have met this expectation,
with help from a number
of others.

I do have a few comments on version 11... 

Spencer

------------

1 Introduction and Overview

I think the following is too polite!

   Because IP is so simple, consideration 2 is more of
an issue than
   consideration 1. I.e., subnetwork designers make
many more errors of
   commission than errors of omission.  But certain
enhanced Internet
   features, such as multicasting and
quality-of-service, benefit
   significantly from support from the underlying
subnetworks beyond
   that necessary to carry "traditional" unicast,
best-effort IP.

I propose 

   IP transport requirements are so minimal that
consideration 2 is more 
   of an issue than consideration 1. Subnetwork
designers make many more 
   errors of commission than errors of omission.  

   o A frequent error of commission is designing a
subnetwork that provides 
     a high level of subnetwork-level reliability for
IP datagrams, at a 
correspondingly high cost in latency and jitter -
even though higher-level 
     protocols can't assume this level of reliability
for all IP paths and
     must duplicate this reliability if it's needed.

   o When subnetwork designers do make errors of
omission, it's usually
     by not providing subnetwork-level support for
multicast transport 
     and quality-of-service mechanisms - or anything
else beyond 
     "traditional" unicast, best-effort IP transport. 

Would it make sense to point to RFC 3135 at the end of
the following paragraph?

   However, partial duplication of functionality in a
lower layer can
   *sometimes* be justified by performance, security
or availability
   considerations. Examples include link-layer
retransmission to improve
   the performance of an unusually lossy channel,
e.g., mobile radio;
   link level encryption intended to thwart traffic
analysis; and
   redundant transmission links to improve
availability, increase
   throughput, or to guarantee performance for certain
classes of
   traffic.  Duplication of protocol function should
be done only with
   an understanding of system level implications,
including possible
   interactions with higher-layer mechanisms.

2 Maximum Transmission Units (MTUs) and IP
Fragmentation

I had too many "it is" phrases in this paragraph to parse
quickly:

   If optional header compression [RFC1144] [RFC2507]
[RFC2508]
   [RFC3095] is used, however, it is required that the
link framing
   indicate frame length as it is needed for the
reconstruction of the
   original header.

I propose that we use more nouns, as follows:

   If optional header compression [RFC1144] [RFC2507]
[RFC2508]
   [RFC3095] is used, however, link framing mechanisms
are required to
   indicate frame length as this information is needed
to reconstruct 
   the original header.

Do we still think P-MTU black holes are a sufficient
problem that we
will still include this problem on the following list
of justifications 
for support of large MTU sizes? (If so, should P-MTU
discovery still be 
on the 2.5G3G list of recommendations? But that's not
a LINK question...)

   The Path MTU Discovery procedure, the persistence
of path MTU black
   holes, and the deletion of router fragmentation in
IPv6 reflects a
   consensus of the Internet technical community that
router
   fragmentation is best avoided. This requires that
subnetworks support
   MTUs that are "reasonably" large. The smallest MTU
permitted in IPv4
   by [RFC791] is 68 bytes, but such a small value
would clearly be
   inefficient. Because IPv6 omits fragmentation by
routers, [RFC 2460]
   specifies a larger minimum MTU of 1280 bytes. Any
subnetwork with an
   internal packet payload smaller than 1280 bytes
must implement a
   mechanism that performs fragmentation/reassembly of
IP packets
   to/from subnetwork frames if it is to support IPv6.

Is there an interaction here between internal
fragmentation and
insertion of a smaller/higher priority packet during
transmission
of a larger/lower priority packet, for the reasons
described in 2.1, and using
the multiplexing mechanism described there?

   If a subnetwork cannot directly support a
"reasonable" MTU with
   native framing mechanisms, it should internally
fragment. That is, it
   should transparently break IP packets into internal
data elements and
   reassemble them at the other end of the subnetwork.

   This leaves the question of what is a "reasonable"
MTU.  Ethernet (10
   and 100 Mb/s) has a MTU of 1500 bytes, and because
of its ubiquity

Isn't 1500 bytes the limit for all 802.3s (including 1
and 10 Gb/s)?

   few Internet paths have MTUs larger than this
value.  This severely
   limits the utility of larger MTUs provided by other
subnetworks.
   Meanwhile larger MTUs are increasingly desirable on
high speed
   subnetworks to reduce the per-packet processing
overhead in host
   computers, and implementers are encouraged to
provide them even
   though they may not be usable when Ethernet is also
in the path.

   Various "tunneling" schemes, such as IP Security
[RFC2406] treat IP
   as a subnetwork for IP.  Since tunneling adds
header overhead, it can

Doesn't insertion of MPLS headers have the same
effect?

   trigger fragmentation even when the same physical
subnetworks (e.g.,
   Ethernet) are used on both sides of the IP router.
Tunneling has made
   it more difficult to avoid router fragmentation and
has increased the
   incidence of path MTU black holes. Larger
subnetwork MTUs may help to
   alleviate this problem.

I think it's also reasonable to point out that P-MTU
discovery uses a
table of "likely" MTU sizes that doesn't map very well
to what used to
be full-sized MSSes that are being tunneled - the
fallback to the first 
working MTU in the table may be quite a bit smaller
than the un-tunneled
P-MTU... I wonder if this table will ever be
updated...
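
To make this concrete, here's a quick C sketch (mine, not
draft text) of that fallback, using the plateau table from
RFC 1191; the 1480-byte figure assumes a 20-byte IP-in-IP
tunnel on an otherwise-Ethernet path:

   #include <stdio.h>

   /* MTU plateau table from RFC 1191, section 7.1. */
   static const int plateau[] =
       { 65535, 32000, 17914, 8166, 4352, 2002, 1492,
         1006, 508, 296, 68 };

   /* Next plateau strictly below the current estimate -
    * what a sender falls back to when the ICMP message
    * doesn't carry the next-hop MTU. */
   static int next_plateau(int pmtu)
   {
       int i;
       for (i = 0; i < (int)(sizeof(plateau)/sizeof(plateau[0])); i++)
           if (plateau[i] < pmtu)
               return plateau[i];
       return 68;                       /* IPv4 floor */
   }

   int main(void)
   {
       /* The real tunneled PMTU is 1480, but a sender
        * probing blind steps 1500 -> 1492 -> 1006,
        * well below it. */
       printf("fallback from 1500: %d\n", next_plateau(1500));
       printf("fallback from 1492: %d\n", next_plateau(1492));
       return 0;
   }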

 2.1 Choosing the MTU in Slow Networks

   In slow networks, the largest possible packet may
take a considerable
   time to send.  Interactive response time should not
exceed the well-

Can (should) we say "Round-trip times for interactive
responses should
not exceed"?

Also, can we point to RFC 1144 as a source for the
100-200 ms requirement
for interactive response?

   known human factors limit of 100 to 200 ms. This
includes all sources
   of delay: electromagnetic propagation delay,
queuing delay, and the
   store-and-forward time, i.e,. the time to transmit
a packet at link
   speed.
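
(If we do cite RFC 1144, the arithmetic behind the MTU
choice is small enough to show. A sketch - my numbers, and
it assumes the whole delay budget is store-and-forward time
on one link:

   #include <stdio.h>

   /* Largest MTU whose serialization time fits in the
    * interactive delay budget. */
   static long max_mtu(long bps, double budget_s)
   {
       return (long)(bps * budget_s / 8.0);
   }

   int main(void)
   {
       printf("9600 bps, 200 ms: %ld bytes\n",
              max_mtu(9600, 0.200));            /* 240 */
       printf("56 kbps, 100 ms:  %ld bytes\n",
              max_mtu(56000, 0.100));           /* 700 */
       return 0;
   }

This is roughly where the traditional small MTUs on dialup
links come from, I believe.)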

3 Framing on Connection-Oriented Subnetworks

(no comments in this section)

4 Connection-Oriented Subnetworks

   Because Internet traffic is typically bursty and
transaction-
   oriented, it is often difficult to pick an optimal
idle timeout. If
   the timeout is too short, subnetwork connections
are opened and
   closed rapidly, possibly over-stressing its call
management system
   (especially if was designed for voice traffic
holding times). If the
   timeout is too long, subnetwork connections are
idle much of the
   time, wasting any resources dedicated to them by
the subnetwork.

Can we also add the following:

   Subscribers may also be charged differently
depending on whether
   a subnetwork leaves idle connections in place using
a 
   relatively-long idle timer or tears down idle
connections with 
   little delay and reestablishes these connections
after a short 
   period of time. If it makes sense to do so,
subscribers should be 
   given control over this behavior (for example, PPP
implementations
   might allow subscribers to control an idle timer
for locally-
initiated dialup connections).
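
If we add that, a tiny illustration might help readers; a
hypothetical timer check (all names are mine) could look
like:

   #include <time.h>

   /* Hypothetical per-connection idle timer: tear the
    * subnet connection down only after idle_timeout seconds
    * with no traffic.  idle_timeout would be the
    * subscriber-visible knob. */
   struct subnet_conn {
       time_t last_activity;
       long   idle_timeout;          /* seconds */
   };

   static int should_teardown(const struct subnet_conn *c,
                              time_t now)
   {
       return (now - c->last_activity) >= c->idle_timeout;
   }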

Should be "is used TO support IP" in the following
sentence.

   In any event, if an SNDCF that opens and closes
subnet connections is
   used support IP, care should be taken to make sure
that call
   processing in the subnet can keep up with
relatively short holding
   times.

5 Broadcasting and Discovery

   Subnetworks fall into two categories:
point-to-point and shared.  A
   point-to-point subnet has exactly two endpoint
components (hosts or
   routers); a shared link has more than two, using
either an inherent
   broadcast media (e.g., Ethernet, radio) or on a
switching layer
   hidden from the network layer (switched Ethernet,
Myrinet [MYR],
   ATM).  Switched subnetworks handle broadcast by
copying broadcast

Could this sentence say "by forwarding broadcast
packets on all other 
interfaces to ensure..."?

   packets to ensure each end system receives a copy
of each packet.

Could we insert "Centralized databases also introduce new
failure points and 
scaling hot-spots into the network." before the last
sentence in the
following paragraph?

   The lack of broadcast can impede the performance of
these protocols,
   or render them inoperable (e.g. DHCP). ARP-like
link address lookup
   can be provided by a centralized database, but at
the expense of
   potentially higher response latency and the need
for nodes to have
   explicit knowledge of the ARP server address.
Shared links should
   support native, link-layer subnet broadcast.

6 Multicasting

   Receivers also need to be designed to accept
packets addressed to
   some number of multicast addresses in addition to
the unicast packets
   specifically addressed to them. How many multicast
addresses need to
   be supported by a host depends on the requirements
of the associated
   host; at least several dozen will meet most current
needs.

The last phrase doesn't parse well for me. Is it
saying "few hosts
must accept multicast packets sent to more than a few
dozen multicast 
addresses"?

   On low-speed networks the multicast address
recognition function may
   be readily implemented in host software, but on
high speed networks
   it should be implemented in subnetwork hardware.
This hardware need
   not be complete; for example, many Ethernet
interfaces implement a
   "hashing" function that passes all of the multicast
(and unicast)
   traffic to which the associated host subscribes,
plus some small
   fraction of multicast traffic to which the host
does not subscribe.
   Host/router software then has to discard the
unwanted packets that
   pass the hardware filter.
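
A sketch of the kind of imperfect filter being described,
loosely modeled on common Ethernet controllers (the 64-bin
table and CRC details vary by chip; this is my illustration,
not any particular part):

   #include <stdint.h>

   /* Six bits of the CRC-32 of the destination address
    * index a 64-bit table.  Distinct group addresses can
    * hash to the same bin, so some unwanted multicast
    * traffic passes and must be discarded in software. */
   static uint32_t crc32_le(const uint8_t *p, int n)
   {
       uint32_t crc = 0xFFFFFFFF;
       int i, b;
       for (i = 0; i < n; i++) {
           crc ^= p[i];
           for (b = 0; b < 8; b++)
               crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1));
       }
       return ~crc;
   }

   static uint64_t hash_table;       /* one bit per bin */

   static void join_group(const uint8_t mac[6])
   {
       hash_table |= (uint64_t)1 << (crc32_le(mac, 6) & 0x3F);
   }

   static int filter_passes(const uint8_t mac[6])
   {
       return (hash_table >> (crc32_le(mac, 6) & 0x3F)) & 1;
   }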

   There does not need to be a one-to-one mapping
between subnetwork
   multicast address and IP multicast address. An
address overlap may
   significantly degrade the filtering capability of a
receiver's
   hardware multicast address filter. A subnetwork
supporting only

Is this sentence saying "If more than one IP multicast
address maps into
a subnetwork multicast address, this many-to-one
mapping may significantly 
degrade the filtering capability of a receiver's
hardware multicast address 
filter", or something else? "Address overlap" is too
ambiguous for me...

   broadcast should use this service for multicast and
must rely on
   software filtering.

7 Bandwidth on Demand (BoD) Subnets

   Long delay BoD subnets pose problems similar to
connection oriented
   networks in anticipating traffic. While connection
oriented subnets
   hold idle channels open expecting new data to
arrive, BoD subnets
   request channel access based on buffer occupancy
(or expected buffer
   occupancy) on the sending port. Poor performance
will likely result
   if the sender does not anticipate additional
traffic arriving at that

I'm confused here. Is this sentence saying "Poor
performance will likely
result if the sending host does not queue additional
outgoing traffic 
while the sender is waiting for its transmission
request to be granted"? 
If so, I agree. If not, can we talk? I may not
disagree, I just don't 
understand the point being made.

   port during the time it takes to grant a
transmission request. It is
   recommended that the algorithm have the capability
to extend a hold
   on the channel for data that has arrived after the
original request
   was generated (this may done by piggybacking new
requests on user
   data).

   There are a wide variety of BoD protocols
available.  However, there
   has been relatively little comprehensive research
on the interactions
   between the BoD mechanisms and Internet protocol
performance.
   Research on some specific mechanisms is available
(e.g., [AR02]).
   One item that has been studied is TCP's
retransmission timer [KY02].
   BoD systems can cause spurious timeouts when
adjusting from a
   relatively high data rate to a relatively low data
rate.  In this
   case, TCP's transmitted data takes longer to get
through the network
   than predicted by the retransmission timeout (RTO)
and therefore the
   TCP sender is prone to resending a segment
prematurely.

Could we also include the observation that delays to
acquire a shared
control channel may also be reflected as sudden and
unpredictable "spikes" 
in apparent round-trip times, making RTO values
"spike" unpredictably 
as well? I'm trying to say that it's not just the
change in available
bandwidth, but delays encountered while changing
available bandwidth
as well...
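
A sketch of why this matters for the timer (mine, not draft
text), using the standard SRTT/RTTVAR estimator from RFC
2988: one channel-acquisition delay spike in the measured
RTT drags the RTO up with it.

   #include <stdio.h>

   /* RTO = SRTT + 4*RTTVAR, alpha = 1/8, beta = 1/4,
    * 1-second minimum, per RFC 2988. */
   static double srtt, rttvar, rto;

   static void rtt_sample(double r)
   {
       if (srtt == 0.0) {                /* first measurement */
           srtt = r;
           rttvar = r / 2.0;
       } else {
           rttvar = 0.75 * rttvar
                  + 0.25 * (srtt > r ? srtt - r : r - srtt);
           srtt = 0.875 * srtt + 0.125 * r;
       }
       rto = srtt + 4.0 * rttvar;
       if (rto < 1.0)
           rto = 1.0;
       printf("sample %.3f -> rto %.3f\n", r, rto);
   }

   int main(void)
   {
       int i;
       for (i = 0; i < 5; i++)
           rtt_sample(0.100);   /* steady 100 ms path */
       rtt_sample(1.500);       /* channel-acquisition spike */
       return 0;
   }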

8 Reliability and Error Control

   In the Internet architecture, the ultimate
responsibility for error
   recovery is at the end points. The Internet may
occasionally drop,
   corrupt, duplicate or reorder packets, and the
transport protocol
   (e.g., TCP) or application (e.g., if UDP is used as
the transport
   protocol) must recover from these errors on an
end-to-end basis.

I'm not sure this is strictly true - I'm thinking of
RTP over UDP,
where the RTP sender may use RTCP indications of loss
to lower its sending
rates. Is this "recovering" in the "reliability and
error control"
sense? I think it's not "must recover", but "recovers
if necessary".

   Error recovery in the subnetwork is therefore
justified only to the
   extent that it can enhance overall performance.  It
is important to
   recognize that a subnetwork can go too far in
attempting to provide
   error recovery services in the Internet
environment.  Subnet
   reliability should be "lightweight", i.e., it only
has to be "good
   enough", *not* perfect.

Can we add something like:

Applications have no way of accepting a lower level of
reliability 
(perhaps as a tradeoff for lower latency) than the
subnetwork provides. 
"Reliability" is a good thing until it prevents users
from using an 
application over a too-reliable subnetwork.

   In this section we discuss how to analyze
characteristics of a
   subnetwork to determine what is "good enough".  The
discussion below
   focuses on TCP, which is the most widely used
transport protocol in
   the Internet.  It is widely believed (and is a
stated goal within the
   IETF) that non-TCP transport protocols should
attempt to be "TCP-
   friendly" and have many of the same performance
characteristics.

Is this true? I thought the point of SCTP and DCP was
to have "safe"
behavior when facing congestion, but to vary other
dimensions of 
performance.

   Thus, the discussion below should be applicable
even to portions of
   the Internet where TCP may not be the predominant
protocol.

I think the discussion is applicable, but not just
because all transport
protocols have the same performance characteristics as
TCP.

 8.1 TCP vs Link-Layer Retransmission

   The use of ECC to detect transmission errors so
that retransmissions
   (hopefully without errors) can be requested is
widely known as "ARQ"
   (Automatic Repeat Request).

Can we make it "Automatic Repeat reQuest", to match
the acronym?

   This inter-layer "competition" might lead to the
following wasteful
   situation. When the link layer retransmits (parts
of) a packet, the
   link latency momentarily increases. Since TCP bases
its
   retransmission timeout on prior measurements of
end-to-end latency,
   including that of the link in question, this sudden
increase in
   latency may trigger an unnecessary retransmission
by TCP of a packet
   that the link layer is still retransmitting.  Such
spurious end-to-
   end retransmissions generate unnecessary load and
reduce end-to-end
   throughput. One may even have multiple copies of
the same packet in
   the same link queue at the same time. In general,
one could say the
   competing error recovery is caused by an inner
control loop (link-
   layer error recovery) reacting to the same signal
as an outer control
   loop (end- to-end error recovery) without any
coordination between
   the loops.  Note that this is solely an efficiency
issue; TCP
   continues to provide reliable end-to-end delivery
over such links.

Can we append ", although this delivery is being
slowed by unnecessary 
link-layer retransmissions"?

 8.2 Recovery from Subnetwork Outages

   Under these circumstances TCP will, after each
unsuccessful
   retransmission, wait even longer before trying
again; this is its
   "exponential back-off" algorithm. Furthermore, TCP
will not discover
   that the subnetwork outage has ended until its next
retransmission
   attempt. If TCP has backed off, this may take some
time.  This can

Can we say "this may take a number of minutes."? I'm
still talking to
people who find these long timer backoffs surprising.

   lead to extremely poor TCP performance over such
subnetworks.
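
The arithmetic makes the point (a sketch; it assumes the
common 1-second initial RTO and 64-second cap):

   #include <stdio.h>

   /* With exponential backoff the gap between
    * retransmission attempts doubles each time, so TCP's
    * next probe after a few failures can be a minute or
    * more away. */
   int main(void)
   {
       double rto = 1.0, t = 0.0;
       int i;
       for (i = 1; i <= 8; i++) {
           t += rto;
           printf("attempt %d at t = %3.0f s\n", i, t);
           rto = (rto * 2.0 < 64.0) ? rto * 2.0 : 64.0;
       }
       return 0;
   }

Attempt 8 doesn't happen until t = 191 seconds - over three
minutes after the first loss.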

   It is therefore highly desirable that a subnetwork
subject to outages
   not silently discard packets during an outage.
Ideally, it should
   define an interface to the next higher layer (i.e.,
IP) that allows
   it to refuse packets during an outage, and to
automatically ask IP
   for new packets when it is again able to deliver
them. If it cannot
   do this, then the subnetwork should hold onto at
least some of the

We're saying "at least some" in this paragraph and
"only a single packet
per TCP connection, including ACKs" in the next
paragraph. Can we make
one recommendation?

   packets it accepts during an outage and attempt to
deliver them when
   the subnetwork comes back up. When packets are
discarded, IP should

This is "When outbound packets are discarded,", right?

   be notified so that the appropriate ICMP messages
can be sent.

   Only a single packet per TCP connection, including
ACKs, need be held
   in this way to cause the TCP sender to recover from
the additional
   losses once the flow resumes [ARQ-DRAFT].

(See previous note about the right number of packets
to hang onto during
outages.)

Is there a maximum period of time that a subnetwork
should hang onto a
packet? I don't feel good about "longer than 2 * MSL"
- would sending
packets that are older than this cause problems?

 8.3 CRCs, Checksums and Error Detection

   The TCP [RFC793], UDP [RFC768], ICMP, and IPv4
[RFC791] protocols all
   use the same simple 16-bit 1's complement checksum
algorithm to
   detect corrupted packets.  The IPv4 checksum
protects only the IPv4
   header, while the TCP, ICMP, and UDP checksums
provide end-to-end
   error detection for both the transport pseudo
header (including
   network and transport layer information) and the
transport payload
   data. Protection of the data is optional for
applications using UDP
   [RFC768].
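
For readers who haven't seen it, the shared algorithm is
tiny; a sketch:

   #include <stddef.h>
   #include <stdint.h>

   /* 16-bit one's-complement Internet checksum: sum the
    * data as 16-bit words, fold the carries back in
    * ("end-around carry"), and complement the result. */
   static uint16_t inet_checksum(const uint8_t *data, size_t len)
   {
       uint32_t sum = 0;
       while (len > 1) {
           sum += ((uint32_t)data[0] << 8) | data[1];
           data += 2;
           len -= 2;
       }
       if (len == 1)                     /* odd trailing byte */
           sum += (uint32_t)data[0] << 8;
       while (sum >> 16)
           sum = (sum & 0xFFFF) + (sum >> 16);
       return (uint16_t)~sum;
   }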

Do we think UDP checksums are "optional but
recommended", based on the
NFS war stories, or is this going too far? My
understanding is that
UDP checksums are enabled host-wide on most
implementations - would
this make turning UDP checksums off less desirable,
because errored
packets might be delivered to any running application
on the host?

   One way to provide additional protection for an
IPv4 or IPv6 header
   is by the authentication and packet integrity
services of the IP
   Security (IPSEC) protocol [RFC2401]. However, this
may not be a
   choice available to the subnetwork designer.

Is this just saying IPSEC AH is an IP mechanism, or is
it saying more?

 8.4 How TCP Works

   TCP uses sequence numbering and acknowledgments
(ACKs) on an end-to-
   end basis to provide reliable, sequenced, once-only
delivery.  TCP
   ACKs are cumulative, i.e., each implicitly ACKs
every segment
   received so far.  If an ACK is not received, the
acknowledgment value

Isn't this "implicitly ACKs all previously
unacknowledged data in the
sender's outgoing window"?

   carried in the cumulative packet will cease to
advance.

   Since the most common cause of packet loss is
congestion, TCP treats
   packet loss as an Internet congestion indicator.
This happens
   automatically, and the subnetwork need not know
anything about IP or
   TCP. It simply drops packets whenever it must,
though some packet-
   dropping strategies (e.g., RED) are more fair to
competing flows than
   others.

Is this a fairness issue? I might have said "reduce
the effect of 
congestion on competing flows, compared to others
(tail drop)."

The following seems to ignore slow start:

   TCP recovers from packet losses in two different
ways. The most
   important is the retransmission timeout. If an ACK
fails to arrive
   after a certain period of time, TCP retransmits the
oldest unacked
   packet. Taking this as a hint that the network is
congested, TCP
   waits for the retransmission to be ACKed before it
continues, and it
   gradually increases the number of packets in flight
as long as a
   timeout does not occur again.

Restart behavior is more correctly described in the
next paragraph -
can we drop the last clause of the previous paragraph
("... and it
gradually ...")? Then we go straight into:

   A retransmission timeout can impose a significant
performance
   penalty, as the sender is idle during the timeout
interval and
   restarts with a congestion window of 1 following
the timeout. To
   allow faster recovery from the occasional lost
packet in a bulk
   transfer, an alternate scheme known as "fast
recovery" was introduced
   [RFC2581] [RFC2582] [RFC2914] [TCPF98].

This discussion of fast recovery seems so loose that
it's actually about
fast retransmit:

   Fast recovery relies on the fact that when a single
packet is lost in

"Fast retransmit relies on the fact that even when one
or more packets 
are lost in a bulk transfer, the receiver may continue
to return ACKs 
to ..."

   a bulk transfer, the receiver continues to return
ACKs to subsequent
   data packets that do not actually acknowledge any
newly-received
   data. These are known as "duplicate
acknowledgments" or "dupacks".
   The sending TCP can use dupacks as a hint that a
packet has been lost
   and retransmit it without waiting for a timeout. 
Dupacks effectively
   constitute a negative acknowledgment (NAK) for the
packet sequence
   number in the acknowledgment field.  TCP waits
until a certain number
   of dupacks (currently 3) are seen prior to assuming
a loss has
   occurred; this helps avoid an unnecessary
retransmission during out-
   of-sequence delivery.
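
In outline (my sketch of the mechanism as described, with a
hypothetical retransmit() hook):

   /* Hypothetical hook into the sender's retransmission
    * path. */
   extern void retransmit(unsigned seq);

   static unsigned last_ack, dupacks;

   /* The third duplicate cumulative ACK triggers a
    * retransmission without waiting for the timer. */
   static void ack_arrived(unsigned ack)
   {
       if (ack == last_ack && ++dupacks == 3) {
           retransmit(ack);              /* fast retransmit */
       } else if (ack > last_ack) {
           last_ack = ack;               /* new data ACKed */
           dupacks = 0;
       }
   }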

   A new technique called "Explicit Congestion
Notification" (ECN)
   [RFC3168] allows routers to directly signal
congestion to hosts
   without dropping packets.  This is done by setting
a bit in the IP
   header.  Since this is currently an optional
behavior (and, longer
   term, there will always be the possibility of
congestion in portions
   of the network which don't support ECN), the lack
of an ECN bit must
   NEVER be interpreted as a lack of congestion. 
Thus, for the

"lack of congestion - especially since ECN-capable
routers still drop
packets at high levels of congestion, instead of
continuing to send
traffic on congested links. Thus, for the foreseeable
future ..."

   foreseeable future, TCP must interpret a lost
packet as a signal of
   congestion.

   The TCP "congestion avoidance" [RFC2581] algorithm
maintains a
   congestion window (cwnd) controlling the amount of
data TCP may have
   in flight at any moment.  Reducing cwnd reduces the
overall bandwidth
   obtained by the connection; similarly, raising cwnd
increases the
   performance, up to the limit of the available
bandwidth.

"reduces the overall bandwidth that the sender will
attempt to use for
the connection; similarly ..."

   TCP probes for available network bandwidth by
setting cwnd to one
   packet and then increasing it by one packet for
each ACK returned
   from the receiver. This is TCP's "slow start"
mechanism.  When a
   packet loss is detected (or congestion is signaled
by other
   mechanisms), cwnd is reset to one and the slow
start process is
   repeated until cwnd reaches one half of its
previous setting before
   the reset. 

"This is the "slowstart threshold", or ssthresh."

   Cwnd continues to increase past this point, but at
a much
   slower rate than before. If no further losses
occur, cwnd will
   ultimately reach the window size advertised by the
receiver.

 8.5 TCP Performance Characteristics

No comments.

  8.5.1 The Formulae

No comments.

  8.5.2 Assumptions

   Both of these formulae allow BW to become infinite
if there is no
   loss.  This is because an Internet path will drop
packets at
   bottleneck queues if the load is too high.  Thus, a
completely
   lossless TCP/IP network can never occur (unless the
network is being
   underutilized).

I don't understand the last sentence in the previous
paragraph. Is it
saying "A completely lossless TCP/IP connection can
never occur, unless
the receiver is using a receive window smaller than
the bandwidth-delay
product"?

  8.5.3 Analysis of Link-Layer Effects on TCP
Performance

No comments.

9 Quality-of-Service (QoS) considerations

No comments.

10 Fairness vs Performance

No comments.

11 Delay Characteristics

No comments.

12 Bandwidth Asymmetries

   Some subnetworks may provide asymmetric bandwidth
(or may cause TCP
   packet flows to experience asymmetry in the
capacity) and the
   Internet protocol suite will generally still work
fine.  However,
   there is a case when such a scenario reduces TCP
performance.  Since
   TCP data segments are 'clocked' out by returning
acknowledgments TCP
   senders are limited by the rate at which ACKs can
be returned
   [BPK98].  Therefore, when the ratio of the
bandwidth of the
   subnetwork carrying the data to the bandwidth of
the subnetwork
   carrying the acknowledgments is too large, the slow
return of of the
   ACKs directly impacts performance.  Since ACKs are
generally smaller
   than data segments, TCP can tolerate some
asymmetry, but as a general

Should we say "TCPs can tolerate substantial
asymmetry, ..."?

   rule designers of subnetworks should be aware that
subnetworks with
   significant asymmetry can result in reduced
performance, unless
   issues are taken to mitigate this [asym-id].
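
One way to quantify "too large" (my sketch, following the
kind of analysis in [asym-id]): normalize the raw bandwidth
ratio by the data/ACK packet-size ratio.

   #include <stdio.h>

   /* If the normalized ratio k exceeds 1, the return
    * channel can't carry one ACK per data segment, and the
    * ACK clock - not the forward link - limits throughput. */
   int main(void)
   {
       double fwd_bps = 10e6, ret_bps = 50e3;   /* example */
       double data_bytes = 1500, ack_bytes = 40;
       double k = (fwd_bps / ret_bps) / (data_bytes / ack_bytes);
       printf("k = %.1f %s\n", k,
              k > 1 ? "(ACK-limited)" : "(not ACK-limited)");
       return 0;
   }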

   Several strategies have been identified for
reducing the impact of
   asymmetry of the network path between two TCP end
hosts, e.g. [asym-
   id].  These techniques attempt to reduce the number
of ACKs
   transmitted over the return path (low bandwidth
channel) by changes
   at the end host(s), and/or by modification of
subnetwork packet
   forwarding. While these solutions may mitigate the
performance issues
   caused by asymmetric subnetworks, they do have
associated cost and
   may have other implications. A fuller discussion of
strategies and
   their implications is provided in [asym-id].

The IESG documents-under-review page shows Allison
holding this token.

13 Buffering, flow & congestion control

No comments.

14 Compression

   We make a stronger recommendation that subnetworks
operating at low
   speed or with small MTUs compress IP and
transport-level headers (TCP
   and UDP) using several header compression schemes
developed within
   the IETF. An uncompressed 40-byte TCP/IP header
takes about 33
   milliseconds to send at 9600 bps.  "VJ" TCP/IP
header compression

Can we introduce this as "Van Jacobson TCP/IP header
compression", to be
consistent with what we settled on in 2.5G/3G?

   [RFC1144] compresses most headers to 3-5 bytes,
reducing transmission
   time to several milliseconds. This is especially
beneficial for
   small, latency-sensitive packets in interactive
sessions.

15 Packet Reordering

No comments.

16 Mobility

No comments.

17 Routing

No comments.

18 Security Considerations

Too many transitions in this paragraph?

   Nonetheless, end-to-end security mechanisms are not
used as widely as
   might be desired. However, the group could not
reach consensus on
   whether subnetwork designers should be actively
encouraged to
   implement mechanisms to protect user data.

I propose:

   Although end-to-end security mechanisms are not
used as widely as
   might be desired, the PILC working group could not
reach consensus on
   whether subnetwork designers should be actively
encouraged to
   implement mechanisms to protect user data.


And can we drop the "however, " in this paragraph? It
doesn't seem to be
in contrast to anything...

   However, traffic analysis is a notoriously subtle
and difficult
   threat to understand and defeat, far more so than
threats to
   confidentiality and integrity.  We therefore urge
extreme care in the
   design of subnetwork security mechanisms
specifically intended to
   thwart traffic analysis.


Not to be pedantic, but [WFBA2000] isn't about buffer
overruns in I/O
system calls - it's about buffer overruns in the
standard C library:

   In addition, well-designed security protocols can
be compromised by
   implementation defects.  Examples of such defects
include use of
   predictable pseudo-random numbers [RFC1750],
vulnerability to buffer
   overflow attacks due to unsafe use of certain I/O
system calls
   [WFBA2000], and inadvertent exposure of secret
data.


References

   [AP99] M. Allman, V. Paxson, On Estimating
End-to-End Network Path
   Properties, In Proceedings of ACM SIGCOMM 99.

A URL for this paper is:
http://www.acm.org/sigcomm/sigcomm99/papers/session7-3.html

   [AR02] G. Açar and C. Rosenberg, Weighted Fair
Bandwidth-on-Demand
   (WFBoD) for Geo-Stationary Satellite Networks with
On-Board
   Processing, Special Issue on Broadband Satellite
Systems: A Network
   Perspective, Computer Networks, accepted on July
13, 2001. To appear
   in 2002.

The best URL I could find for this paper was
http://www.iis.ee.ic.ac.uk/~g.acar/Research.html#WFBoD

   [asym-id] H. Balakrishnan, V. N. Padmanabhan, G.
Fairhurst, M.
   Sooriyabandara. "TCP Performance Implications of
Network Path
   Asymmetry", work in progress as internet-draft,
draft-ietf-pilc-
   asym-07.txt, November 2001.

This draft should have already expired. Aaron - are we
going to progress
it?

   [ARQ-DRAFT] Fairhurst, G., and L. Wood, Advice to
link designers on
   link Automatic Repeat reQuest (ARQ), work in
progress as internet-
   draft, draft-ietf-pilc-link-arq-issues-03.txt,
August 2001.

This is now version 04, from March 2002.

   [ATMFTM] The ATM Forum, "Traffic Management
Specification, Version
   4.0", April 1996, document af-tm-0056.000
(www.atmforum.com).

A more complete URL is 
ftp://ftp.atmforum.com/pub/approved-specs/af-tm-0056.000.pdf

   [BA02] Ethan Blanton, Mark Allman. On Making TCP
More Robust to
   Packet Reordering. ACM Computer Communication
Review, 32(1), January
   2002.

A complete URL is 
http://www.acm.org/sigcomm/ccr/archive/2002/jan02/ccr-200201-allman.pdf

   [BPK98] Hari Balakrishnan, Venkata Padmanabhan,
Randy H. Katz.  "The
   Effects of Asymmetry on TCP Performance."  ACM
Mobile Networks and
   Applications (MONET), 1998.

A complete URL is 
http://daedalus.cs.berkeley.edu/publications/tcpasym-mobicom97.ps.gz

   [Crypto9912] Schneier, Bruce "European Cellular
Encryption
   Algorithms" Crypto-Gram (December 15, 1999) 
www.counterpane.com

A complete URL is 
http://www.counterpane.com/crypto-gram-9912.html

   [DOCSIS3] W.S. Lai, "DOCSIS-Based Cable Networks:
Impact of Large
   Data Packets on Upstream Capacity", 14th ITC
Specialists Seminar on
   Access Networks and Systems, Barcelona, Spain,
April 25-27, 2001.

A complete URL is 
http://www.att.com/networkandperformance/docs/ndrp_itc0401b.doc

   [ES00] David A. Eckhardt and Peter Steenkiste,
"Effort-limited Fair
   (ELF) Scheduling for Wireless Networks, Proceedings
of IEE Infocom
   2000.

The closing quote mark is missing (after "Networks").
A complete URL is
http://www.ieee-infocom.org/2000/papers/266.ps

   [FB00] Firoiu V., and Borden M., "A Study of Active
Queue Management
   for Congestion Control" to appear in Infocom 2000

The paper has already appeared. The URL is 
http://www.ieee-infocom.org/2000/papers/405.pdf

   [IEEE80211] IEEE 802.11 Wireless LAN standard.
Available from
   http://standards.ieee.org/catalog/IEEE802.11.html.

This is now freely downloadable from
http://standards.ieee.org/getieee802/download/802.11-1999.pdf

   [LKJK02] R. Ludwig, A. Konrad, A. D. Joseph, R. H.
Katz, "Optimizing
   the End-to-End Performance of Reliable Flows over
Wireless Links",
   Kluwer/ACM Wireless Networks Journal, Vol. 8, Nos.
2/3, pp. 289-299,
   March-May 2002.

A Complete URL is 
http://iceberg.cs.berkeley.edu/papers/Ludwig-Mobicom99/

   [LRKOJ99] R. Ludwig, B. Rathonyi, A. Konrad, K.
Oden, A. Joseph,
   Multi-Layer Tracing of TCP over a Reliable Wireless
Link, pp.
   144-154, In Proceedings of ACM SIGMETRICS 99.

Need quotes around title. A complete URL is
http://iceberg.cs.berkeley.edu/papers/Ludwig-Sigmetrics99/

   [LS00] R. Ludwig, K. Sklower, The Eifel
Retransmission Timer, ACM
   Computer Communication Review, Vol. 30, No. 3, July
2000.

Need quotes around title. A complete URL is
http://www.acm.org/sigcomm/ccr/archive/2000/july00/ccr_200007-ludwig.html

   [MBB00] May, M., Bonald, T., and Bolot, J-C.,
"Analytic Evaluation of
   RED Performance" to appear INFOCOM 2000

The paper has appeared. A complete URL is
http://www.ieee-infocom.org/2000/papers/369.ps

   [MBDL99] May, M., Bolot, J., Diot, C., and Lyles,
B., "Reasons not to
   deploy RED", technical report, June 1999.

A complete URL is 
http://www-sop.inria.fr/rodeo/personnel/mmay/may_red.html

   [GM02] Luigi Alfredo Grieco, Saverio Mascolo, "TCP
Westwood and Easy
   RED to Improve Fairness in High-Speed Networks",
Proceedings of the
   7th International Workshop on Protocols for
High-Speed Networks,
   April 2002.

A complete URL is 
http://www.cs.ucla.edu/NRL/hpi/papers/2002-pfhsn-0.pdf.gz

   [MSMO97] M. Mathis, J. Semke, J. Mahdavi, T. Ott,
"The Macroscopic
   Behavior of the TCP Congestion Avoidance
Algorithm", Computer
   Communication Review, volume 27, number 3, July
1997.

A complete URL is
http://www.psc.edu/networking/papers/model_abstract.html

   [PFTK98] Padhye, J., Firoiu, V., Towsley, D., and
Kurose, J.,
   "Modeling TCP Throughput: a Simple Model and its
Empirical
   Validation", UMASS CMPSCI Tech Report TR98-008,
Feb. 1998.

A complete URL is
ftp://gaia.cs.umass.edu/pub/Padhye-Firoiu98-TCP-throughput-TR.ps

   [RF95] Romanow, A., and Floyd, S., "Dynamics of TCP
Traffic over ATM
   Networks".  IEEE JSAC, V. 13 N. 4, May 1995, p.
633-641.

A complete URL is 
http://www-nrg.ee.lbl.gov/papers/tcp_atm.pdf

   [RFC1812]

Needs details!

   [Schneier3] Schneier, Bruce "Why Cryptography is
Harder Than it
   Looks", www.counterpane.com

A complete URL is 
http://www.counterpane.com/whycrypto.html

   [TCPF98] Dong Lin and H.T. Kung, "TCP Fast Recovery
Strategies:
   Analysis and Improvements", IEEE Infocom, March
1998.  Available
   from:
"http://www.eecs.harvard.edu/networking/papers/infocom-tcp-
   final-198.pdf"

This URL is broken:

   [WFBA2000] David Wagner, Jeffrey S. Foster, Eric
Brewer and Alexander
   Aiken, "A First Step Toward Automated Detection of
Buffer Overrun
   Vulnerabilities", Proceedings of NDSS2000, or
   www.berkeley.edu:80/~daw/papers/

A correct URL is http://www.cs.berkeley.edu/~daw/papers/overruns-ndss00.ps

