Re: TCP question

Gorry Fairhurst <gorry@erg.abdn.ac.uk> Wed, 29 August 2001 12:49 UTC

Message-ID: <3B8CE4E2.DD88D01D@erg.abdn.ac.uk>
Date: Wed, 29 Aug 2001 13:49:39 +0100
From: Gorry Fairhurst <gorry@erg.abdn.ac.uk>
Organization: ERG
X-Mailer: Mozilla 4.75 (Macintosh; U; PPC)
X-Accept-Language: en
MIME-Version: 1.0
To: Sergey Raber <raber@gmd.de>
CC: tcpsat@grc.nasa.gov, cwy@web.xiphos.ca
Subject: Re: TCP question
References: <Pine.LNX.4.21.0108280305120.25269-100000@web.xiphos.ca> <004201c12fcb$67bb4fc0$660d1a81@SERGE>
Content-Type: text/plain; charset="iso-8859-1"
X-ERG-MailScanner: Found to be clean
X-MIME-Autoconverted: from 8bit to quoted-printable by erg.abdn.ac.uk id f7TCnauv006610
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by lombok-fi.lerc.nasa.gov id IAA11685
Sender: owner-tcpsat@grc.nasa.gov
Precedence: bulk
Status: RO
Content-Length: 7006
Lines: 163

I am currently writing an Internet Draft which examines (among other things)
the use of a large MSS over paths supporting a smaller PMTU. The current text
I have proposed for this section follows. Although it is set in the context
of reducing ACKs, I would encourage comments based on your observations:

----

5.2 Use of large MSS

A TCP sender that uses a large Maximum Segment Size (MSS) reduces the
number of ACKs generated per transmitted byte of data.
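
(As an aside, not draft text: a rough back-of-envelope sketch in Python of
how the ACK count scales with MSS, assuming full-sized segments and a
delayed-ACK policy of at most one ACK per two segments [RFC1122]; the
function name and byte counts are illustrative, not from the draft.)

```python
def acks_for_transfer(nbytes, mss, segs_per_ack=2):
    """Approximate ACK count for a bulk transfer.

    Assumes every segment is full-sized and the receiver generates one
    ACK per `segs_per_ack` segments (delayed ACKs, RFC 1122 allows 2).
    """
    segments = -(-nbytes // mss)          # ceiling division
    return -(-segments // segs_per_ack)   # ceiling division

# Transferring 1 MB with a 1460-byte MSS vs a 8960-byte MSS:
print(acks_for_transfer(1_000_000, 1460))  # 343 ACKs
print(acks_for_transfer(1_000_000, 8960))  # 56 ACKs
```

So a roughly 6x larger MSS cuts the reverse-path ACK traffic by about
the same factor, which is the effect this section relies on.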

Although individual subnetworks may support a large MTU, the majority of
current Internet links employ an MTU of approx 1500 B (that of
Ethernet). By setting the Don’t Fragment (DF) bit in the IP header, Path
MTU (PMTU) discovery [RFC1191] may be used to determine the maximum
packet size (and hence MSS) a sender can use on a given network path
without being subjected to IP fragmentation, and provides a way to
automatically select a suitable MSS for a specific path. This also
guarantees that routers will not perform IP fragmentation of normal data packets.

By electing not to use PMTU discovery, an end host may choose to use IP
fragmentation by routers along the forward path [RFC793].  This allows
an MSS larger than the smallest MTU along the path.  However, this increases
the unit of error recovery (TCP segment) above the unit of transmission
(IP packet). This is not recommended, since it can increase the number
of retransmitted packets following loss of a single IP packet, leading
to reduced efficiency, and potentially aggravating network congestion
[Ken87]. Choosing an MSS larger than the forward path minimum MTU also
permits the sender to transmit more initial packets (a burst of IP
fragments for each TCP segment) when a session starts or following RTO
expiry, increasing the aggressiveness of the sender compared to standard
TCP [RFC2581].  This can adversely impact other standard TCP sessions.
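
(A final aside, not draft text: a sketch of the burst arithmetic, using a
hypothetical 9180-byte MSS over a 1500-byte path MTU; the header sizes
assume no IP or TCP options.)

```python
import math

def ip_fragments(mss, path_mtu, ip_hdr=20, tcp_hdr=20):
    """Number of IP fragments a router produces for one TCP segment.

    Each fragment carries at most (path_mtu - ip_hdr) bytes of the
    original IP payload, rounded down to a multiple of 8 because
    fragment offsets are in 8-byte units (RFC 791).
    """
    payload = mss + tcp_hdr                 # IP payload of original packet
    per_frag = (path_mtu - ip_hdr) // 8 * 8
    return math.ceil(payload / per_frag)

# A 9180-byte MSS over a 1500-byte path MTU:
print(ip_fragments(9180, 1500))   # 7 fragments per segment
```

So an initial window of two such segments arrives at the bottleneck as
14 back-to-back IP packets, which is the aggressiveness the paragraph
above warns about.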

RECOMMENDATION: 

A larger forward path MTU is desirable for paths with bandwidth
asymmetry. Network providers may use a large MTU on links in the forward
direction. TCP end hosts using Path MTU discovery may be able to take
advantage of a large MTU by automatically selecting an appropriate
larger MSS, without requiring modification. The use of Path MTU
discovery [RFC1191] is therefore recommended.

Increasing the unit of error recovery and congestion control (MSS) above
the unit of transmission and congestion loss (the IP packet) by using a
larger end host MSS and IP fragmentation in routers is not recommended. 

----

This comes from my latest edit of the ID on which we are currently
working;
the full (older) version of the ID is available at:


        Title           : TCP Performance Implications of Network Asymmetry
        Author(s)       : H. Balakrishnan, V. Padmanabhan et al.
        Filename        : draft-ietf-pilc-asym-05.txt
        Pages           : 32
        Date            : 24-Jul-01
        

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-ietf-pilc-asym-05.txt


Comments concerning this will be GLADLY received (and may usefully
also be discussed in the PILC WG).

best wishes,

Gorry

Sergey Raber wrote:
> 
> OK. Now the full story :)
> We have actually had no goal of testing the link with big TCP MSS size, our
> idea was to compare the performance of TCP over IPsec on the satellite link
> with standard TCP/IP. But as soon as we have realized that TCP over IPsec
> (FreeSWAN on Linux 2.2.18) performed much better than TCP over regular IP
> (how come!) we found out that by default TCP is using 16K MSS if the session
> has to be built via an encrypted ipsec interface. The link was not idle during
> the test, but the average load for this period did not exceed 600-800kbps,
> so it was not a problem to reach the speed of 730Kbyte/sec with 16K MSS TCP
> over IPsec on 8Mbps link. Now, since your reaction is pessimistic, should
> we try to reduce the default TCP MSS on our server if we want to deploy
> this IPsec implementation on our proxy server, in order to avoid
> performance degradation in future (when the link would be loaded at ~30%
> on average)?
> 
> Comments?
> 
> With best regards,
> ---
> Sergey Raber
> GMD.NET
> Project Satellite Communication
> 
> ----- Original Message -----
> From: "Charlie Younghusband" <cwy@web.xiphos.ca>
> To: <raber@gmd.de>
> Cc: <tcpsat@grc.nasa.gov>
> Sent: Tuesday, August 28, 2001 9:10 AM
> Subject: Re: TCP question
> 
> > Argh, I just realized that some of my email is bogus.  I was thinking
> > that the MSS was much larger than what you were using (or even your 64K
> > trick), not simply roughly 10x.  At that point it becomes more an issue
> > of using a larger but still reasonable granularity chunk.  It would
> > still scare people running it onto a hybrid network such as the greater
> > Internet or even a busy LAN (which has a surprisingly terrible effect
> > on your sat bandwidth, in my experience) on one end, for classical
> > reasons as already mentioned.  But for a private network satellite
> > hookup, it's more viable as a tuning option given the bandwidth you're
> > playing with.  Interesting...
> >
> > Charlie
> >
> > Charlie Younghusband wrote:
> >
> >   It does not surprise me that you received performance gains.  Looking
> >   at the simple single connection over an uncongested direct satellite
> >   link, as part of another project I worked on, we did a simple
> >   mathematical formula where we asked how low does the BER have to go
> >   before the largest (RFC-compliant) TCP MSS size available was no
> >   longer optimal, and factoring in header overhead the answer was
> >   ridiculously low (like 10e-3).  By extension, it was clear that a
> >   larger MSS was much more efficient on the higher bandwidth links seen
> >   today.  I'm not sure where you got the 16208 MSS size, but I suspect
> >   that it is link bandwidth specific, and if for some reason your
> >   bandwidth was suddenly seriously reduced (such as by a competing data
> >   stream) you'd suddenly have virtually no bandwidth available to
> >   either data stream, as neither would complete a TCP segment very
> >   often.  Still, it is an interesting tuning idea for a single TCP
> >   connection over a satellite link with no other competition, when the
> >   bandwidth is known and fixed.  Other than that case, things fall
> >   apart quickly, as there is poor scaling and little adaptation.  As
> >   Gorry Fairhurst pointed out, run it with anything else and then see.
> >   There are other, better options.  (I'll also point out that you're
> >   still only at best about 73% link efficiency with your modifications,
> >   so even if this specific case always applies to your usage, wasting
> >   over 2Mbps of satellite bandwidth probably won't please whoever pays
> >   for it :))
> >
> >   Cheers,
> >   Charlie
> >
> >   ---
> >   Charlie Younghusband
> >   Network Software Engineering
> >   Xiphos Technologies            http://www.xiphos.ca/
> >   514-848-9640
> >
> >