Re: Draft RFC for the "DF scheme"
Drew Daniel Perkins <email@example.com> Thu, 19 April 1990 23:37 UTC
Received: from decwrl.dec.com by acetes.pa.dec.com (5.54.5/4.7.34)
id AA11675; Thu, 19 Apr 90 16:37:25 PDT
Received: by decwrl.dec.com; id AA27724; Thu, 19 Apr 90 16:37:12 -0700
Received: by po3.andrew.cmu.edu (5.54/3.15) id <AA10360> for firstname.lastname@example.org; Thu, 19 Apr 90 19:36:58 EDT
Received: via switchmail; Thu, 19 Apr 90 19:36:39 -0400 (EDT)
Received: from unix9.andrew.cmu.edu via qmail ID </afs/andrew.cmu.edu/service/mailqs/q002/QF.Qa=Yf3200WB:E0PncM>; Thu, 19 Apr 90 19:33:24 -0400 (EDT)
Received: from unix9.andrew.cmu.edu via qmail ID </afs/andrew.cmu.edu/usr15/ddp/.Outgoing/QF.Aa=YezO00WB:0Vukpr>; Thu, 19 Apr 90 19:33:19 -0400 (EDT)
Received: from BatMail.robin.v2.10.CUILIB.3.45.SNAP.NOT.LINKED.unix9.andrew.cmu.edu.vax.3 via MS.5.6.unix9.andrew.cmu.edu.vax_3; Thu, 19 Apr 90 19:33:13 -0400 (EDT)
Date: Thu, 19 Apr 90 19:33:13 -0400 (EDT)
From: Drew Daniel Perkins <email@example.com>
Subject: Re: Draft RFC for the "DF scheme"
firstname.lastname@example.org (Jeffrey Mogul) writes:

> 3.1. TCP MSS Option
> ...
> Moreover, doing this prevents PMTU Discovery from discovering PMTUs
> larger than 576, so hosts SHOULD no longer lower the value they send
> in the MSS option.  The MSS option should now reflect the size of the
> largest datagram the host is able to reassemble (MMS_R, as defined in
> ); in many cases, this will be the architectural limit of 65535
> octets.  A host MAY send an MSS value derived from the MTU of its
> connected network (the maximum MTU over its connected networks, for a
> multi-homed host); this should not cause problems for PMTU Discovery,
> and may dissuade a broken peer from sending enormous datagrams.

I'm a bit uncomfortable with this.  I know that "logically" 65535
should be the value to send, but I think that "practically" it would
be better never to send anything greater than the network MTU.
Sending 65535 just because it is "right" seems to be asking for
trouble.  Who knows what some strange hosts might do with this...

> Alternatively, an implementation can avoid the use of an
> asynchronous notification mechanism for PMTU decreases by
> postponing notification until the next attempt to send a
> datagram larger than the PMTU estimate.  In this approach,
> when an attempt is made to SEND a datagram with the DF bit
> set, and the datagram is larger than the PMTU estimate, the
> SEND function should fail and return a suitable error
> indication.  This approach may be more suitable to a
> connectionless packetization layer (such as one using UDP),
> which may be hard to ``notify'' from the ICMP layer.  In this
> case, the normal timeout-based retransmission mechanisms would
> be used to recover from the dropped datagrams.

Connectionless packetization layers are no harder to notify (at least
not UDP).  The returned packet header in the ICMP message tells you
exactly where the packet came from.
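To make the alternative concrete, here is a rough sketch in C of the MSS
choice I'd prefer: advertise the reassembly limit (MMS_R), but clamped to
the MTU of the connected network (the maximum over all interfaces for a
multi-homed host) rather than the architectural 65535.  All of the names
here (choose_mss, mms_r, max_if_mtu) are made up for illustration, not
taken from any real stack:

```c
#include <assert.h>

/* Sketch only: pick the MSS to advertise as the smaller of the largest
 * datagram we can reassemble (MMS_R) and the MTU of the connected
 * network, then subtract the headers.  MSS counts TCP payload only, so
 * 20 octets of IP header plus 20 octets of TCP header come off the top. */
static unsigned short choose_mss(unsigned long mms_r, unsigned long max_if_mtu)
{
    unsigned long limit = (mms_r < max_if_mtu) ? mms_r : max_if_mtu;
    return (unsigned short)(limit - 40);
}
```

On an Ethernet, choose_mss(65535, 1500) gives 1460 rather than the 65495
a pure-MMS_R rule would advertise to strange hosts.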
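A rough sketch of what I mean: RFC 792 puts the offending datagram's IP
header plus at least 64 bits of its payload right inside the ICMP error,
and for UDP those 64 bits are the whole UDP header, so the source port
is sitting there waiting to be read.  The function name here is made up
for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch only: given an ICMP Destination Unreachable message, recover
 * the local (source) UDP port of the offending datagram so the error
 * can be handed to the right socket.  Layout per RFC 792: 8 octets of
 * ICMP header (type, code, checksum, unused/MTU), then the inner IP
 * header, then at least 8 octets of the original payload. */
int icmp_err_udp_sport(const uint8_t *icmp, size_t len, uint16_t *sport)
{
    const uint8_t *ip = icmp + 8;   /* inner (returned) IP header */
    size_t ihl;

    if (len < 8 + 20)
        return -1;                  /* too short to hold an IP header */
    ihl = (size_t)(ip[0] & 0x0f) * 4;  /* inner header length, octets */
    if (ip[9] != 17)
        return -1;                  /* inner protocol is not UDP */
    if (len < 8 + ihl + 8)
        return -1;                  /* need 8 octets of UDP header */
    *sport = (uint16_t)((ip[ihl] << 8) | ip[ihl + 1]);
    return 0;
}
```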
The fact that this problem exists in BSD is solely a BSD design flaw
(which I think Mike Karels agreed to fix in the Host Requirements WG).
In particular, for UDP under BSD, every source of UDP packets has been
bind()'d, so that there is an established binding between process,
socket and UDP source port (even if the process hasn't done an explicit
bind() or connect()).

> 4408   4Mb IBM Token Ring           ref.
How about:
  4464   IEEE 802.5 (4Mb max)         RFC 1042

> 2002   IEEE 802.5                   RFC 1042
How about:
  2002   IEEE 802.5 (4Mb recommended) RFC 1042

> 1500   Point-to-Point (max MTU)     RFC 1134
How about:
  1500   Point-to-Point (default)     RFC 1134

> 296    Serial Lines                 ???
How about:
  296    Point-to-Point (low delay)   RFC 1144

I also found a number of spelling errors.  A good whack with a
spelling checker is in order...

Drew