Re: [DNSOP] Mirja Kühlewind's Discuss on draft-ietf-dnsop-session-signal-12: (with DISCUSS and COMMENT)

"Mirja Kuehlewind (IETF)" <> Wed, 01 August 2018 12:02 UTC

From: "Mirja Kuehlewind (IETF)" <>
Date: Wed, 1 Aug 2018 14:02:05 +0200
To: Stuart Cheshire <>

Hi Stuart,

please see below.

> On 01.08.2018 at 09:48, Stuart Cheshire <> wrote:
> On 30 Jul 2018, at 13:19, Mirja Kühlewind <> wrote:
>> ----------------------------------------------------------------------
>> ----------------------------------------------------------------------
> I’m responding to the “DISCUSS” items right now. I’ll get to the “COMMENT” items shortly.
>> 1) In addition to the bullet point in 6.2 that was flagged by Spencer, I
>> would like to discuss the content of section 5.4 (DSO Response Generation). I
>> understand the desire to optimize for the case where the application knows that
>> no data will be sent as a reply to a certain message; however, TCP does not have
>> a notion of message boundaries and therefore cannot and should not act based on
>> the reception of a certain message. Indicating to TCP that an ACK can be
>> sent immediately in a specific situation is also problematic, as ACK processing
>> is part of TCP's internal machinery. However, why is it important at all
>> that a TCP-level ACK is sent out faster than the delayed ACK timer? The ACK
>> receiver does not expose to the application when an ACK is received,
>> and the delayed ACK timer only expires if no further data is
>> received/sent by the ACK receiver, so this optimization should not have
>> any impact on application performance. I would recommend simply removing
>> this section and any additional discussion of delayed ACKs.
>> Please note that the problem described in [NagleDA] only occurs for
>> request-response protocols where no further request can be sent before the
>> response is received. This is not the case in this protocol (as pipelining is
>> supported).
> The problem here is not further requests, it’s further responses. Consider a client that subscribes for mDNS relay service <>.
> If the server gets an mDNS packet and relays it, Nagle blocks relaying of a further mDNS packet until an ack is received. On a campus GigE backbone with sub-millisecond round-trip times, this potentially delays the relaying of a subsequent mDNS packet for up to 200 ms. That’s a long time on a sub-millisecond network. If the client were to send a reply to the first relayed mDNS packet, then TCP would piggyback its ack on that data packet, and Nagle would then free the server to relay the next mDNS packet.
> The optimization advocated here is the observation that if a networking API were to allow the server to explicitly indicate an empty reply, then that lets the TCP stack know that it doesn’t need to wait 200 ms in the hope that it can piggyback its ack on an outbound data packet.

I understand the point; I just don’t really think this is a problem specific to this use case, and therefore it should not necessarily be discussed in this document (given that the problem is quite complex). However, I guess this could be good input to TAPS, given that TAPS is working on a message-based interface on top of TCP.

> Without this, people are tempted to set TCP_NODELAY, which is worse overall for the network.

Not sure. In the described scenario this might actually not be a bad thing to do.
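For what it’s worth, disabling Nagle is a one-line socket option; a minimal Python sketch (the loopback socket pair is purely illustrative):

```python
import socket

def connected_pair():
    """Create a connected TCP socket pair over loopback, for demonstration."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    listener.listen(1)
    client = socket.create_connection(listener.getsockname())
    server, _ = listener.accept()
    listener.close()
    return client, server

client, server = connected_pair()

# Disable Nagle's algorithm on the sender: small writes are transmitted
# immediately instead of being held back while earlier data is unacknowledged.
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

The trade-off discussed above is that this trades fewer, larger packets for lower per-message latency on the whole connection, not just for the messages that need it.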

>> 2) Further regarding keep-alives:
>> in sec 6.5.2: "For example, a hypothetical keepalive interval
>>  value of 100ms would result in a continuous stream of at least ten
>>  messages per second, in both directions, to keep the DSO Session
>>  alive."
>> This does not seem correct. There should be at most one keep-alive message in
>> flight. Thus the keep-alive timer should only be restarted after the
>> keep-alive reply has been received.
> On a campus GigE backbone with sub-millisecond round-trip times, even a hypothetical keepalive interval value of 100ms would still have only one keep-alive message in flight at a time. But it would still be an unreasonable keepalive interval.

Not sure that is an unreasonable keep-alive interval on a GigE backbone. I would actually hope that you don’t need a keep-alive mechanism at all in those scenarios, but it depends on whether there are any middleboxes and how quickly their state expires. Given a campus network, you might know what the timeouts are and set the keep-alive interval accordingly. Maybe that’s better advice to give.

My problem with the example text above is that it seems to suggest you simply send a keep-alive every x time units, whereas you need to wait for the response before restarting the timer. This needs to be clarified. However, as I said, I’m not certain about the actual value of this section at all, as this does not seem like the right document to discuss these more general issues.
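The behaviour I have in mind can be sketched as a small state machine (the class name and injected clock are illustrative, not from the draft):

```python
class KeepaliveTimer:
    """Sketch: at most one keep-alive in flight; the interval timer is
    restarted only when the keep-alive *response* arrives, not when the
    request is sent."""

    def __init__(self, interval, now):
        self.interval = interval        # keep-alive interval in seconds
        self.now = now                  # injected clock function, for testing
        self.deadline = now() + interval
        self.awaiting_reply = False

    def tick(self):
        """Return True if a keep-alive request should be sent now."""
        if self.awaiting_reply:
            return False                # never more than one in flight
        if self.now() >= self.deadline:
            self.awaiting_reply = True
            return True
        return False

    def on_reply(self):
        """Keep-alive response received: restart the interval timer."""
        self.awaiting_reply = False
        self.deadline = self.now() + self.interval
```

With this structure, a lost keep-alive cannot cause a "continuous stream" of further keep-alives; the sender simply waits (and eventually gives up) on the outstanding one.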

>>  And, in this extreme example, a single packet loss and
>>  retransmission over a long path could introduce a momentary pause in
>>  the stream of messages, long enough to cause the server to
>>  overzealously abort the connection."
>> This doesn't really make sense to me: as I said, TCP will retransmit, and the
>> keep-alive timer should not be running until the reply is received. If you want
>> to abort the connection based on keep-alives quickly, before the TCP connection
>> indicates a failure to you, you need to wait at minimum for an interval that is
>> larger than the TCP RTO (which is usually 3 RTTs), which means you basically
>> need to know the RTT.
> The point of this text is to illustrate that a keepalive interval value of 100ms would be unreasonable. I think you would agree with that.

Yes, I understood that; however, this illustration was rather confusing to me. For me, something like "the keep-alive interval should not be chosen too low, to reduce network load, and must be sufficiently larger than the RTT to avoid server termination if a keep-alive gets lost and needs to be retransmitted" would be enough.
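As a back-of-the-envelope check (the "RTO ≈ 3 RTTs" figure is from my comment above, and the factors here are illustrative, not from the draft):

```python
def min_keepalive_interval(rtt, rto_factor=3.0, safety_factor=2.0):
    """Sketch of a lower bound: the keep-alive interval should comfortably
    exceed the time TCP may need to retransmit a lost keep-alive,
    approximated here as rto_factor * RTT, with an added safety margin."""
    return safety_factor * rto_factor * rtt

# On a 200 ms RTT path, intervals much below ~1.2 s risk aborting the
# session while TCP is still retransmitting a lost keep-alive; on a
# sub-millisecond LAN this bound is tiny and the draft's 10 s floor
# dominates instead.
floor = min_keepalive_interval(0.2)
```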

> This is to support why the immediately following text mandates a minimum keepalive interval of ten seconds.
>> Also sec 7.1: "If the client does not generate the
>>     mandated keepalive traffic, then after twice this interval the
>>     server will forcibly abort the connection."
>> Why must the server terminate the connection at all if the client refuses to
>> send keep-alives? Isn't that what the inactivity timer is meant for? Usually
>> only the endpoint that initiates the keep-alive should terminate the connection
>> if no response is received.
> A client cannot refuse to send keep-alives. A connection with an active mDNS relay subscription is never considered “inactive”, but a server may still require reasonable keep-alives to verify that the client is still there.

Ah, thanks. I don’t think this case was explained in the text (or I missed it); please clarify. Maybe also provide more reasoning in general on why the client is required to send the keep-alives as requested. At the beginning of the document it seemed more like the keep-alive interval is a recommendation rather than a requirement, and it is important to understand that in order to correctly understand the rest of the document.

>> 3) There is another contradiction regarding the inactivity timer:
>> Sec 6.2 says
>>  "A shorter inactivity timeout with a longer keepalive interval signals
>>  to the client that it should not speculatively keep an inactive DSO
>>  Session open for very long without reason, but when it does have an
>>  active reason to keep a DSO Session open, it doesn't need to be
>>  sending an aggressive level of keepalive traffic to maintain that
>>  session."
>> which indicates that the client may leave the session open longer than
>> indicated by the server's inactivity timer. However, section 7.1.1 says that
>> the client MUST close the connection when the timer expires.
> A connection with an active mDNS relay subscription is never considered “inactive”, because there is still active client/server state, even if no traffic is flowing. A server may still require reasonable keep-alives to verify that the client is still there.

However, the cited text above says
"should not speculatively keep an inactive DSO Session open for very long without reason, but when it does have an active reason to keep a DSO Session open,"
which explicitly talks about keeping an INACTIVE session open speculatively for longer than the inactivity timeout.


> Stuart Cheshire