Re: [TLS] Working group last call for draft-ietf-tls-dtls-heartbeat-02

Michael Tuexen <Michael.Tuexen@lurchi.franken.de> Tue, 23 August 2011 09:41 UTC

From: Michael Tuexen <Michael.Tuexen@lurchi.franken.de>
In-Reply-To: <4E52B204.2020707@gnutls.org>
Date: Tue, 23 Aug 2011 11:42:02 +0200
Message-Id: <08B3F781-C6F4-46AA-9D0A-4D66D4DE6794@lurchi.franken.de>
References: <67629EB8-CDF5-47B3-BC6E-C1A76E08C294@cisco.com> <4E520063.7020206@gnutls.org> <B07FDAE5-4AB4-4277-8F0A-EA0A190280E1@lurchi.franken.de> <4E52B204.2020707@gnutls.org>
To: Nikos Mavrogiannopoulos <nmav@gnutls.org>
Cc: tls@ietf.org
Subject: Re: [TLS] Working group last call for draft-ietf-tls-dtls-heartbeat-02

On Aug 22, 2011, at 9:46 PM, Nikos Mavrogiannopoulos wrote:

> On 08/22/2011 11:46 AM, Michael Tüxen wrote:
> 
>>> I believe more rationale is needed in the draft on _why_ should 
>>> these issues be solved at TLS or DTLS level. For (1), TCP has a
>>> keepalive mechanism, so reintroducing it at TLS doesn't make much
>>> sense, and UDP throws the burden to user protocol. So why not throw
>>> the burden on the protocol above?
>>> The only discussion on text is: "Sending HeartbeatRequest messages
>>> allows the sender to make sure that it can reach the peer and the
>>> peer is alive.  Even in case of TLS/TCP this allows this check at a
>>> much higher rate than the TCP keepalive feature would allow." So is
>>> this just a high-rate TCP keepalive? Why do we need that at the 
>>> security layer? Why not propose a high-rate TCP keepalive?
>> Regarding the keepalive feature: Our main motivation is not TCP
>> keepalives. The point is that there are application protocols running
>> over UDP which do not have a
> 
> Why not restrict the scope of this extension to DTLS then? (unless I'm
> missing some other application of this in TLS).
The original ID was limited to DTLS. But it turned out that
nothing DTLS-specific is needed, so it was suggested to make
the extension available for TLS as well.
So you can check whether your peer is there without relying on TCP keepalive,
or refresh the NAT state. Up to the app writer.
> 
>> mechanism in the app protocol to do this. This is fine when running 
>> over UDP since the lower layer does not have any state. However, when
>> the lower layer becomes DTLS instead of UDP, there is state, and it
>> is not a good idea to keep this state there forever. So having a way
>> for DTLS to detect that the peer is gone, helps.
> 
> Ok, now I understand what you try to address with this extension (and it
> might be better for the draft text to express it as well). However is
Any suggested text?
> this the right solution for the problem? This does not make DTLS
> stateless, thus it is different in operation as the previous UDP server.
> Anyway you'll have to trigger the keepalive messages manually (with
> input from the application layer) or after some fixed time.
> 
> Wouldn't this be equivalent (in resources and behavior) with trying
> to resume the DTLS connection after a fixed amount of time?
> (the handshake resumption is 3 messages, this is one message more than
> the 2 keep-alive messages).
This was used by the IPFIX guys as a workaround. Just sending a
message which gets reflected is much simpler.
It might also be a problem to send data while doing session resumption
(the node issuing the session resumption might not be the data sender...).
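The reflected-message idea can be sketched roughly like this (a Python sketch, purely illustrative; the layout of a 1-byte type, 2-byte payload length, payload, and random padding follows the draft's HeartbeatMessage description, but the names and constants here are my assumptions, not the actual wire encoding of any implementation):

```python
import os
import struct

# Illustrative message type values (the draft names the types
# heartbeat_request/heartbeat_response; exact values are an assumption here).
HEARTBEAT_REQUEST = 1
HEARTBEAT_RESPONSE = 2
PADDING_LENGTH = 16  # the draft requires at least 16 bytes of random padding

def encode(msg_type: int, payload: bytes) -> bytes:
    """Encode: 1-byte type, 2-byte payload length, payload, random padding."""
    return struct.pack("!BH", msg_type, len(payload)) + payload + os.urandom(PADDING_LENGTH)

def reflect(request: bytes) -> bytes:
    """Peer side: answer a request by echoing its payload in a response."""
    msg_type, length = struct.unpack("!BH", request[:3])
    assert msg_type == HEARTBEAT_REQUEST
    payload = request[3:3 + length]
    return encode(HEARTBEAT_RESPONSE, payload)

def payload_of(message: bytes) -> bytes:
    """Extract the echoed payload from a received message."""
    _, length = struct.unpack("!BH", message[:3])
    return message[3:3 + length]

request = encode(HEARTBEAT_REQUEST, b"liveness-probe")
response = reflect(request)
assert payload_of(response) == b"liveness-probe"  # peer is alive and reachable
```

The point of the comparison above is exactly this: one message out, one reflected back, no handshake machinery involved.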

> 
>>> About (2) Both TCP and UDP don't bother with a special PMTU
>>> discovery mechanism. Why should DTLS or TLS provide it? Are there
>>> advantages on making a PMTU discovery at this level?
>> Where else? TLS has no problem since TCP segments user messages.
>> DTLS running over UDP cannot rely on that. That is why DTLS should
>> determine the PMTU itself. For doing this, it needs some messages for
>> testing, preferably not app data, which might get lost during the
>> testing. Using the HB, you have a tool to do this without relying on
>> ICMP messages.
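For illustration, the probing could look like this (a Python sketch with a simulated link; probe_succeeds stands in for sending a padded HeartbeatRequest of a given size and waiting for the response, and none of these names come from the draft or any implementation):

```python
def probe_succeeds(size: int, true_pmtu: int) -> bool:
    """Stand-in for sending a HeartbeatRequest padded to `size` bytes and
    waiting for the response; the simulated link silently drops probes
    larger than the actual path MTU."""
    return size <= true_pmtu

def discover_pmtu(lo: int, hi: int, true_pmtu: int) -> int:
    """Binary-search the largest heartbeat size that still gets answered.
    Invariant: a probe of size `lo` is known to succeed."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe_succeeds(mid, true_pmtu):
            lo = mid      # probe answered: PMTU is at least mid
        else:
            hi = mid - 1  # probe (or its response) lost: shrink the range
    return lo

assert discover_pmtu(576, 9000, true_pmtu=1472) == 1472
```

Because the probes are heartbeats rather than app data, nothing user-visible is lost while the probe sizes are being narrowed down.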
> 
> Just from curiosity, how has this been implemented in a library (or how
> do you think it would be implemented)? How are the keep-alive messages
> exchanged without the application noticing?
The application can call a function which sends a HB. If there is a
response, the app will not get anything. If there is no response,
the app will get notified that the DTLS session is gone...
We have a patch for OpenSSL available at
http://sctp.fh-muenster.de/dtls-patches.html
The retransmissions are handled by the DTLS implementation.
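Roughly, that behavior could be sketched as follows (Python, with a simulated transport; the function names and the retransmission count are illustrative assumptions, not the OpenSSL patch's actual API):

```python
def check_peer(transport, max_retransmits: int = 3) -> bool:
    """Send a HeartbeatRequest; retransmit on timeout. Returns False
    ("peer gone") only after the last retransmission also goes unanswered.
    The real implementation handles these retransmissions internally, so
    the app only hears about the final outcome."""
    for attempt in range(1 + max_retransmits):
        if transport.send_and_wait(b"heartbeat-request"):
            return True   # response arrived: the app gets nothing back
    return False          # the app is notified that the DTLS session is gone

class FlakyTransport:
    """Simulated lossy path: drops the first `losses` requests, then answers."""
    def __init__(self, losses: int):
        self.losses = losses
    def send_and_wait(self, datagram: bytes) -> bool:
        if self.losses > 0:
            self.losses -= 1
            return False  # request or response lost; caller times out
        return True

assert check_peer(FlakyTransport(losses=2)) is True    # recovered by retransmit
assert check_peer(FlakyTransport(losses=10)) is False  # peer treated as gone
```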
> 
> 
> Nits:
> section 3:
>> "it has to be answered with a corresponding HeartbeatResponse
>> message immediately."
> I don't think the immediately adds to the sentence. It just adds
> confusion. What if it was sending an application data packet on a
> different thread? Should I stop?
No. I just wanted to make clear that you should not start a timer
or something like that. We can take out "immediately".
> 
> 
>> "HeartbeatRequest messages from older epochs SHOULD be discarded."
> Why HeartbeatRequest messages are treated differently? Aren't all
> messages with older epochs to be discarded?
No. 
http://tools.ietf.org/html/draft-ietf-tls-rfc4347-bis-06
contains:

   Note that because DTLS records may be reordered, a record from epoch
   1 may be received after epoch 2 has begun. In general,
   implementations SHOULD discard packets from earlier epochs, but if
   packet loss causes noticeable problems MAY choose to retain keying
   material from previous epochs for up to the default MSL specified for
   TCP [TCP] to allow for packet reordering. (Note: the intention here
   is that implementors use the current guidance from the IETF for MSL,
   not that they attempt to interrogate the MSL the system TCP stack is
   using.)  Until the handshake has completed, implementations MUST
   accept packets from the old epoch.


> 
> 5.2:
> "HeartbeatRequest messages SHOULD only be sent after an idle period
> that is at least multiple round trip times long."
> This is more than unclear. Why not specify how many? What if I decide 2
> times and another implementor discards my message because for him
> multiple was > 3?
That text was suggested by the TSV directorate...
It is up to the application writer what he wants to do. The longer
he waits, the longer it takes to detect that the peer is gone.
For congestion control the only thing that matters is that
you do not send more than one HB per RTT. This is enforced by
the implementation by not having more than one outstanding HB.
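As a sketch (Python, illustrative names only), the one-outstanding-HB rule is just:

```python
class HeartbeatSender:
    """Enforces the congestion-control rule discussed above: never more
    than one HeartbeatRequest in flight at a time, which caps the rate
    at one HB per RTT regardless of how often the app asks."""
    def __init__(self):
        self.outstanding = False

    def send_request(self) -> bool:
        if self.outstanding:
            return False  # previous HB still unanswered: refuse to send
        self.outstanding = True
        return True

    def on_response(self):
        self.outstanding = False  # response (or timeout) clears the slot

hb = HeartbeatSender()
assert hb.send_request() is True
assert hb.send_request() is False  # blocked while one request is in flight
hb.on_response()
assert hb.send_request() is True
```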

Best regards
Michael
> 
> 
> regards,
> Nikos