Re: [IPFIX] Call for consensus: changing Intended Status of draft-ietf-ipfix-export-per-sctp-stream from Informational to Standards Track

Gerhard Muenz <muenz@net.in.tum.de> Wed, 19 August 2009 14:29 UTC

Message-ID: <4A8C0C38.3070306@net.in.tum.de>
Date: Wed, 19 Aug 2009 16:29:12 +0200
From: Gerhard Muenz <muenz@net.in.tum.de>
To: Brian Trammell <trammell@tik.ee.ethz.ch>
References: <C6A99016.708D3%Quittek@nw.neclab.eu> <4A8B2FFE.1050303@auckland.ac.nz> <E3A9FA63-7EC5-458C-89FC-AE3B97CEB782@tik.ee.ethz.ch> <4A8BF2EA.4070907@net.in.tum.de> <4EA60422-905F-40F2-A6C8-B82B9F60FD11@tik.ee.ethz.ch>
In-Reply-To: <4EA60422-905F-40F2-A6C8-B82B9F60FD11@tik.ee.ethz.ch>
Cc: Nevil Brownlee <n.brownlee@auckland.ac.nz>, IETF IPFIX Working Group <ipfix@ietf.org>
Subject: Re: [IPFIX] Call for consensus: changing Intended Status of draft-ietf-ipfix-export-per-sctp-stream from Informational to Standards Track

Hi Brian,

Brian Trammell wrote:
> On Aug 19, 2009, at 2:41 PM, Gerhard Muenz wrote:
> 
>>> More troublingly, SPS-compliant EPs that use the Template Withdrawal
>>> mechanism specified in Section 4.4 will _probably_ interoperate with
>>> 5101-compliant CPs in most cases, but may fail in a very difficult to
>>> detect way. Specifically, since 5101 section 8 paragraph 10 specifies
>>> that "[t]he Template ID from a withdrawn Template MUST NOT be reused
>>> until sufficient time has elapsed to allow for the Collecting Process to
>>> receive and process the Template Withdrawal Message.", 5101-compliant
>>> CPs are _not_ obliged to immediately process template withdrawals in
>>> order on the same stream. Indeed, section 4.4 paragraph 5 _directly_
>>> contradicts this MUST NOT, as "[t]he Template ID from a withdrawn
>>> Template MAY be reused on the same stream immediately after the Template
>>> Withdrawal Message is sent".
>>
>> I assume that IPFIX processes messages in the order provided by the
>> transport layer, just like any other protocol. It is strange that you
>> assume that a CP would process IPFIX Messages in a different order.
> 
> If I have a multithreaded application, with one thread receiving, one
> per stream deframing, one thread handling the control side, and multiple
> threads handling deframed data (for example), and I follow 5101 section
> 8 paragraph 10, I presume explicitly that the EP will hold off on
> immediate reuse. So I don't necessarily have to lock down _the entire
> CP_ (which could be rather expensive) in order to process a withdrawal.

There is no problem with processing incoming IPFIX Messages in multiple
threads as long as each SCTP stream is processed by a single thread.
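
To make this concrete, here is a minimal Python-style sketch of what I
have in mind; the names and the receive API are invented, this is not
taken from any real collector:

    import queue
    import threading

    workers = {}  # SCTP stream id -> message queue, one worker thread each

    def process_message(message, templates):
        # Placeholder: handle Template Sets, Template Withdrawals and Data
        # Sets strictly in the order they were received on this stream.
        pass

    def worker_loop(q):
        templates = {}  # Template ID -> definition, private to this worker
        while True:
            process_message(q.get(), templates)

    def dispatch(stream_id, message):
        # Called by the receiving thread for every incoming IPFIX Message.
        if stream_id not in workers:
            workers[stream_id] = queue.Queue()
            threading.Thread(target=worker_loop,
                             args=(workers[stream_id],),
                             daemon=True).start()
        workers[stream_id].put(message)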

I am not sure what you mean by "control side". Do you suggest that Sets
with Set IDs 2 and 3 are processed by different threads than other Sets?
That would be nonsense, because RFC5101 does not specify a time gap
between a Template and its associated Data Sets.

The only reason RFC5101 specifies a gap between the Template Withdrawal
Message and the reuse of the Template ID is the out-of-order problem
that occurs if the two messages are sent on different streams. With
per-stream export, this problem no longer occurs.
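
As a sketch of the CP side under this assumption (again Python-style,
simplified to a single function, ignoring Options Templates and
scoping): on a single stream the withdrawal and the re-registration are
processed in order, so handling them as they come is unambiguous:

    def handle_template_record(templates, template_id, field_count, fields):
        # Per RFC5101, a Template Record with a Field Count of 0 is a
        # Template Withdrawal: delete the Template and stop decoding
        # with it.
        if field_count == 0:
            templates.pop(template_id, None)
        else:
            # (Re)define the Template. On a single SCTP stream this
            # record is only processed after the withdrawal that
            # preceded it on the wire, so immediate reuse of the
            # Template ID is unambiguous.
            templates[template_id] = fields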

> Okay, I would not implement things this way. But, as 5101 does not
> _require_ the immediate processing of templates or withdrawals in order,
> it does not suffice merely to presume an implementation would do so just
> because all the implementations I can imagine writing would. And there
> are a lot of engineers out there younger than I am who are a lot
> thread-happier. :)

It seems very strange to me that you think we cannot expect protocol
implementations to process incoming messages in order unless this is
explicitly specified in the corresponding RFC. It would be worth
checking whether this behavior has been specified for other protocols
where it is necessary.

>> RFC5101 Section 9 says that the CP must delete the Template if it
>> receives the withdrawal messages. It does not give room for an
>> unspecified additional "processing time":
>>
>>   If a Collecting Process receives a Template Withdrawal Message, the
>>   Collecting Process MUST delete the corresponding Template Records
>>   associated with the specific SCTP association and specific
>>   Observation Domain, and stop decoding IPFIX Messages that use the
>>   withdrawn Templates.
> 
> The delay is implicit from 5101 section 8 paragraph 10, which reads, in
> full:
> 
>     The Template ID from a withdrawn Template MUST NOT be reused
>     until sufficient time has elapsed to allow for the Collecting
>     Process to receive and process the Template Withdrawal Message.
> 
> Or are you arguing that 5101 sections 8 and 9 are not consistent? I
> believe they are.

Still, I do not see any hint that allows the CP to process IPFIX
Messages in arbitrary order.

>>> 5. An SPS-compatible Exporting Process MUST provide a way to disable SPS
>>> mode on a per-Collecting Process basis. Otherwise, an administrator has
>>> no way to make that Exporting Process work with 5101 Collecting
>>> Processes that cannot handle fast Template reuse.
>>
>> Not sure because I do not understand the interoperability problem
>> between per-stream-EP and 5101-CP.
> 
> (imagine a sequence diagram here; my ASCII art skills leave something to
> be desired. :) )
> 
> 1. A SPS EP connects to a 5101 CP
> 
> 2. The EP sends a template A tid 257 on stream 1
> 
> 3. The EP sends a template B tid 258 on stream 2
> 
> 4. The EP sends data sets simultaneously on streams 1 and 2, according
> to templates A and B respectively
> 
> 5. The EP sends a Template Withdrawal for 257 followed immediately by a
> template C tid 257 on stream 1, according to SPS section 3.2.2 paragraph
> 2 (and in violation of 5101 section 8 paragraph 10)
> 
> 6. The EP starts sending data according to template C on stream 1
> 
> 7. The CP, not expecting immediate retransmit (because 5101 says it
> doesn't have to), does... what? with the template C data:
> 
> 7a. it decodes it correctly?
> 
> 7b. it buffers it, waiting for the template to be processed?
> 
> 7c. it decodes it according to template A?
> 
> Is case 7c always detectable? What if templates A and C have the same
> length?
> 
> 
>>> Complicated? Yes, very. But I can't see another way to guarantee SPS
>>> interoperability with 5101.
>>>
>>>
>>> A second problem is one of applicability. Presently, the draft is not
>>> particularly clear that it is recommended for use only when ALL of the
>>> following conditions hold:
>>>
>>> 1. There is a definite mapping between applications and templates
>>> (multi-template export for data structure variability and export
>>> efficiency within the same logical application, with sharing of common
>>> templates across logical applications, for example, breaks the
>>> assumptions in perstream).
>>
>> I do not understand this point. per-stream works regardless of any
>> application that uses IPFIX.
> 
> Example: YAF (tools.netsa.cert.org/yaf) uses a stack of... I think 64
> templates (it's been a year since I've hacked it), depending on whether
> the flow is a uniflow or biflow, IPv4 or v6, TCP or not, has payload
> entropy calculation or not, and so on... The relationship among these
> templates may or may not be "application" related. Following the
> per-stream specification for YAF would result in a far more complicated
> implementation (not to mention requiring people to recompile SCTP to
> handle 64 streams).
> 
> There's an implicit assumption in SPS that application <-> template, or
> application <-> set of templates. It does not hold in the general case.

per-stream does not talk about applications, and we do not assume such
a mapping.
per-stream allows loss to be calculated per Template ID or per group of
Template IDs. If this does not make sense for an application, that
application does not have to use per-stream.
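
For illustration, a rough sketch of the loss accounting this enables,
assuming Data Record sequence counters are maintained per stream along
the lines the draft proposes (names are invented, 32-bit wrap-around is
ignored):

    def update_loss(stream_state, sequence_number, records_in_message):
        # stream_state holds, per SCTP stream (i.e. per Template ID or
        # group of Template IDs), the next expected sequence number.
        # A gap means Data Records of this group were dropped, e.g.
        # under PR-SCTP partial reliability.
        expected = stream_state.get('expected_seq', sequence_number)
        if sequence_number > expected:
            stream_state['lost_records'] = (
                stream_state.get('lost_records', 0)
                + (sequence_number - expected))
        stream_state['expected_seq'] = sequence_number + records_in_message
        return stream_state.get('lost_records', 0)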

>> If the application does not need per-stream, it does not have to use it.
>>
>>> 2. There are few enough applications to fit in the number of streams
>>> supported at each endpoint.
>>
>> That is not a problem because multiple Templates can be grouped on one
>> stream as specified in the draft.
>>
>>> 3. Messages are exported using SCTP partial reliability.
>>>
>>> 4. The underlying PR-SCTP implementation supports SCTP-RESET.
>>
>> I think, this is a MAY, not a MUST.
> 
> Hm, true.
> 
>>> 5. There is a requirement for per-application record loss accounting.
>>>
>>> Therefore, as a new implementor coming upon IPFIX for the first time,
>>> seeing this as a Standards Track draft, I might think that I _must_
>>> implement it in order to be compliant, even if it has no benefits for my
>>> application (or actively complicates it, as with the example 1. above).
>>> The draft needs a very clear, _exclusive_ statement of the situations in
>>> which it applies, and guidance that it should not be applied where not
>>> applicable.
>>
>> per-stream will be standardized as an extension of RFC5101. It does not
>> have to be implemented if the benefits of the extension are not needed.
> 
> Excellent. Make it clear in the draft. It doesn't read as such to me, yet.

Yes, this was also discussed in the IESG evaluation.

>>> (I'm also not entirely convinced that dataRecordsReliability should be
>>> used as an indication that SPS is in effect, as it appears to have some
>>> applicability outside SPS.
>>
>> The definition of dataRecordsReliability does not mention any per-stream
>> context. However, it has a specific meaning if SPS is used. This meaning
>> should probably be mentioned in the definition.
>>
>>> It's also unclear what an EP should do if it
>>> wants to reserve the right to send a Template unreliably in the future,
>>> or what a CP should to if it detects inconsistency in the use of the IE.
>>
>> Sending Templates unreliably is against RFC5101 anyway.
> 
> Of course. Typo. s/Template/Data Sets described by a Template/. The
> point stands...

Ah, OK, now I understand this point. In this case, the EP should start
a new Template Session if it no longer wants to use per-stream.
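
One way I could imagine the EP doing this (a purely hypothetical sketch
of my reading of "new Template Session", not something the draft spells
out; the two send functions are assumed to exist in the exporter):

    def restart_template_session(send_withdrawal, send_template, templates):
        # Hypothetical EP-side helper: withdraw all Templates (RFC5101
        # reserves Template ID 2 for an "all Templates" withdrawal) and
        # then re-advertise them, e.g. with dataRecordsReliability
        # reported differently.
        send_withdrawal(template_id=2)
        for template_id, definition in templates.items():
            send_template(template_id, definition)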

Regards,
Gerhard



-- 
Dipl.-Ing. Gerhard Münz
Chair for Network Architectures and Services (I8)
Technische Universität München - Department of Informatics
Boltzmannstr. 3, 85748 Garching bei München, Germany
Phone:  +49 89 289-18008       Fax: +49 89 289-18033
E-mail: muenz@net.in.tum.de    WWW: http://www.net.in.tum.de/~muenz