Re: [core] I-D Action: draft-greevenbosch-core-minimum-request-interval-01.txt

Bert Greevenbosch <Bert.Greevenbosch@huawei.com> Sun, 28 April 2013 08:58 UTC

From: Bert Greevenbosch <Bert.Greevenbosch@huawei.com>
To: Carsten Bormann <cabo@tzi.org>
Cc: core@ietf.org
Date: Sun, 28 Apr 2013 08:58:01 +0000
Subject: Re: [core] I-D Action: draft-greevenbosch-core-minimum-request-interval-01.txt

Hi Carsten,

Thank you for your feedback. Indeed I remember the discussions in Orlando.

I think the main use case of my flow control draft is reducing server load by limiting the communication within a client/server pair. As the solution limits the time between subsequent requests through a minimum request interval (MRI), it is useful for transactions that require multiple requests, such as block-wise transfers and browsing transactions.

One of the discussion points in Orlando was how the server can determine a good MRI. I would say the main point is that when the server is overloaded, e.g. when it has to drop requests because its buffers are full, it needs to increase the MRI, whereas if there is no problem, it can decrease the MRI or even set it to 0 if that is considered safe. Increasing the MRI could be something simple like multiplying it by 2 (within bounds), whereas decreasing could be linear. This would be reminiscent of TCP's congestion control, but other strategies are possible as well.
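To make this concrete, here is a minimal sketch of such a server-side rule; all names, constants and the overload test are my own illustration, not part of the draft:

# Illustrative server-side MRI adaptation; names and constants are hypothetical.
MRI_MAX = 60.0       # upper bound on the MRI, in seconds
MRI_DECREASE = 1.0   # linear decrease step, in seconds

def next_mri(current_mri, dropped_requests):
    # Multiplicative increase when requests had to be dropped (e.g. full
    # buffers); start from a small non-zero value if the MRI is currently 0.
    if dropped_requests > 0:
        return min(max(current_mri, MRI_DECREASE) * 2, MRI_MAX)
    # Otherwise decrease linearly towards 0.
    return max(current_mri - MRI_DECREASE, 0.0)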

<quote>
Just to build a very rough strawman for such an alternative metric: 
A server could ask a client to send the next request only after twice the time it took the client to obtain the previous response.
(This would figure in network load in a way that the server on its own cannot do.)
</quote>

Yes, that is an idea that deserves exploration. It means that the client would calculate the MRI by itself, with the server only indicating the need to adjust the MRI rather than signalling an explicit value. This could be done, for example, through a new "slow-down" option.

With "doubling", do you mean doubling the MRI once or doubling the current MRI every time the client receives a "slow-down" option? If the latter, there should be an upper bound to the MRI. Also, one could consider linearly reducing the MRI if the "slow-down" option is not included. Since we want to prevent the server to maintain state, it cannot remember to whom it sent how many "slow-down" responses.

A simpler alternative would be for the client to just double the last observed time between request and response (the former option in the above paragraph), but that would only halve the number of requests per time unit, which may or may not reduce the server load enough.
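To make the two variants concrete, here is a minimal sketch of the client-side computation; the "slow-down" option name and all constants are hypothetical, purely for discussion:

# Hypothetical client-side MRI computation; option name and constants are illustrative.
MRI_MAX = 60.0    # upper bound on the MRI, in seconds
MRI_DECAY = 1.0   # linear reduction when no "slow-down" option is received

def mri_on_slow_down(current_mri, slow_down_received):
    # Variant 1: double the current MRI on every "slow-down", within an
    # upper bound; relax it linearly when the option is absent.
    if slow_down_received:
        return min(max(current_mri, MRI_DECAY) * 2, MRI_MAX)
    return max(current_mri - MRI_DECAY, 0.0)

def mri_from_response_time(last_response_time):
    # Variant 2 (simpler): wait twice the last observed request/response
    # time, which roughly halves the request rate.
    return min(2.0 * last_response_time, MRI_MAX)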

What are your thoughts?

<quote>
Another issue is the scope of the throttling: Is it for a single resource, this endpoint, the IP address, a prefix (say, a /64)?
(Wider scopes have more security implications.)
</quote>

Indeed, if the server can signal explicit MRI values, it can take more factors into account. For example, it could consider the number of clients it is talking to when choosing its MRI, and by signalling the same MRI to all clients it enforces fairness. There are indeed security implications, as clients would then influence other clients' behaviour, although with a congested server that is largely the case anyway. We need to consider whether the server's more extensive knowledge of the situation can lead to significantly better MRIs, and whether that would justify the higher complexity.
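As a very rough illustration of how explicit MRI signalling could factor in the number of clients, assuming the server can estimate its sustainable request rate (all names and the sizing rule are my own assumptions):

# Hypothetical sizing rule: advertise the same MRI to every client, so that
# each gets an equal share of the server's sustainable request rate.
MRI_MAX = 60.0  # upper bound on the advertised MRI, in seconds

def shared_mri(sustainable_requests_per_second, active_clients):
    if sustainable_requests_per_second <= 0:
        return MRI_MAX
    per_client_rate = sustainable_requests_per_second / max(active_clients, 1)
    return min(1.0 / per_client_rate, MRI_MAX)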

Just some considerations for further discussion...

Best regards,
Bert


-----Original Message-----
From: core-bounces@ietf.org [mailto:core-bounces@ietf.org] On Behalf Of Carsten Bormann
Sent: 26 April 2013 18:37
To: core@ietf.org (core@ietf.org)
Subject: Re: [core] I-D Action: draft-greevenbosch-core-minimum-request-interval-01.txt

Bert: Thanks for updating this.
I referred to the -00 in my reply to Martin Stiemerling's IESG comments.

There were some comments in the Orlando meeting that people didn't quite know when to use such an Option and how to find a good value to put in it.
(I think we need to check whether there is a better metric to use for throttling/load shedding than the elapsed time between two requests.)
It would probably help to look at the use cases in some more detail.  What specifically is a server trying to achieve by limiting the request interval?
What kind of metric aligns best with such an objective?

Just to build a very rough strawman for such an alternative metric: 
A server could ask a client to send the next request only after twice the time it took the client to obtain the previous response.
(This would figure in network load in a way that the server on its own cannot do.)

Another issue is the scope of the throttling: Is it for a single resource, this endpoint, the IP address, a prefix (say, a /64)?
(Wider scopes have more security implications.)

Regards, Carsten


On Apr 26, 2013, at 11:23, internet-drafts@ietf.org wrote:

> 
> A New Internet-Draft is available from the on-line Internet-Drafts directories.
> 
> 
> 	Title           : CoAP Minimum Request Interval
> 	Author(s)       : Bert Greevenbosch
> 	Filename        : draft-greevenbosch-core-minimum-request-interval-01.txt
> 	Pages           : 8
> 	Date            : 2013-04-26
> 
> Abstract:
>   This document defines a "Minimum-Request-Interval" option for CoAP,
>   which can be used to negotiate the minimum time between two
>   subsequent requests within a single client and server pair.  It can
>   be used for flow and congestion control, reducing the consumption of
>   server and network resources when needed.
> 
> Note
> 
>   Discussion and suggestions for improvement are requested, and should
>   be sent to core@ietf.org.
> 
> 
> The IETF datatracker status page for this draft is:
> https://datatracker.ietf.org/doc/draft-greevenbosch-core-minimum-request-interval
> 
> There's also a htmlized version available at:
> http://tools.ietf.org/html/draft-greevenbosch-core-minimum-request-interval-01
> 
> A diff from the previous version is available at:
> http://www.ietf.org/rfcdiff?url2=draft-greevenbosch-core-minimum-request-interval-01
> 
> 
> Internet-Drafts are also available by anonymous FTP at:
> ftp://ftp.ietf.org/internet-drafts/
> 
> 
