[aqm] thoughts on operational queue settings (Re: [tsvwg] CC/bleaching thoughts for draft-ietf-tsvwg-le-phb-04) (fwd)

Mikael Abrahamsson <swmike@swm.pp.se> Thu, 12 April 2018 06:46 UTC

Date: Thu, 12 Apr 2018 08:46:08 +0200
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: aqm@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/aqm/8SA85AWv272BAOZm0x_k0ypNU04>

Hi,

I sent this to tsvwg, where we're discussing the LE codepoint. Since I am 
now talking about queue settings, I thought it might be interesting to get 
feedback from this group as well on what advice we should give operators.

Please take into account that I am aiming for what is possible on 
currently deployed platforms seen in the field, not what might be 
possible on future hardware/software. What is generally available is 
(W)RED per queue and a few queues per customer.

I am also going to test a three-queue setup, where each of the three 
groups of DSCP values from the config below would go into a different 
queue: LE would perhaps be assured 5% of bandwidth, with the rest split 
evenly between a BE and an "everything else" queue. If I did that, I 
would probably not start dropping LE traffic until 10-20ms of buffer 
fill. A rough sketch of what that could look like follows.
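
For concreteness, here is an untested IOS-XR-style sketch of that 
three-queue idea. The class-map names, exact percentages and thresholds 
are placeholders I made up for illustration, not verified configuration:

   ! Hypothetical three-queue setup: LE assured 5% of bandwidth, the
   ! remainder split roughly evenly between BE (class-default) and
   ! everything else. Names and numbers are placeholders.
   class-map match-any LE
    match dscp 1 8
   class-map match-any NOT-BE-LE
    match dscp 9-63
   !
   policy-map THREE-QUEUE
    class LE
     bandwidth percent 5
     random-detect 10 ms 500 ms
    class NOT-BE-LE
     bandwidth percent 47
     random-detect 10 ms 1000 ms
    class class-default
     bandwidth percent 48
     random-detect 5 ms 1000 ms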

---------- Forwarded message ----------
Date: Thu, 12 Apr 2018 08:39:25 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: Brian E Carpenter <brian.e.carpenter@gmail.com>
Cc: tsvwg@ietf.org
Subject: thoughts on operational queue settings (Re: [tsvwg] CC/bleaching thoughts for draft-ietf-tsvwg-le-phb-04)

On Thu, 12 Apr 2018, Brian E Carpenter wrote:

> BE and LE PHBs should talk about queueing and dropping behaviour, not about 
> capacity share, in any case. It's clear that on a congested link, LE is 
> sacrificed first - peak hour LE throughput might be precisely zero, which is 
> not acceptable for BE.

I have received questions from operational people asking for configuration 
examples of how to handle LE/BE and so on, so I did some work in our lab to 
provide some kind of example.

So my first goal was to figure out something that would behave reasonably on 
a platform that can only do DSCP-based RED (as this is typically what is 
available on platforms going back 15 years). This is not optimal, but at 
least it would be deployable on lots of platforms currently installed and 
moving packets for customers.

The test was performed with 30ms of RTT, 10 parallel TCP sessions per diffserv 
RED curve, and 800 megabit/s access speed. (The access is really gigabit, but 
my lab setup has some constraints that meant I might get uncontrolled packet 
loss from other equipment sitting on the same shared link if I set it to 
gigabit, so I opted for 800 megabit/s as "close enough".)

What I came up with, which gives LE roughly 10% of access bandwidth compared 
to BE and a slight advantage to anything that is not BE/LE (the goal was to 
give that traffic a lossless experience), was this:

This is a Cisco ASR9k that, without this RED configuration, will buffer 
packets up to ~90 milliseconds, resulting in 120ms RTT (30ms path RTT plus 
90ms of bufferbloat).

  class class-default
   shape average 800 mbps
   random-detect dscp 1,8 1 ms 500 ms
   random-detect dscp 0,2-7 5 ms 1000 ms
   random-detect dscp 9-63 10 ms 1000 ms

This basically says that for LE and CS1, start dropping packets at 1ms of 
buffer fill. Since some applications use CS1 for scavenger traffic, it made 
sense to me to treat CS1 and LE the same.

For BE (which I made DSCP 0 and 2-7), start dropping packets at 5ms of buffer 
fill, less aggressively than for LE.

For the rest, don't start dropping packets until 10ms of buffer fill, giving 
it a slight advantage (the thought here being that gaming traffic etc. should 
not see many drops, even though it will see some induced RTT because of BE 
traffic).

This typically results in LE using approximately 30-50 megabit/s when there 
are 10 LE TCP sessions and 10 BE TCP sessions all trying to go full out; the 
BE sessions then get ~750 megabit/s. The added buffer delay is around 5-10ms, 
as that is where the BE sessions settle their bandwidth usage. The platform 
unfortunately doesn't support ECN marking.
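
On platforms that do support ECN with WRED (classic IOS MQC, for 
example), marking instead of dropping can be turned on with something 
along these lines. I have not tested this in this setup; it is only to 
show the knob exists:

   ! Classic IOS MQC example; not available on this ASR9k setup.
   class class-default
    random-detect dscp-based
    random-detect ecn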

If I were to spend queues on this traffic instead of using RED, I would do 
this differently. I will do more tests with lower speeds etc.; this was just 
initial testing for one use case, but also meant to give an example of what 
can be done on currently shipping platforms. I know there are much better 
ways of doing this, but I want this in networks NOW, not in 5-10 years. The 
easier the advice, the better the chance we get it into production networks.

I don't think it's a good idea to give CS1/LE no bandwidth at all; that might 
cause failure cases we can't predict. I prefer to give LE traffic a big 
disadvantage, so that it only gets perhaps 5-10% of the bandwidth when there 
is competing traffic.

I will do more testing, I have several typical platforms available to me that 
are in wide use.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se