Re: [aqm] FQ-PIE kernel module implementation

Polina Goltsman <polina.goltsman@student.kit.edu> Tue, 07 July 2015 09:09 UTC

Message-ID: <559B9724.6090902@student.kit.edu>
Date: Tue, 07 Jul 2015 11:08:52 +0200
From: Polina Goltsman <polina.goltsman@student.kit.edu>
To: "Bless, Roland (TM)" <roland.bless@kit.edu>, "Agarwal, Anil" <Anil.Agarwal@viasat.com>, "Fred Baker (fred)" <fred@cisco.com>, Toke Høiland- Jørgensen <toke@toke.dk>
References: <D1961A16.1087%hokano@cisco.com> <5577FBD3.5000804@student.kit.edu> <97EDD2D8-CC0A-4AFA-9A74-3F2C282CF5C2@cisco.com> <87mvzem9i9.fsf@alrua-karlstad.karlstad.toke.dk> <7E6C797B-EE6F-4390-BC8F-606FDD8D5195@cisco.com> <559659A8.9030104@student.kit.edu> <87fv55mtpz.fsf@alrua-karlstad.karlstad.toke.dk> <559674B7.5050004@kit.edu> <7A2801D5E40DD64A85E38DF22117852C70AD0859@wdc1exchmbxp05.hq.corp.viasat.com> <559B889B.4060409@kit.edu>
In-Reply-To: <559B889B.4060409@kit.edu>
Archived-At: <http://mailarchive.ietf.org/arch/msg/aqm/GqYPE5Uj1CVkAzO_walauMVsKUo>
Cc: "draft-ietf-aqm-pie@tools.ietf.org" <draft-ietf-aqm-pie@tools.ietf.org>, "Hironori Okano -X (hokano - AAP3 INC at Cisco)" <hokano@cisco.com>, AQM IETF list <aqm@ietf.org>
Subject: Re: [aqm] FQ-PIE kernel module implementation

Hello all,

Here are my thoughts about the interaction of an AQM with a fair-queueing system.

I will start with a figure. I started a TCP flow with netperf and, 15 seconds 
later, an unresponsive UDP flow with iperf, with a send rate slightly above 
the bottleneck link capacity. Both flows then ran together for 50 seconds.
The figure plots the throughput of the UDP flow as reported by the iperf 
server. (Apparently netperf produces no output when throughput falls below 
some value, so I cannot plot the TCP flow.) The bottleneck is 100 Mb/s and 
the RTT is 100 ms. All AQMs were configured with their default values and 
the noecn flag.


Here is my example in theory. A link with capacity C is shared between two 
flows: a non-application-limited TCP flow and an unresponsive UDP flow with 
a send rate of 105% of C. Both flows send max-sized packets, so a 
round-robin scheduler can be used instead of a fair-queueing scheduler.

By the definition of the max-min fair share, each flow is supposed to get 
50% of the link capacity.
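That fair share follows from the usual water-filling computation. A minimal sketch (the function name and structure are mine, purely illustrative):

```python
def max_min_fair_share(capacity, demands):
    """Water-filling: repeatedly split the remaining capacity equally
    among the flows that are not yet satisfied."""
    alloc = [0.0] * len(demands)
    # satisfy flows in order of increasing demand
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    remaining = capacity
    left = len(demands)
    for i in order:
        equal_share = remaining / left
        alloc[i] = min(demands[i], equal_share)
        remaining -= alloc[i]
        left -= 1
    return alloc
```

With C = 100 Mb/s, a greedy TCP flow (demand effectively unbounded) and a UDP flow sending 105% of C each end up with 50 Mb/s.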

(1) Taildrop queues:
UDP packets are dropped when the UDP queue is full, and TCP packets are 
dropped when the TCP queue is full. As long as there are packets in the TCP 
flow's queue, TCP should receive its fair share. (As far as I understand, 
this depends on the size of the queue.)

(2) AQM with state per queue:
The drop probability of the UDP flow will always be non-zero and should 
stabilize at approximately 0.5.
The drop probability of the TCP flow will be non-zero only while it sends 
above 50% of C. Thus, while TCP recovers from a packet drop, it should not 
receive another one.

(3) AQM with state per aggregate:
The UDP flow always creates a standing queue, so the drop probability of 
the aggregate is always non-zero; call it p_aqm.
The share of TCP packets in the aggregate is p_tcp = TCP send rate / (TCP 
send rate + UDP send rate), and the probability of dropping a TCP packet is 
p_aqm * p_tcp. This probability is non-zero unless TCP does not send at all.

In (3) the drop probability seen by the TCP flow is at least different. I 
assume it is larger than in (2), which will cause more packet drops for the 
TCP flow; as a result, the flow will reduce its sending rate below its fair 
share.
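To make the comparison concrete, here is a rough steady-state sketch of cases (2) and (3). All names are mine, and it idealizes the AQM as dropping exactly enough to bring the accepted rate down to the service rate:

```python
def per_queue_drop_probs(capacity, tcp_rate, udp_rate):
    # Case (2): each queue has its own AQM; in steady state it drops
    # just enough to bring that flow down to its fair share of the link.
    fair = capacity / 2
    p_tcp = max(0.0, 1.0 - fair / tcp_rate)
    p_udp = max(0.0, 1.0 - fair / udp_rate)
    return p_tcp, p_udp

def aggregate_drop_probs(capacity, tcp_rate, udp_rate):
    # Case (3): one AQM state for the aggregate; a single drop
    # probability p_aqm brings total arrivals down to link capacity.
    total = tcp_rate + udp_rate
    p_aqm = max(0.0, 1.0 - capacity / total)
    p_tcp_share = tcp_rate / total
    # probability that an arriving packet is a TCP packet and is dropped
    return p_aqm, p_aqm * p_tcp_share
```

With C = 100, TCP at its fair share of 50 and UDP at 105: per-queue, the TCP flow sees zero drops while UDP sees about 0.52; with aggregate state, p_aqm is about 0.35 and the TCP flow is hit as well, even though it never exceeded its fair share.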

Regards,
Polina

On 07/07/2015 10:06 AM, Bless, Roland (TM) wrote:
> Hi,
>
> thanks for your analysis. Indeed, Polina came up with
> a similar analysis for an unresponsive UDP flow and
> a TCP flow. Flow queueing can achieve link share fairness
> despite the presence of unresponsive flows, but is ineffective
> if the AQM is applied to the aggregate and not to the individual
> flow queue. Polina used the FQ-PIE implementation
> to verify this behavior (post will follow).
>
> Regards,
>   Roland
>
>
> Am 04.07.2015 um 22:12 schrieb Agarwal, Anil:
>> Roland, Fred,
>>
>> Here is a simple example to illustrate the differences between FQ-AQM with AQM per queue vs AQM per aggregate queue.
>>
>> Let's take 2 flows, each mapped to separate queues in a FQ-AQM system.
>> 	Link rate = 100 Mbps
>> 	Flow 1 rate = 50 Mbps, source rate does not go over 50 Mbps
>> 	Flow 2 rate >= 50 Mbps, adapts based on AQM.
>>
>> FQ-Codel, AQM per queue:
>> 	Flow 1 delay is minimal
>> 	Flow 1 packet drops = 0
>> 	Flow 2 delay is close to target value
>>
>> FQ-Codel, AQM for aggregate queue:
>> 	Does not work at all
>> 	Packets are dequeued alternately from queue 1 and queue 2
>> 	Packets from queue 1 experience very small queuing delay
>> 	Hence, CoDel does not enter dropping state, queue 2 is not controlled :(
>>
>> FQ-PIE, AQM per queue:
>> 	Flow 1 delay is minimal
>> 	Flow 1 packet drops = 0
>> 	Flow 2 delay is close to target value
>>
>> FQ-PIE, AQM for aggregate queue:
>> 	Flow 1 delay and queue 1 length are close to zero.
>> 	Flow 2 delay is close to 2 * target_del :(
>> 		qlen2 = target_del * aggregate_depart_rate
>> 	Flow 1 experiences almost the same number of drops or ECNs as flow 2 :(
>> 		Same drop probability and almost same packet rate for both flows
>> 	(If flow 1 drops its rate because of packet drops or ECNs, the analysis gets slightly more complicated).
>>
>> See if this makes sense.
>>
>> If the analysis is correct, then it illustrates that flow behaviors are quite different
>> between AQM per queue and AQM per aggregate queue schemes.
>> In FQ-PIE for aggregate queue,
>> 	- The total number of queued bytes will slosh between
>> 	  queues depending on the nature and data rates of the flows.
>> 	- Flows with data rates within their fair share value will experience
>> 	  non-zero packet drops (or ECN marks).
>> 	- Flows that experience no queuing delay will increase queuing delay of other flows.
>> 	- In general, the queuing delay for any given flow will not be close to target_delay and can be
>> 	  much higher.
>
>
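Anil's FQ-PIE-for-aggregate numbers can be reproduced with a back-of-the-envelope calculation. A sketch under the same steady-state assumptions as his analysis (the function and its name are mine):

```python
def fq_pie_aggregate_delays(capacity, target_del):
    # PIE with a single (aggregate) state holds
    #   total_queue_bytes / aggregate_depart_rate == target_del
    total_qlen = target_del * capacity
    # flow 1 arrives at exactly its round-robin service rate,
    # so queue 1 stays essentially empty; queue 2 holds everything
    qlen1, qlen2 = 0.0, total_qlen
    # the round-robin scheduler serves each queue at capacity / 2
    per_flow_rate = capacity / 2
    delay1 = qlen1 / per_flow_rate
    delay2 = qlen2 / per_flow_rate
    return delay1, delay2
```

For any target_del this gives flow 1 a delay of zero and flow 2 a delay of exactly 2 * target_del, matching the ":(" lines above.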