[Iccrg] Draft minutes

michawe@ifi.uio.no (Michael Welzl) Wed, 01 August 2012 20:26 UTC

From: michawe@ifi.uio.no
Date: Wed, 01 Aug 2012 20:26:21 +0000
Subject: [Iccrg] Draft minutes
Message-ID: <D67D3E1A-548A-45DE-AF8B-C5B38EC8DB17@ifi.uio.no>
X-Date: Wed Aug 1 20:26:21 2012

Hi all,

Please find below the minutes from the ICCRG meeting last Monday, here
at the ongoing IETF. Speakers, and everyone else who was there: I'd
appreciate it if you'd take a look and send me any fixes you think are
necessary ASAP. I think you can actually edit them directly here:
http://etherpad.tools.ietf.org:9001/p/notes-ietf-84-iccrg?useMonospaceFont=true&showChat=false

I'll make end of this week the deadline, i.e. I'll finalize the  
minutes next Monday, 6 August.

Cheers,
Michael

****************************

Yuchung Cheng
Client-aided Congestion Management for TCP

Yuchung's presentation and slides described motivations, the
overall idea of doing CC across connections through CM, and
pros/cons of sender-side CM.  He described some reasons why
the client may be in a good position to help, which is what
the Great Snipe client-based CM framework is about, as the
subsequent slides describe.  There are a number of research
issues and opportunities.

Great Snipe is in early stages; source code is planned to be
released as it matures.

[Jim Gettys] - there is not one, but two bottlenecks in the home:
802.11 and broadband.  Inside Linux and other OSes there is a system
(minstrel) that tracks how fast a wireless link can go; on a 50 ms by
50 ms basis, you have some idea how fast you can transmit.
Thinking about this as a black box without that information
doesn't make sense to me

[Yuchung] - all this information is on the client-side or end
device and if we can leverage it to help the transport layer
that would be great; it is very difficult to do if we keep it
inside one connection

[Roland Bless] - didn't quite get how this differs from RFC
3124 (congestion manager).

[Yuchung] - I was imagining that that was sender-side; the
problem is that sharing state between servers within a farm is hard,
so the client may be a better point

[Tim Shepard] - had trouble following proposal; sender/server
and client/receiver terminology

[Yuchung] - I am trying to make a distinction now: in my talk, the
direction of data flow is always server to client

[Tim] - wasn't sure you've included upload case; I have a server
at home; wasn't sure this would work in general, or what you
intend

[Yuchung] - just picking up the common case of a web server
sitting in the cloud and a client behind a wireless or other
link

[Scott Bradner] - CM RFC is not limited to servers, but it is
sender-side.  As I recall, the reason CM didn't take off is
that there was no way to make sure the same paths were being
taken for each of the flows being managed

[Matt Mathis] - this is at an early, formative stage; it's probably
the case that you need to do CC from both ends, which does
(I believe) fix some of the CM problems.  For many use-cases
you potentially control all applications on a single
bottleneck; it's an improvement even if you only do better some of
the time, and doing classical CC at the other end fixes the
problems

[Michael Welzl] - what are you planning to put into code?

[Yuchung] - still experimenting; quick answer is that I don't
know yet

[Matt] - still some major holes in the code


Grenville Armitage
FreeBSD-based TCP CC research:
Delay-Gradient TCP
Modular TCP Framework

Presentation and slides describe work done on the FreeBSD TCP
stack, and aim to look for collaborators, testers, etc.  The
idea of a delay-based TCP that senses the onset of congestion through
variations in RTT was described, as embodied in the "CDG"
(CAIA Delay Gradient) algorithm.  Many aspects of the design and some
results comparing to other algorithms were shown.  Code is
available in FreeBSD.
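To make the delay-gradient idea concrete, here is a minimal Python sketch of the mechanism CDG is built around, not the FreeBSD implementation itself: per-interval RTT changes are averaged over a window, and a positive (growing-queue) gradient makes a probabilistic backoff more likely. The function names, the smoothing window, and the scale parameter are illustrative.

```python
import math

def smoothed_gradient(rtt_samples, window=8):
    """Average the per-interval change in RTT over a moving window."""
    grads = [b - a for a, b in zip(rtt_samples, rtt_samples[1:])]
    recent = grads[-window:]
    return sum(recent) / len(recent)

def backoff_probability(gradient, scale=3.0):
    """Probabilistic backoff: a larger positive gradient (a growing
    queue) makes a multiplicative decrease more likely; a flat or
    falling RTT never triggers a delay-based backoff."""
    if gradient <= 0:
        return 0.0
    return 1.0 - math.exp(-gradient / scale)

# Rising RTTs suggest a growing queue, so backoff becomes likely;
# flat RTTs suggest an empty queue, so backoff probability stays 0.
rising = [100, 102, 105, 109, 114, 120, 127, 135, 144]
flat = [100, 100, 101, 100, 100, 99, 100, 101, 100]
print(backoff_probability(smoothed_gradient(rising)))
print(backoff_probability(smoothed_gradient(flat)))
```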

[Question] - what is the impact of RED?

[Kathie Nichols] - since you're looking at how RTT goes up
and down, are you inverting RTT fairness when you have flows
with different RTTs?

[Grenville] - haven't looked at it

[Kathie] - undershooting and getting worse throughput is another
thing to look at

[Grenville] - CDG will never obtain the same goodput that loss-based
algorithms will; we will always underestimate a bit.  The benefit
is that you can get pretty good throughput but will hammer the VoIP
and other interactive traffic less

[Kathie] - comparing against reno instead of cubic; cubic is pretty
widely used

[Grenville] - we don't have it in these slides, but showed that
cubic will hammer the link and is not a real win for home links

[Andrew McGregor/Ken Carlberg] - packet aggregation would lead to
fluctuation in RTT

[Mo Zanaty] - you want to operate in the no-loss case and be
parameterless; how do you reconcile those things?  In what cases are
you able to stay fully lossless?

[Grenville] - there are a couple of magic numbers; we don't actually
revert to NewReno per se, we just use our shadow window to only back
off as much as NewReno would have under the circumstances
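The shadow-window trick described here can be sketched as follows. This is an illustrative reading of the answer above, not the FreeBSD code: the shadow window tracks the operating point a loss-based (NewReno-like) sender would have kept, so that a later loss only backs off that far. All names and the 0.7 backoff factor are assumptions for the sketch.

```python
def delay_backoff(cwnd, shadow, beta=0.7):
    """Delay-gradient backoff: shrink cwnd, but first record in the
    shadow window what a loss-based sender would still be using."""
    shadow = max(shadow, cwnd)  # remember the loss-based operating point
    return cwnd * beta, shadow

def loss_backoff(cwnd, shadow):
    """On packet loss, back off only as much as NewReno would have:
    halve the larger of the actual and shadow windows."""
    cwnd = max(cwnd, shadow) / 2
    return cwnd, cwnd  # shadow restarts from the halved window

cwnd, shadow = 100.0, 0.0
cwnd, shadow = delay_backoff(cwnd, shadow)  # cwnd ~70, shadow 100
cwnd, shadow = delay_backoff(cwnd, shadow)  # cwnd ~49, shadow 100
cwnd, shadow = loss_backoff(cwnd, shadow)   # back to 50, not ~24.5
```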

[unknown 1] - how do you distinguish between congestion
and non-congestion loss?

[Grenville] - a congestion loss happens when the delay gradients
tell us that there was congestion before the loss.  There's a
potential weakness in incorrectly estimating that queues are still
growing

[unknown 1] - that may not be the wrong answer; on the UDP/RTCWEB
side people think a few percent loss is fine without reducing the
rate; not sure anybody has really analyzed this; likely related to
bursts of other flows starting up

[Matt Mathis] - have you looked at worst-case scenario (10s of
thousands of streaming customers) and a near-field bottleneck where
loss is correlated across flows

[Grenville] - haven't specifically looked; we would see a massive
gradient

[Matt] - jumping up assumes that I saw an old RTT; suppose I have a
new connection

[Grenville] - would have to think about that

[Andrew/Ingemar] - non-congestive loss in wifi and LTE is about 40%
in the raw channel, and it has its own mechanism for retransmission
to hide its underlying packet losses

[Grenville] - those turn into latency

[Andrew] - the scarce resource is permission to transmit at all, so
it does lead to latency but not as much as you think

[Bob Briscoe] - cubic hasn't got enough control signals at speed

[general agreement]


Microphone switched to Lawrence Stewart to discuss the modular CC
framework in FreeBSD.  He discussed FreeBSD as an R&D platform.  The
implementation of the modular CC capability was discussed, including
SIFTR and khelp/hhook

[Andrew/jabber-Michael Tuexen] - does it include SCTP?

[Lawrence] - TCP immediately is the focus, other protocols down the
track

[Andrew/Dave Taht] - when will we see codel in BSD?

[Lawrence] - soon; several people working on it



Matt Mathis
Laminar TCP and Related Problems

Matt discussed the current status of the Linux patch for Laminar, and
the list of possible add-ons.  Since some folks were in neither
TCPM nor Matt's presentations at the last IETF meeting, he gave
a brief overview of Laminar that isn't in the slides.

[Michael Welzl] - what is in the patch; full-featured TCP?

[Matt] - yes, it's essentially identical, except for a case where
it's slightly slower, related to congestion window validation, and
it doesn't do TSO the same way

Matt encourages people to play with the code, expecting they will
find that Laminar makes things better/easier.  He wants to hear how
it affects other people's work (making it easier or harder).

laminar@ietf.org mailing list

[Lars Eggert] - we have time to talk about what we want to do
here that we didn't in TCPM

[Mirja] - is this going upstream?

[Matt] - this breaks the intuition of anyone who knows the code well
about how it works; I'm breaking it into smaller pieces now,
but that intermediate kernel would require different debugging,
so it's more work to do it that way.  On the other hand, I have
a history of long-term support for patches off to the side that
eventually got imported


Jim Gettys

Jim discussed bufferbloat global research topics involving
measurements needed to tell if it's getting better/worse and
where it is.

Netflix's streaming behavior is to pump out chunks of data every
10 seconds, rather than actually streaming
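As a rough back-of-the-envelope illustration of why this chunked delivery matters for queues (the numbers here are illustrative, not Netflix's actual parameters): delivering each interval's worth of video as a burst means the on-rate far exceeds the nominal streaming rate.

```python
def burst_rate_mbps(video_bitrate_mbps, chunk_interval_s, burst_duration_s):
    """On-rate when each chunk_interval's worth of video is delivered
    as a single burst lasting burst_duration_s seconds."""
    chunk_bits = video_bitrate_mbps * 1e6 * chunk_interval_s
    return chunk_bits / burst_duration_s / 1e6

# A 5 Mbit/s stream delivered once every 10 s: if each chunk drains
# in 2 s, the sender transmits at 25 Mbit/s during the burst -- five
# times the nominal rate, which is what periodically fills queues.
print(burst_rate_mbps(5, 10, 2))  # -> 25.0
```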

wants an adaptive AQM; Jim says he's throwing out questions, but
he doesn't have answers.  This is research to be done.

work on tools is needed

[Lars Eggert] - go to homenet and write a document about what
needs to be done

[Andrew McGregor]

[Dave Taht] - OpenWrt already supports many models

[Lars] - I need one that does fiber

[Jim] - wireless is often as bad as broadband, or worse


Meeting adjourned 3PM local time