[Iccrg] Meeting agenda

michael.welzl@uibk.ac.at (Michael Welzl) Tue, 12 September 2006 08:32 UTC

From: michael.welzl@uibk.ac.at
Date: Tue, 12 Sep 2006 08:32:16 +0000
Subject: [Iccrg] Meeting agenda
In-Reply-To: <aa7d2c6d0609091107h219a5761if80d690c004a3de9@mail.gmail.com>
References: <1157563149.3217.285.camel@lap10-c703.uibk.ac.at> <20060908173802.GI21317@grc.nasa.gov> <1157737559.3243.168.camel@lap10-c703.uibk.ac.at> <20060909143410.GI96464@verdi> <aa7d2c6d0609091107h219a5761if80d690c004a3de9@mail.gmail.com>
Message-ID: <1158041095.4771.16.camel@lap10-c703.uibk.ac.at>
X-Date: Tue Sep 12 08:32:16 2006

> > > The correct reaction to corruption.
> >
> >    Actually, there is one: increase the redundancy of coding.
> >    This actually will result in a hopefully-slight _increase_ in bandwidth
> > demand. This is _not_ wrong!
> 
> Good point.  However, bear in mind that it is the role of the source
> to do source coding, rather than "100% reliable" channel coding, which
> is best done at the physical/link level.  One purpose of allowing the
> network to deliver corrupt packets is if the ultimate application can
> withstand some corruption (like transmitting perceptual data).
> 
> In that case, the application should balance more concise source
> coding against the increased channel coding to maximise the user's
> perception *given* the network resources.  It does not immediately
> tell us whether we should increase or decrease the bandwidth.

Many different things are being mixed up here - the questions of "who
reacts?" and "will corrupt data be delivered?"...

So, to put it more precisely: my question was about the
correct response of a transport endpoint that is notified
of corruption in a setting where the receiver cannot make
use of the corrupt data.

Such a notification can be provided by the DCCP Data Checksum
option, and also by the mechanism described in this draft:
http://www.ietf.org/internet-drafts/draft-stewart-sctp-pktdrprep-05.txt

In this case, increasing redundancy is not an option.

So, under these circumstances, what is the right response
for the transport sender?
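
To make the setting concrete, here is a rough sketch of where
that decision would sit in the sender (plain Python, all names
made up purely for illustration - the mechanisms above only
provide the notification, nothing more):

    # Sketch only: a sender that receives separate feedback for
    # congestion loss and for corruption (e.g. via the DCCP Data
    # Checksum option or the packet-drop report mentioned above).
    def react(event, cwnd, beta_loss=0.5, beta_corruption=None):
        """Return the new congestion window after a feedback event."""
        if event == "loss":
            # The usual congestion response: multiplicative decrease.
            return cwnd * beta_loss
        if event == "corruption":
            # The receiver cannot use the corrupt data, so adding
            # redundancy does not help; how much to reduce (if at
            # all) is exactly the open question.
            if beta_corruption is None:
                raise NotImplementedError("correct reaction to corruption?")
            return cwnd * beta_corruption
        return cwnd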


> >    It is always "wrong" to fill a pipe with traffic which serves no useful
> > purpose. Alas, there's often no way to determine whether it's useful.
> 
> Exactly.  Getting corrupted video is arguably more "useful" than
> getting reliable spam.  It should be up to the application to
> determine what is useful.

Again, I think this comes down to my imprecise question.
In the setting I described above (the one my question
should have addressed), it's clear that an erroneous packet
is of no use to the receiver, except that it enables the
receiver to inform the sender of its reception.


> > > It may be the right thing to reduce the rate by less than
> > > in the normal congestion case, but by how much...?
> >
> >    Wrong question!
> >
> >    Sufficient redundancy can repair the corruption (and, of course,
> > eliminate the need for retransmission).
> 
> No, it is the right question.  The well-developed theory of Network

It was an imprecise question  :)


> Utility Maximisation tells us that *** the amount by which the source
> should back off when there is corruption increases with the amount of
> congestion (signalled by loss).*** It is up to the corruption-tolerant
> application whether it uses the bandwidth for redundancy or reduced
> compression.
> 
> The issue of whether different applications should get more or less
> bandwidth is an issue of choosing the "utility" the application
> obtains by getting the data.  Michael is right that the incremental
> utility gained by getting corrupt data is smaller than that from
> un-corrupt data.  That means that the "optimal" solution will cause
> that rate to decrease *provided* there is other traffic to take up the
> slack (indicated by loss).  If there is no loss, the source should not
> back off due to corruption.
> 
> A crude approach would be to respond to a loss rate  X  and corruption
> rate  Y  the same way as to  X ((X + 2Y) / (X+Y))  loss with no
> corruption.  For  X << Y  (an uncongested lossy link), this gives
> "loss" rate 2X and hence high potential throughput, but deferring to
> those with less loss.  If  X >> Y (a congested link) it gives  "loss"
> rate X+Y, treating corruption as loss.

That looks very interesting! How did you obtain this equation -
what's the rationale behind it?
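
Just to check that I read the formula correctly, here is a
quick numerical sketch of the two limiting cases you mention
(plain Python, function name made up for illustration):

    def effective_loss(x, y):
        """Map loss rate x and corruption rate y to a single
        'virtual' loss rate, as above: x * (x + 2y) / (x + y)."""
        if x + y == 0:
            return 0.0
        return x * (x + 2 * y) / (x + y)

    print(effective_loss(1e-5, 1e-2))  # x << y: ~2e-5, i.e. roughly 2X
    print(effective_loss(1e-2, 1e-4))  # x >> y: ~1.01e-2, i.e. roughly X+Y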


> This can be implemented by a lookup table, the same way as HS-TCP is.
> 
> An important question is whether all applications should be
> arbitrarily assigned the same utility (as is implicitly done by all
> TCP variants), or the utility should reflect the actual application's
> benefit from getting a certain amount of data at a given level of
> corruption.  Since different applications gain vastly different amounts

Let's assume equal utilities for now, just to keep things
simple. In principle, it would also be interesting to address
applications that, beyond the setting above, benefit from the
delivery of some corrupt data - but let's start with the
simple case.


> of utility from corrupt data, I would suggest having two classes of
> utility (i.e., two responses to congestion) -- one for TCP-like
> behaviour which seeks 100% reliability and one for perceptual
> applications which do their own redundancy tradeoff.
> 
> Congestion control that responds to "heterogeneous" signals, like
> loss and corruption, has begun to be studied by Kevin Tang
> <http://netlab.caltech.edu/~aotang/pub/07/ton2007.pdf>.

Great pointer, thanks!

Cheers,
Michael