Re: [rohc] Re: [MOBILE-IP] Comments on PILC draft

Message-Id: <200004050240.TAA04455@baskerville.CS.Arizona.EDU>
X-Sender: micke@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Pro Version 4.0
Date: Wed, 05 Apr 2000 04:41:30 +0200
To: Andrew Worsley <epaanwo@asac.ericsson.se>
From: Mikael Degermark <micke@CS.Arizona.EDU>
Subject: Re: [rohc] Re: [MOBILE-IP] Comments on PILC draft
Cc: Phil Neumiller <neumille@cig.mot.com>, rohc@cdt.luth.se, pilc@lerc.nasa.gov
In-Reply-To: <E12ccR0-0003He-00@assc00.epa.ericsson.se>
References: <Message from Phil Neumiller <neumille@CIG.MOT.COM> <38EA5BEA.D50EACE5@cig.mot.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-pilc@lerc.nasa.gov
Precedence: bulk
Status: RO
Content-Length: 7812
Lines: 163

Andrew Worsley wrote:

>   This sounds like a worthy goal, and I believe it is somewhat compatible with
>   the ROHC group's agenda, which covers giving guidelines to link-layer designers
>   and defining compression schemes for specific link technologies.

The "guidelines for link layer designers" document developed by ROHC is ONLY 
intended to provide guidelines on what properties the link layer must and should
have in order to run ROHC header compression over it. It is emphatically 
NOT a general-purpose guideline document for wireless link-layer designers.

>  Correct me if I am wrong, but the ROHC schemes assume that the packets sent
>  have little or no error detection in them

No, that is not an assumption. We do, however, want (some instance of) the ROHC
schemes to work IF there isn't strong error detection on the link layer.

>  but that is how things are wanted
>  for Voice over IP over the air as they would rather have corrupted data
>  than have the whole packet dropped.

Some voice coders want that, yes. Not all, but some that some people care about. 

>  The draft-jonsson-robust-hc-04.txt working document spells out having
>  checksums only over the headers and presumably only dropping packets whose
>  header is corrupted (when it's compressed to a few bytes this is likely to
>  be rare). Presumably corrupt packets with good headers are better than no
>  packets.

1. The ROCCO checksum covers the original header, before compression. Its
primary purpose is to allow dependable verification of correct decompression.

2. Using that checksum to detect corruption of compressed headers as well will
weaken it for its original purpose. That may or may not be acceptable depending on
a) how resilient the scheme needs to be against production of erroneous headers, and 
b) the error properties of the link. 
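
To make point 1 concrete, here is a minimal sketch of the principle (Python; the
checksum width, the CRC choice, and the trivial header handling are placeholders
of mine, not the ROCCO wire format):

import zlib

def compress_header(original_header: bytes):
    # Compressor side: the check is computed over the ORIGINAL header, before
    # compression, and is sent along with the compressed header.  The 8-bit
    # truncation of CRC-32 and the trivial "compression" are placeholders only.
    check = zlib.crc32(original_header) & 0xFF
    compressed = original_header[-2:]   # real schemes elide fields the decompressor already knows
    return compressed, check

def decompress_and_verify(compressed: bytes, check: int, context: bytes):
    # Decompressor side: rebuild the full header from the shared context, then
    # verify the reconstruction against the check over the original header.
    reconstructed = context[:-2] + compressed
    if (zlib.crc32(reconstructed) & 0xFF) != check:
        return None                     # wrong reconstruction or stale context: discard / repair
    return reconstructed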

>  One question I have about that is that UDP over IPv6 requires mandatory
>  checksums. Does this imply that the implementation would still try to use
>  the failing packets? It's not clear to me exactly how this is going to
>  work.

1. Even if UDP prevents the packet from being delivered to the application, it will
enhance performance if the header decompression module sees it. 

2. The reason that IPv6 mandates using the UDP checksum is that IPv6 does
not have a header checksum, so in order to ensure correct delivery for UDP, IPv6
had to mandate turning the UDP checksum on. That gives some assurance that
the UDP payload is delivered to the app it is intended for.

3. Applications that want to have corrupted payloads delivered to them clearly
cannot use UDP with an enabled checksum to do so. Thus they cannot use
UDP over IPv6. 

4. A transport protocol that optionally allows such applications to receive corrupted
payloads is UDP Lite, but it remains to be seen whether it is acceptable to the IETF.
UDP Lite might be presented to the transport area WG and will be discussed on its
list sometime before the next IETF meeting. 
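
For illustration only (this may well differ from the UDP Lite draft's details,
and the real UDP checksum also covers a pseudo-header): the idea is that the
checksum covers just a prefix of the datagram, so bit errors beyond the covered
part do not force the receiver to discard it.

def ones_complement_checksum(data: bytes) -> int:
    # Simplified RFC 1071-style 16-bit one's-complement sum; pseudo-header omitted.
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udplite_accepts(datagram: bytes, coverage: int, received_check: int) -> bool:
    # Accept if the covered prefix (e.g. the headers) checks out; bit errors in
    # the uncovered payload are left for the application (codec) to deal with.
    return ones_complement_checksum(datagram[:coverage]) == received_check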

This is clearly out of scope for the ROHC WG. People who want to discuss issues
related to UDP and UDP Lite should do so on the Transport Area WG mailing list:

General Discussion: tsvwg@ietf.org 
To Subscribe:  tsvwg-request@ietf.org 
In Body: subscribe email_address 
Archive: ftp://ftp.ietf.org/ietf-mail-archive/tsvwg/ 

>  Finally, I would like to suggest that a reliable transfer profile be
>  produced, where there are bidirectional links, using a simple NACK
>  (negative acknowledgement) scheme like that described by Phil Karn
>  in his paper:
>
>  	http://people.qualcomm.com/karn/papers/usenix.ps.gz
>
>  I suggest segmentation using small segments and NACKs.

For TCP and most of the non-audio and non-video UDP, I agree that 
the link should be almost error-free. This will be achieved 
by some combination of (channel) FEC and ARQ. Most links that
ROHC is explicitly intended for can be run in such a mode.
The price paid for this is delay, but that is ok for TCP and some UDP. 

NOTE that the added delay does not primarily come from the ARQ scheme,
but from the increased interleaving needed to make the FEC work well. 
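
As a back-of-the-envelope illustration (the frame time and interleaver depth
below are assumed values, not taken from any particular radio link):

# Rough interleaving-delay arithmetic; both parameters are assumptions.
frame_time_ms = 20        # assumed air-interface frame duration
interleaver_depth = 8     # assumed number of frames the block interleaver spans

# The receiver must collect the whole interleaving block before it can
# de-interleave and run the FEC decoder, so the added one-way delay is
# roughly depth * frame time, here ~160 ms, which is already a large bite
# out of a conversational-voice delay budget.
print(interleaver_depth * frame_time_ms, "ms added by interleaving alone")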

>  There must be a large amount of theory on this and some formulas about it.
>  I did some maths and found surprisingly low segmentation numbers even for
>  quite modest BER values.

< deletion of some theory...>

>    BER = 0.01     f = 4    p = 0.617290  efficiency = 2.430
>    BER = 0.001    f = 16   p = 0.865825  efficiency = 1.299
>    BER = 0.0001   f = 50   p = 0.959251  efficiency = 1.084
>    BER = 0.00001  f = 150  p = 0.987914  efficiency = 1.026
>
>      (efficiency is how much data is transmitted in the one direction
>      divided by the size of the original data)
>
>   Admittedly my maths may be off, but although the numbers are surprisingly
>   low, I think they are about right.

I am not surprised. This suggests that performance-enhancing ARQ schemes
over the link should use frames significantly smaller than typical TCP segments. 
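
For what it's worth, the quoted figures are reproduced (to the printed
precision) if f is read as the fragment payload size in bytes, each fragment
carries two bytes of header/CRC overhead (my assumption, since the derivation
was deleted), bit errors are independent, and a corrupted fragment is simply
NACKed and retransmitted until it gets through. A small sketch of that model:

def nack_fragment_model(frag_bytes: int, ber: float, overhead_bytes: int = 2):
    # Selective-repeat NACK with independent bit errors: a fragment of
    # frag_bytes of payload plus overhead_bytes of header/CRC survives with
    # probability p, and is therefore sent 1/p times on average.
    bits = 8 * (frag_bytes + overhead_bytes)
    p = (1.0 - ber) ** bits                   # prob. the fragment arrives intact
    efficiency = bits / (8 * frag_bytes * p)  # bits sent per bit of original data
    return p, efficiency

for ber, f in [(1e-2, 4), (1e-3, 16), (1e-4, 50), (1e-5, 150)]:
    p, eff = nack_fragment_model(f, ber)
    print(f"BER = {ber:g}  f = {f:3d}  p = {p:.6f}  efficiency = {eff:.3f}")

Minimizing the efficiency over the fragment size in this model gives roughly 4,
15, 49, and 157 bytes for the four BER values, which is presumably how the f
values above were picked, and which supports the point: at the worse BERs the
link-layer frames should be a few tens of bytes, far below a typical TCP segment.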

<more stuff deleted> 

>    I think that the ROCCO system could use this for TCP support.  I think
>    the ideal fragment size calculation could be based on the number of
>    packets lost over a period. The result of this calculation could
>    be used to update the segment size used.  This would
>    allow the system to migrate to an appropriate value for the link
>    dynamically as the environment changes. With f values < 200 this
>    might only take a few seconds between adjustments.

It seems like a tall order for the ROHC WG to develop an ARQ scheme given that
the radio links explicitly mentioned in the charter already have such schemes. But, 
if the WG so desires, we can ask the ADs to add it to our charter. I would hesitate
to do so before we're done with the tasks currently at hand, though.


>    It needs a lot of work to turn the idea into a practical scheme (e.g.
>    issues of frame size, handling sequence-count wrap-around, and so on),
>    but the basic principle seems sound to me.

>    Admittedly a lot of assumptions are made about the distribution of error
>    bits, but the general principle of using moderate segment sizes and NACKs
>    seems to be a very robust approach. i.e. erring on the side of small
>    segments allows people to use large MTUs on their link (to avoid IPv4
>    fragmentation problems) and minimise the damage that long or short error
>    bursts will do.
>
>    This scheme might actually be useful with some UDP systems. Possibly the
>    choice between sending a packet over this profile and over another, lossier
>    but lower-latency profile could be made on a per-stream basis, i.e. using
>    QoS or the protocol (TCP vs UDP)?

I agree that different packet streams need different service from the 
radio system. The dividing line is not primarily UDP vs TCP, but rather
interactive audio & video vs. traditional TCP & UDP. I also expect that 
some games and other interactive apps will prefer low delay to error-freeness. 

>    Finally, the selection of what to send over such a reliable profile needs
>    to be judged based on the latency of the NACKs. If segments are small
>    enough and NACKs can be returned quickly enough, then the scheme would
>    be practical for time-constrained media like voice.

For the links explicitly mentioned in the ROHC charter, the delay budget for 
interactive voice will not tolerate using such a NACK scheme. That is the primary
reason for doing the ROHC work in the first place.

Moreover, if the delay budget allowed using a NACK scheme, it would almost
certainly be a better overall system trade-off to improve the performance
of the channel coding by adding interleaving than to deploy NACKs.

Mikael Degermark

>    	Andrew Worsley
>---
>Mailing list for Robust Header Compression WG
>Archive: http://www.cdt.luth.se/rohc/
>