Re: [tsvwg] Network congestion, dynamic lossless compression?

"David G. Pickett" <> Sat, 03 August 2019 19:03 UTC

Date: Sat, 3 Aug 2019 19:02:56 +0000 (UTC)
From: "David G. Pickett" <>
Subject: Re: [tsvwg] Network congestion, dynamic lossless compression?
List-Id: Transport Area Working Group <>

Already compressed: You may not have noticed that zip detects incompressible sections and simply stores them in the archive uncompressed.  Similarly, a packet-stream compressor could send such blocks on verbatim in the stream, or leave those packets alone entirely.  There is a slight header overhead for flagging uncompressed data, so the protocol would need a threshold for deciding which path a packet takes.  A backlog gives the node time to detect incompressible data and make that decision.  You still get the potential for combining smaller packets into larger ones, reducing per-packet overhead on media with a larger local link MTU and cutting wasted inter-packet gaps on the wire.  There still needs to be a packet-discard mechanism for when the compressed flow falls too far behind.  What this gives us is softer saturation.  And lost packets are not free: retransmission costs overhead, and the pause in transmission hurts both latency and throughput.
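The compress-or-copy decision above can be sketched in a few lines.  This is an illustrative model, not part of any real protocol: the names, the zlib codec, and the THRESHOLD constant (standing in for the per-block flagging overhead) are all assumptions for the sake of the example.

```python
import zlib

# Hypothetical per-packet decision: compress only when it actually pays off.
# THRESHOLD models the header overhead of flagging a block as compressed.
THRESHOLD = 8  # bytes of savings required before compression is worth it

def encode_packet(payload: bytes) -> tuple[bool, bytes]:
    """Return (compressed?, data). Incompressible payloads pass verbatim."""
    candidate = zlib.compress(payload, 6)
    if len(payload) - len(candidate) > THRESHOLD:
        return True, candidate
    # Already-compressed or encrypted payloads fall through here untouched.
    return False, payload

def decode_packet(compressed: bool, data: bytes) -> bytes:
    """Inverse of encode_packet at the far end of the link."""
    return zlib.decompress(data) if compressed else data
```

Repetitive text takes the compressed path; random (or pre-compressed) bytes are forwarded verbatim, exactly as zip stores incompressible archive members.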
Discussion is great: I wonder whether the compressor could also switch modes, using a faster, lower-ratio mode when the backlog is light and a slower, higher-ratio mode when things get worse?  In some cases, bundling small packets into big ones might increase effective link speed enough without any compression at all!
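That mode switch might look like the following minimal sketch.  The queue-depth cutoffs (4 and 32 packets) are made-up numbers for illustration; zlib's level 1 and level 9 stand in for the "fast" and "tight" codecs.

```python
import zlib

def pick_level(backlog_pkts: int) -> int:
    """Map queue depth to a zlib level (assumed cutoffs, for illustration)."""
    if backlog_pkts < 4:
        return 0   # light load: forward uncompressed, lowest latency
    if backlog_pkts < 32:
        return 1   # moderate backlog: fast, modest-ratio compression
    return 9       # heavy backlog: spend CPU to shrink the queue

def forward(payload: bytes, backlog_pkts: int) -> bytes:
    """Encode a payload according to the current backlog."""
    level = pick_level(backlog_pkts)
    return payload if level == 0 else zlib.compress(payload, level)
```

The design point is that the compressor's effort tracks the congestion signal: when the queue is empty there is nothing to gain from compression latency, and when the queue is deep the CPU time is effectively free because packets are waiting anyway.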
Demand grows: Parkinson's law does not justify forgoing innovation.  Softer link saturation means less packet loss, both from packet discard and from overdue-packet timeouts.  While compression at first glance adds latency, once the link is saturated the added effective bandwidth translates into lower latency, because each packet gets sent sooner.  And on a momentary overload, the link would switch back to uncompressed packet-by-packet forwarding as soon as the bubble of load is cleared, for the lowest latency!

There is always room at the top!
-----Original Message-----
From: Jonathan Morton <>
To: David G. Pickett <>
Cc: tsvwg <>
Sent: Sat, Aug 3, 2019 2:34 pm
Subject: Re: [tsvwg] Network congestion, dynamic lossless compression?

> On 3 Aug, 2019, at 6:13 pm, David G. Pickett <> wrote:
> Lossless compression could be applied without any effect on the transmitted data.

The chief problem with that idea is that most data transmitted these days is already compressed and/or encrypted, making it impossible to further compress transparently.  Even Web traffic, which used to be relatively compressible back when PPP options and in-modem compression were more relevant, is now encrypted more often than not (HTTPS replaces HTTP), and the encryption system itself applies compression to reduce exploitable entropy in the plaintext.
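This objection is easy to demonstrate empirically.  The snippet below (an illustration added for this discussion, with zlib standing in for any DEFLATE-style codec and a made-up HTTP-like payload) shows that plaintext shrinks on the first pass while the already-compressed output gains nothing from a second pass:

```python
import zlib

# Compressible plaintext, as pre-HTTPS web traffic often was.
text = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n" * 50

once = zlib.compress(text, 9)    # first pass: large savings
twice = zlib.compress(once, 9)   # second pass: high-entropy input

assert len(once) < len(text)     # plaintext compresses well
assert len(twice) >= len(once)   # recompression only adds framing overhead
```

Encrypted traffic behaves like `once` here: it is indistinguishable from random bytes to a transparent mid-path compressor.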

In the end, anyone can do better than your suggestion by "just" building a bigger pipe.  But that isn't a solution by itself, because demand grows to meet and exceed supply.  ("The bureaucracy is expanding, to meet the needs of the expanding bureaucracy.")  To actually improve quality of service in the long run means finding ways to reduce latency and packet loss, not to improve capacity.

 - Jonathan Morton