Re: [TLS] Confirmation of Consensus on Removing Compression from TLS 1.3 (Martin Rex) Wed, 26 March 2014 20:56 UTC


Joseph Salowey (jsalowey) wrote:
> The use of compression within TLS has resulted in vulnerabilities
> that can be exploited to disclose TLS encrypted application data.
> The consensus in the room at IETF-89 was to remove compression from
> TLS 1.3 to remove this attack vector.  If you have concerns about
> this decision please respond on the TLS list by April 11, 2014.

This characterization of the use of compression in TLS is
sufficiently misleading as to be wrong.

First of all, the problem is in no way specific to TLS; it affects
compression anywhere within the software protocol stack.

Besides, compression aggravates the problem only when the entropy
of the data within a TLS record is so extremely low that the
difference in TLS record size becomes observable.

As I previously noted on this list, the TLS compression function,
as specified, could be used to provide random padding of TLS
records, so that all TLS records come out the same size --
essentially a form of anti-compression.  That could be applied to
all existing cipher suites, including stream ciphers and AEAD, both
of which so far allow no padding at all and therefore leak much
more about the protected traffic than cipher suites that use the
GenericBlockCipher PDU.
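To make the idea concrete, here is a minimal sketch of such an
anti-compression padding scheme.  The function names, the 2-byte
length prefix, and the fixed 1024-byte record size are my own
illustration, not anything specified; a real scheme would fit into
the negotiated compression method and record-size limits.

```python
import os
import struct

# Illustrative only: a fixed target size stands in for whatever
# record size a real deployment would choose or negotiate.
RECORD_SIZE = 1024

def anti_compress(fragment: bytes) -> bytes:
    """'Compress' a fragment by padding it with random bytes to a
    fixed size, so every record leaves the endpoint with the same
    length and record sizes leak nothing about the plaintext."""
    if len(fragment) > RECORD_SIZE - 2:
        raise ValueError("fragment too large for the fixed record size")
    pad = os.urandom(RECORD_SIZE - 2 - len(fragment))
    # 2-byte big-endian length prefix lets the peer strip the padding.
    return struct.pack(">H", len(fragment)) + fragment + pad

def anti_decompress(record: bytes) -> bytes:
    """Invert the padding by reading the length prefix."""
    (length,) = struct.unpack(">H", record[:2])
    return record[2:2 + length]
```

Because this plugs in where a compression method would, it works
unchanged with stream-cipher, block-cipher, and AEAD protection.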

The only situation where compression at the TLS level aggravates
the problem in any meaningful fashion is when one or both of the
endpoints do something extraordinarily unwise, such as multiplexing
attacker-supplied data and data that the attacker is not supposed
to know into the same compression function (i.e. protecting both
with the very same TLS connection state rather than with separate
TLS connection states and thus independent traffic protection
keys).  That is a clear and obvious fallacy, and a plain abuse of
the TLS protocol:

   Any protocol designed for use over TLS must be carefully designed to
   deal with all possible attacks against it.  As a practical matter,
   this means that the protocol designer must be aware of what security
   properties TLS does and does not provide and cannot safely rely on
   the latter.
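The fallacy of sharing one compression context can be demonstrated
in a few lines with zlib (DEFLATE, the same algorithm as TLS's
compression method 1).  The secret value and names below are
invented purely for illustration:

```python
import zlib

# Hypothetical secret that shares a compression context with
# attacker-supplied data, as when both travel in the same TLS
# connection state.
SECRET = b"Cookie: session=4f2a9c17"

def record_length(attacker_data: bytes) -> int:
    """Length an on-path observer would see for a compressed record
    that mixes attacker-supplied data with the secret."""
    return len(zlib.compress(attacker_data + SECRET))

# A guess that matches the secret compresses against it (an LZ77
# back-reference replaces the repeated bytes), so the record comes
# out shorter than for a non-matching guess of the same length.
matching = record_length(SECRET)
non_matching = record_length(b"ABCDEFGHIJKLMNOPQRSTUVWX")  # same length
assert matching < non_matching
```

Refining such guesses byte by byte is exactly the CRIME-style
attack; nothing about it depends on TLS itself, only on the shared
compression context.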

Our implementation does not support compression so far, but if
anyone wanted support for random padding with stream and AEAD
ciphers in TLS (v1.0-v1.2), I would much prefer to see that done
through an (anti-)compression scheme rather than by changing two of
the three encryption PDUs separately.

For TLSv1.3, support for random-sized padding could be added
through an appropriate definition of the respective PDUs and their
processing, but that would reduce code sharing further still.