Re: Getting to consensus on packet number encryption

Patrick McManus <pmcmanus@mozilla.com> Wed, 04 April 2018 12:54 UTC

From: Patrick McManus <pmcmanus@mozilla.com>
Date: Wed, 04 Apr 2018 08:54:32 -0400
Subject: Re: Getting to consensus on packet number encryption
To: Mark Nottingham <mnot@mnot.net>
Cc: IETF QUIC WG <quic@ietf.org>, Lars Eggert <lars@eggert.org>

I think it's time to move forward with PR #1079.

BTW, I think it's wrong to characterize this as perf vs. privacy - it's CPU vs.
bandwidth; the privacy is a must-have.

PR #1079 costs CPU (both in HW and SW), and in my mind it really competes with
either a multipath PN scheme (which we'll table for v2) or a one-pass scheme
that carries a new nonce rather than making the packet number do double duty.
The latter is a bandwidth tax, and I'd rather pay the CPU cost.
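
For concreteness, here is a rough sketch of the two options (Python, using the
pyca/cryptography library). The key sizes, sample offset, and packet layout are
illustrative assumptions, not the exact PR #1079 construction (QUIC proper also
mixes the PN into a per-connection IV, which is skipped here):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def protect_two_pass(payload_key, pn_key, pn, header, payload):
        # PR #1079 style: the PN doubles as the nonce, so no extra bytes on the
        # wire, but masking the PN costs one more AES block *after* pass 1.
        nonce = pn.to_bytes(12, "big")
        ciphertext = AESGCM(payload_key).encrypt(nonce, payload, header)  # pass 1
        sample = ciphertext[:16]                     # sample of payload ciphertext
        mask = Cipher(algorithms.AES(pn_key), modes.ECB()).encryptor().update(sample)
        enc_pn = bytes(a ^ b for a, b in zip(pn.to_bytes(4, "big"), mask))  # pass 2
        return header + enc_pn + ciphertext

    def protect_one_pass(payload_key, header, payload):
        # Alternative: a fresh random nonce per packet -- a single pass over the
        # data, but 12 extra bytes on every packet (the bandwidth tax).
        nonce = os.urandom(12)
        ciphertext = AESGCM(payload_key).encrypt(nonce, payload, header)
        return header + nonce + ciphertext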

The only serious alternative, to me, is to take the timeline hit and work out
multiple packet number spaces for v1 (which are likely part of multipath
anyhow).

-P


On Wed, Apr 4, 2018 at 12:58 AM, Mark Nottingham <mnot@mnot.net> wrote:

> Hi everyone,
>
> The editors have told your chairs that this issue is starting to block
> progress on other aspects of QUIC. Coming to consensus on it soon (i.e.,
> before Stockholm, if possible) would be good.
>
> It looks like this thread has come to a natural pause. Reading through it,
> I think we agree there are some unpleasant tradeoffs here, but so far we
> have only one concrete proposal -- PR#1079.
>
> My AU$.05 - While we're chartered to produce a protocol that's encrypted
> as much as operational concerns allow, and privacy is a natural extension
> of that (especially in light of RFC 7258 and RFC 6973), there is no
> *requirement* that we produce a protocol that can be accelerated by
> hardware.
>
> That being the case, I think we need to ask ourselves if we believe that
> inclusion of PR#1079 will significantly inhibit deployment of the protocol.
> Based on the discussion so far, that doesn't seem to be the case, but I'd
> be interested to hear what others think.
>
> If we can get to consensus to incorporate the PR, and folks come back
> later with a more hardware-friendly replacement that doesn't change the
> Invariants or increase linkability, I suspect the WG will be amenable to
> that.
>
> What do folks think?
>
> Cheers,
>
>
> > On 29 Mar 2018, at 11:39 am, Ian Swett <ianswett=40google.com@dmarc.ietf.org> wrote:
> >
> > Thanks for the nice summary Jana.
> >
> > As much as I'd love to have easier crypto HW acceleration, I've ended up
> arriving at the same conclusion.  I don't want to bite off the work to do
> proper multipath in QUIC v1, which I think is the only other reasonable
> option of those Christian outlined.
> >
> > If someone comes up with a way to transform the packet number to make it
> non-linkable that doesn't have the downside of making hardware offload
> difficult, then I'm open to it.  But we've been talking about this for 2
> months without any notable improvements over Martin's PR.
> >
> > Given we never talk about any issue only once in QUIC, I'm sure this
> will come up again, but for the time being I think #1079 is the best option
> we have.
> >
> >
> >
> > On Wed, Mar 28, 2018 at 8:03 PM Jana Iyengar <jri.ietf@gmail.com> wrote:
> > A few quick thoughts as I catch up on this thread.
> >
> > I spent some time last week working through a design using multiple PN
> spaces, and it is quite doable. I suspect we'll head towards multiple PN
> spaces as we consider multipath in the future. That said, there is
> complexity (as Christian notes). This complexity may be warranted when
> doing multipath in v2 or later, but I'm not convinced that this is
> necessary as a design primitive for QUICv1.
> >
> > We may want to creatively use the PN bits in v2, say to encode a path ID
> and a PN, for multipath. We want to retain flexibility in these bits going
> into v2. We've used encryption to ensure that we don't lose flexibility
> elsewhere in the header, and it follows that we should use PNE to retain
> flexibility in these bits as well. (Simplicity of design is the other value
> in using PNE, since handling migration linkability is non-trivial without
> it.)
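
Purely as a hypothetical illustration of the kind of encoding Jana mentions
(the field widths are invented for the example, not proposed anywhere):

    def encode_pn_field(path_id, per_path_pn):
        # Hypothetical v2 layout: 8-bit path ID in the high bits of a 32-bit PN
        # field, 24-bit per-path packet number below.
        assert 0 <= path_id < 1 << 8 and 0 <= per_path_pn < 1 << 24
        return (path_id << 24) | per_path_pn

    def decode_pn_field(field):
        return field >> 24, field & 0xFFFFFF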
> >
> > This leaves the question of HW acceleration being at loggerheads with
> the design in PR #1079. First, I expect that the primary benefit of
> acceleration will be in DC environments. Yes, there are some gains to be
> had in serving the public Internet as well, but I'm unconvinced that this
> is the driving use case for hardware acceleration. I understand that others
> may disagree with me here.
> >
> > AFAIK, QUIC has not been used in DC environments yet. I expect there are
> other things in the protocol that we'd want to change as we gain experience
> deploying QUIC in DCs. Spinning up a new version to try QUIC within DCs is
> not only appropriate; I would recommend it. This allows for rapid
> iterations internally, and the experience can drive subsequent changes to
> QUIC. It's what *I* would do if I were to deploy QUIC inside a DC.
> >
> > So, in short, I think we should go ahead with PR #1079. This ensures
> that future versions retain the flexibility to change the PN bits
> for better support of HW acceleration, multipath, or what-have-you.
> >
> > - jana
> >
> > On Mar 26, 2018 9:41 AM, "Christian Huitema" <huitema@huitema.net>
> wrote:
> >
> > On 3/26/2018 8:20 AM, Swindells, Thomas (Nokia - GB/Cambridge) wrote:
> >> Looking at
> https://en.wikipedia.org/wiki/AES_instruction_set#Intel_and_AMD_x86_architecture
> it seems to imply a large range of server, desktop and mobile chips all have
> a CPU instruction set available to do AES acceleration and other similar
> operations (other instruction sets are also available).
> >>
> >> If we are considering the AES instructions, then it looks like a sizeable
> proportion of the public internet has them available to be used (or at least
> will in the near future).
> >>
> >
> > Certainly, but that's not the current debate. PR #1079 is fully
> compatible with use of the AES instructions. The issue under debate is
> that the mechanism in PR #1079 requires double buffering: first encrypt the
> payload, then use the result of the encryption to encrypt the PN. This is
> not an issue in a software implementation that can readily access all bytes
> of the packet from memory, but it may be an issue in some hardware
> implementations that are designed to do just one pass over the data.
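
A minimal sketch of the ordering constraint Christian describes (the names are
illustrative, not from any implementation):

    def protect_packet(encrypt_payload, mask_pn, header, pn, payload):
        # The PN mask is keyed off a sample of the *payload ciphertext*, so the
        # header cannot be finalized until the payload pass has produced it.
        ciphertext = encrypt_payload(pn, payload, header)   # pass 1 over the payload
        sample = ciphertext[:16]                            # only exists after pass 1
        protected_pn = mask_pn(pn, sample)                  # pass 2 touches the header
        return header + protected_pn + ciphertext
    # Software just indexes back into the packet buffer for the sample; a one-pass
    # streaming engine has already committed the header bytes by this point.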
> >
> >
> > -- Christian Huitema
> >
> >
> >
>
> --
> Mark Nottingham   https://www.mnot.net/
>
>