RE: Impact of hardware offloads on network stack performance

Mike Bishop <mbishop@evequefou.be> Fri, 06 April 2018 20:37 UTC

From: Mike Bishop <mbishop@evequefou.be>
To: Mikkel Fahnøe Jørgensen <mikkelfj@gmail.com>, Ian Swett <ianswett@google.com>
CC: Praveen Balasubramanian <pravb@microsoft.com>, IETF QUIC WG <quic@ietf.org>
Subject: RE: Impact of hardware offloads on network stack performance
Date: Fri, 06 Apr 2018 20:37:37 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/2pFzHSqGlJjjhhNXrUcr6I1SChs>

Thanks for the clarification.  What I’m really getting at is: how do we quantify the actual cost of PNE?  Are we substantially increasing the crypto costs?  If Ian is saying that crypto is comparatively cheap, and the cost of PNE is that it’s harder to offload something that’s comparatively cheap, what have we lost?  I’d think we want to offload the most intensive piece we can, and it seems like we’re talking about crypto offloads....  Or are we saying instead that the crypto makes it harder to offload other things in the future, like a QUIC equivalent of LRO/LSO?

From: Mikkel Fahnøe Jørgensen [mailto:mikkelfj@gmail.com]
Sent: Thursday, April 5, 2018 9:45 PM
To: Ian Swett <ianswett@google.com>; Mike Bishop <mbishop@evequefou.be>
Cc: Praveen Balasubramanian <pravb@microsoft.com>; IETF QUIC WG <quic@ietf.org>
Subject: Re: Impact of hardware offloads on network stack performance

To be clear - I don’t think crypto is overshadowing other issues, as Mike read my post. It certainly comes at a cost, but either multiple cores or co-processors will deal with it. 1000 ns is roughly 10 Gbps crypto speed on one core, and it is highly parallelisable and cache friendly.

But if you have to drip your packets through a traditional send interface that is copying, buffering, or blocking - and certainly sync’ing with the kernel - it is going to be tough.

For receive, you risk high latency or too much scheduling in receive buffers.

________________________________
From: Ian Swett <ianswett@google.com<mailto:ianswett@google.com>>
Sent: Friday, April 6, 2018 4:06:01 AM
To: Mike Bishop
Cc: Mikkel Fahnøe Jørgensen; Praveen Balasubramanian; IETF QUIC WG
Subject: Re: Impact of hardware offloads on network stack performance


On Thu, Apr 5, 2018 at 5:57 PM Mike Bishop <mbishop@evequefou.be<mailto:mbishop@evequefou.be>> wrote:
That’s interesting data, Praveen – thanks.  There is one caveat there, which is that (IIUC, on current hardware) you can’t do packet pacing with LSO, so my understanding is that even for TCP many providers disable it in pursuit of better egress management.  So what we’re really saying is that:

  *   On current hardware and OS setups, UDP achieves less than half the throughput of TCP; that needs some optimization, as we’ve already discussed
  *   TCP will gain performance dramatically once hardware offload catches up and allows paced LSO 😊 (see the pacing sketch below)
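
[Editor's aside, not from the thread: on Linux, pacing can be requested per socket rather than done purely in userspace, which is one way pacing and batched sends get reconciled. A minimal sketch, assuming Linux with the fq qdisc on the egress interface; SO_MAX_PACING_RATE is a real socket option, but how it interacts with any given NIC's LSO support is exactly the open question above.]

    #include <sys/socket.h>

    /* Ask the kernel to pace this socket's egress (Linux; requires
     * the fq qdisc on the outgoing interface). The rate is in bytes
     * per second. Kernel/NIC-side pacing is what historically
     * conflicts with large-send offload, per the caveat above. */
    int set_pacing(int fd, unsigned int bytes_per_sec)
    {
        return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                          &bytes_per_sec, sizeof(bytes_per_sec));
    }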

However, as Mikkel points out, the crypto costs are likely to overwhelm the OS throughput / scheduling issues in real deployments.  So I think the other relevant piece to understanding the cost here is this:

I have a few cases (both client and server) where my UDP send costs are more than 30% (in some cases 50%) of CPU consumption, and crypto is less than 10%.  So currently, I assume crypto is cheap (especially AES-GCM when hardware acceleration is available) and egress is expensive.  UDP ingress is not that expensive, but could use a bit of optimization as well.
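
[Editor's aside, not something Ian mentions: a common way to attack per-packet UDP send cost on Linux is to batch datagrams into one syscall with sendmmsg(2). A minimal sketch under that assumption:]

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Send up to 64 UDP datagrams with a single syscall. Batching
     * amortizes the per-send syscall and socket-lookup overhead that
     * dominates the 30-50% send cost described above; it does not
     * change the per-byte crypto cost. */
    int send_batch(int fd, struct sockaddr_in *dst,
                   unsigned char pkts[][1350], size_t lens[], unsigned n)
    {
        struct mmsghdr msgs[64];
        struct iovec iovs[64];
        if (n > 64) n = 64;
        memset(msgs, 0, sizeof(msgs));
        for (unsigned i = 0; i < n; i++) {
            iovs[i].iov_base = pkts[i];
            iovs[i].iov_len  = lens[i];
            msgs[i].msg_hdr.msg_iov     = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen  = 1;
            msgs[i].msg_hdr.msg_name    = dst;
            msgs[i].msg_hdr.msg_namelen = sizeof(*dst);
        }
        return sendmmsg(fd, msgs, n, 0);  /* #datagrams sent, or -1 */
    }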

The only 'fancy' thing our current code is doing for crypto is encrypting in place, which was a ~2% win.  Nice, but not transformative.  See EncryptInPlace<https://cs.chromium.org/chromium/src/net/quic/core/quic_framer.cc?sq=package:chromium&l=1893>
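
[Editor's aside: to make "encrypting in place" concrete, here is a minimal sketch using OpenSSL's EVP AEAD interface - an assumption for illustration; the linked Chromium EncryptInPlace uses Chromium's own encrypter classes, not this API. Writing ciphertext over the plaintext avoids a second packet-sized buffer and the associated cache traffic, which is where the ~2% comes from.]

    #include <openssl/evp.h>

    /* Encrypt a QUIC payload in place with AES-128-GCM: the
     * ciphertext overwrites the plaintext buffer (out == in is
     * supported by EVP). Error handling elided for brevity. */
    int encrypt_in_place(EVP_CIPHER_CTX *ctx,
                         const unsigned char key[16],
                         const unsigned char iv[12],
                         const unsigned char *aad, int aad_len,
                         unsigned char *buf, int len,  /* plaintext in, ciphertext out */
                         unsigned char tag[16])
    {
        int outl;
        EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, NULL, &outl, aad, aad_len);  /* header as AAD */
        EVP_EncryptUpdate(ctx, buf, &outl, buf, len);       /* out == in: in place */
        EVP_EncryptFinal_ex(ctx, buf + outl, &outl);
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);
        return 0;
    }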

I haven't done much work benchmarking Windows, so possibly the Windows UDP stack is really fast, and so crypto seems really slow in comparison?


  *   What is the throughput and relative cost of TLS/TCP versus QUIC (i.e. how much do the smaller units of encryption hurt us versus, say, 16KB TLS records)?

     *   TLS implementations already vary here:  Some implementations choose large record sizes, some vary record sizes to reduce delay / HoLB, so this probably isn’t a single number.

  *   How much additional load does PNE add to this difference?
  *   To what extent would PNE make future crypto offloads impossible, versus requiring more R&D to develop?

From: QUIC [mailto:quic-bounces@ietf.org<mailto:quic-bounces@ietf.org>] On Behalf Of Mikkel Fahnøe Jørgensen
Sent: Wednesday, April 4, 2018 1:34 PM
To: Praveen Balasubramanian <pravb=40microsoft.com@dmarc.ietf.org<mailto:pravb=40microsoft.com@dmarc.ietf.org>>; IETF QUIC WG <quic@ietf.org<mailto:quic@ietf.org>>
Subject: Re: Impact of hardware offloads on network stack performance

Thanks for sharing these numbers.

My guess is that these offloads deal with kernel scheduling, memory cache issues, and interrupt scheduling.

It probably has very little to do with crypto, TCP headers, or any other CPU-intensive processing.

This is where netmap enters: you can transparently feed the data as it becomes available with very little sync work, and you can efficiently pipeline - passing data to the decryptor, then to the app - with as little bus traffic as possible. No need to copy data or synchronize with the kernel.
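
[Editor's aside: a minimal sketch of the receive pattern Mikkel describes, assuming netmap's userspace helper API (nm_open/nm_nextpkt). Packets stay in the NIC-mapped rings, so decryption can run in place with no copies into kernel socket buffers; a real QUIC stack would also demultiplex connections and batch ring updates.]

    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <poll.h>

    int rx_loop(const char *port)   /* e.g. "netmap:eth0" */
    {
        struct nm_desc *d = nm_open(port, NULL, 0, NULL);
        if (d == NULL) return -1;
        struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
        struct nm_pkthdr h;
        for (;;) {
            if (poll(&pfd, 1, -1) <= 0) break;
            unsigned char *buf;
            while ((buf = nm_nextpkt(d, &h)) != NULL) {
                /* buf points into the shared ring; h.len bytes.
                 * Decrypt here and hand the result to the app. */
            }
        }
        nm_close(d);
        return 0;
    }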

For 1K packets you would use about 1000 ns (3 cycles/byte) on crypto, and this would happen in L1 cache. It would of course consume a core, which a crypto offload would not - but that is debatable, because with 18-core systems your problem is memory and network more than CPU.
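
[Editor's note - the arithmetic behind that estimate, assuming a ~3 GHz core: 1024 bytes × 3 cycles/byte ≈ 3072 cycles ≈ 1 µs, i.e. one ~1 KB packet per microsecond, or roughly 8 Gbps per core.]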

My concern with PNE is for small packets and low latency, where
1) an estimated 24 ns for PN encryption and decryption becomes measurable if your network is fast enough;
2) any damage to the packet buffer causes all sorts of memory and CPU bus traffic issues.

1) is annoying. 2) is bad, and completely avoidable - but not as PR 1079 is currently formulated.

2) is likely bad for hardware offload units as well.
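
[Editor's aside: for scale, 24 ns is on the order of one extra AES block operation per packet. Below is a sketch of a sample-based PN mask, which is the general shape of the PNE proposals - an illustration, not necessarily the exact PR 1079 construction. It assumes ctx was initialized with EVP_aes_128_ecb() under a separate PN key, with padding disabled.]

    #include <openssl/evp.h>

    /* Mask-based packet number encryption (sketch): derive a mask
     * from a 16-byte sample of the packet ciphertext and XOR it over
     * the packet number bytes. The XOR writes back into the packet
     * buffer after AEAD has run - exactly the buffer "damage" that
     * concern 2) describes. Decryption is the same operation. */
    void pn_mask(EVP_CIPHER_CTX *ctx,              /* keyed AES-128-ECB */
                 const unsigned char sample[16],   /* from the ciphertext */
                 unsigned char *pn, size_t pn_len) /* PN bytes in the header */
    {
        unsigned char mask[16];
        int outl = 0;
        EVP_EncryptUpdate(ctx, mask, &outl, sample, 16);  /* one AES block */
        for (size_t i = 0; i < pn_len && i < sizeof mask; i++)
            pn[i] ^= mask[i];
    }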

As to SRV-IO - I’m not familiar with it, but obviously there is some IO abstraction layer. The question is how you make it accessible to apps, as opposed to device drivers that do not work with your custom QUIC stack - and netmap is one option here.

Kind Regards,
Mikkel Fahnøe Jørgensen


On 4 April 2018 at 22.15.52, Praveen Balasubramanian (pravb=40microsoft.com@dmarc.ietf.org<mailto:pravb=40microsoft.com@dmarc.ietf.org>) wrote:
Some comparative numbers from an out-of-the-box, default-settings Windows Server 2016 (released version), for a single connection with a microbenchmark tool:

Offloads enabled        TCP Gbps   UDP Gbps
LSO + LRO + checksum    24         3.6
Checksum only           7.6        3.6
None                    5.6        2.3


This is for a fully bottlenecked CPU core -- if you run at lower data rates there is still a significant difference in cycles/byte cost. The same increased CPU cost applies to client systems going over high-data-rate Wi-Fi and cellular.

This is without any crypto. Once you add crypto, the numbers get much worse, with crypto cost becoming dominant. Adding another crypto step exacerbates the problem further. Hence crypto offload gains in importance, followed by these batch offloads.

If folks need any more numbers I’d be happy to provide them.

Thanks