RE: Impact of hardware offloads on network stack performance

Praveen Balasubramanian <pravb@microsoft.com> Fri, 06 April 2018 22:58 UTC

From: Praveen Balasubramanian <pravb@microsoft.com>
To: Mike Bishop <mbishop@evequefou.be>, Mikkel Fahnøe Jørgensen <mikkelfj@gmail.com>, IETF QUIC WG <quic@ietf.org>
Subject: RE: Impact of hardware offloads on network stack performance
Thread-Topic: Impact of hardware offloads on network stack performance
Date: Fri, 06 Apr 2018 22:57:54 +0000
Message-ID: <DM5PR21MB0636B1E2B4E5B96CC49F6742B6BA0@DM5PR21MB0636.namprd21.prod.outlook.com>
References: <CY4PR21MB0630CE4DE6BA4EDF54A1FEB1B6A40@CY4PR21MB0630.namprd21.prod.outlook.com> <CAN1APde93+S8CP-KcZmqPKCCvsGRiq6ECPUoh_Qk0j9hqs8h0Q@mail.gmail.com> <SN1PR08MB1854E64D7C370BF0A7456977DABB0@SN1PR08MB1854.namprd08.prod.outlook.com>
In-Reply-To: <SN1PR08MB1854E64D7C370BF0A7456977DABB0@SN1PR08MB1854.namprd08.prod.outlook.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/fqj8c1Uy6qmJ9Bg8Jsw7Wbqo41k>

>> There is one caveat there, which is that (IIUC, on current hardware) you can’t do packet pacing with LSO, and so my understanding is that even for TCP many providers disable this in pursuit of better egress management
Some do, some don’t. Windows hasn’t done packet pacing for TCP so far, and it is quite widely deployed across servers and clients. Even with pacing you can come up with proper sizing so you don’t lose the advantages of LSO. As you can see from the numbers, LSO has a pretty sizeable impact on perf.

>> TCP will dramatically gain performance once hardware offload catches up and allows paced LSO
TCP as deployed today on Windows uses hardware offloads (LSO, LRO, checksum) by default on all systems where they are available, and support is almost universal except for some client NICs. There is no strict requirement for pacing to do LSO, since TCP has been successfully deployed this way for 10+ years. TCP already has dramatically better perf than UDP on all public clouds across all VM sizes.
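
For reference, here is roughly what the UDP analogue looks like on Linux (a sketch only, assuming the UDP_SEGMENT / GSO support in kernel 4.18+ and pacing via the fq qdisc; the 1350-byte segment size and the 1 Gbps cap are illustrative, not what Windows does):

/* Sketch of the UDP analogue of LSO: UDP_SEGMENT (GSO) lets one large send
 * be split into MTU-sized datagrams by the stack or NIC, and
 * SO_MAX_PACING_RATE (honored by the fq qdisc) paces the resulting packets.
 */
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef UDP_SEGMENT
#define UDP_SEGMENT 103            /* from linux/udp.h, kernel >= 4.18 */
#endif
#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47      /* from asm-generic/socket.h */
#endif

int make_batched_udp_socket(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    /* Each large send is cut into 1350-byte datagrams by GSO. */
    int gso_size = 1350;
    if (setsockopt(fd, IPPROTO_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size)) < 0) {
        close(fd);
        return -1;
    }

    /* Pace the socket at ~1 Gbit/s (125 MB/s), enforced by fq. */
    unsigned int pacing_rate = 125u * 1000 * 1000;   /* bytes per second */
    setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
               &pacing_rate, sizeof(pacing_rate));

    return fd;
}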

As to overhead due to PNE, we ran an experiment right when the PNE discussion started. With our current QUIC implementation over the (unoptimized) UDP stack, the overhead was 1 to 1.25%. Once the UDP and QUIC implementations are optimized, this number will increase. Happy to do the measurement again.
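
For anyone who wants to reproduce a rough version of this, a minimal sketch of the extra per-packet work (assuming the mask is one additional AES-ECB block over a 16-byte sample per packet, which only approximates the PR 1079 proposal; this is not our measurement setup):

/* Microbenchmark sketch: cost of one extra 16-byte AES operation per packet
 * to derive a PN mask from a ciphertext sample.
 * Build with: cc pne_bench.c -O2 -lcrypto
 */
#include <openssl/evp.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    static const uint8_t key[16] = {0};
    uint8_t sample[16] = {0}, mask[32];
    int outlen;
    const int packets = 10 * 1000 * 1000;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < packets; i++) {
        sample[0] = (uint8_t)i;        /* pretend each packet's sample differs */
        EVP_EncryptUpdate(ctx, mask, &outlen, sample, sizeof(sample));
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per packet for the PN mask\n", ns / packets);

    EVP_CIPHER_CTX_free(ctx);
    return 0;
}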

As to what extent hardware is impacted, hardware vendors will have to chime in. Several folks did chime in on the original thread. Since participation from hardware vendors is lacking, I am reaching out to more folks to join the group and participate.

From: Mike Bishop [mailto:mbishop@evequefou.be]
Sent: Thursday, April 5, 2018 2:57 PM
To: Mikkel Fahnøe Jørgensen <mikkelfj@gmail.com>; Praveen Balasubramanian <pravb@microsoft.com>; IETF QUIC WG <quic@ietf.org>
Subject: RE: Impact of hardware offloads on network stack performance

That’s interesting data, Praveen – thanks.  There is one caveat there, which is that (IIUC, on current hardware) you can’t do packet pacing with LSO, and so my understanding is that even for TCP many providers disable this in pursuit of better egress management.  So what we’re really saying is that:

  *   On current hardware and OS setups, UDP runs at less than half the throughput of TCP; that needs some optimization, as we’ve already discussed 😊

However, as Mikkel points out, the crypto costs are likely to overwhelm the OS throughput / scheduling issues in real deployments.  So I think the other relevant piece to understanding the cost here is this:

  *   What is the throughput and relative cost of TLS/TCP versus QUIC (i.e. how much are the smaller units of encryption hurting us versus, say, 16KB TLS records)?  (A rough microbenchmark sketch follows after this list.)
     *   TLS implementations already vary here:  Some implementations choose large record sizes, some vary record sizes to reduce delay / HoLB, so this probably isn’t a single number.
  *   How much additional load does PNE add to this difference?
  *   To what extent would PNE make future crypto offloads impossible, versus requiring more R&D to develop?
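
One rough way to get at the first question is to seal the same payload at the two granularities and compare; a minimal OpenSSL sketch (AES-128-GCM and the 1350-byte packet payload are assumptions, and timing instrumentation still needs to be wrapped around the two calls in main):

/* Sketch: seal the same 16 KB either as one TLS-record-sized unit or as
 * ~1350-byte QUIC-packet-sized units, each with its own nonce and tag.
 */
#include <openssl/evp.h>
#include <stdint.h>
#include <stdio.h>

/* Seal `total` bytes in place, `chunk` bytes per AEAD call; returns seal count. */
static int seal_in_chunks(EVP_CIPHER_CTX *ctx, const uint8_t *key,
                          uint8_t *buf, int total, int chunk)
{
    uint8_t nonce[12] = {0}, tag[16], fin[16];
    int outlen, seals = 0;

    for (int off = 0; off < total; off += chunk) {
        int n = (total - off < chunk) ? total - off : chunk;
        nonce[0] = (uint8_t)seals;                     /* distinct nonce per seal */
        EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, nonce);
        EVP_EncryptUpdate(ctx, buf + off, &outlen, buf + off, n);
        EVP_EncryptFinal_ex(ctx, fin, &outlen);        /* GCM: writes no bytes */
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_AEAD_GET_TAG, sizeof(tag), tag);
        seals++;
    }
    return seals;
}

int main(void)
{
    static const uint8_t key[16] = {0};
    static uint8_t buf[16384];
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    printf("record-sized: %d seals\n",
           seal_in_chunks(ctx, key, buf, sizeof(buf), sizeof(buf)));
    printf("packet-sized: %d seals\n",
           seal_in_chunks(ctx, key, buf, sizeof(buf), 1350));

    EVP_CIPHER_CTX_free(ctx);
    return 0;
}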

From: QUIC [mailto:quic-bounces@ietf.org] On Behalf Of Mikkel Fahnøe Jørgensen
Sent: Wednesday, April 4, 2018 1:34 PM
To: Praveen Balasubramanian <pravb=40microsoft.com@dmarc.ietf.org>; IETF QUIC WG <quic@ietf.org>
Subject: Re: Impact of hardware offloads on network stack performance

Thanks for sharing these numbers.

My guess is that these offloads deal with kernel scheduling, memory cache issues, and interrupt scheduling.

They probably have very little to do with crypto, TCP headers, and any other CPU-sensitive processing.

This is where netmap enters: you can transparently feed the data as it becomes available with very little synchronization work, and you can also pipeline efficiently, passing data to the decryptor and then to the app with as little bus traffic as possible. No need to copy data or synchronize with kernel space.
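
A minimal sketch of that receive path with the classic netmap API ("eth0" and handle_packet() are placeholders):

/* Map the NIC rings once, then hand packet buffers straight to the decryptor
 * without copying through the kernel socket path.
 */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>

void handle_packet(const unsigned char *buf, unsigned int len);   /* decrypt, pass to app */

void rx_loop(void)
{
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL)
        return;

    struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
    struct nm_pkthdr hdr;

    for (;;) {
        poll(&pfd, 1, -1);                        /* block until packets arrive */
        unsigned char *buf;
        while ((buf = nm_nextpkt(d, &hdr)) != NULL)
            handle_packet(buf, hdr.len);          /* buffer stays in the shared ring */
    }
}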

For 1K packets you would spend about 1000 ns on crypto (3 cycles/byte, roughly 3,000 cycles on a ~3 GHz core), and this would happen in L1 cache. It would of course consume a core, which a crypto offload would not, but that can be debated because with 18-core systems your problem is memory and network more than CPU.

My concern with PNE is for small packets and low latency where
1) an estimated 24ns for PN encryption and decryption becomes measurable if your network is fast enough.
2) any damage to the packet buffer causes all sorts of memory and CPU bus traffic issues.

1) is annoying. 2) is bad and completely avoidable, but not as PR 1079 is currently formulated.

2) is likely bad for hardware offload units as well.

As to SRV-IO - I’m not familiar with it, but obviously there is some IO abstraction layer - the question is how you make it accessible to apps, as opposed to device drivers that do not work with your custom QUIC stack, and netmap is one option here.

Kind Regards,
Mikkel Fahnøe Jørgensen


On 4 April 2018 at 22.15.52, Praveen Balasubramanian (pravb=40microsoft.com@dmarc.ietf.org) wrote:
Some comparative numbers from an out-of-the-box, default-settings Windows Server 2016 (released version), for a single connection with a microbenchmark tool:

Offloads enabled        TCP Gbps    UDP Gbps
LSO + LRO + checksum    24          3.6
Checksum only           7.6         3.6
None                    5.6         2.3

This is for a fully bottlenecked CPU core -- if you run at lower data rates there is still a significant difference in cycles/byte cost. The same increased CPU cost applies for client systems going over high data rate Wi-Fi and cellular.
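
For context, cycles/byte can be estimated as the core clock divided by bytes per second; a back-of-the-envelope sketch (the ~2.5 GHz clock is an assumption, the test machine’s frequency isn’t stated here):

/* Back-of-the-envelope cycles/byte for a fully pegged core:
 *   cycles_per_byte = core_clock_hz / (gbps * 1e9 / 8)
 * With an assumed ~2.5 GHz core:
 *   TCP, LSO+LRO+checksum, 24 Gbps -> 2.5e9 / 3.0e9  ~= 0.8 cycles/byte
 *   UDP,                  3.6 Gbps -> 2.5e9 / 0.45e9 ~= 5.6 cycles/byte
 */
static double cycles_per_byte(double core_clock_hz, double gbps)
{
    return core_clock_hz / (gbps * 1e9 / 8.0);
}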

This is without any crypto. Once you add crypto, the numbers get much worse, with crypto cost becoming dominant. Adding another crypto step further exacerbates the problem. Hence crypto offload gains in importance, followed by these batch offloads.

If folks need any more numbers I’d be happy to provide them.

Thanks