Re: [bmwg] [Newsletter] Re: Query about 50% values in [Benchmarking Methodology for Network Security Device Performance draft 02]

Alex Samonte <> Wed, 03 June 2020 19:54 UTC


The main rationale is to see what the nominal latency is.

As we all know, latency will rise dramatically (and become unpredictable)
when something gets close to bottlenecking.  That doesn't mean things will
fail, but you start seeing ugly behavior.

From 0% to XX%, the latency is fairly reliable (XX to be determined later).
This means the latency doesn't change dramatically as you increase that
percentage; usually it's roughly linear with a slope of much less than
1.    When you get to XX%-100%, the latency becomes unstable and varies
wildly as various components of the system hit a bottleneck.  When that
happens, other things compensate and then become the bottleneck themselves
(when the first thing no longer is), and it oscillates all over the place.
This is what leads to the unstable latency.  The difference between the
latency at 1% and at XX% is small compared to the difference between the
latency at XX% and the unstable latency above XX%.
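To make the shape of that curve concrete, here is an illustrative sketch using a textbook M/M/1 queueing model (an assumption for illustration, not a claim about any DUT's actual architecture): latency is nearly flat at low utilization and blows up as utilization approaches 100%.

```python
# Illustrative only: an M/M/1 queue's mean sojourn time W = S / (1 - rho),
# where rho is utilization and S is the service time.  This is a stand-in
# model, NOT the DUT's real internals, but it shows the same flat-then-
# explosive latency behavior described above.
def mm1_latency(utilization, service_time=1.0):
    """Mean time in system for an M/M/1 queue at the given utilization."""
    if utilization >= 1.0:
        raise ValueError("queue is unstable at or above 100% utilization")
    return service_time / (1.0 - utilization)

for pct in (10, 30, 50, 75, 90, 95, 99):
    print(f"{pct:3d}% load -> latency {mm1_latency(pct / 100):6.1f}x service time")
```

Note how latency only doubles between 0% and 50% load, but grows 100-fold by 99% load; that asymmetry is the whole argument for staying on the low side of XX%.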

So the question is: how do we determine XX%?   Unfortunately, all DUTs are
different, with different limits, bottlenecks, and features that may change
XX%.   It's not the same from vendor to vendor, nor even from model to
model within the same vendor.

A couple of big examples to point out:
1) Multi-core CPUs.  Because of the way most stateful devices behave,
a single session will probably be handled by a single core/thread of a
CPU.  If a device has 8 cores and there is only 1 session, 7 cores will
likely go unused.   If there are 9 sessions, 1 core will be handling 2
sessions and the others will be handling 1 (a 50% difference).  If there
are 8001 sessions, then one core handles 1001 and the others 1000, and the
unevenness does not matter.   However, if one of those sessions is
'harder' (more difficult to process), or the mix of sessions on a single
core is harder to process, that core may reach 100% capacity before the
others do.   Let's say I have 100%, 50%, 50%, 50% in a 4-core case.   Here
we will experience unstable latency: the sessions that go through the core
at 100% will experience additional latency, while the sessions going
through the other 3 cores will not.
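The arithmetic behind those session counts can be sketched as follows (a hypothetical even-as-possible assignment of sessions to cores; real devices typically hash sessions, which can be less even than this):

```python
# Hypothetical sketch: spread N sessions across 8 cores as evenly as
# possible and report the worst-case imbalance.  Shows why 9 sessions
# give a 50% difference between cores while 8001 sessions do not.
def core_loads(sessions, cores=8):
    base, extra = divmod(sessions, cores)
    return [base + 1] * extra + [base] * (cores - extra)

for n in (1, 9, 8001):
    loads = core_loads(n)
    busiest, idlest = max(loads), min(loads)
    imbalance = (busiest - idlest) / busiest * 100
    print(f"{n:5d} sessions: busiest core {busiest}, idlest {idlest}, "
          f"imbalance {imbalance:.1f}%")
```

With 9 sessions the busiest core carries 2 and the rest carry 1 (50% imbalance); with 8001 it is 1001 vs. 1000 (about 0.1%), matching the figures above.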

2) Internal buffers.  Different protocols may have different buffers
associated with them, so when buffers fill up, more latency is added.
Because buffers can be allocated differently for different protocols, you
will see variable latency depending on which buffer is exhausted and which
is not.

3) Interface (internal or external) limits.   When you approach 100% of a
certain link speed (be it internal or external to the DUT), you can also
experience additional latency.  It's hard to determine what a particular
DUT's internal architecture is (nor do we really want to try to figure
that out), so we generally do not want to push it to 100%, because we know
we will incur additional latency.

So what we've said is that:
from 0% to XX%, the latencies are reliable / not unstable;
from XX% to 100%, the latencies are unstable.

People looking to benchmark are generally interested in the largest stable
value (XX%).   100% (or near it) is generally not representative of the
normal traffic case, and 0% (or 1%) is also generally not representative
of the normal traffic case.

Ideally XX% is what we're trying to get, but it's different for each
model.   Since 0%-XX% represents the stable side, I'd rather err on that
side than on the XX%-100% side.

When doing the benchmark we're finding the various max values (without
failure), but that does not necessarily mean stable latency.   In order to
get to that range, we're going to back off from 100%.

Now what could that value be?  95%?  75%?  50%?   It was purposefully set
low so we would not encounter any other bottlenecks that would make the
latency unstable.
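The resulting target objective is then just a fraction of the measured maximum, per the draft's 50% rule. A minimal sketch, with a made-up maximum for illustration:

```python
# Sketch of deriving the draft's target objective from a measured maximum.
# The measured value below is hypothetical, chosen only for illustration.
measured_max_cps = 240_000   # max connections/sec from the HTTP/S throughput test
BACKOFF = 0.50               # the draft's chosen fraction: low enough to stay stable

target_cps = measured_max_cps * BACKOFF
print(f"Target objective: {target_cps:,.0f} connections/sec")
```

A higher BACKOFF (say 0.75 or 0.95) would push more DUTs into the unstable latency region described above, which is the trade-off the rest of this note walks through.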

I've tested lots of different devices over the years, both my own and
other vendors', and the lower you make that number, the more devices will
stay in the 'stable' area of latency.

So 50% was chosen as the value least likely to cause unstable latency
while still being representative of real customer usage (plenty of
customers have no issue running a device at 50% capacity).   At 75%, and
much more so at 95%, customers are looking to change/upgrade their
equipment so they are not nearing the top of what the device is capable
of, which again makes me less interested in the latency there.

Hopefully that makes sense, and while the explanation could be shorter,
I'm not sure I could make it a two-sentence one.


On Wed, Jun 3, 2020 at 9:12 AM MORTON, ALFRED C (AL) <> wrote:

> Thanks for your question, Simon, and the history, Brian.
> I confess that this question (why 50%?) has occurred to me in other
> contexts, and it may help to add a sentence or two of rationale. So if the
> technical folks can help with suggestions, that would be great!
> regards,
> Al
> *From:* bmwg [] *On Behalf Of *
> *Sent:* Wednesday, June 3, 2020 10:56 AM
> *To:* 'Simon Edwards' <>
> *Cc:*
> *Subject:* Re: [bmwg] Query about 50% values in [Benchmarking Methodology
> for Network Security Device Performance draft 02]
> Simon,
> This requirement was agreed to and adopted by the working group within
> NetSecOPEN. It first appeared in the IETF individual draft on October 14,
> 2018. (
> <>).
> It was clarified and expanded on March 5, 2019 in
> <>.
> Its form has largely been unchanged since then.
> I will leave it to the technical folks to expand on this more. However, I
> am saying all of this because it has been in a position to be reviewed and
> commented on by the BMWG community for a while. Our assumption is that,
> given no one appears to have an issue with it, we hit the mark.
> Brian
> *From:* bmwg <> *On Behalf Of *Simon Edwards
> *Sent:* June 3, 2020 5:10 AM
> *To:*
> *Subject:* [bmwg] Query about 50% values in [Benchmarking Methodology for
> Network Security Device Performance draft 02]
> Hi all,
> In a number of sections, but specifically ' Test Equipment
> Configuration Parameters', there are requirements to measure with 50% of
> the maximum connections/sec measured in the HTTP/S throughput tests.
> E.g. "Target objective for scenarios 1 and 2: 50% of the maximum
> connections per second measured in test scenario..."
> I'm sure this 50% value is the product of much thought and discussion,
> rather than an arbitrary choice. Is anyone able to explain the reason for
> the specific '50%' value (as opposed to 25%, 75% or whatever) or could you
> please point to documentation around that decision made by the group?
> I'm asking just to understand. I don't disagree with the decision : )
> Very best wishes,
> Simon
> _______________________________________________
> bmwg mailing list


Alex Samonte
Director Of Technical Architecture

M: +1 408-475-8737
Skype: asamonteFN