Re: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance

bmonkman@netsecopen.org Fri, 16 July 2021 18:39 UTC

From: bmonkman@netsecopen.org
To: 'Sarah Banks' <sbanks@encrypted.net>
Cc: 'Carsten Rossenhoevel' <cross@eantc.de>, 'ALFRED MORTON' <acmorton@att.com>, "'Jack, Mike'" <Mike.Jack@spirent.com>, "'MORTON, ALFRED C (AL)'" <acm@research.att.com>, bmwg@ietf.org, 'Timothy Otto' <totto@juniper.net>, 'Bala Balarajah' <bala@netsecopen.org>
Date: Fri, 16 Jul 2021 14:39:22 -0400
Message-ID: <01c801d77a71$e64f5190$b2edf4b0$@netsecopen.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/xMSw39hHV6_vwuznhK_ou6nrfBc>

I’m confused. You have cited IDS as examples even though we have explicitly stated that including IDS is out of scope. Since we have not *yet* created an updated draft, I can see how that confuses things. We felt it would be a waste of time to create a new draft while several of our comments were still open.

 

I suppose, since you “can’t propose” anything specific, we will have to discuss it during the meeting.

 

Thanks for taking the time to respond.

 

Brian

 

From: Sarah Banks <sbanks@encrypted.net> 
Sent: Friday, July 16, 2021 2:10 PM
To: bmonkman@netsecopen.org
Cc: Carsten Rossenhoevel <cross@eantc.de>; ALFRED MORTON <acmorton@att.com>; Jack, Mike <Mike.Jack@spirent.com>; MORTON, ALFRED C (AL) <acm@research.att.com>; bmwg@ietf.org; Timothy Otto <totto@juniper.net>; Bala Balarajah <bala@netsecopen.org>
Subject: Re: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance

 

Hi Brian,

    Please see inline.

 

Thanks, 

Sarah (as a participant)





On Jul 13, 2021, at 10:45 AM, <bmonkman@netsecopen.org> wrote:

 

Sarah,

 

I understand your comments.

 

Could you say whether changing the draft to explicitly include only NGFW and NGIPS as the target would impact your follow-on comments?

 

SB// I support the draft as one that covers benchmarks specifically for NGFW and NGIPS, provided the scope is not left open ended to cover the undefined "NG*" and future devices that we haven't invented yet. :)





 

And with respect to your comment regarding the number of test cases, could you tell us what it is you would like to see? I ask because we have had the same test cases enumerated since the initial draft. This is the first time, I believe, that a suggestion has been raised that there are not enough. We think there are; however, I understand you don’t. But it would be helpful to us if you could detail what you think is needed in addition to the test cases currently included.

 

 

SB// I can certainly understand the frustration with this comment at the late hour. At first, I was focused on "can the NGIDS piece fit into this draft as is?" and "does it cover the general scope of any upcoming NG* device?" - and once I sorted that out and settled on the idea that this draft makes a lot of sense in the NGFW/IPS sense, it occurred to me - if I wanted to benchmark one NGFW versus another, or one IPS versus another, there's a lot more here I'd like to test. The features called out in section 4.2 would certainly be items I'd want to understand when comparing different but like devices. This expectation is set from the very abstract itself - where we write, "This document provides benchmarking terminology and methodology for next-generation network security devices including next-generation firewalls (NGFW), next-generation intrusion detection and prevention systems (NGIDS/NGIPS) and unified threat management (UTM) implementations." It's to these features that I think test cases would meet the stated objective in the abstract. 





Last, we use a term that is not defined - NG* - and it begs the question, what characterizes a next generation device? Let me use an IDS as the example (I realize the IDS is out of scope now! But it highlights some of the questions) - is it performance (amount of traffic a device can ingest at ingress? the amount of records/meta data/flows/alerts/events it can send out per second?) Does it extend to an architecture? Does it cover a breadth of analyzers/protocols it can perceive and interpret? Would it include a stance on the effectiveness of the detections (positive/false positive hits)? I believe the due diligence on undefined terms here is on the authors, as I'm not sure I understand what's meant by an "NGIPS" - hence the question - and hence the reason I can't propose the text for the authors. 

 





Brian

 

From: Sarah Banks <sbanks@encrypted.net> 
Sent: Tuesday, July 13, 2021 1:24 PM
To: Carsten Rossenhoevel <cross@eantc.de>
Cc: bmonkman@netsecopen.org; ALFRED MORTON <acmorton@att.com>; MORTON, ALFRED C (AL) <acm@research.att.com>; bmwg@ietf.org; Jack, Mike <Mike.Jack@spirent.com>; Timothy Otto <totto@juniper.net>; Bala Balarajah <bala@netsecopen.org>
Subject: Re: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance

 

Carsten,

     While I find some of the comments below a tad offensive, I want to focus in a bit on the draft text itself. For me, the disconnect starts in the Abstract:

 

This document provides benchmarking terminology and methodology for
   next-generation network security devices including next-generation
   firewalls (NGFW), next-generation intrusion detection and prevention
   systems (NGIDS/NGIPS) and unified threat management (UTM)
   implementations.  This document aims to strongly improve the
   applicability, reproducibility, and transparency of benchmarks and to
   align the test methodology with today's increasingly complex layer 7
   security centric network application use cases.  The main areas
   covered in this document are test terminology, test configuration
   parameters, and benchmarking methodology for NGFW and NGIDS/NGIPS to
   start with.

 

I read that and my takeaway is that the draft wants to be the RFC for all next generation security devices. I am willing to accept your assertion that perhaps I don't understand the fundamental concept of what a NG security device is, and in my feedback I asked you to point me at such a definition. What I was gently pointing out is that I'm not sure one exists, and that the text might be best suited to include one. The draft points out required features to be enabled, and I wonder, if I don't have that feature on my device, are you saying I'm not an NG security device? Am I disqualified from such a test?

 

Which leads me to my general concern - I am not supportive of a draft that covers "NG security devices", describes tests for 2 of them, and that's it. We don't know what's next to be invented, so having a blanket draft that covers future devices seems strange to me at best. The abstract itself describes a methodology for testing "unified threat management" implementations, and while the tests to be performed are well described, they're lighter than I'd expect. Further, if I were an NGIPS in a bakeoff, wouldn't I want the performance of some of the features from 4.2 to be benchmarked? Do you see what I mean? I realize that this might not be the point of your testing here, but I'm trying to share that were I to come to this draft as written, I might reasonably expect that the draft cover some or all of those features with test cases. 

 

Ways forward: I believe a definition of NG <x>, or a pointer to one, is required. While I'm not entirely convinced that the draft actually covers the entire set of features I'd want to test on a NGFW or on an IPS, I'd be supportive of a draft that covered those devices, since test cases for them are called out. I am not supportive of a draft that blanket-covers what's next, and I think we should strongly consider adding more test cases that allow someone to more thoroughly benchmark the NGFW or IPS, as the abstract suggests.

 

Kind regards,

Sarah

 






On Jul 13, 2021, at 9:19 AM, Carsten Rossenhoevel <cross@eantc.de> wrote:

 

The question is, how much more time will be required to discuss this topic in an IETF meeting (whether regular or interim)?
Al (as chair) and Sarah (as contributor), what do you think?  Would it fit into the regular BMWG meeting at IETF111 at all anyway?

My personal view is that it will take a considerable time to explain the positions and reach consensus, specifically related to the rather fundamental questions about the scope of the next-gen security device industry. 

Sarah, is it acceptable to kindly ask you to prepare constructive editorial suggestions on how to resolve your concerns?

The WGLC process has already been delayed by yet another IETF meeting cycle, and I would like to ask for your support to avoid any further unnecessary delays. I would like to remind everyone of RFC 7154 <https://tools.ietf.org/search/rfc7154> (IETF Guidelines for Conduct) section 2.4.

Best regards, Carsten







On 13.07.2021 16:17, bmonkman@netsecopen.org wrote:

Thanks for your response Al.
 
Given that the BMWG session is only 9 working days away I doubt very much we
will be able to get together as a group on our side and work through the
remaining issues. It might be possible to get our response out by the end of
next week, but I don't think that will provide enough time for it to be
reviewed and a response formulated by the meeting on July 26th.
 
I think it would be preferable to schedule an interim. 
 
Brian
 
-----Original Message-----
From: MORTON JR., AL <acmorton@att.com>
Sent: Tuesday, July 13, 2021 11:01 AM
To: bmonkman@netsecopen.org; 'Sarah Banks' <sbanks@encrypted.net>; 'MORTON, ALFRED C (AL)' <acm@research.att.com>
Cc: bmwg@ietf.org; asamonte@fortinet.com; amritam.putatunda@keysight.com; 'Bala Balarajah' <bala@netsecopen.org>; 'Carsten Rossenhoevel' <cross@eantc.de>; 'Christopher Brown' <cbrown@iol.unh.edu>; 'Jack, Mike' <Mike.Jack@spirent.com>; 'Ryan Liles (ryliles)' <ryliles@cisco.com>; 'Timothy Carlin' <tjcarlin@iol.unh.edu>; 'Timothy Otto' <totto@juniper.net>
Subject: RE: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance
 
Hi Brian,
 
Our next opportunity to discuss this draft and comment resolution is during
the BMWG session at IETF-111.
Let's try to make that work for everyone.
 
thanks!
Al
 
 

-----Original Message-----
From: bmonkman@netsecopen.org <bmonkman@netsecopen.org>
Sent: Tuesday, July 13, 2021 10:44 AM
To: 'Sarah Banks' <sbanks@encrypted.net>; 'MORTON, ALFRED C (AL)' <acm@research.att.com>
Cc: bmwg@ietf.org; asamonte@fortinet.com; amritam.putatunda@keysight.com; 'Bala Balarajah' <bala@netsecopen.org>; 'Carsten Rossenhoevel' <cross@eantc.de>; 'Christopher Brown' <cbrown@iol.unh.edu>; 'Jack, Mike' <Mike.Jack@spirent.com>; 'Ryan Liles (ryliles)' <ryliles@cisco.com>; 'Timothy Carlin' <tjcarlin@iol.unh.edu>; 'Timothy Otto' <totto@juniper.net>
Subject: RE: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance
 
Looping in the others on my original post.
 
Thanks Sarah,
 
Good to see the issues are being whittled down. Of the remaining five 
or six issues I doubt we will have a response prior to IETF 111.
 
Al,
 
Could we schedule an interim meetup/call for some time early August?
 
Brian
 
 
-----Original Message-----
From: Sarah Banks <sbanks@encrypted.net>
Sent: Monday, July 12, 2021 5:20 PM
To: bmonkman@netsecopen.org
Cc: bmwg@ietf.org
Subject: Re: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance
 
Hi Brian et al,
    First, my apologies for the delay, and I very much appreciate your 
patience. I also appreciate the time and effort that went into the 
reply to my comments, which can be even more difficult to do as a 
large group :) Please see inline.
 
Thank you,
Sarah (as a participant)
 

Hi Sarah,
 
As I mentioned in the previous message, we will remove reference to 
IDS from the draft. Given that, none of the IDS related 
comments/questions are being addressed.

SB// Makes sense, thank you.
 

- The draft aims to replace RFC3511, but expands scope past 
Firewalls, to "next generation security devices". I'm not finding a 
definition of what a "next generation security device is", nor an 
exhaustive list of the devices covered in this draft. A list that 
includes is nice, but IMO not enough to cover what would be 
benchmarked here - I'd prefer to see a definition and an exhaustive list.

[bpm] We avoid limiting the draft by explicitly listing only the NG security devices currently available on the market. In the future, there may be more and more new types of NG security devices appearing on the market.

SB// I think there are 2 types of devices called out; I'm not seeing a 
definition of what a "NG security device" is, and I'm not comfortable 
with a draft that has a blanket to encompass what would come later. 
Who knows what new characteristics would arrive with that new device? 
I think the scope here is best suited for the devices we know about 
today and can point to and say we're applying knowledgeable benchmarking tests against.

[bpm] This draft includes a list of security features that the security device can have (RFC 3511 doesn't have such a list). Also, we will describe in the draft that the security devices must be configured in "in-line" mode.

We believe these two points qualify the definition of next-generation security.
 

SB// I strongly disagree. Well, I mean OK, for active inline devices maybe this is OK, but to say that the only way a device can be "NG" is to be active/inline, I disagree with. And if there is, have we gathered all of the items we'd want to actively test for in that case? For example, what about their abilities to handle traffic when a failure occurs? (fail open/closed).

What about alerts and detections and the whole federation of tests 
around positives/false positives/false negatives, etc? I'm onboard 
with expanding the scope, but then we have to do the devices 
benchmarking justice, and I feel we're missing a lot here.
 

- What is a NGIPS or NGIDS? If there are standardized definitions 
pointing to them is fine, otherwise, there's a lot of wiggle room here.
 
[bpm] See above. We are removing NGIDS from the draft.

SB// Understood, thank you.
 

- I still have the concern I shared at the last IETF meeting, where 
here, we're putting active inline security devices in the same 
category as passive devices. On one hand, I'm not sure I'd lump 
these three together in the first place; on the other, active 
inline devices typically include additional functions to allow 
administrators to control what happens to packets in the case of 
failure, and I don't see those test cases included here.
 
[bpm] This draft focuses on "in-line" mode security devices only. We will describe this in section 4 in more detail.

SB// Understood, thank you.
 

[bpm] Additionally, the draft focuses mainly on performance tests. The DUT must be configured in "fail closed" mode. We will describe this under section 4. Any failure scenarios like "fail open" mode are out of scope.
 
SB// OK, but I think an RFC that is going to encompass this device 
under the "NG security devices" classification is missing out on large 
portions of what customers will want to test. It'll also beg for 
another draft to cover them, and then I'm not sure we're serving the industry as well as we could.

- Section 4.1 - it reads as if ANY device in the test setup cannot 
contribute to network latency or throughput issues, including the 
DUTs
- is that what you intended?
 
[bpm] Our intention is, if external devices (routers and switches) are used in the test bed, they should not negatively impact DUT/SUT performance. To address this, we added a section (section 5, "Test Bed Considerations") which recommends a pre-test. We can rename this as a reference test or baseline test.
 

SB// Thank you for the clarification. I think there's still a concern there.

Who defines what "negative impact" is? You're traversing at least 
another L2 or L3 step in the network with each bump, which contributes 
some amount of latency. If they don't serve in control plane decisions 
and are passively passing data on, then we could consider removing 
them from the setup and removing the potential skew on results.
 
 

- Option 1: It'd be nice to see a specific, clean, recommended test bed.

There are options for multiple emulated routers. As a tester, I expect to see a specific, prescribed test bed that I should configure and test against.
 
[bpm] The draft describes that Option 1 is the recommended test setup. However, we added emulated routers as optional in Option 1. The reason for that: some types of security devices in some deployment scenarios require routers between the test client/server and the DUT (e.g., NGFW), and some DUT/SUTs don't need a router (e.g., NGIPS).
 
- Follow on: I'm curious as to the choice of emulated routers here.
The previous test suggests you avoid routers and switches in the 
topo, but then there are emulated ones here. I'm curious as to what 
advantages you think these bring over the real deal, and why they 
aren't subject to the same limitations previously described?
 
[bpm] Compared with a real router, the emulated router gives more advantages for L7 testing.
 
[bpm] - An emulated router doesn't add latency. Even if it adds delay due to the routing process, the test equipment can report the added latency, or it can account for it in the latency measurement.
 
[bpm] - Emulated routers perform the routing function only. But in a "real" router, we are not sure what else the router is doing with the packets.

SB// Maybe I'm missing something here - a device can't perform a function for free, right? Even if its impact is negligible, it's an impact of some sort. We're saying the emulated router is doing the routing - OK - but I think the same thing applies to the physical router - how do you know what else the emulated router is doing? If the test gear can call out the latency, I'd like to see clarification around how it's doing that and distinguishing the latency introduced by Device A, versus Device B, versus the DUT, etc.
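One common way to reason about the attribution question above is baseline subtraction: measure the test bed end to end without the DUT, then subtract that baseline from the measurement with the DUT in place. A minimal sketch, assuming hypothetical function names and illustrative numbers (none of this is from the draft):

```python
# Hypothetical sketch: isolating DUT-added latency by subtracting a
# baseline measured with the DUT removed (tester -> ancillary devices
# -> tester). Names and values are illustrative assumptions.

def dut_latency_us(end_to_end_us: float, baseline_us: float) -> float:
    """Estimate the latency the DUT adds by subtracting the
    reference-test baseline (bed without the DUT) from the
    end-to-end measurement (bed with the DUT in-line)."""
    if baseline_us > end_to_end_us:
        raise ValueError("baseline exceeds end-to-end measurement")
    return end_to_end_us - baseline_us

# Example: 180 us measured end to end, 35 us through the bed alone.
print(dut_latency_us(180.0, 35.0))  # 145.0
```

This only attributes latency to the bed as a whole versus the DUT; separating Device A from Device B would need a baseline run per device, which is exactly the kind of clarification asked for above.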
 
 

[bpm] Your question regarding the need for routers:
 
[bpm] - We avoid impacting the DUT/SUT performance due to the ARP or ND process
 
[bpm] - Represent a realistic scenario (in the production environment the security devices will not be directly connected to the clients)
 
[bpm] - Routing (L3 mode) is commonly used in the NG security devices.
 
[bpm] However, in both figures we mentioned that the router, including the emulated router, is optional. If there is no need to have routing functionality on the test bed (e.g., if we used a very small number of client and server IPs, or the DUT operates in Layer 2 mode), it can be ignored.

[bpm] Also, we described in Option 1 that the external devices are used if there is a need to aggregate the interfaces of the tester or DUT. For example, if the DUT has 2 interfaces but the tester needs to use its 4 interfaces to achieve the performance, we need a switch/router to aggregate the tester interfaces from 4 to 2.
 
- In section 4.1 the text calls out Option 1 as the preferred test 
bed, which includes L3 routing, but it's not clear why that's needed?
 
[bpm] See above.
 
- The difference between Option 1 and Option 2 is the inclusion of additional physical gear in Option 2 - it's not clear why that's needed, or why the tester can't simply directly connect the test equipment to the DUT and remove extraneous devices from potential influence on results?

[bpm] See above.
 
- Section 4.2, the table for NGFW features - I'm not sure what the difference is between RECOMMENDED and OPTIONAL? (I realize that you might be saying that RECOMMENDED is the "must have enabled" features, whereas OPTIONAL is at your discretion, but would suggest that you make that clear)
 
[bpm] The definitions of OPTIONAL and RECOMMENDED are described in RFC 2119. We already referenced this under section 2 "Requirements".
 

SB// Thanks!
 

- Prescribing a list of features that have to be enabled for the test, or at least more than 1, feels like a strange choice here - I'd have expected test cases that either test the specific features one at a time, or suggest several combinations, but that ultimately, we'd tell the tester to document WHICH features were enabled, to make the test cases repeatable? This allows the tester to apply the same set of apples-to-apples configurations to different vendor gear, and omit the 1 feature that doesn't exist on a different NGFW (for example), but hold a baseline that could be tested.

- Table 2: With the assumption that NGIPS/IDS are required to have 
the features under "recommended", I disagree with this list. For 
example, some customers break and inspect at the tap/agg layer of 
the network - in this case, the feed into the NGIDS might be 
decrypted, and there's no need to enable SSL inspection, for example.
 
[bpm] IDS is being removed.

SB// OK...I'm not sure this addresses the feedback though :) A NGFW 
for sure will do break/inspect as well, right?
 

- Table 3: I disagree that an NGIDS IS REQUIRED to decrypt SSL. This behaviour might be suitable for an NGIPS, but the NGIDS is not a bump on the wire, and often isn't decrypting and re-encrypting the traffic.

[bpm] IDS is being removed.

SB// See comment above.
 

- Table 3: An NGIDS IMO is still a passive device - it wouldn't be blocking anything, but agree that it might tell you that it happened after the fact.

[bpm] IDS is being removed.

SB// Thanks!
 

- Table 3: Anti-evasion definition - define "mitigates".
 
[bpm] Not sure why you are asking this as mitigate is not an 
uncommon term/word.
 
- Table 3: Web-filtering - not a function of an NGIDS.
 
[bpm] IDS is being removed.
 
- Table 3: DLP: Not applicable for an NGIDS.
 
[bpm] IDS is being removed.
 
- Can you expand on "disposition of all flows of traffic are logged" - what's meant here specifically, and why do they have to be logged? (Logging, particularly under high loads, will impact its own performance marks, and colours output)
 
[bpm] We intentionally recommended enabling logging, which will impact the performance. The draft is not aiming to get a high performance number with a minimal DUT/SUT configuration. In contrast, it aims to get a reasonable performance number with a realistic DUT configuration. The realistic configuration can vary based on the DUT/SUT deployment scenario.

[bpm] In most of the DUT/SUT deployment scenarios or customer 
environments, logging is enabled as default configuration.
 
[bpm] "Disposition of all flows of traffic are logged" means that the DUT/SUT needs to log all the traffic at the flow level, not each packet.

[bpm] We will add more clarification for the meaning of 
"disposition of all flows of traffic are logged".
 

SB// Thanks!
 

- ACLs wouldn't apply to an IDS because IDSs aren't blocking traffic :)
 
[bpm] IDS is being removed.
 
- It might be helpful to testers to say something like "look, 
here's one suggested set of ACLs. If you're using them, great, 
reference that, but otherwise, make note of the ACLs you use, and 
use the same ones for repeatable testing".
 
[bpm] The draft gives guidance on how to choose the ACL rules. We describe here a methodology to create ACLs.
 
- 4.3.1.1 The doc prescribes specific MSS values for v4/v6 with no discussion around why they're chosen - that color could be useful to the reader.
 
[bpm] We will add some more clarification that these are the default values used in most of the client operating systems currently.
 

SB// Thanks!
 

- 4.3.1.1 - there's a period on the 3rd to last line ("(SYN/ACL, ACK). and") that should be changed.
 
[bpm] Thank you.
 
- 4.3.1.1 - As a tester with long time experience with major test equipment manufacturers, I can't possibly begin to guess which ones of them would conform to this - or even if they'd answer these questions. How helpful is this section to the non test houses? I suggest expansion here, ideally with either covering the scope of what you expect to cover, or hopefully which (open source/generally available) test tools or emulators could be considered for use as examples.

[bpm] We extensively discussed this section with Ixia and Spirent. This section was developed with significant input from these test tool vendors, in addition to others.

SB// OK, that's really good to know, but there are plenty of us 
working with and looking for more cost effective options to Ixia and 
Spirent. :) I think the expansion would be good here.
 

- 4.3.1.3 - Do the emulated web browser attributes really apply to 
testing the NGIPS?
 
[bpm] Yes, we performed many PoC tests with test tools. Ixia and 
Spirent confirmed this.
 
- 4.3.2.3 - Do you expect to also leverage TLS 1.3 as a 
configuration option here?
 
[bpm] Yes
 
- 4.3.4 - I'm surprised to see the requirement that all sessions establish a distinct phase before moving on to the next. You might clarify why this is a requirement, and why staggering them is specifically rejected?

[bpm] This draft doesn't describe that all sessions establish a 
distinct phase before moving on to the next. We will remove the 
word "distinct" from the 1st paragraph in section 4.3.4.

SB// Thanks!
 

[bpm] Unlike Layer 2/3 testing, Layer 7 testing requires several 
phases in the traffic load profile. The traffic load profile 
described in the draft is the profile most commonly used for Layer 7 
testing.
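
For illustration only (the phase names and shapes below are assumed 
from common Layer 7 practice, not taken from the draft): a minimal 
sketch of a piecewise-linear load profile with ramp-up, sustain, and 
ramp-down phases.

```python
# Illustrative sketch, not the draft's definition: a piecewise-linear
# Layer 7 traffic load profile. Phase names (ramp-up, sustain,
# ramp-down) are assumptions based on common test-tool practice.
def target_sessions(t: float, ramp_up: float, sustain: float,
                    ramp_down: float, peak: float) -> float:
    """Return the target number of concurrent sessions at time t (seconds)."""
    if t < 0:
        return 0.0
    if t < ramp_up:                        # load increases linearly to peak
        return peak * t / ramp_up
    if t < ramp_up + sustain:              # steady-state measurement phase
        return peak
    if t < ramp_up + sustain + ramp_down:  # load decreases linearly to zero
        elapsed = t - ramp_up - sustain
        return peak * (1 - elapsed / ramp_down)
    return 0.0

# Halfway through a 60 s ramp-up toward 10,000 sessions:
print(target_sessions(30, ramp_up=60, sustain=300, ramp_down=60, peak=10000))
```

KPIs such as throughput would then be measured only during the 
sustain phase, once the session count is stable.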

- 5.1 - I like the sentence, but it leaves a world of possibilities 
open as to how one confirmed that the ancillary switching, or 
routing functions didn't limit the performance, particularly the 
virtualized components?

[bpm] The sentence says, "Ensure that any ancillary switching or 
routing functions between the system under test and the test 
equipment do not limit the performance of the traffic generator."
 
[bpm] Here we discuss the traffic generator's performance, which 
can be confirmed by running a reference test.
 
[bpm] Section 5 recommends a reference test to verify that the 
traffic generator reaches its maximum desired performance. Based on 
the reference test results, it can be determined whether an external 
device impacted the traffic generator's performance.
 
[bpm] We will add more content in Section 5 to provide more details 
about the reference test.
 

SB// Thanks!
 

- 5.3 - this is a nice assertion but again, how do I reasonably 
make the assertion?
 
[bpm] We will change the word from "Assertion" to "Ensure". Also, 
we will add more clarity about reference testing.

 
SB// Thanks!
 

- 6.1 - I would suggest that the test report include the 
configuration of ancillary devices on both client/server side as 
well
 
[bpm] We believe that adding the configuration of the ancillary 
devices doesn't add more value to the report. Instead, we will 
recommend documenting the configuration of the ancillary devices by 
doing a reference test. We will add this under Section 5, "Test bed 
consideration".
 
SB// I think including them assists greatly in the repeatability of 
the testing, for what it's worth.
 

- 6.3 - Nothing on drops anywhere?
 
[bpm] Are you referring to packet drops? If you are, there is no 
packet loss in stateful traffic; instead of packet loss, stateful 
traffic has retransmissions.
 
- 7.1.3.2 - Where are these numbers coming from? How are you 
determining the "initial inspected throughput"? Maybe I missed that 
in the document overall, but it's not clear to me where these KPIs 
are collected? I suggest this be called out.
 
[bpm] We will add more clarification in the next version. Thank you.

SB// Thanks!
 

- 7.1.3.3 - what is a "relevant application traffic mix" profile?
 
[bpm] This is described in Section 7.1.1 (2nd paragraph). We will 
add the word "relevant" in the 1st sentence of the 2nd paragraph, so 
the sentence will be: "Based on customer use case, users can choose 
the relevant application traffic mix for this test.  The details 
about the traffic mix MUST be documented in the report.  At least 
the following traffic mix details MUST be documented and reported 
together with the test results:"
 
SB// A set of example(s) could be helpful. Not required, just helpful.
 

- 7.1.3.4 - where does this monitoring occur?
 
[bpm] The monitoring or measurement must occur in the test equipment.
Section 4.3.4 describes this.
 
- 7.1.3.4 - This looks a bit like conformance testing - why does 
item (b) require a specific number/threshold?
 
[bpm] These numbers are analogous to the zero-packet-loss criteria 
for [RFC2544] Throughput and recognize the additional complexity of 
application layer performance. This was agreed by the IETF BMWG.

- 9: Why is the cipher suite recommendation for a real deployment 
outside the scope of this document?
 
[bpm] Because new cipher suites are frequently developed. Given 
that the draft will not be easily updated once it is accepted as an 
RFC, we wanted to ensure there was flexibility to use cipher suites 
developed in the future.

Brian Monkman on behalf of....
 
Alex Samonte (Fortinet), Amritam Putatunda (Ixia/Keysight), Bala 
Balarajah (NetSecOPEN), Carsten Rossenhoevel (EANTC), Chris Brown 
(UNH-IOL), Mike Jack (Spirent), Ryan Liles (Cisco), Tim Carlin 
(UNH-IOL), Tim Otto (Juniper)
 
-- 
Carsten Rossenhövel
Managing Director, EANTC AG (European Advanced Networking Test Center)
Salzufer 14, 10587 Berlin, Germany
office +49.30.3180595-21, fax +49.30.3180595-10, mobile +49.177.2505721
cross@eantc.de, https://www.eantc.de <https://www.eantc.de/> 
 
Place of Business/Sitz der Gesellschaft: Berlin, Germany
Chairman/Vorsitzender des Aufsichtsrats: Herbert Almus
Managing Directors/Vorstand: Carsten Rossenhövel, Gabriele Schrenk
Registered: HRB 73694, Amtsgericht Charlottenburg, Berlin, Germany
EU VAT No: DE812824025

_______________________________________________
bmwg mailing list
bmwg@ietf.org <mailto:bmwg@ietf.org> 
https://www.ietf.org/mailman/listinfo/bmwg
