Re: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance

Sarah Banks <sbanks@encrypted.net> Tue, 29 June 2021 17:32 UTC

From: Sarah Banks <sbanks@encrypted.net>
Date: Tue, 29 Jun 2021 10:32:38 -0700
Cc: ALFRED MORTON <acmorton@att.com>, "Jack, Mike" <Mike.Jack@spirent.com>, Bala Balarajah <bm.balarajah@gmail.com>, "MORTON, ALFRED C (AL)" <acm@research.att.com>, bmwg@ietf.org, Bala Balarajah <bala@netsecopen.org>
To: bmonkman@netsecopen.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/Gbe1Tw3djmwq7vQ7Rm3hofoN_ag>
Subject: Re: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance

Hi Brian,
   I apologize, I'm super swamped at work right now and haven't had the bandwidth to read through these. I will aim to complete the review by Friday. My apologies for the delay, and thank you for your patience. 

Thanks,
Sarah


> On Jun 28, 2021, at 10:53 AM, <bmonkman@netsecopen.org> <bmonkman@netsecopen.org> wrote:
> 
> Sarah and Al,
> 
> Given that 15 days have gone by, may I safely assume that all of our
> comments/edits are accepted and that we can submit a new draft? We hope
> the new draft will pass muster easily.
> 
> Brian
> 
> -----Original Message-----
> From: bmonkman@netsecopen.org <bmonkman@netsecopen.org> 
> Sent: Saturday, June 12, 2021 7:25 AM
> To: 'Sarah Banks' <sbanks@encrypted.net>; 'ALFRED MORTON' <acmorton@att.com>
> Cc: 'Gabor LENCSE' <lencse@hit.bme.hu>; 'Bala Balarajah'
> <bala@netsecopen.org>; 'Bala Balarajah' <bm.balarajah@gmail.com>;
> bmwg@ietf.org; 'MORTON, ALFRED C (AL)' <acm@research.att.com>;
> asamonte@fortinet.com; amritam.putatunda@keysight.com; 'Carsten
> Rossenhoevel' <cross@eantc.de>; 'Jack, Mike' <Mike.Jack@spirent.com>;
> 'Christopher Brown' <cbrown@iol.unh.edu>; 'Ryan Liles (ryliles)'
> <ryliles@cisco.com>
> Subject: RE: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance
> 
> Hi Sarah,
> 
> As I mentioned in the previous message, we will remove reference to IDS from
> the draft. Given that, none of the IDS related comments/questions are being
> addressed.
> 
> Sorry for the delay in responding. I was unexpectedly out of the office. 
> 
> Our responses are below, preceded by [bpm]. However, please note that this
> is not a response from a single person, but from multiple people,
> representing security product vendors, test labs and test tool vendors.
> 
> Brian
> 
>>>> 
> 
> - The draft aims to replace RFC3511, but expands scope past Firewalls, to
> "next generation security devices". I'm not finding a definition of what a
> "next generation security device is", nor an exhaustive list of the devices
> covered in this draft. An illustrative list is nice, but IMO not enough to
> cover what would be benchmarked here - I'd prefer to see a definition and an
> exhaustive list.
> 
> [bpm] "We avoid limiting the draft by explicitly adding a list of NG
> security devices currently available in the market only. In the future,
> there may be more and more new types of NG security devices that will appear
> on the market.
> 
> [bpm] This draft includes a list of security features that the security
> device can have ( RFC 3511 doesn't have such a list). Also, we will describe
> in the draft that the security devices must be configured ""in-line"" mode.
> We believe these two points qualifying the definition of next generation
> security.
> 
> - What is a NGIPS or NGIDS? If there are standardized definitions,
> pointing to them is fine; otherwise, there's a lot of wiggle room here.
> 
> [bpm] See above. We are removing NGIDS from the draft.
> 
> - I still have the concern I shared at the last IETF meeting, where here,
> we're putting active inline security devices in the same category as passive
> devices. On one hand, I'm not sure I'd lump these three together in the
> first place; on the other, active inline devices typically include
> additional functions to allow administrators to control what happens to
> packets in the case of failure, and I don't see those test cases included
> here.
> 
> [bpm] This draft focuses on "in-line" mode security devices only. We will
> describe this in section 4 in more detail.
> 
> [bpm] Additionally, the draft focuses mainly on performance tests. The DUT
> must be configured in "fail closed" mode. We will describe this under
> section 4. Any failure scenario, such as "fail open" mode, is out of scope.
> 
> - Section 4.1 - it reads as if ANY device in the test setup cannot
> contribute to network latency or throughput issues, including the DUTs - is
> that what you intended?
> 
> [bpm] "Our intention is, if the external devices (routers and switches) are
> used in the test bed, they should not negatively impact DUT/SUT performance.
> To address this, we added a section ( section 5 ""Test Bed Considerations"")
> which recommends a pre-test.  We can rename this as reference test or
> baseline test. "
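A reference (baseline) test of this kind reduces to a simple pass/fail comparison. The sketch below is purely illustrative, not from the draft; the 1% tolerance and function name are assumptions:

```python
def baseline_passes(back_to_back, with_ancillary, tolerance=0.01):
    """Illustrative reference-test check (tolerance is an assumed value):
    the test bed with ancillary switches/routers in the path (but no DUT)
    should achieve nearly the same result as the tester connected
    back-to-back, for whatever KPI is measured (throughput, CPS, ...)."""
    return with_ancillary >= back_to_back * (1.0 - tolerance)
```

If the check fails, the ancillary devices themselves are limiting the traffic generator, and any DUT/SUT result from that test bed is suspect.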
> 
> - Option 1: It'd be nice to see a specific, clean, recommended test bed.
> There are options for multiple emulated routers. As a tester, I expect to
> see a specific, prescribed test bed that I should configure and test
> against. 
> 
> [bpm] The draft describes Option 1 as the recommended test setup.
> However, we added the emulated routers as optional in Option 1. The
> reason for that: some types of security devices, in some deployment
> scenarios, require routers between the test client/server and the DUT
> (e.g., NGFW), while some DUT/SUTs don't need a router (e.g., NGIPS).
> 
> - Follow on: I'm curious as to the choice of emulated routers here. The
> previous test suggests you avoid routers and switches in the topo, but then
> there are emulated ones here. I'm curious as to what advantages you think
> these bring over the real deal, and why they aren't subject to the same
> limitations previously described?
> 
> [bpm] Compared to a real router, an emulated router offers several
> advantages for L7 testing.
> 
> [bpm] - An emulated router doesn't add latency. Even if it adds delay due
> to the routing process, the test equipment can report the added latency or
> account for it in the latency measurement.
> 
> [bpm] - Emulated routers perform the routing function only, whereas with a
> "real" router we are not sure what else it is doing with the packets.
> 
> [bpm] Your question regarding the need for routers:
> 
> [bpm] - We avoid impacting DUT/SUT performance due to ARP or ND
> processing.
> 
> [bpm] - It represents a realistic scenario (in a production environment,
> security devices are not directly connected to the clients).
> 
> [bpm] - Routing (L3 mode) is commonly used in NG security devices.
> 
> [bpm] However, in both figures we noted that the router, including the
> emulated router, is optional. If routing functionality is not needed on
> the test bed (e.g., if a very small number of client and server IPs is
> used, or the DUT operates in Layer 2 mode), it can be omitted.
> 
> [bpm] Also, we described in Option 1 that the external devices are needed
> if there is a need to aggregate the interfaces of the tester or DUT. For
> example, if the DUT has 2 interfaces but the tester needs to use its 4
> interfaces to achieve the performance, we need a switch/router to
> aggregate the tester interfaces from 4 down to 2.
> 
> - In section 4.1 the text calls out Option 1 as the preferred test bed,
> which includes L3 routing, but it's not clear why that's needed?
> 
> [bpm] See above.
> 
> - The difference between Option 1 and Option 2 is the inclusion of
> additional physical gear in Option 2 - it's not clear why that's needed, or
> why the tester can't simply directly connect the test equipment to the DUT
> and remove extraneous devices from potential influence on results?
> 
> [bpm] See above.
> 
> - Section 4.2, the table for NGFW features - I'm not sure what the
> difference is between RECOMMENDED and OPTIONAL? (I realize that you might
> be saying that RECOMMENDED is the "must have enabled" features, whereas
> optional is at your discretion, but would suggest that you make that clear)
> 
> [bpm] The definitions of OPTIONAL and RECOMMENDED are given in RFC 2119.
> We already referenced this under section 2, "Requirements".
> 
> - Prescribing a list of features that have to be enabled for the test, or
> at least more than 1, feels like a strange choice here - I'd have expected
> test cases that either test the specific features one at a time, or suggest
> several combinations, but that ultimately, we'd tell the tester to document
> WHICH features were enabled, to make the test cases repeatable? This allows
> the tester to apply the same set of apples-to-apples configurations to
> different vendor gear, and omit the 1 feature that doesn't exist on a
> different NGFW (for example), but hold a baseline that could be tested.
> 
> - Table 2: With the assumption that NGIPS/IDS are required to have the
> features under "recommended", I disagree with this list. For example, some
> customers break and inspect at the tap/agg layer of the network - in this
> case, the feed into the NGIDS might be decrypted, and there's no need to
> enable SSL inspection, for example. 
> 
> [bpm] IDS is being removed.
> 
> - Table 3: I disagree that an NGIDS IS REQUIRED to decrypt SSL. This
> behaviour might be suitable for an NGIPS, but the NGIDS is not a bump on the
> wire, and often isn't decrypting and re-encrypting the traffic.
> 
> [bpm] IDS is being removed.
> 
> - Table 3: An NGIDS IMO is still a passive device - it wouldn't be blocking
> anything, but agree that it might tell you that it happened after the fact.
> 
> [bpm] IDS is being removed.
> 
> - Table 3: Anti-evasion definition - define "mitigates". 
> 
> [bpm] Not sure why you are asking this, as "mitigate" is not an uncommon
> term/word. 
> 
> - Table 3: Web-filtering - not a function of an NGIDS.
> 
> [bpm] IDS is being removed.
> 
> - Table 3: DLP: Not applicable for an NGIDS.
> 
> [bpm] IDS is being removed.
> 
> - Can you expand on "disposition of all flows of traffic are logged" -
> what's meant here specifically, and why do they have to be logged? (Logging,
> particularly under high loads, will impact its own performance marks, and
> colours output)
> 
> [bpm] We intentionally recommended enabling logging, which will impact
> the performance. The draft is not aiming for a high performance number
> with a minimal DUT/SUT configuration. In contrast, it aims for a
> reasonable performance number with a realistic DUT configuration. The
> realistic configuration can vary based on the DUT/SUT deployment scenario. 
> 
> [bpm] In most DUT/SUT deployment scenarios or customer environments,
> logging is enabled in the default configuration.
> 
> [bpm] "Disposition of all flows of traffic are logged": means that the
> DUT/SUT need to log all the traffic at the flow level not each packet.
> 
> [bpm] We will add more clarification for the meaning of "disposition of all
> flows of traffic are logged".
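As a rough illustration of flow-level (rather than per-packet) disposition logging - the class, field names, and example flow below are hypothetical, not from the draft:

```python
from collections import namedtuple

# A flow is identified by its 5-tuple.
FlowKey = namedtuple("FlowKey", "src dst sport dport proto")

class FlowLog:
    """One log entry per flow, updated as packets arrive; the disposition
    records whether the flow was ultimately allowed or blocked."""
    def __init__(self):
        self.flows = {}

    def record(self, key, allowed):
        entry = self.flows.setdefault(key, {"packets": 0, "disposition": None})
        entry["packets"] += 1
        entry["disposition"] = "allowed" if allowed else "blocked"

log = FlowLog()
k = FlowKey("10.0.0.1", "192.0.2.10", 40000, 443, "tcp")
log.record(k, allowed=True)
log.record(k, allowed=True)   # same flow: still a single log entry
```

Two packets on the same 5-tuple produce one log entry, which is the distinction being drawn: the logging cost scales with flows, not packets.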
> 
> - ACLs wouldn't apply to an IDS because IDSs aren't blocking traffic :)
> 
> [bpm] IDS is being removed.
> 
> - It might be helpful to testers to say something like "look, here's one
> suggested set of ACLs. If you're using them, great, reference that, but
> otherwise, make note of the ACLs you use, and use the same ones for
> repeatable testing".
> 
> [bpm] The draft gives guidance on how to choose the ACL rules; it
> describes a methodology for creating ACLs.
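One way to make an ACL set repeatable across testers and vendors is to derive it deterministically from documented inputs. The sketch below is purely hypothetical and is not the methodology the draft describes; all names are invented:

```python
def make_acl_rules(client_subnets, blocked_ports):
    """Hypothetical deterministic ACL generator: the same documented
    inputs always yield the same ordered rule list, so Test Person A
    and Test Person B can run identical configurations."""
    rules = []
    for subnet in sorted(client_subnets):
        for port in sorted(blocked_ports):
            rules.append(f"deny tcp {subnet} any eq {port}")
    rules.append("permit ip any any")  # default permit as the final rule
    return rules
```

Documenting the inputs (subnets and ports) in the report is then enough for anyone to reproduce the exact rule set.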
> 
> - 4.3.1.1 The doc prescribes specific MSS values for v4/v6 with no
> discussion around why they're chosen - that color could be useful to the
> reader.
> 
> [bpm] We will add some more clarification that these are the default
> values currently used by most client operating systems.
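For context, those OS defaults fall out of the standard 1500-byte Ethernet MTU minus the minimum IP and TCP header sizes (assuming the values in question are the usual 1460 for v4 and 1440 for v6):

```python
# MSS = MTU - IP header - TCP header (minimum header sizes, no options)
MTU = 1500
mss_v4 = MTU - 20 - 20   # IPv4 header 20 B, TCP header 20 B -> 1460
mss_v6 = MTU - 40 - 20   # IPv6 header 40 B, TCP header 20 B -> 1440
```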
> 
> - 4.3.1.1 - there's a period on the 3rd to last line "(SYN/ACL, ACK). and"
> that should be changed.
> 
> [bpm] Thank you.
> 
> - 4.3.1.1 - As a tester with long time experience with major test equipment
> manufacturers, I can't possibly begin to guess which ones of them would
> conform to this - or even if they'd answer these questions. How helpful is
> this section to the non test houses? I suggest expansion here, ideally with
> either covering the scope of what you expect to cover, or hopefully which
> (open source/generally available) test tools or emulators could be
> considered for use as examples.
> 
> [bpm] We discussed this section extensively with Ixia and Spirent. It was
> developed with significant input from these test tool vendors, in
> addition to others.
> 
> - 4.3.1.3 - Do the emulated web browser attributes really apply to testing
> the NGIPS?
> 
> [bpm] Yes, we performed many PoC tests with test tools. Ixia and Spirent
> confirmed this.
> 
> - 4.3.2.3 - Do you expect to also leverage TLS 1.3 as a configuration option
> here?
> 
> [bpm] Yes
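With Python's standard ssl module, for instance, an emulated client's context can be pinned to TLS 1.3 as a configuration option (illustrative only; the draft does not mandate any particular tool):

```python
import ssl

# Restrict an emulated client's TLS context to TLS 1.3 only (Python 3.7+).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3
```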
> 
> - 4.3.4 - I'm surprised to see the requirement that all sessions establish a
> distinct phase before moving on to the next. You might clarify why this is a
> requirement, and why staggering them is specifically rejected?
> 
> [bpm] This draft doesn't state that all sessions must establish in a
> distinct phase before moving on to the next. We will remove the word
> "distinct" from the 1st paragraph in section 4.3.4.
> 
> [bpm] Unlike Layer 2/3 testing, Layer 7 testing requires several phases
> in the traffic load profile. The traffic load profile described in the
> draft is the profile most commonly used for Layer 7 testing.
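The phased shape of such a load profile can be sketched as below; the phase names, linear ramps, and per-second granularity are assumptions for illustration, not the draft's exact definitions:

```python
def load_profile(peak_cps, ramp_s, sustain_s):
    """Yield one target connections-per-second value per second:
    linear ramp-up, sustain at peak, then linear ramp-down."""
    for t in range(1, ramp_s + 1):          # ramp-up phase
        yield peak_cps * t // ramp_s
    for _ in range(sustain_s):              # sustain phase
        yield peak_cps
    for t in range(ramp_s - 1, -1, -1):     # ramp-down phase
        yield peak_cps * t // ramp_s
```

KPIs are normally measured during the sustain phase, once the load is stable.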
> 
> - 5.1 - I like the sentence, but it leaves a world of possibilities open as
> to how one confirmed that the ancillary switching, or routing functions
> didn't limit the performance, particularly the virtualized components?
> 
> [bpm] The sentence says, "Ensure that any ancillary switching or routing
> functions between the system under test and the test equipment do not limit
> the performance of the traffic generator."
> 
> [bpm] Here we discuss the traffic generator performance, and this can be
> confirmed by doing reference test.
> 
> [bpm] Section 5 recommends a reference test to verify the maximum desired
> performance of the traffic generator. Based on the reference test results,
> it can be determined whether any external device impacted the traffic
> generator's performance.
> 
> [bpm] We will add more content in section 5 to provide more details about
> the reference test.
> 
> - 5.3 - this is a nice assertion but again, how do I reasonably make the
> assertion?
> 
> [bpm] We will change the word from "Assertion" to "Ensure". Also, we will
> add more clarity about reference testing.
> 
> - 6.1 - I would suggest that the test report include the configuration of
> ancillary devices on both client/server side as well
> 
> [bpm] We believe that adding the configuration of the ancillary devices
> doesn't add more value to the report. Instead, we will recommend
> accounting for the ancillary devices by doing a reference test. We will
> add this under section 5, "Test Bed Considerations".
> 
> - 6.3 - Nothing on drops anywhere?
> 
> [bpm] Are you referring to packet drops? If so, there is no packet loss
> in stateful traffic; instead of packet loss, stateful traffic has
> retransmissions.
> 
> - 7.1.3.2 - Where are these numbers coming from? How are you determining the
> "initial inspected throughput"? Maybe I missed that in the document overall,
> but it's not clear to me where these KPIs are collected? I suggest this be
> called out.
> 
> [bpm] We will add more clarification in the next version. Thank you.
> 
> - 7.1.3.3 - what is a "relevant application traffic mix" profile?
> 
> [bpm] This is described in section 7.1.1 (2nd paragraph). We will add the
> word "relevant" in the 1st sentence of the 2nd paragraph, so the sentence
> will be: "Based on customer use case, users can choose the relevant
> application traffic mix for this test.  The details about the traffic mix
> MUST be documented in the report.  At least the following traffic mix
> details MUST be documented and reported together with the test results:"
> 
> - 7.1.3.4 - where does this monitoring occur?
> 
> [bpm] The monitoring or measurement must occur in the test equipment.
> Section 4.3.4 describes this.
> 
> - 7.1.3.4 - This looks a bit like conformance testing -  Why does item (b)
> require a specific number/threshold?
> 
> [bpm] These criteria are analogous to the zero-packet-loss criteria for
> [RFC2544] Throughput and recognize the additional complexity of
> application layer performance. This was agreed by the IETF BMWG.
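For readers less familiar with the [RFC2544] Throughput procedure being referenced: the classic approach is a binary search for the highest offered rate that still meets the pass criteria. A minimal sketch, where the `trial` callback and the resolution are assumptions:

```python
def search_max_rate(trial, low, high, resolution=1):
    """Binary search in the spirit of [RFC2544] Throughput: find the
    highest offered rate at which trial(rate) still passes (zero loss
    at layer 2/3, or the agreed thresholds at the application layer).
    `trial` is a caller-supplied function returning True on a passing run."""
    best = low if trial(low) else 0
    while high - low > resolution:
        mid = (low + high) // 2
        if trial(mid):
            best, low = mid, mid   # passing rate: search higher
        else:
            high = mid             # failing rate: search lower
    return best
```

The application-layer thresholds in 7.1.3.4 play the same role as the zero-loss criterion: they define what "passing" means for each trial.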
> 
> - 9: Why is the cipher suite recommendation for a real deployment outside
> the scope of this document?
> 
> [bpm] Because new cipher suites are frequently developed. Given that the
> draft will not be easily updated once it is accepted as an RFC, we wanted
> to ensure there was flexibility to use cipher suites developed in the
> future.
> 
> Brian Monkman on behalf of....
> 
> Alex Samonte (Fortinet), Amritam Putatunda (Ixia/Keysight), Bala Balarajah
> (NetSecOPEN), Carsten Rossenhoevel (EANTC), Chris Brown (UNH-IOL), Mike Jack
> (Spirent), Ryan Liles (Cisco), Tim Carlin (UNH-IOL), Tim Otto (Juniper)
> 
> 
> -----Original Message-----
> From: bmonkman@netsecopen.org <bmonkman@netsecopen.org>
> Sent: Wednesday, May 26, 2021 3:34 PM
> To: 'Sarah Banks' <sbanks@encrypted.net>; 'ALFRED MORTON' <acmorton@att.com>
> Cc: 'Gabor LENCSE' <lencse@hit.bme.hu>; 'Bala Balarajah'
> <bala@netsecopen.org>; 'Bala Balarajah' <bm.balarajah@gmail.com>;
> bmwg@ietf.org; 'MORTON, ALFRED C (AL)' <acm@research.att.com>;
> asamonte@fortinet.com; amritam.putatunda@keysight.com; 'Carsten
> Rossenhoevel' <cross@eantc.de>; 'Jack, Mike' <Mike.Jack@spirent.com>;
> 'Christopher Brown' <cbrown@iol.unh.edu>; 'Ryan Liles (ryliles)'
> <ryliles@cisco.com>
> Subject: RE: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance
> 
> Hi Sarah,
> 
> Just wanted to let you know that members of the team working on the draft
> just met. We will be sending a more detailed response soon. But in the
> meantime, We thought we would share a couple of decisions/comments.
> 
> First, your suggestion that IDS be removed from the document is reasonable.
> Thank you for your input. IDS will be removed from the draft. As such, we
> will ignore any item you raised that was related directly to IDS.
> 
> Your comment, "As a tester with long time experience with major test
> equipment manufacturers, I can't possibly begin to guess which ones of them
> would conform to this - or even if they'd answer these questions" warrants
> an immediate comment.  Both Ixia and Spirent have been closely involved with
> the drafting of this document, have implemented the tests as documented and
> have been extraordinarily willing to answer any questions related to the
> requirements outlined within. The creation of this draft has been actively
> supported by every member of our org. That is one of the reasons it has
> taken us this long to reach where we are today. 
> 
> Our goal from day one was to produce guidance on how to test a network
> security product in a manner that would provide realistic results based on
> real world deployment needs in a manner that can be reproduced and would
> provide results that can be comparable regardless of the test tool or test
> house used. Additionally, we feel strongly that the resulting guidelines
> should become part of the public domain. We believed, and still believe, the
> IETF is the right place. We have been very pleased with the input received
> and the spirit it was provided. It appears that everyone who has commented
> wants this work to be a useful addition to the knowledge base of BMWG test
> specs that have come before.
> 
> We believe the differences that are evidenced by your comments/questions
> result from philosophical differences between the approach to testing
> documented within the draft and your approach. It is indeed possible we can
> both be right. I hope we can find compromise. 
> 
> We will get the responses to the rest of your items posted as soon as
> possible. Thank you for your patience.
> 
> Brian Monkman on behalf of....
> 
> Alex Samonte (Fortinet), Amritam Putatunda (Ixia/Keysight), Bala Balarajah
> (NetSecOPEN), Carsten Rossenhoevel (EANTC), Chris Brown (UNH-IOL), Mike Jack
> (Spirent), Ryan Liles (Cisco), Tim Carlin (UNH-IOL), Tim Otto (Juniper)
> 
> -----Original Message-----
> From: Sarah Banks <sbanks@encrypted.net>
> Sent: Thursday, May 20, 2021 4:35 PM
> To: ALFRED MORTON <acmorton@att.com>
> Cc: bmonkman@netsecopen.org; Gabor LENCSE <lencse@hit.bme.hu>; Bala
> Balarajah <bala@netsecopen.org>; Bala Balarajah <bm.balarajah@gmail.com>;
> bmwg@ietf.org; MORTON, ALFRED C (AL) <acm@research.att.com>
> Subject: Re: [bmwg] WGLC on New version of draft-ietf-bmwg-ngfw-performance
> 
> Hi,
>    Sharing feedback with my participant hat on. I do apologize for sending
> it so late. A few high level comments:
> 
> - I still land in "I don't agree that this doc clearly covers all next
> gen security devices" - and they're not clearly defined, either. For the
> three that are covered, I think there's a lumping of the IPS and IDS that
> would be better served separated. From my perspective, an IPS *is* a bump in
> the wire, and does make "blocking" decisions, as called out in this
> document. An IDS however, does not - it tells you it happened after the
> fact, and it's a passive device. This draft is really keyed around "active
> inline" functions - i.e., a bump in the wire - and I think keeping the focus
> there makes sense, and lumping the IDS in does more harm than good. 
> 
> In short, I don't support this draft moving forward with the inclusion of
> IDS - it doesn't make sense to me. One way forward might be to remove the
> IDS as a device from scope (as a suggestion).
> 
> 
> - I've been testing for a long time, but I find this draft ... uncomfortable
> to approach. I expected, and would have preferred, something that walks me
> through a specific set of recommended scenarios with specifics on what to
> configure for repeatable tests that I could compare results with, but there
> are a lot of moving parts here and a lot of assertions that have to be made
> for the test beds as a whole where I think the test results could vary
> wildly when the same topology is handed to Test Person A and Test Person B. 
> 
> - Most of the feedback below omits the IDS pieces, because so much of it
> didn't apply. 
> 
> Thanks,
> Sarah
> 
> 
> 
> - The draft aims to replace RFC3511, but expands scope past Firewalls, to
> "next generation security devices". I'm not finding a definition of what a
> "next generation security device is", nor an exhaustive list of the devices
> covered in this draft. An illustrative list is nice, but IMO not enough to
> cover what would be benchmarked here - I'd prefer to see a definition and an
> exhaustive list.
> - What is a NGIPS or NGIDS? If there are standardized definitions,
> pointing to them is fine; otherwise, there's a lot of wiggle room here.
> - I still have the concern I shared at the last IETF meeting, where here,
> we're putting active inline security devices in the same category as passive
> devices. On one hand, I'm not sure I'd lump these three together in the
> first place; on the other, active inline devices typically include
> additional functions to allow administrators to control what happens to
> packets in the case of failure, and I don't see those test cases included
> here.
> - Section 4.1 - it reads as if ANY device in the test setup cannot
> contribute to network latency or throughput issues, including the DUTs - is
> that what you intended?
> - Option 1: It'd be nice to see a specific, clean, recommended test bed.
> There are options for multiple emulated routers. As a tester, I expect to
> see a specific, prescribed test bed that I should configure and test
> against. 
> - Follow on: I'm curious as to the choice of emulated routers here. The
> previous test suggests you avoid routers and switches in the topo, but then
> there are emulated ones here. I'm curious as to what advantages you think
> these bring over the real deal, and why they aren't subject to the same
> limitations previously described?
> - In section 4.1 the text calls out Option 1 as the preferred test bed,
> which includes L3 routing, but it's not clear why that's needed?
> - The difference between Option 1 and Option 2 is the inclusion of
> additional physical gear in Option 2 - it's not clear why that's needed, or
> why the tester can't simply directly connect the test equipment to the DUT
> and remove extraneous devices from potential influence on results?
> - Section 4.2, the table for NGFW features - I'm not sure what the
> difference is between RECOMMENDED and OPTIONAL? (I realize that you might be
> saying that RECOMMENDED is the "must have enabled" features, whereas
> optional is at your discretion, but would suggest that you make that clear)
> - Prescribing a list of features that have to be enabled for the test, or
> at least more than 1, feels like a strange choice here - I'd have expected
> test cases that either test the specific features one at a time, or suggest
> several combinations, but that ultimately, we'd tell the tester to document
> WHICH features were enabled, to make the test cases repeatable? This allows
> the tester to apply the same set of apples-to-apples configurations to
> different vendor gear, and omit the 1 feature that doesn't exist on a
> different NGFW (for example), but hold a baseline that could be tested.
> - Table 2: With the assumption that NGIPS/IDS are required to have the
> features under "recommended", I disagree with this list. For example, some
> customers break and inspect at the tap/agg layer of the network - in this
> case, the feed into the NGIDS might be decrypted, and there's no need to
> enable SSL inspection, for example. 
> - Table 3: I disagree that an NGIDS IS REQUIRED to decrypt SSL. This
> behaviour might be suitable for an NGIPS, but the NGIDS is not a bump on the
> wire, and often isn't decrypting and re-encrypting the traffic.
> - Table 3: An NGIDS IMO is still a passive device - it wouldn't be blocking
> anything, but agree that it might tell you that it happened after the fact.
> - Table 3: Anti-evasion definition - define "mitigates". 
> - Table 3: Web-filtering - not a function of an NGIDS.
> - Table 3: DLP: Not applicable for an NGIDS.
> - Can you expand on "disposition of all flows of traffic are logged" -
> what's meant here specifically, and why do they have to be logged? (Logging,
> particularly under high loads, will impact its own performance marks, and
> colours output)
> - ACLs wouldn't apply to an IDS because IDSs aren't blocking traffic :)
> - It might be helpful to testers to say something like "look, here's one
> suggested set of ACLs. If you're using them, great, reference that, but
> otherwise, make note of the ACLs you use, and use the same ones for
> repeatable testing".
> - 4.3.1.1 The doc prescribes specific MSS values for v4/v6 with no
> discussion around why they're chosen - that color could be useful to the
> reader.
> - 4.3.1.1 - there's a period on the 3rd to last line "(SYN/ACL, ACK). and"
> that should be changed.
> - 4.3.1.1 - As a tester with long time experience with major test equipment
> manufacturers, I can't possibly begin to guess which ones of them would
> conform to this - or even if they'd answer these questions. How helpful is
> this section to the non test houses? I suggest expansion here, ideally with
> either covering the scope of what you expect to cover, or hopefully which
> (open source/generally available) test tools or emulators could be
> considered for use as examples.
> - 4.3.1.3 - Do the emulated web browser attributes really apply to testing
> the NGIPS?
> - 4.3.2.3 - Do you expect to also leverage TLS 1.3 as a configuration option
> here?
> - 4.3.4 - I'm surprised to see the requirement that all sessions establish a
> distinct phase before moving on to the next. You might clarify why this is a
> requirement, and why staggering them is specifically rejected?
> - 5.1 - I like the sentence, but it leaves a world of possibilities open as
> to how one confirmed that the ancillary switching or routing functions
> didn't limit the performance, particularly the virtualized components?
> - 5.3 - this is a nice assertion but again, how do I reasonably make the
> assertion?
> - 6.1 - I would suggest that the test report include the configuration of
> ancillary devices on both client/server side as well
> - 6.3 - Nothing on drops anywhere?
> - 7.1.3.2 - Where are these numbers coming from? How are you determining the
> "initial inspected throughput"? Maybe I missed that in the document overall,
> but it's not clear to me where these KPIs are collected? I suggest this be
> called out.
> - 7.1.3.3 - what is a "relevant application traffic mix" profile?
> - 7.1.3.4 - where does this monitoring occur?
> - 7.1.3.4 - This looks a bit like conformance testing -  Why does item (b)
> require a specific number/threshold?
> - 9: Why is the cipher suite recommendation for a real deployment outside
> the scope of this document?
> 
> _______________________________________________
> bmwg mailing list
> bmwg@ietf.org
> https://www.ietf.org/mailman/listinfo/bmwg