[bmwg] Roman Danyliw's Discuss on draft-ietf-bmwg-ngfw-performance-13: (with DISCUSS and COMMENT)

Roman Danyliw via Datatracker <noreply@ietf.org> Wed, 02 February 2022 22:39 UTC

From: Roman Danyliw via Datatracker <noreply@ietf.org>
To: The IESG <iesg@ietf.org>
Cc: draft-ietf-bmwg-ngfw-performance@ietf.org, bmwg-chairs@ietf.org, bmwg@ietf.org, Al Morton <acm@research.att.com>, acm@research.att.com
Reply-To: Roman Danyliw <rdd@cert.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/NRdhp7GSpw8NIIx8qHaZWWa6B8o>

Roman Danyliw has entered the following ballot position for
draft-ietf-bmwg-ngfw-performance-13: Discuss

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)


Please refer to https://www.ietf.org/blog/handling-iesg-ballot-positions/
for more information about how to handle DISCUSS and COMMENT positions.


The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-ngfw-performance/



----------------------------------------------------------------------
DISCUSS:
----------------------------------------------------------------------

** A key element of successfully running the throughput tests
described in Section 7 appears to be ensuring that the device under
test is correctly configured.  Section 4.2 helpfully specifies
feature sets with recommended configurations.  However, there appear
to be elements of under-specification given the level of detail
stated with normative language.  Specifically:

-- Section 4.2.1 seems under-specified with respect to all of the
capabilities in Tables 1 and 2.  The discussion around
vulnerabilities (CVEs) does not appear to be relevant to the
configuration of anti-spyware, anti-virus, anti-botnet, DLP, and
DDoS.

-- Recognizing that NGFW, NGIPS and UTM are not precise product
categories, offerings in this space commonly rely on statistical
models or AI techniques (e.g., machine learning) to realize the
capabilities in Tables 1 and 2 while improving detection rates and
reducing false positives.  If tuning is even possible, how should
these settings be configured?  And how should the training period be
handled in the steps of the test regime (e.g., in Section 4.3.4 or
Section 7.2.4)?
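
To make the training-period concern concrete, the following sketch
(hypothetical: names such as TestPhase and send_traffic are mine, not
the draft's) shows what an explicit, reportable warm-up/training step
in the procedure could look like:

   # Hypothetical sketch: make any ML "training period" an explicit,
   # reportable phase of the procedure rather than an unstated
   # precondition of the measurement.
   import time
   from dataclasses import dataclass

   @dataclass
   class TestPhase:
       name: str
       duration_s: int

   # The report would then state how long the DUT/SUT observed
   # representative traffic before measurement began.
   PHASES = [
       TestPhase("model-training/warm-up", 3600),  # tunable; reported
       TestPhase("sustain/measurement", 600),
   ]

   def run(send_traffic):
       for phase in PHASES:
           start = time.monotonic()
           while time.monotonic() - start < phase.duration_s:
               send_traffic(phase.name)
           print(f"completed phase: {phase.name} ({phase.duration_s}s)")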

** Appendix A.  The KPI measures don’t seem precise here: CVEs are
unlikely to be the measure seen on the wire.  Wouldn’t it be exploits
associated with a particular vulnerability (numbered via a CVE)?
There can be a one-to-many relationship between a vulnerability and
its exploits (e.g., multiple products affected by a single CVE, or
multiple implementations of a single exploit).
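
As a toy illustration (the mapping below is invented for the
example), counting CVEs and counting exploit samples yield different
denominators:

   # Invented mapping, for illustration only: one CVE can correspond
   # to several distinct exploit implementations/variants on the wire.
   cve_to_exploits = {
       "CVE-2021-44228": [
           "jndi-ldap-variant",
           "jndi-rmi-variant",
           "obfuscated-lookup-variant",
       ],
       "CVE-2017-0144": ["exploit-impl-a", "exploit-impl-b"],
   }

   cves = len(cve_to_exploits)                               # 2
   samples = sum(len(v) for v in cve_to_exploits.values())   # 5
   print(f"{cves} CVEs, but {samples} exploit samples on the wire")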


----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

** Abstract.  NGFW, NGIPS and UTM are fuzzy product categories.  Do
you want to define them somewhere?  How do they differ in
functionality?  UTM is mentioned here, but not again in the document.

** Section 1.
The requirements
   for network security element performance and effectiveness have
   increased tremendously since then.  In the eighteen years since
   [RFC3511] was published, recommending test methodology and
   terminology for firewalls, requirements and expectations for network
   security elements has increased tremendously.

I don’t follow how the intent of these two sentences is different.  Given the
other text in this paragraph, these sentences also appear redundant.

** Section 3. Per “This document focuses on advanced, …”, what makes a testing
method “advanced”?

** Section 4.2.  The abstract said that testing for NGFW, NGIPS and UTM would
be provided.  This section is silent on UTM.

** Section 4.2.  Should the following additional features be noted as a feature
of NGFWs and NGIPS (Tables 1 and 2)?

-- reconnaissance detection

-- geolocation or network topology-based classification/filtering

** Section 4.2. Thanks for the capability taxonomies described here.
Should it be noted that “Tables 1 and 2 are approximate taxonomies of
features commonly found in currently deployed NGFW and NGIPS.  The
features provided by specific implementations may be named
differently and may not have configuration settings that align to the
taxonomy.”?
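
For example (the vendor-style names below are invented), a test
report might need to carry an explicit mapping such as:

   # Invented vendor feature names mapped to the Table 1/2 taxonomy;
   # testers would publish such a mapping alongside the results.
   FEATURE_MAP = {
       "Acme.SpywareShield": "Anti-Spyware",
       "Acme.FileScan":      "Anti-Virus",
       "Acme.URLCategories": "Web Filtering",
       "Acme.FloodGuard":    "DDoS Protection",
   }

   for vendor_name, taxonomy_name in FEATURE_MAP.items():
       print(f"{vendor_name} was tested as {taxonomy_name}")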

** Table 1.  Is there a reason that DPI and Anti-Evasion (listed in
Table 2 for NGIPS) are not mentioned here (for NGFW)?  I don’t see
how many (all?) of the features listed as RECOMMENDED could be
realized without them.

** Table 3.  For Anti-Botnet, should it read “detects and blocks”?

** Table 3.  For Web Filtering, is this scoped to be classification and threat
detection by URI?

** Table 3.  This table is missing descriptions for DoS (from Table
1) and for DPI and Anti-Evasion (from Table 2).

** Section 4.2.  Per “Logging SHOULD be enabled.”  How does this
“SHOULD” align with “logging and reporting” being RECOMMENDED in
Tables 1 and 2?  The same question applies to “Application
Identification and Control SHOULD be configured”.

** Section 4.3.1.1.  Why is such well-formed and well-behaved traffic assumed
for a security device?
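
For instance, here is a sketch of not-well-formed traffic using
scapy (assuming it is installed and the test host can send raw
packets; the DUT address is a placeholder):

   # Sketch using scapy (assumed available); a security DUT/SUT will
   # see traffic like this in deployment, not only well-formed flows.
   from scapy.all import IP, TCP, fragment, send

   dut = "198.51.100.1"  # placeholder DUT address (TEST-NET-2)

   # TCP SYN with a deliberately bad checksum.
   bad_csum = IP(dst=dut) / TCP(dport=443, flags="S", chksum=0xDEAD)

   # Overlapping IP fragments of an oversized segment.
   frags = fragment(IP(dst=dut) / TCP(dport=443) / (b"A" * 4096),
                    fragsize=512)
   frags[1].frag = frags[0].frag  # force two fragments to overlap

   send([bad_csum] + frags, verbose=False)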

** Section 4.3.1.  What cipher suites should be used for TLS
1.3-based tests?  The text is prescriptive for TLS 1.2 (using a
RECOMMENDED) but for TLS 1.3 simply restates all of the cipher suites
registered by RFC 8446.
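
The five registered TLS 1.3 suites are easy to enumerate; the sketch
below (standard-library Python, placeholder host name) at least
records which one a test endpoint actually negotiates:

   # Sketch: record which of the five RFC 8446 (Appendix B.4) TLS 1.3
   # suites is negotiated, since the draft names no preference.
   import socket
   import ssl

   RFC8446_SUITES = {
       "TLS_AES_128_GCM_SHA256",
       "TLS_AES_256_GCM_SHA384",
       "TLS_CHACHA20_POLY1305_SHA256",
       "TLS_AES_128_CCM_SHA256",
       "TLS_AES_128_CCM_8_SHA256",
   }

   def negotiated_tls13_suite(host, port=443):
       ctx = ssl.create_default_context()
       ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # force TLS 1.3
       with socket.create_connection((host, port)) as raw:
           with ctx.wrap_socket(raw, server_hostname=host) as tls:
               name, _proto, _bits = tls.cipher()
               assert name in RFC8446_SUITES
               return name

   # e.g. print(negotiated_tls13_suite("tls-server.test.example"))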

** Section 9.  Given that the configurations of these tests will
include working exploits, it would be helpful to provide a reminder
of the need to control access to them.

** Section A.1.
In parallel, the CVEs will be sent to the DUT/SUT as
   encrypted and as well as clear text payload formats using a traffic
   generator.

This guidance doesn’t seem appropriate for all cases.  Couldn’t the
vulnerability being exploited involve a payload in the unencrypted
part of the exchange, or occur in a phase of the communication before
a secure channel is negotiated?

** Editorial nits
-- Section 1.  Editorial. s/for firewalls initially/for firewalls/

-- Section 5.  Typo. s/as test equipments/as test equipment/