[bmwg] Comments on "draft-ietf-bmwg-firewall-05.txt"

Brian Ford <brford@cisco.com> Tue, 02 July 2002 01:50 UTC

To: Brooks Hickman, David Newman, Saldju Tadjudin, Terry Martin

From: Brian Ford, Consulting Engineer, Cisco Systems

Regarding: Comments on "draft-ietf-bmwg-firewall-05.txt"

---------------------------------

In section 4 of the draft, "Test Setup", you state:

>One interface of the firewall is attached to the unprotected
>    network, typically the public network(Internet). The other interface
>    is connected to the protected network, typically the internal LAN.

You stated "One interface of the firewall is attached to the unprotected 
network, typically the public network(Internet).".  Given that this draft 
addresses Firewall performance I would suggest that you limit the 
definition to protected and an unprotected networks.  Attempting to measure 
performance of Firewalls at the Internet edge where other devices (some of 
which you have technically described in the draft) are not used limits the 
usefulness of this draft.

>Tri-homed[1] configurations employ a third segment called a
>    Demilitarized Zone(DMZ). With tri-homed configurations, servers
>    accessible to the public network are attached to the DMZ. Tri-Homed
>    configurations offer additional security by separating server(s)
>    accessible to the public network from internal hosts.

I want to point out several problems with this statement.

OK, this is a nit-picking point.  There is no need to refer to a third 
(or additional) Firewall interface as a "DMZ".  In fact there is nothing 
"militarized" about a Firewall.  I'd suggest a better choice of wording 
would be "perimeter network".

You stated "servers accessible to the public network are attached to the 
DMZ"; when in fact a perimeter network is used to implement an often new 
security policy on an additional network segment.  Those servers don't have 
to be servicing the public network and could instead be servicing the 
inside network.

Further, you state "Tri-Homed configurations offer additional security by 
separating server(s) accessible to the public network from internal 
hosts."  I would suggest that it is the security policy implemented on the 
perimeter interface that offers the additional security, not the mere fact 
that the interface was installed and is in use.

In section 4.2 you discuss "Virtual Clients / Servers":

You explained that data sources may emulate multiple users and hosts, 
which your methodology refers to as virtual servers and clients.  You 
stated that the test report SHOULD indicate the number of virtual clients 
and servers, and that "Testers MUST synchronize all data sources 
participating in a test."  Could you elaborate on how the data sources 
"MUST" be synchronized?  What's the reasoning behind that "MUST"?

In section 4.3 Test Traffic Requirements you state:

>For the purposes of benchmarking firewall performance this document
>    references HTTP 1.1 or higher as the application layer entity,
>    although the methodologies may be used as a template for
>    benchmarking with other applications.

Elsewhere in the document you stated that many different types of Firewalls 
could potentially be under test; yet you only call for HTTP to be used to 
develop a performance metric.

Another issue with HTTP testing is that it limits the type of rules that 
can be implemented and tested to the HTTP application only.

Why not include FTP transfers of several fixed-size files?  Why limit the 
specification to just HTTP?
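
As a sketch of what a fixed-size FTP retrieval test might look like 
(the host, credentials, and file name are placeholders of mine, using 
only the Python standard library):

    import time
    from ftplib import FTP

    def timed_retrieval(host, filename, user="anonymous", passwd="test@"):
        # Retrieve one fixed-size file through the DUT / SUT and
        # return goodput in bytes per second.
        chunks = []
        ftp = FTP(host)
        ftp.login(user, passwd)
        start = time.time()
        ftp.retrbinary("RETR " + filename, chunks.append)
        elapsed = time.time() - start
        ftp.quit()
        return sum(len(c) for c in chunks) / elapsed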

In section 4.5 Multiple Client/Server Testing:

You stated "Each virtual client MUST initiate connections in a round-robin 
fashion.".  But wouldn't this "round robin" behavior create a tailored 
stream of traffic?  Would'nt this type of testing better emulate a Firewall 
behind a load balancing device (when in fact the majority of Firewalls are 
installed without a load balancing front end)?  I'd suggest that the type 
of test described doesn't adequately reflect real world conditions and that 
other methods should be considered.

Also see RFC 2544, section 21, regarding bursty traffic as an alternative 
to steady-state (stream) traffic.
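
To illustrate the difference (a rough sketch; the virtual client 
addresses, counts, and burst parameters are placeholders of mine, and 
the source addresses would have to be assigned to the test host), 
round-robin initiation paces connections evenly across virtual clients, 
while a bursty alternative releases them in back-to-back groups 
separated by idle gaps:

    import itertools
    import socket
    import time

    CLIENTS = ["10.0.0.%d" % i for i in range(1, 5)]  # placeholders

    def round_robin(server, count, port=80):
        # Evenly paced: each virtual client takes the next connection
        # in turn, producing a smooth, tailored stream.
        for _, src in zip(range(count), itertools.cycle(CLIENTS)):
            s = socket.create_connection((server, port),
                                         source_address=(src, 0))
            s.close()

    def bursty(server, count, burst=16, gap=1.0, port=80):
        # RFC 2544 section 21 style: back-to-back bursts separated by
        # idle time, closer to what many real networks present.
        sent = 0
        while sent < count:
            for _ in range(min(burst, count - sent)):
                socket.create_connection((server, port)).close()
                sent += 1
            time.sleep(gap)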

In section 4.7 Rule Sets:

I thought you did a good job of defining rule set functionality.  I was 
surprised that you didn't come out strongly for or against zero- or 
single-rule-set tests.  I think they are irrelevant and turn otherwise 
interesting Firewall performance studies into a discussion of forwarding, 
though on some Firewalls it is worthwhile to test the default security 
policy.

I would like to see you go further in defining how many rules should be 
included in any test.  I'd also like to see you go further in defining 
locations where basic rule sets could be discovered.  See RFC 2544 
section 11.4.1 on Filter Addresses; I suggest that something like that is 
needed in this RFC.  For example, somewhere in the DUT / SUT there should 
probably be an RFC 1918 private address filter (you did earlier make the 
case for Internet connectivity).  There are plenty of recommendations 
about default rule sets from SANS and SecurityFocus, as well as from 
almost all Firewall vendors' sites.
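
The check such a filter implements is simple; as a sketch (interface 
handling aside, this is just the RFC 1918 address math, using the 
Python standard library):

    import ipaddress

    # RFC 1918 private address space; packets arriving on the
    # unprotected (Internet-facing) interface with these source
    # addresses should be dropped.
    RFC1918 = [ipaddress.ip_network(n)
               for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def should_drop(src_ip):
        # True if the source address is private and therefore bogus
        # when seen on the outside interface.
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in RFC1918)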

In section 4.8, in your discussion of authentication, you point out that 
"Authentication is usually performed by devices external to the firewall 
itself, such as an authentication server(s) and may add to the latency of 
the system."  But you did not go so far as to require that an external 
authentication source be used.  I think you should require the 
authentication database (at least) to be external to the Firewall SUT / DUT.

Not included in section 4 was any discussion of logging.  Reporting is 
discussed in section 5, but Syslog is not required.  At a minimum, Syslog 
MUST be supported and enabled on a DUT / SUT.  The Syslog server MUST be a 
separate device (so that Syslog is not recorded on the candidate 
Firewall).  Berkeley Standard Syslog (RFC 3164) should be used.
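
On the test side this is cheap to satisfy; a minimal collector on a 
machine separate from the DUT / SUT might look like this (a sketch; 
the log file name is a placeholder of mine, and binding UDP port 514 
requires privileges):

    import socket

    def collect_syslog(logfile="firewall-syslog.txt", port=514):
        # Minimal RFC 3164 collector: record every message the
        # candidate Firewall emits, on a box other than the Firewall.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        with open(logfile, "a") as out:
            while True:
                msg, (sender, _) = sock.recvfrom(4096)
                out.write("%s %s\n" % (sender,
                                       msg.decode("ascii", "replace")))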

The log file called for in section 5.1.2 would be of little use on an 
operational Firewall.  I think the test tool used to create the test 
environment might be better suited to produce the type of log file 
discussed here.

In section 5 the end condition for the tests seems to be anything more 
than zero packet loss.  I'd suggest that zero packet loss is one way of 
ending the test but is really only realistic for higher-end appliance 
Firewalls.  You may want to suggest that a defined amount of packet loss 
not exceeding some threshold (say 0.25, 0.5, or 1%) should be the end 
condition.
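
The arithmetic is simple.  With a 0.5% threshold (the thresholds above 
are my suggestions, not values from the draft):

    def within_loss_budget(offered, received, threshold=0.005):
        # End condition: the loss ratio must not exceed the threshold
        # (0.005 = 0.5%).  At 1,000,000 offered packets this allows up
        # to 5,000 lost packets before the iteration fails.
        return (offered - received) / offered <= threshold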

Also of interest is the DUT / SUT "overloaded behavior" as defined by 
Bradner in RFC 1242.  Can the DUT / SUT recover from an overload event?

Several times in section 5 it is stated:

>Between each iteration, it is RECOMMENDED that the tester issue a
>    TCP RST referencing all connections attempted for the previous
>    iteration, regardless of whether or not the connection attempt was
>    successful. The tester will wait for age time before continuing to
>    the next iteration.

In a network each of the individual connections would be terminated with 
an RST.  Why this single RST for all connections?  Shouldn't there be 
some section where each of the connections gets an individual RST?
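
For instance, the tester could tear each connection down on its own 
(a sketch assuming the Scapy packet library; addresses, ports, and 
sequence numbers below are placeholders):

    from scapy.all import IP, TCP, send  # assumes the scapy package

    def reset_connection(src, sport, dst, dport, seq):
        # One RST per connection, carrying that connection's own
        # sequence number, instead of one RST covering everything.
        send(IP(src=src, dst=dst) /
             TCP(sport=sport, dport=dport, flags="R", seq=seq),
             verbose=False)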

Might you want to test whether and how the Firewall deals with 
connections that don't close?  Shouldn't the Firewall apply a connection 
timer?  Shouldn't that be tested?
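
A simple probe of that timer might look like this (a sketch; the 
server address, idle period, and probe request are placeholders of 
mine):

    import socket
    import time

    def probe_connection_timer(server, idle_seconds, port=80):
        # Open a connection through the Firewall, leave it idle past
        # the suspected timeout, then see whether the Firewall still
        # passes data for it.
        s = socket.create_connection((server, port))
        time.sleep(idle_seconds)
        s.settimeout(5.0)
        try:
            s.sendall(b"GET / HTTP/1.1\r\nHost: test\r\n\r\n")
            return len(s.recv(1024)) > 0  # state survived the idle
        except OSError:
            return False  # the Firewall tore the state down
        finally:
            s.close()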

Check section 5.4.3, as there seems to be an incomplete section of text 
(a typo?) repeated under that heading number.

In section 5.11, Latency, you should note that Bradner, in RFC 1242's 
discussion of latency testing, states:

>Measurements should be
>                 taken for a spectrum of frame sizes without changing
>                 the device setup.

and require that the device setup not be changed during Firewall testing.
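
In methodology terms that means one pass over the frame sizes with no 
configuration changes in between; something like this (a sketch; 
measure_latency stands in for whatever per-size measurement the tester 
performs per RFC 2544):

    # RFC 2544 section 9 recommended Ethernet frame sizes.
    FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]

    def latency_sweep(measure_latency):
        # One device setup, a spectrum of frame sizes: never
        # reconfigure the DUT / SUT between measurements.
        return {size: measure_latency(size) for size in FRAME_SIZES}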

Liberty for All,

Brian

Brian Ford
Consulting Engineer
Corporate Consulting Engineering, Office of the Chief Technology Officer
Cisco Systems, Inc.
http://www.cisco.com
e-mail: brford@cisco.com


