WG Action: RECHARTER: Benchmarking Methodology (bmwg)

IESG Secretary <iesg-secretary@ietf.org> Mon, 01 November 2010 21:58 UTC

Return-Path: <wwwrun@core3.amsl.com>
X-Original-To: ietf-announce@ietf.org
Delivered-To: ietf-announce@core3.amsl.com
Received: by core3.amsl.com (Postfix, from userid 30) id 09D873A67B2; Mon, 1 Nov 2010 14:58:35 -0700 (PDT)
From: IESG Secretary <iesg-secretary@ietf.org>
To: IETF Announcement list <ietf-announce@ietf.org>
Subject: WG Action: RECHARTER: Benchmarking Methodology (bmwg)
Content-Type: text/plain; charset="utf-8"
Mime-Version: 1.0
Message-Id: <20101101215836.09D873A67B2@core3.amsl.com>
Date: Mon, 1 Nov 2010 14:58:36 -0700 (PDT)
Cc: acmorton@att.com, bmwg@ietf.org
X-BeenThere: ietf-announce@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: "IETF announcement list. No discussions." <ietf-announce.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/ietf-announce>, <mailto:ietf-announce-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/ietf-announce>
List-Post: <mailto:ietf-announce@ietf.org>
List-Help: <mailto:ietf-announce-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/ietf-announce>, <mailto:ietf-announce-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 01 Nov 2010 21:58:36 -0000

The Benchmarking Methodology (bmwg) working group in the Operations and
Management Area of the IETF has been rechartered.  For additional
information, please contact the Area Directors or the working group
Chairs.

Benchmarking Methodology (bmwg)
---------------------------------------------------
Current Status: Active Working Group

Chair:
  Al Morton <acmorton@att.com>

Operations and Management Area Directors:
  Ronald Bonica <rbonica@juniper.net>
  Dan Romascanu <dromasca@avaya.com>

Operations and Management Area Advisor:
  Ronald Bonica <rbonica@juniper.net>

Mailing Lists:
  Address:	bmwg@ietf.org
  To Subscribe:	bmwg-request@ietf.org
  Archive:	http://www.ietf.org/mail-archive/web/bmwg/

Description of Working Group:

The Benchmarking Methodology Working Group (BMWG) will continue to 
produce a series of recommendations concerning the key performance 
characteristics of internetworking technologies, or benchmarks for 
network devices, systems, and services. Taking a view of networking 
divided into planes, the scope of work includes benchmarks for the 
management, control, and forwarding planes.

Each recommendation will describe the class of equipment, system, or
service being addressed; discuss the performance characteristics that
are pertinent to that class; clearly identify a set of metrics that aid
in the description of those characteristics; specify the methodologies
required to collect said metrics; and lastly, present the requirements
for the common, unambiguous reporting of benchmarking results.

The set of relevant benchmarks will be developed with input from the 
community of users (e.g., network operators and testing organizations) 
and from those affected by the benchmarks when they are published 
(networking and test equipment manufacturers). When possible, the 
benchmarks and other terminology will be developed jointly with 
organizations that are willing to share their expertise. Joint review 
requirements for a specific work area will be included in the detailed 
description of the task, as listed below.

To better distinguish the BMWG from other measurement initiatives in the
IETF, the scope of the BMWG is limited to the characterization of 
implementations of various internetworking technologies
using controlled stimuli in a laboratory environment. Said differently,
the BMWG does not attempt to produce benchmarks for live, operational
networks. Moreover, the benchmarks produced by this WG shall strive to
be vendor independent or otherwise have universal applicability to a
given technology class.

Because the demands of a particular technology may vary from deployment
to deployment, a specific non-goal of the Working Group is to define
acceptance criteria or performance requirements.

An ongoing task is to provide a forum for discussion regarding the
advancement of measurements designed to provide insight into the
capabilities and operation of internetworking technology
implementations.

The BMWG will communicate with the operations community through 
organizations such as NANOG, RIPE, and APRICOT.

In addition to its current work plan, the BMWG is explicitly tasked to
develop benchmarks and methodologies for the following technologies:

* BGP Control-plane Convergence Methodology (Terminology is complete): 
With relevant performance characteristics identified, BMWG will prepare 
a Benchmarking Methodology Document with review from the Routing Area 
(e.g., the IDR working group and/or the RTG-DIR). The Benchmarking 
Methodology will be Last-Called in all the groups that previously 
provided input, including another round of network operator input during 
the last call. 

* SIP Networking Devices: Develop new terminology and methods to
characterize the key performance aspects of network devices using
SIP, including the signaling plane scale and service rates while
considering load conditions on both the signaling and media planes. This
work will be harmonized with related SIP performance metric definitions
prepared by the PMOL working group.

* Flow Export and Collection: Develop terminology and methods to
characterize the flow monitoring, export, and collection capabilities
of network devices. The goal is a methodology to assess the maximum IP
flow rate that a network device can sustain without losing any IP flow
information or compromising the accuracy of the information exported
on the IP flows, and to assess the forwarding-plane performance (if
the forwarding function is present) in the presence of flow monitoring.

* Data Center Bridging Devices:
Some key concepts from BMWG's past work are not meaningful when testing
switches that implement new IEEE specifications in the area of data 
center bridging. For example, throughput as defined in RFC 1242 cannot 
be measured when testing devices that implement three new IEEE
specifications: priority-based flow control (802.1Qbb); priority groups
(802.1Qaz); and congestion notification (802.1Qau).
Since devices that implement these new congestion-management
specifications should never drop frames, and since the metric of
throughput distinguishes between non-zero and zero drop rates, no
throughput measurement is possible using the existing methodology.
The current emphasis is on the Priority Flow Control aspects of
Data Center Bridging, and the work will include an investigation
into whether TRILL RBridges require any specific treatment in the 
methodology. This work will update RFC 2544 and exchange periodic 
Liaisons with the IEEE 802.1 DCB Task Group, especially at WG Last Call.
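The degeneracy described above can be sketched concretely. The code below is illustrative only (the `offer` trial hook and rates are hypothetical, not part of any BMWG methodology text): it runs an RFC 2544-style binary search for the highest zero-loss rate, then shows that a lossless PFC-style device, which pauses the sender instead of dropping frames, drives the search to line rate regardless of its actual forwarding capacity.

```python
# Hypothetical sketch of the RFC 2544 throughput binary search and why it
# breaks down for lossless (PFC-enabled) devices. Names are illustrative.

def throughput_search(offer, max_rate, resolution=0.001):
    """Binary-search the highest offered rate (as a fraction of max_rate)
    at which the device forwards every frame (zero loss, per RFC 1242)."""
    lo, hi = 0.0, 1.0
    best = 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        sent, received = offer(mid * max_rate)  # run one trial at this rate
        if received == sent:          # zero loss: this rate is sustainable
            best, lo = mid, mid
        else:                         # loss observed: back off
            hi = mid
    return best * max_rate

# A lossless device never drops frames; it applies back-pressure instead.
# Modeled as a trial that always reports zero loss, the search converges
# to (nearly) line rate and reveals nothing about forwarding capacity:
def lossless_trial(rate):
    sent = 1_000_000
    return sent, sent  # PFC pauses the sender rather than dropping

rate = throughput_search(lossless_trial, max_rate=10e9)  # ~line rate
```

By contrast, for a device that drops frames above its capacity, the same search converges to that capacity, which is why the metric works for conventional switches.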

* Content Aware Devices: 
New classes of network devices that operate above the IP layer of the 
network stack require a new methodology to perform adequate 
benchmarking.  Existing BMWG RFCs (RFC 2647 and RFC 3511) provide useful 
measurement and performance statistics, though they may not reflect the 
actual performance of the device when deployed in production networks.  
Operating within the limitations of the charter, namely black-box 
characterization in laboratory environments, the BMWG will develop a 
methodology that more closely relates the performance of these devices 
to performance in an operational setting. In order to confirm or 
identify key performance characteristics, BMWG will solicit input from 
operations groups such as NANOG, RIPE, and APRICOT.

* LDP Dataplane Convergence:
In order to identify key LDP convergence performance characteristics, 
BMWG will solicit input from operations groups such as NANOG, RIPE, and 
APRICOT. When relevant performance characteristics have been identified, 
BMWG will jointly prepare a Benchmarking Terminology Document with the 
Routing Area (e.g., the MPLS working group and/or the RTG-DIR), which 
would define metrics relevant to LDP convergence. The Benchmark 
definition document would be Last-Called in all the working groups that 
produced it, and solicit operator input during the last call. The work 
will then continue in BMWG to define the test methodology, with input 
and review from the aforementioned parties.

Goals and Milestones

Done      Expand the current Ethernet switch benchmarking methodology 
          draft to define the metrics and methodologies particular to 
          the general class of connectionless, LAN switches.
Done      Edit the LAN switch draft to reflect the input from BMWG. 
          Issue a new version of document for comment.  If appropriate, 
          ascertain consensus on whether to recommend the draft for 
          consideration as an RFC.
Done      Take controversial components of multicast draft to mailing 
          list for discussion.  Incorporate changes to draft and reissue 
          appropriately.
Done      Submit workplan for initiating work on Benchmarking 
          Methodology for LAN Switching Devices.
Done      Submit workplan for continuing work on the Terminology for 
          Cell/Call Benchmarking draft.
Done      Submit initial draft of Benchmarking Methodology for LAN 
          Switches.
Done      Submit Terminology for IP Multicast Benchmarking draft for AD 
          Review.
Done      Submit Benchmarking Terminology for Firewall Performance for 
          AD review
Done      Progress ATM benchmarking terminology draft to AD review.
Done      Submit Benchmarking Methodology for LAN Switching Devices 
          draft for AD review.
Done      Submit first draft of Firewall Benchmarking Methodology.
Done      First Draft of Terminology for FIB related Router Performance 
          Benchmarking.
Done      First Draft of Router Benchmarking Framework
Done      Progress Frame Relay benchmarking terminology draft to AD 
          review.
Done      Methodology for ATM Benchmarking for AD review.
Done      Terminology for ATM ABR Benchmarking for AD review.
Done      Terminology for FIB related Router Performance Benchmarking to 
          AD review.
Done      Firewall Benchmarking Methodology to AD Review
Done      First Draft of Methodology for FIB related Router Performance 
          Benchmarking.
Done      First draft Net Traffic Control Benchmarking Methodology.
Done      Methodology for IP Multicast Benchmarking to AD Review.
Done      Resource Reservation Benchmarking Terminology to AD Review
Done      First I-D on IPsec Device Benchmarking Terminology
Done      EGP Convergence Benchmarking Terminology to AD Review
Done      Resource Reservation Benchmarking Methodology to AD Review
Done      Net Traffic Control Benchmarking Terminology to AD Review
Done      IGP/Data-Plane Terminology I-D to AD Review
Done      IGP/Data-Plane Methodology and Considerations I-Ds to AD 
          Review
Done      Hash and Stuffing I-D to AD Review
Done      IPv6 Benchmarking Methodology to AD Review
Done      IPsec Device Benchmarking Terminology to IESG Review
Done      IPsec Device Benchmarking Methodology to IESG Review

Updated Milestones

Done  	  Terminology For Protection Benchmarking to AD Review 
Sep 2010  Networking Device Reset Benchmark (Updates RFC 2544) to IESG 
          Review 
Dec 2010  Methodology For Protection Benchmarking to IESG Review
Jun 2011  Terminology for SIP Device Benchmarking to IESG Review
Jun 2011  Methodology for SIP Device Benchmarking to IESG Review
Jul 2010  Basic BGP Convergence Benchmarking Methodology to IESG Review.

Feb 2011  Methodology for Flow Export and Collection Benchmarking to 
          IESG Review
Jun 2011  Methodology for Data Center Bridging Benchmarking to IESG 
          Review
Dec 2011  Terminology for Content Aware Device Benchmarking to IESG 
          Review
Dec 2011  Methodology for Content Aware Device Benchmarking to IESG 
          Review
Dec 2011  Terminology for LDP Convergence Benchmarking to IESG Review
Dec 2011  Methodology for LDP Convergence Benchmarking to IESG Review