[bmwg] WG Action: Rechartered Benchmarking Methodology (bmwg)

The IESG <iesg-secretary@ietf.org> Thu, 12 June 2014 18:00 UTC


The Benchmarking Methodology (bmwg) working group in the Operations and
Management Area of the IETF has been rechartered. For additional
information please contact the Area Directors or the WG Chairs.

Benchmarking Methodology (bmwg)
------------------------------------------------
Current Status: Active WG

Chairs:
  Sarah Banks <sbanks@akamai.com>
  Al Morton <acmorton@att.com>

Assigned Area Director:
  Joel Jaeggli <joelja@bogus.com>

Mailing list
  Address: bmwg@ietf.org
  To Subscribe: bmwg-request@ietf.org
  Archive: http://www.ietf.org/mail-archive/web/bmwg/

Charter:

The Benchmarking Methodology Working Group (BMWG) will continue to
produce a series of recommendations concerning the key performance
characteristics of internetworking technologies, or benchmarks for
network devices, systems, and services. Taking a view of networking
divided into planes, the scope of work includes benchmarks for the
management, control, and forwarding planes.

Each recommendation will describe the class of equipment, system, or
service being addressed; discuss the performance characteristics that
are pertinent to that class; clearly identify a set of metrics that aid
in the description of those characteristics; specify the methodologies
required to collect said metrics; and lastly, present the requirements
for the common, unambiguous reporting of benchmarking results.

The set of relevant benchmarks will be developed with input from the
community of users (e.g., network operators and testing organizations)
and from those affected by the benchmarks when they are published
(networking and test equipment manufacturers). When possible, the
benchmarks and other terminologies will be developed jointly with
organizations that are willing to share their expertise. Joint review
requirements for a specific work area will be included in the detailed
description of the task, as listed below.

To better distinguish the BMWG from other measurement initiatives in the
IETF, the scope of the BMWG is limited to the characterization of
implementations of various internetworking technologies
using controlled stimuli in a laboratory environment. Said differently,
the BMWG does not attempt to produce benchmarks for live, operational
networks. Moreover, the benchmarks produced by this WG shall strive to
be vendor independent or otherwise have universal applicability to a
given technology class.

Because the demands of a particular technology may vary from deployment
to deployment, a specific non-goal of the Working Group is to define
acceptance criteria or performance requirements.

An ongoing task is to provide a forum for the development and
advancement of measurements that provide insight into the
capabilities and operation of implementations of internetworking
technology.

Ideally, BMWG should communicate with the operations community 
through organizations such as NANOG, RIPE, and APRICOT.

The BMWG is explicitly tasked to develop benchmarks and methodologies 
for the following technologies:

BGP Control-plane Convergence Methodology (Terminology is complete):
With relevant performance characteristics identified, BMWG will prepare
a Benchmarking Methodology Document with review from the Routing Area
(e.g., the IDR working group and/or the RTG-DIR). The Benchmarking
Methodology will be Last-Called in all the groups that previously
provided input, including another round of network operator input during
the last call.

SIP Networking Devices: Develop new terminology and methods to
characterize the key performance aspects of network devices using
SIP, including the signaling plane scale and service rates while
considering load conditions on both the signaling and media planes. This
work will be harmonized with related SIP performance metric definitions
prepared by the PMOL working group.

Traffic Management: Develop the methods to characterize the capacity
of traffic management features in network devices, such as
classification, policing, shaping, and active queue management.
Existing terminology will be used where appropriate. Configured
operation will be verified as a part of the methodology. The goal is a
methodology to assess the maximum forwarding performance that a
network device can sustain without dropping or impairing packets, or
compromising the accuracy of multiple instances of traffic management
functions. This is the benchmark for comparison between devices.
Another goal is to devise methods that utilize flows with
congestion-aware transport as part of the traffic load and still
produce repeatable results in the isolated test environment.
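
The "maximum forwarding performance without dropping packets" benchmark
is conventionally found by a binary search over offered load, as in the
RFC 2544 throughput procedure. A minimal sketch follows; `run_trial` is
a hypothetical stand-in for driving a traffic generator and returns the
number of frames the device under test dropped at the offered rate.

```python
# Sketch of an RFC 2544-style binary search for zero-loss throughput.
# `run_trial` is an assumed callback: offered rate in -> frames lost out.

def zero_loss_throughput(run_trial, max_rate_pps, resolution_pps=100):
    """Binary-search the highest offered rate with zero frame loss."""
    low, high = 0.0, float(max_rate_pps)
    best = 0.0
    while high - low > resolution_pps:
        rate = (low + high) / 2
        if run_trial(rate) == 0:   # no loss observed: try a higher rate
            best, low = rate, rate
        else:                      # loss observed: back off
            high = rate
    return best

# Example with a simulated device that drops frames above 7.5 Mpps:
simulated = lambda rate: 0 if rate <= 7_500_000 else 1
print(round(zero_loss_throughput(simulated, 10_000_000)))  # -> 7500000
```

A real methodology would add trial duration, repeat counts, and frame
sizes per RFC 2544; the search loop above is only the core idea.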

IPv6 Neighbor Discovery: The large address space of IPv6 subnets
presents several networking challenges, as described in RFC 6583.
Indexes that describe the performance of network devices, such as the
number of reachable devices on a sub-network, are useful benchmarks
for the operations community. The BMWG will develop the necessary
terminology and methodologies to measure such benchmarks.
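
Quick arithmetic makes the address-space challenge concrete: a single
/64 subnet holds 2**64 interface identifiers, so exhaustive discovery
is infeasible and benchmarks must target bounded quantities instead.
The probe rate below is an illustrative assumption, not a measured
figure.

```python
# Arithmetic behind the "large address space" challenge (RFC 6583):
# one /64 subnet holds 2**64 possible interface identifiers.
addresses = 2 ** 64
probes_per_second = 1_000_000          # assumed probe rate
seconds_per_year = 365 * 24 * 3600
years = addresses / (probes_per_second * seconds_per_year)
print(f"{years:.0f} years to sweep one /64")  # roughly 585,000 years
```

This is why indexes like "number of reachable devices" are more useful
benchmarks than any exhaustive sweep of the subnet.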

In-Service Software Upgrade: Develop new methods and benchmarks to 
characterize the upgrade of network devices while in-service, 
considering both data and control plane operations and impacts. 
These devices are generally expected to maintain control plane session 
integrity, including routing connections. Quantification of upgrade 
impact will include packet loss measurement, and other forms of recovery 
behavior will be noted accordingly. The work will produce a definition 
of ISSU, which will help refine the scope. Liaisons will be established 
as needed.

Data Center Benchmarking: This work will define additional terms,
benchmarks, and methods applicable to data center performance
evaluations. This includes data-center-specific congestion scenarios,
switch buffer analysis, microbursts, and head-of-line blocking, while
also using a wide mix of traffic conditions. Some aspects of BMWG's
past work are not meaningful when testing switches that implement new
IEEE specifications in the area of data center bridging. For example,
throughput as defined in RFC 1242 cannot be measured when testing
devices that implement three new IEEE specifications: priority-based
flow control (802.1Qbb), priority groups (802.1Qaz), and congestion
notification (802.1Qau). This work will update RFC 1242, RFC 2544,
RFC 2889 (and other key RFCs), and exchange liaisons with relevant
SDOs, especially at WG Last Call.

VNF and Related Infrastructure Benchmarking: Benchmarking
methodologies have reliably characterized many physical devices. This
work item extends and enhances the methods to virtual network
functions (VNF) and their unique supporting infrastructure. A first
deliverable from this activity will be a document that considers the
new benchmarking space to ensure that common issues are recognized
from the start, using background materials from industry and SDOs
(e.g., IETF, ETSI NFV). Benchmarks for platform capacity and
performance characteristics of virtual routers, switches, and related
components will follow, including comparisons between physical and
virtual network functions. In many cases, the traditional benchmarks
should be applicable to VNFs, but the lab set-ups, configurations, and
measurement methods will likely need to be revised or enhanced.

Milestones:
  Jun 2014 - Basic BGP Convergence Benchmarking Methodology to IESG Review
  Jul 2014 - Terminology for SIP Device Benchmarking to IESG Review
  Jul 2014 - Methodology for SIP Device Benchmarking to IESG Review
  Aug 2014 - Draft on Traffic Management Benchmarking to IESG Review
  Dec 2014 - Draft on IPv6 Neighbor Discovery to IESG Review
  Mar 2015 - Draft on In-Service Software Upgrade Benchmarking to IESG Review
  Aug 2015 - Draft on VNF Benchmarking Considerations to IESG Review
  Dec 2015 - Drafts on Data Center Benchmarking to IESG Review