Re: [bmwg] Second WGLC: draft-ietf-bmwg-dcbench-terminology and methodology

"MORTON, ALFRED C (AL)" <acmorton@att.com> Tue, 13 September 2016 19:09 UTC


Hi Lucien and Jacob,

Here are my comments on the methodology draft, below.

Al
(as a participant)



INTRODUCTION, paragraph 12:
OLD:

    Copyright (c) 2013 IETF Trust and the persons identified as the
    document authors.  All rights reserved.

NEW:

 AM Copyright (c) 2016 IETF Trust and the persons identified as the
    document authors.  All rights reserved.


Section 1., paragraph 1:
OLD:

    Traffic patterns in the data center are not uniform and are
    constantly changing. They are dictated by the nature and variety of
    applications utilized in the data center. It can be largely east-west
    traffic flows in one data center and north-south in another, while
    some may combine both. Traffic patterns can be bursty in nature and
    contain  many-to-one, many-to-many, or one-to-many flows. Each flow
    may also be small and latency sensitive or large and throughput
    sensitive while containing a mix of UDP and TCP traffic. All of which
    can coexist in a single cluster and flow through a single network
    device all at the same time. Benchmarking of network devices have
    long used RFC1242, RFC2432, RFC2544, RFC2889 and RFC3918. These
    benchmarks have largely been focused around various latency
    attributes and max throughput of the Device Under Test [DUT] being
    benchmarked. These standards are good at measuring theoretical max
    throughput, forwarding rates and latency under testing conditions
    however, they do not represent real traffic patterns that may affect
    these networking devices.

NEW:

    Traffic patterns in the data center are not uniform and are
    constantly changing. They are dictated by the nature and variety of
    applications utilized in the data center. It can be largely east-west
    traffic flows in one data center and north-south in another, while
    some may combine both. Traffic patterns can be bursty in nature and
    contain  many-to-one, many-to-many, or one-to-many flows. Each flow
    may also be small and latency sensitive or large and throughput
    sensitive while containing a mix of UDP and TCP traffic. All of which
    can coexist in a single cluster and flow through a single network
    device all at the same time. Benchmarking of network devices have
 AM long used RFC1242, RFC2432, RFC2544, RFC2889 and RFC3918. These <<<add [ref#s]
    benchmarks have largely been focused around various latency
 AM attributes and Throughput [2] of the Device Under Test (DUT) being
 AM benchmarked. These standards are good at measuring theoretical
 AM Throughput, forwarding rates and latency under testing conditions
    however, they do not represent real traffic patterns that may affect
    these networking devices.


Section 1.2., paragraph 4:
OLD:

    -Reporting Format

NEW:

 AM -Reporting Format: Additional interpretation of RFC2119 terms:


Section 1.2., paragraph 5:
OLD:

    MUST: minimum test for the scenario described

NEW:

 AM MUST: required metric or benchmark for the scenario described (minimum)


Section 1.2., paragraph 6:
OLD:

    SHOULD: recommended test for the scenario described

NEW:

 AM SHOULD or RECOMMENDED: strongly suggested metric for the scenario described


Section 1.2., paragraph 7:
OLD:

    MAY: ideal test for the scenario described

NEW:

 AM MAY: Comprehensive metric for the scenario described


Section 1.2., paragraph 8:
OLD:

    For each test methodology described, it is key to obtain
    repeatability of the results. The recommendation is to perform enough
    iterations of the given test to make sure the result is accurate,
    this is especially important for section 3) as the buffering testing
    has been historically the least reliable.

NEW:

 AM For each test methodology described, it is critical to obtain
 AM repeatability in the results. The recommendation is to perform enough
 AM iterations of the given test to make sure the result is consistent;
 AM this is especially important for section 3, as the buffering testing
    has historically been the least reliable.


Section 2.1, paragraph 1:
OLD:

    Provide at maximum rate test for the performance values for
    throughput, latency and jitter. It is meant to provide the tests to
    run and methodology to verify that a DUT is capable of forwarding
    packets at line rate under non-congested conditions.

NEW:

 AM Provide a maximum rate test for the performance values for
 AM Throughput, latency and jitter. It is meant to provide the tests to
 AM perform and methodology to verify that a DUT is capable of forwarding
    packets at line rate under non-congested conditions.


Section 2.2, paragraph 1:
OLD:

    A traffic generator SHOULD be connected to all ports on the DUT. Two
    tests MUST be conducted: a port-pair test [RFC 2544/3918 compliant]
    and also in a full mesh type of DUT test [RFC 2889/3918 compliant].

NEW:

    A traffic generator SHOULD be connected to all ports on the DUT. Two
 AM tests MUST be conducted: a port-pair test [RFC 2544/3918 section ?? compliant]
 AM and also a full mesh type of DUT test [RFC 2889/3918 section ?? compliant].


Section 2.2, paragraph 2:
OLD:

    For all tests, the percentage of traffic per port capacity sent MUST
    be 99.98% at most, with no PPM adjustment to ensure stressing the DUT
    in worst case conditions. Tests results at a lower rate MAY be
    provided for better understanding of performance increase in terms of
    latency and jitter when the rate is lower than 99.98%. The receiving
    rate of the traffic needs to be captured during this test in % of
    line rate.

NEW:

    For all tests, the percentage of traffic per port capacity sent MUST
    be 99.98% at most, with no PPM adjustment to ensure stressing the DUT
    in worst case conditions. Tests results at a lower rate MAY be
    provided for better understanding of performance increase in terms of
    latency and jitter when the rate is lower than 99.98%. The receiving
 AM rate of the traffic should be reported during this test in % of
    line rate.
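
(For illustration, a minimal Python sketch of the offered-load
arithmetic at 99.98% of port capacity; the 20-byte per-frame
overhead for the 8-byte preamble and 12-byte inter-frame gap is
standard Ethernet framing, and the function name is hypothetical.)

    # Frames per second offered at a fraction of Ethernet line rate.
    # Each frame occupies (frame_size + 20) bytes on the wire:
    # 8-byte preamble plus 12-byte minimum inter-frame gap.
    def offered_frame_rate(line_rate_bps, frame_size_bytes,
                           load_fraction=0.9998):
        wire_bits_per_frame = (frame_size_bytes + 20) * 8
        return line_rate_bps * load_fraction / wire_bits_per_frame

    # Example: 64-byte frames on a 10 Gb/s port -> ~14.88 Mpps
    print(offered_frame_rate(10e9, 64))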


Section 2.2, paragraph 3:
OLD:

    The test MUST provide the latency values for minimum, average and
    maximum, for the exact same iteration of the test.

NEW:

 AM The test MUST provide the statistics of minimum, average and
 AM maximum of the latency distribution, for the exact same iteration of the test.


Section 2.2, paragraph 4:
OLD:

    The test MUST provide the jitter values for minimum, average and
    maximum, for the exact same iteration of the test.

NEW:

 AM The test MUST provide the statistics of minimum, average and
 AM maximum of the jitter distribution, for the exact same iteration of the test.
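
(A small Python sketch of the per-trial summary implied here;
samples would be the latency or jitter values captured in one and
the same iteration, and all names are hypothetical.)

    from statistics import mean

    # Min/avg/max of one trial's latency (or jitter) distribution,
    # reported together so all three come from the same iteration.
    def summarize(samples):
        return {"min": min(samples),
                "avg": mean(samples),
                "max": max(samples)}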


Section 2.3, paragraph 2:
OLD:

    -physical layer calibration information as defined into (Placeholder
    for definitions draft)

NEW:

 AM -physical layer calibration information as defined in (Placeholder
 AM for definitions draft)   ????? ref to terms draft section ???


Section 2.3, paragraph 4:
OLD:

    -reading for throughput received in percentage of bandwidth, while
    sending 99.98% of port capacity on each port, across packet size from
    64 byte all the way to 9216. As guidance, an increment of 64 byte
    packet size between each iteration being ideal, a 256 byte and 512
    bytes being also often time used, the most common packets sizes order
    for the report is: 64b,128b,256b,512b,1024b,1518b,4096,8000,9216b.

NEW:

 AM -reading for Throughput received in percentage of bandwidth, while
 AM sending 99.98% of port capacity on each port, for each packet size from
 AM 64 bytes to 9216 bytes. As guidance, an increment of 64 byte
    packet size between each iteration being ideal, a 256 byte and 512
    bytes being also often time used, the most common packets sizes order
    for the report is: 64b,128b,256b,512b,1024b,1518b,4096,8000,9216b.


Section 2.3, paragraph 5:
OLD:

    The pattern for testing can be expressed using RFC 6985 [IMIX Genome:
    Specification of Variable Packet Sizes for Additional Testing]

NEW:

 AM For IMIX testing, the pattern for testing can be expressed using RFC 6985 [IMIX Genome:
 AM Specification of Variable Packet Sizes for Additional Testing] << add to refs!


Section 2.3, paragraph 6:
OLD:

    -throughput needs to be expressed in % of total transmitted frames
    -for packet drops, they MUST be expressed in packet count value and
    SHOULD be expressed in % of line rate

NEW:

    -throughput needs to be expressed in % of total transmitted frames

 AM -for packet drops, they MUST be expressed as a count of packets and
    SHOULD be expressed in % of line rate


Section 2.3, paragraph 10:
OLD:

    -The tests for throughput, latency and jitter MAY be conducted as
    individual independent events, with proper documentation in the
    report but SHOULD be conducted at the same time.

NEW:

 AM -The tests for Throughput, latency and jitter MAY be conducted as
 AM individual independent trials, with proper documentation in the
    report but SHOULD be conducted at the same time.


Section 3.2, paragraph 2:
OLD:

    The methodology for measuring buffering for a data-center switch is
    based on using known congestion of known fixed packet size along with
    maximum latency value measurements. The maximum latency will increase
    until the first packet drop occurs. At this point, the maximum
    latency value will remain constant. This is the point of inflexion of
    this maximum latency change to a constant value. There MUST be
    multiple ingress ports receiving known amount of frames at a known
    fixed size, destined for the same egress port in order to create a
    known congestion event. The total amount of packets sent from the
    oversubscribed port minus one, multiplied by the packet size
    represents the maximum port buffer size at the measured inflexion
    point.

NEW:

    The methodology for measuring buffering for a data-center switch is
    based on using known congestion of known fixed packet size along with
    maximum latency value measurements. The maximum latency will increase
    until the first packet drop occurs. At this point, the maximum
    latency value will remain constant. This is the point of inflexion of
    this maximum latency change to a constant value. There MUST be
    multiple ingress ports receiving known amount of frames at a known
    fixed size, destined for the same egress port in order to create a
 AM known congestion condition. The total amount of packets sent from the
    oversubscribed port minus one, multiplied by the packet size
    represents the maximum port buffer size at the measured inflexion
    point.
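
(For illustration, a rough Python sketch of the calculation this
paragraph describes; locating the inflexion point and counting the
sent packets are stand-ins for the actual measurement.)

    # Maximum latency grows until the first drop, then stays flat;
    # buffer size = (packets sent from the oversubscribed port up
    # to that inflexion point, minus one) * packet size.
    def inflexion_index(max_latencies):
        for i in range(1, len(max_latencies)):
            if max_latencies[i] <= max_latencies[i - 1]:
                return i
        return None

    def buffer_size_bytes(packets_sent_at_inflexion, packet_size):
        return (packets_sent_at_inflexion - 1) * packet_size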


Section 3.2, paragraph 4:
OLD:

    First iteration: ingress port 1 sending line rate to egress port 2,
    while port 3 sending a known low amount of over subscription traffic
    (1% recommended) with a packet size of 64 bytes to egress port 2.
    Measure the buffer size value of the number of frames sent from the
    port sending the oversubscribed traffic up to the inflexion point
    multiplied by the frame size.

NEW:

    First iteration: ingress port 1 sending line rate to egress port 2,
 AM while port 3 sending a known low amount of over-subscription traffic
    (1% recommended) with a packet size of 64 bytes to egress port 2.
    Measure the buffer size value of the number of frames sent from the
    port sending the oversubscribed traffic up to the inflexion point
    multiplied by the frame size.


Section 3.2, paragraph 5:
OLD:

    Second iteration: ingress port 1 sending line rate to egress port 2,
    while port 3 sending a known low amount of over subscription traffic
    (1% recommended) with same packet size 65 bytes to egress port 2.
    Measure the buffer size value of the number of frames sent from the
    port sending the oversubscribed traffic up to the inflexion point
    multiplied by the frame size.

NEW:

    Second iteration: ingress port 1 sending line rate to egress port 2,
 AM while port 3 sending a known low amount of over-subscription traffic
    (1% recommended) with same packet size 65 bytes to egress port 2.
    Measure the buffer size value of the number of frames sent from the
    port sending the oversubscribed traffic up to the inflexion point
    multiplied by the frame size.


Section 3.2, paragraph 6:
OLD:

    Last iteration: ingress port 1 sending line rate to egress port 2,
    while port 3 sending a known low amount of over subscription traffic
    (1% recommended) with same packet size B bytes to egress port 2.
    Measure the buffer size value of the number of frames sent from the
    port sending the oversubscribed traffic up to the inflexion point
    multiplied by the frame size..

NEW:

 AM Continuing iterations: ingress port 1 sending line rate to egress port 2,
 AM while port 3 sending a known low amount of over-subscription traffic
    (1% recommended) with same packet size B bytes to egress port 2.
    Measure the buffer size value of the number of frames sent from the
    port sending the oversubscribed traffic up to the inflexion point
 AM multiplied by the frame size.


Section 3.2, paragraph 7:
OLD:

    When the B value is found to provide the highest buffer size, this is
    the highest buffer efficiency

NEW:

 AM When the B value is found to provide the largest buffer size, then
 AM size B allows the highest buffer efficiency.
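
(A minimal sketch of the packet-size sweep, assuming a hypothetical
measure_buffer(size) hook that runs one iteration of the procedure
above and returns the measured buffer in bytes.)

    # Sweep candidate sizes (64, 65, ... B) and keep the size that
    # yields the largest measured buffer, i.e. the best efficiency.
    def find_best_packet_size(sizes, measure_buffer):
        results = {b: measure_buffer(b) for b in sizes}
        best = max(results, key=results.get)
        return best, results[best]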


Section 3.2, paragraph 9:
OLD:

    At fixed packet size B determined in 3.2.1, for a fixed default COS
    value of 0 and for unicast traffic proceed with the following:

NEW:

 AM At fixed packet size B determined in procedure 1), for a fixed default DSCP ??
    value of 0 and for unicast traffic proceed with the following:


Section 3.2, paragraph 10:
OLD:

    First iteration: ingress port 1 sending line rate to egress port 2,
    while port 3 sending a known low amount of over subscription traffic
    (1% recommended) with same packet size to the egress port 2. Measure
    the buffer size value by multiplying the number of extra frames sent
    by the frame size.

NEW:

    First iteration: ingress port 1 sending line rate to egress port 2,
 AM while port 3 sending a known low amount of over-subscription traffic
    (1% recommended) with same packet size to the egress port 2. Measure
    the buffer size value by multiplying the number of extra frames sent
    by the frame size.


Section 3.2, paragraph 11:
OLD:

    Second iteration:  ingress port 2 sending line rate to egress port 3,
    while port 4 sending a known low amount of over subscription traffic
    (1% recommended) with same packet size to the egress port 3. Measure
    the buffer size value by multiplying the number of extra frames sent
    by the frame size.

NEW:

    Second iteration:  ingress port 2 sending line rate to egress port 3,
 AM while port 4 sending a known low amount of over-subscription traffic
    (1% recommended) with same packet size to the egress port 3. Measure
    the buffer size value by multiplying the number of extra frames sent
    by the frame size.


Section 3.2, paragraph 12:
OLD:

    Last iteration: ingress port N-2 sending line rate traffic to egress
    port N-1, while port N sending a known low amount of over
    subscription traffic (1% recommended) with same packet size to the
    egress port N Measure the buffer size value by multiplying the number
    of extra frames sent by the frame size.

NEW:

    Last iteration: ingress port N-2 sending line rate traffic to egress
 AM port N-1, while port N sending a known low amount of over-
    subscription traffic (1% recommended) with same packet size to the
    egress port N Measure the buffer size value by multiplying the number
    of extra frames sent by the frame size.


Section 3.2, paragraph 13:
OLD:

    This test series MAY be repeated using all different COS values of
    traffic and then using Multicast type of traffic, in order to find if
    there is any COS impact on the buffer size.

NEW:

 AM This test series MAY be repeated using all different DSCP? values of
    traffic and then using Multicast type of traffic, in order to find if
 AM there is any DSCP? impact on the buffer size.


Section 3.2, paragraph 18:
OLD:

    This test series MAY be repeated using all different COS values of
    traffic and then using Multicast type of traffic.

NEW:

 AM This test series MAY be repeated using all different DSCP? values of
    traffic and then using Multicast type of traffic.


Section 3.2, paragraph 23:
OLD:

    This test series MAY be repeated using all different COS values of
    traffic and then using Multicast type of traffic.

NEW:

 AM This test series MAY be repeated using all different DSCP? values of
    traffic and then using Multicast type of traffic.


Section 3.2, paragraph 25:
OLD:

    Also the COS value for the packets SHOULD be provided for each test
    iteration as the buffer allocation size MAY differ per COS value. It
    is RECOMMENDED that the ingress and egress ports are varied in a
    random, but documented fashion in multiple tests to measure the
    buffer size for each port of the DUT.

NEW:

 AM Also the DSCP? value for the packets SHOULD be provided for each test
    iteration as the buffer allocation size MAY differ per COS value. It
    is RECOMMENDED that the ingress and egress ports are varied in a
    random, but documented fashion in multiple tests to measure the
    buffer size for each port of the DUT.


Section 3.3, paragraph 2:
OLD:

     - The packet size used for the most efficient buffer used, along
    with COS value

NEW:

     - The packet size used for the most efficient buffer used, along
 AM with DSCP? value


Section 3.3, paragraph 6:
OLD:

     - The amount of over subscription if different than 1%

NEW:

 AM  - The amount of over-subscription if different than 1%


Section 4.2, paragraph 1:
OLD:

    A traffic generator MUST be connected to all ports on the DUT. In
    order to cause congestion, two or more ingress ports MUST bursts
    packets destined for the same egress port. The simplest of the setups
    would be two ingress ports and one egress port (2-to-1).

NEW:

    A traffic generator MUST be connected to all ports on the DUT. In
 AM order to cause congestion, two or more ingress ports MUST send bursts of
    packets destined for the same egress port. The simplest of the setups
    would be two ingress ports and one egress port (2-to-1).


Section 4.2, paragraph 2:
OLD:

    The burst MUST be measure with an intensity of 100%, meaning the
    burst of packets will be sent with a minimum inter-packet gap. The
    amount of packet contained in the burst will be variable and increase
    until there is a non-zero packet loss measured. The aggregate amount
    of packets from all the senders will be used to calculate the maximum
    amount of microburst the DUT can sustain.

NEW:

 AM The burst MUST be sent with an intensity of 100%, meaning the
    burst of packets will be sent with a minimum inter-packet gap. The
 AM amount of packets contained in the burst will be a trial variable, increased
    until there is a non-zero packet loss measured. The aggregate amount
    of packets from all the senders will be used to calculate the maximum
    amount of microburst the DUT can sustain.
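
(For illustration only: a Python sketch of the search this
paragraph implies, with send_burst(n) a hypothetical hook that
sends an n-packet burst from every sender at 100% intensity and
returns the measured packet-loss count.)

    # Grow the burst until the first non-zero loss; the largest
    # zero-loss burst, aggregated across all senders, is the
    # maximum microburst the DUT sustains.
    def max_microburst_packets(num_senders, send_burst, step=1):
        burst, last_zero_loss = step, 0
        while send_burst(burst) == 0:
            last_zero_loss = burst * num_senders
            burst += step
        return last_zero_loss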


Section 4.3, paragraph 2:
OLD:

     - The maximum value of packets received per ingress port with the
    maximum burst size obtained with zero packet loss

NEW:

 AM  - The maximum number of packets received per ingress port with the
    maximum burst size obtained with zero packet loss


Section 4.3, paragraph 5:
OLD:

     - The repeatability of the test needs to be indicated: number of
    iteration of the same test and percentage of variation between
    results (min, max, avg)

NEW:

     - The repeatability of the test needs to be indicated: number of
 AM iterations of the same test and percentage of variation between
    results (min, max, avg)


Section 5.1, paragraph 1:
OLD:

    Head-of-line blocking (HOL blocking) is a performance-limiting
    phenomenon that occurs when packets are held-up by the first packet
    ahead waiting to be transmitted to a different output port. This is
    defined in RFC 2889 section 5.5. Congestion Control. This section
    expands on RFC 2889 in the context of Data Center Benchmarking
    The objective of this test is to understand the DUT behavior under
    head of line blocking scenario and measure the packet loss.

NEW:

    Head-of-line blocking (HOL blocking) is a performance-limiting
    phenomenon that occurs when packets are held-up by the first packet
    ahead waiting to be transmitted to a different output port. This is
 AM defined in RFC 2889 section 5.5, Congestion Control. This section
    expands on RFC 2889 in the context of Data Center Benchmarking

 AM The objective of this test is to understand the DUT behavior under a
    head of line blocking scenario and measure the packet loss.


Section 5.2, paragraph 1:
OLD:

    In order to cause congestion, head of line blocking, groups of four
    ports are used. A group has 2 ingress and 2 egress ports. The first
    ingress port MUST have two flows configured each going to a different
    egress port. The second ingress port will congest the second egress
    port by sending line rate. The goal is to measure if there is loss
    for the first egress port which is not not oversubscribed.

NEW:

 AM In order to cause congestion in the form of head of line blocking, groups of four
    ports are used. A group has 2 ingress and 2 egress ports. The first
    ingress port MUST have two flows configured each going to a different
    egress port. The second ingress port will congest the second egress
    port by sending line rate. The goal is to measure if there is loss
 AM on the flow for the first egress port which is not oversubscribed.


Section 5.2, paragraph 3:
OLD:

    2) Measure with N/4 groups with N DUT ports

    First iteration: Expand to fully utilize all the DUT ports in
    increments of four. Repeat the methodology of 1) with all the group
    of ports possible to achieve on the device and measure for each port
    group the amount of traffic loss.

NEW:

    2) Measure with N/4 groups with N DUT ports
 AM QUESTION: Is the traffic from ingress split across 4 egress ports (25%)??
    First iteration: Expand to fully utilize all the DUT ports in
    increments of four. Repeat the methodology of 1) with all the group
    of ports possible to achieve on the device and measure for each port
    group the amount of traffic loss.
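
(A small sketch of the port grouping, assuming ports are numbered
1..N and N is a multiple of four, matching the 2-ingress/2-egress
groups in the setup above.)

    # Partition N DUT ports into N/4 HOLB groups of
    # two ingress and two egress ports each.
    def holb_groups(n_ports):
        for p in range(1, n_ports, 4):
            yield {"ingress": (p, p + 1), "egress": (p + 2, p + 3)}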


Section 5.3, paragraph 3:
OLD:

    - If HOLB was observed

NEW:

 AM - If HOLB was observed (Need to say what measurement supports this conclusion)


Section 6.1, paragraph 1:
OLD:

    The objective of this test is to measure the effect of TCP Goodput
    and latency with a mix of large and small flows. The test is designed
    to simulate a mixed environment of stateful flows that require high
    rates of goodput and stateless flows that require low latency.

NEW:

 AM The objective of this test is to measure the values for TCP Goodput
    and latency with a mix of large and small flows. The test is designed
    to simulate a mixed environment of stateful flows that require high
    rates of goodput and stateless flows that require low latency.
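
(For reference, a one-line Python sketch of the goodput metric as
application-layer bytes delivered per unit time, retransmissions
excluded; the variable names are hypothetical.)

    # TCP goodput in bits per second from application bytes received.
    def goodput_bps(app_bytes_received, duration_s):
        return app_bytes_received * 8 / duration_s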


Section 6.2, paragraph 8:
OLD:

    First Iteration: 1 Ingress port receiving stateful TCP traffic and 1
    Ingress port receiving stateless traffic destined to 1 Egress Ports

NEW:

    First Iteration: 1 Ingress port receiving stateful TCP traffic and 1
 AM Ingress port receiving stateless traffic destined to 1 Egress Port


Section 6.2, paragraph 9:
OLD:

    Second Iteration: 2 Ingress port receiving stateful TCP traffic and 1
    Ingress port receiving stateless traffic destined to 1 Egress Ports

NEW:

 AM Second Iteration: 2 Ingress ports receiving stateful TCP traffic and 1
 AM Ingress port receiving stateless traffic destined to 1 Egress Port


Section 6.2, paragraph 10:
OLD:

    Last Iteration: N-2 Ingress port receiving stateful TCP traffic and 1
    Ingress port receiving stateless traffic destined to 1 Egress Ports

NEW:

 AM Last Iteration: N-2 Ingress ports receiving stateful TCP traffic and 1
 AM Ingress port receiving stateless traffic destined to 1 Egress Port


Section 6.2, paragraph 12:
OLD:

    During Iterations number of Egress ports MAY vary as well. First
    Iteration: 1 Ingress port receiving stateful TCP traffic and 1
    Ingress port receiving stateless traffic destined to 1 Egress Ports

NEW:

    During Iterations number of Egress ports MAY vary as well. First
    Iteration: 1 Ingress port receiving stateful TCP traffic and 1
 AM Ingress port receiving stateless traffic destined to 1 Egress Port


Section 6.2, paragraph 13:
OLD:

    Second Iteration: 1 Ingress port receiving stateful TCP traffic and 2
    Ingress port receiving stateless traffic destined to 1 Egress Ports

NEW:

    Second Iteration: 1 Ingress port receiving stateful TCP traffic and 2
 AM Ingress ports receiving stateless traffic destined to 1 Egress Port


Section 6.2, paragraph 14:
OLD:

    Last Iteration: 1 Ingress port receiving stateful TCP traffic and N-2
    Ingress port receiving stateless traffic destined to 1 Egress Ports

NEW:

    Last Iteration: 1 Ingress port receiving stateful TCP traffic and N-2
 AM Ingress ports receiving stateless traffic destined to 1 Egress Port


Section 6.3, paragraph 2:
OLD:

    - Number of ingress and egress ports along with designation of
    stateful or stateless.

NEW:

    - Number of ingress and egress ports along with designation of
 AM stateful or stateless flow assignment.


Section 6.3, paragraph 3:
OLD:

    - Stateful goodput

NEW:

 AM - Stateful flow goodput


Section 6.3, paragraph 4:
OLD:

    - Stateless latency

NEW:

 AM - Stateless flow latency


Section 7.2., paragraph 3:
OLD:

    [5]  Stopp D. and Hickman B., "Methodology for IP Multicast
          Benchmarking", BCP 26, RFC 3918, October 2004.

 7.3.  URL References

NEW:

    [5]  Stopp D. and Hickman B., "Methodology for IP Multicast
 AM       Benchmarking", RFC 3918, October 2004.


Section 7.2., paragraph 4:
OLD:

    [6]  Yanpei Chen, Rean Griffith, Junda Liu, Randy H. Katz, Anthony D.
          Joseph, "Understanding TCP Incast Throughput Collapse in
          Datacenter Networks",
          http://www.eecs.berkeley.edu/~ychen2/professional/TCPIncastWREN2009.pdf".

NEW:

 AM (7.3 heading removed)
    [6]  Yanpei Chen, Rean Griffith, Junda Liu, Randy H. Katz, Anthony D.
          Joseph, "Understanding TCP Incast Throughput Collapse in
          Datacenter Networks",
          http://www.eecs.berkeley.edu/~ychen2/professional/TCPIncastWREN2009.pdf".


> -----Original Message-----
> From: bmwg [mailto:bmwg-bounces@ietf.org] On Behalf Of MORTON, ALFRED C
> (AL)
> Sent: Tuesday, September 13, 2016 2:43 PM
> To: bmwg@ietf.org
> Subject: [bmwg] Second WGLC: draft-ietf-bmwg-dcbench-terminology and
> methodology
>
> BMWG:
>
> A WG Last Call period for the Internet-Drafts on
> Data Center Benchmarking Terminology and Methodology:
>
> https://tools.ietf.org/html/draft-ietf-bmwg-dcbench-terminology-05
> https://tools.ietf.org/html/draft-ietf-bmwg-dcbench-methodology-02
>
> will be open from 13 September 2016 through 27 September 2016.
>
> The first WGLC closed on 8 April 2016 with comments.
>
> These drafts are continuing the BMWG Last Call Process. See
> http://www1.ietf.org/mail-archive/web/bmwg/current/msg00846.html
>
> Please read and express your opinion on whether or not these
> Internet-Drafts should be forwarded to the Area Directors for
> publication as Informational RFCs.  Send your comments
> to this list or to the co-chairs acmorton@att.com and
> sbanks@encrypted.net
>
> for the co-chairs,
> Al
>
> _______________________________________________
> bmwg mailing list
> bmwg@ietf.org
> https://www.ietf.org/mailman/listinfo/bmwg