RE: draft-so-yong-rtgwg-cl-framework - part 2 - diffs

Iftekhar Hussain <IHussain@infinera.com> Fri, 15 June 2012 20:55 UTC

Return-Path: <IHussain@infinera.com>
X-Original-To: rtgwg@ietfa.amsl.com
Delivered-To: rtgwg@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9038B21F845E for <rtgwg@ietfa.amsl.com>; Fri, 15 Jun 2012 13:55:15 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.599
X-Spam-Level:
X-Spam-Status: No, score=-4.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, GB_I_LETTER=-2]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id kws32l-l+FhZ for <rtgwg@ietfa.amsl.com>; Fri, 15 Jun 2012 13:55:13 -0700 (PDT)
Received: from sv-casht-prod1.infinera.com (sv-casht-prod1.infinera.com [8.4.225.24]) by ietfa.amsl.com (Postfix) with ESMTP id 8D94821F85C0 for <rtgwg@ietf.org>; Fri, 15 Jun 2012 13:55:13 -0700 (PDT)
Received: from SV-EXDB-PROD1.infinera.com ([fe80::dc68:4e20:6002:a8f9]) by sv-casht-prod1.infinera.com ([10.100.97.218]) with mapi id 14.02.0283.003; Fri, 15 Jun 2012 13:55:13 -0700
From: Iftekhar Hussain <IHussain@infinera.com>
To: "curtis@occnc.com" <curtis@occnc.com>
Subject: RE: draft-so-yong-rtgwg-cl-framework - part 2 - diffs
Thread-Topic: draft-so-yong-rtgwg-cl-framework - part 2 - diffs
Thread-Index: AQHNSo9GAbQNJ0Gto0Cljgw/PXPqb5b73LRA
Date: Fri, 15 Jun 2012 20:55:12 +0000
Message-ID: <D7D7AB44C06A2440B716F1F1F5E70AE534658F14@SV-EXDB-PROD1.infinera.com>
References: Your message of "Sun, 06 May 2012 14:14:03 EDT." <201205061814.q46IE3GQ099189@gateway.ipv6.occnc.com> <201206150038.q5F0coBt036155@gateway.ipv6.occnc.com>
In-Reply-To: <201206150038.q5F0coBt036155@gateway.ipv6.occnc.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-originating-ip: [10.100.96.93]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "rtgwg@ietf.org" <rtgwg@ietf.org>
X-BeenThere: rtgwg@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: Routing Area Working Group <rtgwg.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/rtgwg>, <mailto:rtgwg-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/rtgwg>
List-Post: <mailto:rtgwg@ietf.org>
List-Help: <mailto:rtgwg-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/rtgwg>, <mailto:rtgwg-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 15 Jun 2012 20:55:15 -0000

Curtis,

Thanks for the follow-up. These changes address my comments.

Regards,
Iftekhar
-----Original Message-----
From: Curtis Villamizar [mailto:curtis@occnc.com] 
Sent: Thursday, June 14, 2012 5:39 PM
To: curtis@occnc.com
Cc: Iftekhar Hussain; rtgwg@ietf.org
Subject: Re: draft-so-yong-rtgwg-cl-framework - part 2 - diffs


In a prior email I wrote:

  Diffs will be sent in a separate email - too long to attach.

Here they are:


Index: draft-so-yong-rtgwg-cl-framework.xml
===================================================================
RCS file: /home/cvs/CVS-occnc/customers/ietf/rtg-cl/draft-so-yong-rtgwg-cl-framework.xml,v
retrieving revision 1.10
diff -u -w -r1.10 draft-so-yong-rtgwg-cl-framework.xml
--- draft-so-yong-rtgwg-cl-framework.xml	7 Mar 2012 21:35:14 -0000	1.10
+++ draft-so-yong-rtgwg-cl-framework.xml	14 Jun 2012 20:46:17 -0000
@@ -61,23 +61,17 @@
 <?rfc inline="yes" ?>
 
 <rfc category="info" ipr="trust200902"
-     docName="draft-so-yong-rtgwg-cl-framework-05">
+     docName="draft-so-yong-rtgwg-cl-framework-06-preview1">
 
   <front>
     <title abbrev="Composite Link Framework">
       Composite Link Framework in Multi Protocol Label Switching (MPLS)</title>
 
     <author
-	    fullname="So Ning" initials="N." surname="So">
-      <organization>Verizon</organization>
+	    fullname="So Ning" initials="S." surname="Ning">
+      <organization>Tata Communications</organization>
       <address>
-        <postal>
-          <street>2400 N. Glenville Ave.</street>
-          <city>Richardson, TX</city>
-	  <code>75082</code>
-        </postal>
-        <phone>+1 972-729-7905</phone>
-        <email>ning.so@verizonbusiness.com</email>
+        <email>ning.so@tatacommunications.com</email>
       </address>
     </author>
 
@@ -107,12 +101,12 @@
       <organization>Huawei USA</organization>
       <address>
         <postal>
-          <street>1700 Alma Dr. Suite 500</street>
+          <street>5340 Legacy Dr.</street>
           <city>Plano, TX</city>
-	  <code>75075</code>
+	  <code>75025</code>
         </postal>
-        <phone>+1 469-229-5387</phone>
-        <email>lucyyong@huawei.com</email>
+        <phone>+1 469-277-5837</phone>
+        <email>lucy.yong@huawei.com</email>
       </address>
     </author>
 
@@ -380,7 +374,8 @@
 	described in greater detail than given requirements alone.
       </t>
 
-      <section title="Flow Identification">
+      <section anchor="sect.flow-id"
+	       title="Flow Identification">
 
 	<t>
 	  Traffic mapping to component links is a data plane
@@ -399,17 +394,18 @@
 	</t>
 
 	<t>
-	  Operator may have other objectives such as placing a
-	  bidirectional flow or LSP on the same component link in
-	  both direction, load balance over component links, composite
-	  link energy saving, and etc.  These new requirements are
-	  described in <xref target="I-D.ietf-rtgwg-cl-requirement"
-	  />.
+	  The network operator may have other objectives such as
+	  placing a bidirectional flow or LSP on the same component
+	  link in both directions, load balancing over component
+	  links, composite link energy saving, etc.  These new
+	  requirements are described in <xref
+	  target="I-D.ietf-rtgwg-cl-requirement" />.
 	</t>
 
 	<!--
-	  The feasibility of symetric paths for all flows is questionable!
-	  Perhaps the emphasis on this (mis)feature should be reduced.
+	  The feasibility of symmetric paths for all flows is
+	  questionable!  Perhaps the emphasis on this (mis)feature in
+	  the above paragraph should be reduced.
 	-->
 
 	<t>
@@ -474,29 +470,35 @@
 	  flow identification.  If some LSP may require strict packet
 	  ordering but those LSP cannot be distinguished from others,
 	  then only the top label can be used as a flow identifier.
-	  If only the top label is used, then there may not be
-	  adequate flow granularity to accomplish well balanced
-	  traffic distribution and it will not be possible to carry
-	  LSP that are larger than any individual component link.
+	  If only the top label is used (for example, as specified by
+	  <xref target="RFC4201" /> when the "all-ones" component
+	  described in <xref target="RFC4201" /> is not used), then
+	  there may not be adequate flow granularity to accomplish
+	  well balanced traffic distribution and it will not be
+	  possible to carry LSP that are larger than any individual
+	  component link.
 	</t>
 
 	<t>
 	  The number of flows can be extremely large.  This may be the
 	  case when the entire label stack is used and is always the
 	  case when IP addresses are used in provider networks
-	  carrying Internet traffic.  Current practice at the time of
-	  writing were documented in <xref target="RFC2991" /> and
-	  <xref target="RFC2992" />.  These practices as described,
-	  make use of IP addresses.  The common practices were
-	  extended to include the MPLS label stack and the common
-	  practice of looking at IP addresses within the MPLS payload.
-	  These extended practices are described in <xref
-	  target="RFC4385" /> and <xref target="RFC4928" /> due to
-	  their impact on pseudowires without a PWE3 Control Word.
+	  carrying Internet traffic.  Current practice for native IP
+	  load balancing at the time of writing was documented in
+	  <xref target="RFC2991" /> and <xref target="RFC2992" />.  These
+	  practices as described, make use of IP addresses.  The
+	  common practices were extended to include the MPLS label
+	  stack and the common practice of looking at IP addresses
+	  within the MPLS payload.  These extended practices are
+	  described in <xref target="RFC4385" /> and <xref
+	  target="RFC4928" /> due to their impact on pseudowires
+	  without a PWE3 Control Word.  Additional detail on current
+	  multipath practices can be found in the appendices of <xref
+	  target="I-D.symmvo-rtgwg-cl-use-cases" />.
 	</t>
 
 	<t>
-	  Using only the top label supports too course a traffic
+	  Using only the top label supports too coarse a traffic
 	  balance.  Using the full label stack or IP addresses as flow
 	  identification provides a sufficiently fine traffic balance,
 	  but is capable of identifying such a high number of distinct
@@ -506,9 +508,39 @@
 	  technique.  Other means of grouping flows may be possible.
 	</t>
 
+	<t>
+	  In summary:
+	  <list style="numbers">
+	    <t>
+	      Load balancing using only the MPLS label stack provides
+	      too coarse a granularity of load balance.
+	    </t>
+	    <t>
+	      Tracking every flow is not scalable due to the extremely
+	      large number of flows in provider networks.
+	    </t>
+	    <t>
+	      Existing techniques, IP source and destination hash in
+	      particular, have proven in over two decades of
+	      experience to be an excellent way of identifying groups
+	      of flows.
+	    </t>
+	    <t>
+	      If a better way to identify groups of flows is
+	      discovered, then that method can be used.
+	    </t>
+	    <t>
+	      IP address hashing is not required, but use of this
+	      technique is strongly encouraged given the technique's
+	      long history of successful deployment.
+	    </t>
+	  </list>
+	</t>
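The IP source/destination hash practice summarized above can be sketched roughly as follows (a minimal illustration, not from the draft; the hash function, group count, and names are assumptions):

```python
import zlib

NUM_FLOW_GROUPS = 256  # assumed bounded group count; keeps state small

def flow_group(src_ip: str, dst_ip: str) -> int:
    """Map an IP address pair to one of NUM_FLOW_GROUPS flow groups."""
    key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(key) % NUM_FLOW_GROUPS

# The same address pair always maps to the same group, preserving
# packet order within a flow while avoiding per-flow state.
g1 = flow_group("192.0.2.1", "198.51.100.7")
g2 = flow_group("192.0.2.1", "198.51.100.7")
assert g1 == g2 and 0 <= g1 < NUM_FLOW_GROUPS
```
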
+
       </section>
 
-      <section title="Composite Link in Control Plane">
+      <section anchor="sect.control-plane"
+	       title="Composite Link in Control Plane">
 
 	<t>
 	  A composite Link is advertised as a single logical interface
@@ -648,17 +680,18 @@
 	</t>
 
 	<t>
-	  Composite link capacity is aggregated capacity and MAY be
-	  larger than individual component link capacity.  Any
-	  aggregated LSP can determine a bounds on the largest
-	  microflow that could be carried and this constraint can be
-	  handled as follows.
+	  Composite link capacity is aggregated capacity.  LSP
+	  capacity MAY be larger than individual component link
+	  capacity.  Any aggregated LSP can determine a bound on the
+	  largest microflow that could be carried and this constraint
+	  can be handled as follows.
 	</t>
 
 	<t>
 	  <list style="numbers">
 	    <t>
-	      If no other information is available the largest
+	      If no information is available through signaling,
+	      management plane, or configuration, the largest
 	      microflow is bound by one of the following:
 	      <list style="letters">
 		<t>
@@ -770,24 +803,26 @@
 
       </section>
 
-      <section title="Composite Link in Data Plane">
+      <section anchor="sect.data-plane"
+	       title="Composite Link in Data Plane">
 
 	<t>
-	  The traffic over a composite link is distributed over
-	  individual component links.  Traffic distribution may be
-	  determined by or constrained by control plane or management
-	  plane.  Traffic distribution may be changed due to component
-	  link status change, subject to constraints imposed by either
-	  the management plane or control plane.  The distribution
-	  function is local to the routers in which a composite link
-	  belongs to and is not specified here.
-	  <!-- corouted LSP are coordinated by the PATH/RESV -
-	    However, if a bi-directional LSP is required to be placed
-	    on the same component link in both directions, the routers
-	    at both composite link end points need incorporation in
-	    determining the component link for the LSP. The protocol
-	    extension of that is for further study.
-	  -->
+	  The data plane must first identify groups of flows.  Flow
+	  identification is covered in <xref target="sect.flow-id" />.
+	  Having identified groups of flows, the groups must be placed
+	  on individual component links.  This second step is called
+	  traffic distribution or traffic placement.  The two steps
+	  together are known as traffic balancing or load balancing.
+	</t>
+
+	<t>
+	  Traffic distribution may be determined or constrained by
+	  the control plane or management plane.  Traffic distribution
+	  may be changed due to component link status change, subject
+	  to constraints imposed by either the management plane or
+	  control plane.  The distribution function is local to the
+	  routers to which a composite link belongs and is not
+	  specified here.
 	</t>
 
 	<t>
@@ -795,34 +830,38 @@
 	  differentiate multicast traffic vs. unicast traffic.
 	</t>
 
-	<!-- protection already covered -
-	A component link in a composite link may fail
-	independently. The routers at a composite link MUST maintain
-	each component link status. Two routers may use the control
-	plane to sync up a component link state. When a component link
-	fails, the routers of a composite link MUST re-assign impacted
-	flows to other active component links in minimal disruptive
-	manner. This is local function and do not incorporate with LSP
-	head-end routers.
-	-->
+	<t>
+	  In order to maintain scalability, existing data plane
+	  forwarding retains state associated with the top label only.
+	  The use of flow group identification is in a second step in
+	  the forwarding process.  Data plane forwarding makes use of
+	  the top label to select a composite link, or a group of
+	  components within a composite link, or, for the case where an
+	  LSP is pinned (see <xref target="RFC4201" />), a specific
+	  component link.  For those LSP for which the LSP selects
+	  only the composite link or a group of components within a
+	  composite link, the load balancing makes use of the flow
+	  group identification.
+	</t>
 
-	<!-- remove this - add to CL-req in management section
-	The composite link functions provide component link fault
-	notification and composite link fault notification. Component
-	link fault notification MUST be sent to the management
-	plane. Composite link fault notification MUST be sent to
-	management plane and distribute via link state message in IGP.
-	-->
+	<t>
+	  The most common traffic placement technique uses a flow
+	  group identification as an index into a table.  The table
+	  provides an indirection.  The number of bits of hash is
+	  constrained to keep table size small.  While this is not the
+	  best technique, it is the most common.  Better techniques
+	  exist but they are outside the scope of this document and
+	  some are considered proprietary.
+	</t>
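The table-indirection technique described above can be sketched as follows (component-link names, table size, and the fill policy are invented for illustration; real implementations differ):

```python
COMPONENT_LINKS = ["link-0", "link-1", "link-2"]
TABLE_BITS = 8  # bounded number of hash bits keeps the table small
TABLE_SIZE = 1 << TABLE_BITS

# Populate the indirection table, here simply round-robin over the
# component links; entries can later be rewritten to rebalance traffic
# without touching per-flow state.
table = [COMPONENT_LINKS[i % len(COMPONENT_LINKS)] for i in range(TABLE_SIZE)]

def select_component(flow_group_id: int) -> str:
    """Use the flow group identification as an index into the table."""
    return table[flow_group_id % TABLE_SIZE]
```

Rewriting a table entry moves every flow group that hashes to it, which is why the technique is coarse but simple.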
 
-	<!-- remove this - add to CL-req in management section
-	Operator may want to perform an optimization function such as
-	load balance or energy saving over a composite link, which may
-	conduct some traffic moving from one component link to
-	another. The process MUST support locally and gracefully
-	traffic movement process among component links. The protocol
-	that facilitates this process between two composite link end
-	points is for further study.
-	-->
+	<t>
+	  Requirements to limit frequency of load balancing can be
+	  adhered to by keeping track of when a flow group was last
+	  moved and imposing a minimum period before that flow group
+	  can be moved again.  This is straightforward for a table
+	  approach.  For other approaches it may be less
+	  straightforward but is achievable.
+	</t>
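For a table approach, the hold-down described above might look like this (a minimal sketch; the interval value and names are assumptions, and a real implementation would use a monotonic clock):

```python
MIN_MOVE_INTERVAL = 30.0  # seconds; assumed policy value
last_moved: dict[int, float] = {}

def try_move(flow_group_id: int, now: float) -> bool:
    """Allow a move only if the group's hold-down period has elapsed."""
    if now - last_moved.get(flow_group_id, float("-inf")) < MIN_MOVE_INTERVAL:
        return False  # too soon; leave the group on its current link
    last_moved[flow_group_id] = now
    return True

assert try_move(7, now=100.0) is True   # first move allowed
assert try_move(7, now=110.0) is False  # within hold-down period
assert try_move(7, now=140.0) is True   # period elapsed
```
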
 
       </section>
 
@@ -833,12 +872,20 @@
 
       <t>
 	Scalability and stability are critical considerations in
-	protocol design where protocols may be used in a large
-	network.  Composite Link is applicable to large networks, and
-	therefore scalability must be a major consideration.  Some of
-	the requirements of Composite Link require additional
-	information to be carried in situations where component links
-	differ in some significant way.
+	protocol design where protocols may be used in a large network
+	such as today's service provider networks.  Composite Link is
+	applicable to networks which are large enough to require that
+	traffic be split over multiple paths.  Scalability is a major
+	consideration for networks that reach a capacity large enough
+	to require Composite Link.
+      </t>
+
+      <t>
+	Some of the requirements of Composite Link could potentially
+	have a negative impact on scalability.  For example, Composite
+	Link requires additional information to be carried in
+	situations where component links differ in some significant
+	way.
       </t>
 
       <section anchor="sect.scalability"
@@ -851,15 +898,32 @@
 	  adequate to meet requirements.  Routing information is
 	  aggregated to reduce the amount of information exchange
 	  related to routing and to simplify route computation (see
-	  <xref target="sect.routing-tradeoff" />).  Reducing the
-	  amount of information allows the exchange of information
-	  during a large routing change to be accomplished more
-	  quickly, and simplifying route computation improves
-	  convergence time after very significant network faults which
-	  cannot be handled by preprovisioned or precomputed
-	  protection mechanisms.  Aggregating smaller LSP into larger
-	  LSP is a means to reduce RSVP-TE signaling (see <xref
-	  target="sect.signaling-tradeoff" />).
+	  <xref target="sect.routing-tradeoff" />).
+	</t>
+
+	<t>
+	  In an MPLS network large routing changes can occur when a
+	  single fault occurs.  For example, a single fault may impact
+	  a very large number of LSP traversing a given link.  As new
+	  LSP are signaled to avoid the fault, resources are consumed
+	  elsewhere, and routing protocol announcements must flood the
+	  resource changes.  If protection is in place, there is less
+	  urgency to converge quickly.  If multiple faults occur
+	  that are not covered by shared risk groups (SRG), then some
+	  protection may fail, adding urgency to rapid convergence
+	  even where protection was deployed.
+	</t>
+
+	<t>
+	  Reducing the amount of information allows the exchange of
+	  information during a large routing change to be accomplished
+	  more quickly and simplifies route computation.  Simplifying
+	  route computation improves convergence time after very
+	  significant network faults which cannot be handled by
+	  preprovisioned or precomputed protection mechanisms.
+	  Aggregating smaller LSP into larger LSP is a means to reduce
+	  path computation load and reduce RSVP-TE signaling (see
+	  <xref target="sect.signaling-tradeoff" />).
 	</t>
 
 	<t>
@@ -905,7 +969,7 @@
 	  sensible choices regarding the amount of change to link
 	  parameters that require link readvertisement.  For example,
 	  if delay measurements include queuing delay, then a much
-	  more course granularity of delay measurement would be called
+	  more coarse granularity of delay measurement would be called
 	  for than if the delay does not include queuing and is
 	  dominated by geographic delay (speed of light delay).
 	</t>
@@ -1133,7 +1197,7 @@
 	  <t>
 	    path symmetry requires extensions and is particularly
 	    challenging for very large LSP (see
-	    target="sect.path-symmetry" />),
+	    <xref target="sect.path-symmetry" />),
 	  </t>
 	  <t>
 	    accommodating a very wide range of requirements among
@@ -1142,7 +1206,7 @@
 	    reduce scalability if a large number of aggregates are
 	    used to provide a too fine a reflection of the
 	    requirements in the contained LSP (see
-	    target="sect.contained-lsp" />),
+	    <xref target="sect.contained-lsp" />),
 	  </t>
 	  <t>
 	    backwards compatibility is somewhat limited due to the
@@ -1150,13 +1214,13 @@
 	    provide too little information regarding their configured
 	    default behavior, and legacy LSP which provide too little
 	    information regarding their requirements (see
-	    target="sect.compat" />),
+	    <xref target="sect.compat" />),
 	  </t>
 	  <t>
 	    data plane challenges include those of accommodating very
 	    large LSP, large microflows, traffic ordering constraints
 	    imposed by a subset of LSP, and accounting for IP and LDP
-	    traffic (see target="sect.dp-challenge" />).
+	    traffic (see <xref target="sect.dp-challenge" />).
 	  </t>
 	</list>
       </t>
@@ -1391,59 +1455,14 @@
 	       title="Data Plane Challenges">
 
 	<t>
-	  Regardless of implementation choices, there are tradeoffs
-	  regarding the flow identification granularity.  If flow
-	  identification granularity is very course, such as using top
-	  lable only, LSP larger than the size of a component link are
-	  not feasible.  In practice using the MPLS label stack alone
-	  has proven too course to acheive a reasonably good load
-	  balance, due to bin-packing issues and discrpencies between
-	  signaled bandwidth and actual traffic loads on LSP.  If a
-	  finer granualrity is based on IP addresses, then a table
-	  approach is infeasible, due to the extremely large number of
-	  IP address pairs found in Internet traffic.  The preferred
-	  implementation has been to use a hash over the IP address
-	  pairs to provide a fine granuarity but with a feasible
-	  implementation.  Where hashing is used, the hash itself can
-	  be done at ingress and placed in a fat-pw label or entropy
-	  label (see <xref target="sect.entropy" />) to avoid
-	  performing the deeper MPLS header parse in the network core
-	  and the hash in the network core.
-	</t>
-
-	<t>
-	  Other implementations are possible, but must still face the
-	  problem that not using IP addresses provides a granularity
-	  which is too course and using IP host pairs yields a table
-	  size which is so impractical as to be considered an
-	  infeasible solution.  This section assumes that this problem
-	  is addressed, but makes no assumption as to the
-	  implementation.  Where a hash based solution is mentioned,
-	  such mention should be considered a reference approach, not
-	  a requirement.
-	</t>
-
-	<t>
-	  In order to maintain scalability, existing data plane
-	  forwarding retains state associated with the top label only.
-	  Data plane forwarding makes use of the top label to select a
-	  composite link, or a group of components within a composite
-	  link or for the case where an LSP is pinned (see <xref
-	  target="RFC4201" />), a specific component link.  For those
-	  LSP for which the LSP selects only the composite link or a
-	  group of components within a composite link, the load
-	  balancing may make use of the entire label stack and in some
-	  cases may make use of information in the payload, though no
-	  state on specific contained LSP is retained.
-	</t>
-
-	<t>
-	  Load balancing makes use of techniques which allow large
-	  sets of flows to be moved to rearrange traffic.  These large
-	  sets of flows may be at a finer granularity than contained
-	  LSP.  Requirements to limit frequency of load balancing
-	  rearrangement can be adhered to by constraining the
-	  frequency at which these large sets of flows are moved.
+	  Flow identification is briefly discussed in
+	  <xref target="sect.flow-id" />.
+	  Traffic distribution is briefly discussed in
+	  <xref target="sect.data-plane" />.
+
+	  This section discusses issues specific to particular
+	  requirements specified in
+	  <xref target="I-D.ietf-rtgwg-cl-requirement" />.
 	</t>
 
 	<section anchor="sect.large-lsp"
@@ -1453,8 +1472,18 @@
 	    Very large LSP may exceed the capacity of any single
 	    component of a composite link.  In some cases contained
 	    LSP may exceed the capacity of any single component.
-	    These LSP require the use of the equivalent of the
-	    all-ones component of a link bundle.
+	    These LSP may make use of the equivalent of the all-ones
+	    component of a link bundle, or may use a subset of
+	    components which meet the LSP requirements.
+	  </t>
+
+	  <t>
+	    Very large LSP can be accommodated as long as they can be
+	    subdivided (see <xref target="sect.large-flows" />).  A
+	    very large LSP cannot have a requirement for symmetric
+	    paths unless complex protocol extensions are proposed (see
+	    <xref target="sect.control-plane" /> and <xref
+	    target="sect.path-symmetry" />).
 	  </t>
 
 	</section>
@@ -1464,8 +1493,8 @@
 
 	  <t>
 	    Within a very large LSP there may be very large
-	    microflows, or very large flows which cannot be further
-	    subdivided for other reasons.  Flows which cannot be
+	    microflows.  A very large microflow is a very large flow
+	    which cannot be further subdivided.  Flows which cannot be
 	    subdivided must be no larger than the capacity of any
 	    single component.
 	  </t>
@@ -2122,8 +2151,8 @@
 	  <t>
 	    FR#11 explicitly calls for dynamic load balancing similar
 	    to existing adaptive multipath.  In implementations where
-	    flow identification uses a course granularity, the
-	    adjustments would have to be equally course, in the worst
+	    flow identification uses a coarse granularity, the
+	    adjustments would have to be equally coarse, in the worst
 	    case moving entire LSP.  The impact of flow identification
 	    granularity and potential adaptive multipath approaches
 	    may need to be documented in greater detail than provided
@@ -2506,9 +2535,22 @@
 
       <t>
 	Authors would like to thank Adrian Farrel, Fred Jounay, Yuji
-	Kamite for his extensive comments and suggestions, Ron Bonica,
-	Nabil Bitar, Eric Gray, Lou Berger, and Kireeti Kompella for
-	their reviews and great suggestions.
+	Kamite for their extensive comments and suggestions regarding
+	early versions of this document, Ron Bonica, Nabil Bitar,
+	Eric Gray, Lou Berger, and Kireeti Kompella for their reviews
+	of early versions and great suggestions.
+      </t>
+      <t>
+	Authors would like to thank Iftekhar Hussain for review and
+	suggestions regarding recent versions of this document.
+      </t>
+      <t>
+	In the interest of full disclosure of affiliation and in the
+	interest of acknowledging sponsorship, past affiliations of
+	authors are noted.  Much of the work done by Ning So occurred
+	while Ning was at Verizon.  Much of the work done by Curtis
+	Villamizar occurred while at Infinera.  Infinera continues to
+	sponsor this work on a consulting basis.
       </t>
 
     </section>