[Diffserv] PDB draft
Dan Grossman <dan@dma.isg.mot.com> Fri, 07 July 2000 19:05 UTC
To: diffserv@ietf.org
Date: Thu, 06 Jul 2000 18:35:04 -0400
From: Dan Grossman <dan@dma.isg.mot.com>
Subject: [Diffserv] PDB draft
List-Id: Diffserv Discussion List <diffserv.ietf.org>
I have reviewed the long-awaited PDB draft, and have numerous comments and questions.

In general, I found this draft to be very difficult to read. Is it that I'm dense, or did others find it excessively challenging? Authors should break up run-on sentences, remove redundant blocks of text, simplify some of their arguments, avoid belaboring examples and OMIT UNNECESSARY WORDS. By the time this goes out for last call, it needs to be rendered clear and concise. Otherwise, we will sustain this industry-wide confusion. Frankly, in my 'day job', I have had more than enough frustration with people who misinterpret Diffserv work product. They do this in part because the base documents are too hard to understand. This draft should clarify, rather than confuse things further.

The draft needs to do a better job of weaving existing concepts from RFC 2474, RFC 2475, the terminology draft, and the model draft into PDBs. This list has been a venue for frequent complaints about too many new terms. I think the problem really is too many interrelated concepts, some almost redundant and many of them too little used. For example, the draft keeps talking about "the rules" rather than applying our existing concepts of TCS and SLS, and about these amorphous groups of packets rather than traffic streams. The relationship between "behavior aggregate", "traffic aggregate" and "traffic stream" needs to be explicitly defined.

The early parts of the draft tend to have a bit too much advocacy. Various companies' marketing folks can do a fine job of producing purple prose, and don't need our help, thank you very much :-)

That said, comments inline:

Abstract

The diffserv WG has defined the general architecture for differentiated services (RFC 2475) and has been focused on the definition and standardization of the forwarding path behavior required in

>> EF and AF are not mandatory.
>> Try "definition and standardization of router forwarding path
>> behavior for differentiated services".

routers, known as "per-hop forwarding behaviors" (or PHBs) (RFCs 2474, 2597, and 2598). The differentiated services framework creates services within a network by applying rules at the network edges to create traffic aggregates and coupling these with a specific forwarding path treatment for the aggregate. The WG has also discussed the behavior required at diffserv network edges or boundaries for conditioning packet aggregates, such elements

>> all very good for now, but temporal and not particularly meaningful
>> to ultimate users of this memo. This should be reworked to talk about
>> Differentiated Services, rather than the diffserv working group.

as policers and shapers [MODEL, MIB]. A major feature of the diffserv architecture is that only the components applying the rules at the edge need to be changed in response to short-term changes in QoS goals in the network, rather than reconfiguring the interior behaviors.

>> This is somewhat misleading. Interior nodes still have to deal with
>> resource allocations (as in reconfiguring schedulers) for aggregates.

The next step for the WG is to formulate examples of how the forwarding path components (PHBs, classifiers, and traffic conditioners) can be used within the architectural framework to compose traffic aggregates whose packets experience specific forwarding characteristics as they transit a differentiated services domain. The WG has decided to use the term per-domain behavior, or PDB, to describe the behavior experienced by packets of a particular traffic aggregate as they cross a DS domain. PDBs can be used to characterize, by specific metrics, the treatment individual packets with a particular DSCP (or set of DSCPs) will receive as it crosses a DS domain. However, no microflow information should be required as packets transit a differentiated services network.
A PDB is an expression of a forwarding path treatment, but due to the role that particular choices of edge and PHB configuration play in its resulting attributes, it is where the forwarding path and the control plane interact.

This document defines and discusses Per Domain Behaviors in detail and lays out the format and required content for contributions to the Diffserv WG on PDBs and the rules that will be applied for individual PDB specifications to advance as WG products. This format is specified to expedite working group review of PDB submissions.

A pdf version of this document is available at: ftp://www.packetdesign.com/outgoing/ietf/pdb_def.pdf

>> This was useful and made review easier, but there is still the problem
>> of what happens post-RFC editor. In particular, the figures will be
>> difficult to render as ASCII art. Obviously a POISSON issue rather than
>> a Diffserv issue, but sure makes troubles for Diffserv.

Table of Contents

<snip>

1.0 Introduction

Differentiated Services allows an approach to IP QoS that is modular, high performance, incrementally deployable, and scalable [RFC2475].

>> QoS is not a noun, at least in technical documents. Try "... to
>> providing QoS objectives for IP".

>> sounds like a marketing glossy. Try: "that is believed to possess
>> scalability and deployment advantages in high-speed IP backbones."

Although an ultimate goal is interdomain quality of service, there remain many untaken steps on the road to achieving this goal. One essential step, the evolution of the business models for interdomain QoS, will necessarily develop outside of the IETF. A goal of the diffserv WG is to provide the firm technical foundation that allows these business models to develop.

>> This is true only when Diffserv is used by ISPs, and when the issue is
>> related to peering. There are other uses of Diffserv where two domains
>> might want to interconnect to provide service from the edge of one to the
>> edge of the other.
>> Nor should this draft appear to discourage early
>> deployment of multidomain services.

The Diffserv WG has finished the first phase of standardizing the behaviors required in the forwarding path of all network nodes, the per-hop forwarding behaviors or PHBs. The PHBs defined in RFCs 2474, 2597 and 2598 give a rich toolbox for differential packet handling.

>> xxxxxx ^an initial

A diffserv Conceptual Model [MODEL] describes a model of traffic conditioning and other forwarding behaviors. Although business models will have to evolve over time, there also remain technical issues in moving "beyond the box" to QoS models that apply within a single network domain. Providing QoS on a per-domain basis is useful in itself and will provide useful deployment experience for further IETF work as well as for the evolution of business models. The step of specifying forwarding path attributes on a per-domain basis for a traffic aggregate distinguished only by the mark in the DS field of individual packets is critical in the evolution of Diffserv QoS and should provide the technical input that will aid in the construction of business models. The ultimate goal of creating end to end QoS in the Internet imposes the requirement that we can create and quantify a behavior for a group of packets that is preserved when they are aggregated with other packets. This document defines and specifies the term "Per-Domain Behavior" or PDB to describe QoS attributes across a DS domain.

>> This paragraph is redundant and consists almost entirely of run-on sentences.
>> Intserv does claim to provide end-to-end QoS in the Internet.
>> Try "It is useful to specify the means for providing service level
>> specifications that apply at the boundaries of a single DS domain.
>> This is considered a stepping stone to future multi-domain deployments,
>> including providing any technical guidance for constructing business models.
>> Doing so within the Diffserv framework requires creation and quantification
>> of behaviors whose characteristics are preserved despite traffic aggregation
>> within the Diffserv domain. This memo specifies the structure of these
>> Per-Domain Behaviors (PDBs)."

In diffserv, rules are imposed on packets arriving at the boundary of a DS domain through use of classification and traffic conditioning which are set to reflect the policy and traffic goals for that domain.

>> "At the boundary of a DS-domain, packets are classified into traffic streams
>> and these traffic streams are conditioned according to a TCS. The TCS
>> reflects policy and traffic goals for the domain, as well as being part of
>> an SLS."

Once packets have crossed the DS boundary, adherence to diffserv principles makes it possible to group packets solely according to the behavior they receive at each hop. This approach has well-known scaling advantages, both in the forwarding path and in the control plane.

>> is believed to have scaling advantages

Less well recognized is that these scaling properties only result if the per-hop behavior definition gives rise to a particular type of invariance under aggregation.

>> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXT
>> behavior is invariant under aggregation.

Since the per-hop behavior must be equivalent for every node in the domain while the set of packets marked for that PHB may be different at every node, a PHB should be defined such that its defining characteristics don't depend on the volume of the associated BA on a router's ingress link nor on a particular path through the DS domain taken by the packets marked for it.

>> how does a BA have volume? Does this mean 'rate'?
>> Run-on sentence. Break it up.

If the properties of a PDB using a particular PHB hold regardless of how the marked aggregate mutates as it traverses the domain, then that PDB scales.
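The grouping the draft describes — packets aggregated solely by the DSCP/PHB they are marked for, with no microflow state — can be sketched as follows. This is an illustrative sketch, not from the draft; the DSCP-to-PHB table and the dict-based packet representation are my own assumptions:

```python
# Sketch: interior nodes group packets only by DSCP; several DSCPs may
# map to one PHB. Table values are illustrative, not normative.
from collections import defaultdict

DSCP_TO_PHB = {
    0b101110: "EF",       # RFC 2598
    0b001010: "AF11",     # RFC 2597
    0b001100: "AF12",
    0b000000: "Default",
}

def group_into_aggregates(packets):
    """Group packets into traffic aggregates keyed by PHB; unknown
    codepoints fall back to the default PHB."""
    aggregates = defaultdict(list)
    for pkt in packets:
        phb = DSCP_TO_PHB.get(pkt["dscp"], "Default")
        aggregates[phb].append(pkt)
    return dict(aggregates)

packets = [{"dscp": 0b101110}, {"dscp": 0b001010}, {"dscp": 0b111111}]
aggs = group_into_aggregates(packets)
```

Note that no per-flow fields (addresses, ports) appear anywhere in the grouping, which is exactly the scaling property the quoted text is after.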
If there are limits to where the properties hold, that translates to a limit on the size or topology of a DS domain that can use that PDB. Although useful single-link DS domains might exist, PDBs that are invariant with network size or that have simple relationships with network size and whose properties can be recovered by reapplying rules (that is, forming another diffserv boundary or edge to re-enforce the rules for the aggregate) are needed for building scalable end-to-end quality of service.

>> Run-on sentence.

There is a clear distinction between the definition of a Per-Domain Behavior in a DS domain and a service that might be specified in a Service Level Agreement.

>> RFC 2475 defines a service as: "the overall treatment of a defined subset
>> of a customer's traffic within a DS domain or end-to-end." In this sense,
>> a PDB is the service provided to a Traffic Stream within a DS domain (where
>> the TCS, PHBs and attributes are the overall treatment, and a traffic
>> stream entering the DS domain may (or may not) become part of a Traffic
>> aggregate). This is an important point that ought to be explicitly
>> mentioned. Especially since the outside world seems to be clamoring for
>> Diffserv to define services.

The PDB definition is a technical building block that couples rules, specific PHBs, and configurations with a resulting set of specific observable attributes which may be characterized in a variety of ways. These definitions are intended to be useful tools in configuring DS domains, but the PDB (or PDBs) used by a provider are not expected to be visible to customers any more than the specific PHBs employed in the provider's network would be.

>> I would expect that a customer would go to a service provider and ask
>> for a such-and-such PDB, with a particular TCS and particular attributes.
>> I would also expect that groups like 3GPP and ATM Forum will map services
>> onto PDBs.

Network providers are expected to select their own measures to make customer-visible in contracts and these may be stated quite differently from the technical attributes specified in a PDB definition. Similarly, specific PDBs are intended as tools for ISPs to construct differentiated services offerings; each may choose different sets of tools, or even develop their own, in order to achieve particular externally observable metrics.

>> So how are customers supposed to manage (and particularly validate) SLSs if
>> Diffserv is not going to define them and create MIB objects for monitoring
>> them? Are vendors supposed to have different monitoring tools for every ISP?

This document defines Differentiated Services Per-Domain Behaviors and specifies the format that must be used for submissions of particular PDBs to the Diffserv WG.

2.0 Definitions

The following definitions are stated in RFCs 2474 and 2475 and are repeated here for easy reference:

o Behavior Aggregate: a collection of packets with the same codepoint crossing a link in a particular direction. The terms "aggregate" and "behavior aggregate" are used interchangeably in this document.

o Differentiated Services Domain: a contiguous portion of the Internet over which a consistent set of differentiated services policies are administered in a coordinated fashion. A differentiated services domain can represent different administrative domains or autonomous systems, different trust regions, different network technologies (e.g., cell/frame), hosts and routers, etc. Also DS domain.

o Differentiated Services Boundary: the edge of a DS domain, where classifiers and traffic conditioners are likely to be deployed. A differentiated services boundary can be further sub-divided into ingress and egress nodes, where the ingress/egress nodes are the downstream/upstream nodes of a boundary link in a given traffic direction.
A differentiated services boundary typically is found at the ingress to the first-hop differentiated services-compliant router (or network node) that a host's packets traverse, or at the egress of the last-hop differentiated services-compliant router or network node that packets traverse before arriving at a host. This is sometimes referred to as the boundary at a leaf router. A differentiated services boundary may be co-located with a host, subject to local policy. Also DS boundary.

To these we add:

>> Horrors! Two new terms! Quick, call the new terms police! :-)

o Traffic Aggregate: a collection of packets with a codepoint that

>> ^ DS

maps to the same PHB, usually in a DS domain or some subset of a DS

>> (or PHB group)

domain. A traffic aggregate marked for the foo PHB is referred to as the "foo traffic aggregate" or the "foo aggregate" interchangeably.

o Per-Domain Behavior: the expected treatment that an identifiable or target group of packets will receive from "edge to edge" of a DS domain. (Also PDB.) A particular PHB (or, if applicable, list of PHBs) and traffic conditioning requirements are associated with each PDB.

>> why not "traffic stream"?

3.0 The Value of Defining Edge-to-Edge Behavior

>> The rationale material is redundant. It should either be removed or
>> moved to an appendix.

Networks of DS domains can be connected to create end-to-end services, but where DS domains are independently administered, the evolution of the necessary business agreements and future signaling arrangements will take some time. Early deployments will be within a single administrative domain. Specification of the transit expectations of packets matching a target for a particular diffserv behavior across a DS domain both assists in the deployment of single-domain QoS and will help enable the composition of end-to-end, cross domain services to proceed.
Putting aside the business issues, the same technical issues that arise in interconnecting DS domains with homogeneous administration will arise in interconnecting the autonomous systems (ASs) of the Internet.

Today's Internet is composed of multiple independently administered domains or Autonomous Systems (ASs), represented by the circles in figure 1. To deploy ubiquitous end-to-end quality of service in the Internet, business models must evolve that include issues of charging and reporting that are not in scope for the IETF.

>> ^ aren't accounting models in the scope of the AAA WG?

In the meantime, there are many possible uses of quality of service within an AS and the IETF can address the technical issues in creating an intradomain QoS within a Differentiated Services framework. In fact, this approach is quite amenable to incremental deployment strategies.

Figure 1: Interconnection of ASs and DS Domains

A single AS (for example, AS2 in figure 1) may be composed of subnetworks and, as the definition allows, these can be separate DS domains. For a number of reasons, it might be useful to have multiple DS domains in an AS, most notable being to follow topological and/or technological boundaries and to separate the allocation of resources. If we confine ourselves to the DS boundaries between these "interior" DS domains, we avoid the non-technical problems of setting up a service and can address the issues of creating characterizable PDBs.

The incentive structure for differentiated services is based on upstream domains ensuring their traffic conforms to agreed upon rules and downstream domains enforcing that conformance, thus metrics associated with PDBs can be sensibly computed.

>> ^ one or more TCS
>> . T

The rectangular boxes in figure 1 represent the DS boundary routers and thus would contain the traffic conditioners that ensure and enforce conformance (e.g., shapers and policers).
Although we expect that policers and shapers will be required at the DS boundaries of ASs (dark rectangles), they might appear anywhere, or nowhere, inside the AS. Thus, the boxes at the DS boundaries internal to the AS (shaded rectangles) may or may not condition traffic. Understanding a particular PDB's characteristics under aggregation and multiple hops will result in guidelines for the placement and configuration of DS boundaries.

This approach continues the separation of forwarding path and control plane described in RFC 2474.

>> where??

The forwarding path characteristics are addressed by considering what happens at every hop of a packet's path and what behaviors can be characterized under the merging and branching through multiple hops. The control plane only needs to be employed in the configuration of the DS boundaries.

>> Interior routers still need to be configured with necessary
>> reservations in order to ensure that traffic aggregates get the necessary
>> scheduling and buffering.

A PDB provides a link between the DS domain level at which control is exercised to form traffic aggregates with quality-of-service goals across the domain and the per-hop and per-link treatments packets receive that results in meeting the quality-of-service goals.

4.0 Understanding PDBs

4.1 Defining PDBs

RFCs 2474 and 2475 define a Differentiated Services Behavior Aggregate as "a collection of packets with the same DS codepoint crossing a link in a particular direction" and further state that packets with the same DSCP get the same per-hop forwarding treatment (or PHB) everywhere inside a single DS domain. Note that even if multiple DSCPs map to the same PHB, this must hold for each DSCP individually. In section 2 of this document, we

>> I thought that a DS-domain could have more than one instance of the
>> same PHB, each configured with different resources and thus having
>> different attributes. AF classes, as we've generalized them, are an
>> example of this.
introduced a more general definition of a traffic aggregate in the diffserv sense so that we might easily refer to the packets which are mapped to the same PHB everywhere within a DS domain.

>> Run-on sentence. The relationship between BAs and Traffic Aggregates
>> should be stated here.

Section 2 also presented a short definition of PDBs which we expand upon in this section:

>> this is a confusing way to construct a document. The definition
>> should be in one place. Why not start a new section here, called
>> "Characteristics of PDBs" or something like that.

Per-Domain Behavior: the expected treatment that an identifiable or target group of packets will receive from "edge to edge" of a DS domain. A particular PHB (or, if applicable, list of PHBs) and traffic conditioning requirements are associated with each PDB.

Measurable, quantifiable, attributes are associated with each PDB

>> I'm not convinced that this is where we've been going. Certainly,
>> if somebody were to try to turn the Olympic Service into a PDB, it
>> wouldn't have measurable or quantifiable attributes. At least not the
>> way that AF is designed.

and these can be used to describe what will happen to packets of that PDB as they cross the DS domain. These derive from the rules that are enforced during the entry of packets into the DS domain and the forwarding treatment (PHB) the packets get inside the domain. PDB attributes may be absolute or statistical and they may be parameterized by network properties. For example, a loss attribute might be expressed as "no more than 0.1% of packets will be dropped when measured over any time period larger than T", a delay attribute might be expressed as "50% of delivered packets will see less than a delay of d milliseconds, 30% will see a delay less than 2d ms, 20% will see a delay of less than 3d ms." A wide range of metrics is possible. Identification of the target group of packets is carried out using classification.
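A statistical attribute like the quoted delay example can be checked mechanically against measurements. Here is a minimal sketch, with the quoted 50%/30%/20% split restated in cumulative form (50% under d, 80% under 2d, 100% under 3d); the sample delay values are made up for illustration:

```python
# Sketch: verify a statistical delay attribute against measured
# per-packet delays. Thresholds are the cumulative form of the
# "50% < d, 30% < 2d, 20% < 3d" example quoted in the draft.

def meets_delay_attribute(delays_ms, d):
    """Return True if the delay sample satisfies the attribute."""
    n = len(delays_ms)
    frac_under = lambda bound: sum(1 for x in delays_ms if x < bound) / n
    return (frac_under(d) >= 0.50 and
            frac_under(2 * d) >= 0.80 and
            frac_under(3 * d) >= 1.00)

sample = [3, 4, 5, 6, 9, 11, 14, 19, 22, 27]   # hypothetical delays (ms)
ok = meets_delay_attribute(sample, d=10)
```

The point of the sketch is that such attributes are operationally testable, which is what distinguishes them from qualitative service descriptions.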
The Per-Domain Behavior applied to that group of packets is characterized in two parts: 1) the relationship between this target group of packets to the marked traffic aggregate which results from the application of rules (through the use of traffic conditioning) to the identified (classified) packets to create a traffic aggregate marked for the associated PHB (see figure 2) and 2) the attributes which result from the treatment experienced by packets from the same traffic aggregate transiting the interior of a DS domain, between and inside of DS boundaries.

>> I really struggled with this paragraph, especially the run-on clause
>> (1). What I think it is trying to say is:
>> Packets entering a DS-edge node are classified into traffic streams.
>> Traffic conditioners apply the TCS to the traffic stream, and (re)marks
>> it so that it can join a traffic aggregate.
>>
>> The performance attributes experienced by a traffic stream is affected
>> by two factors. These are:
>> 1) the action of the traffic conditioner on the traffic stream, including
>> discarding, shaping and/or marking of packets in non-conforming traffic
>> streams; and
>> 2) treatment experienced in DS-nodes in the interior of the DS-domain by
>> the traffic aggregate, including buffering and discarding due to contention
>> for resources with other traffic aggregates.

Figure 2: Relationship of the traffic aggregate associated with a PDB to arriving packets

The first part is more straightforward than the second, but might depend on the arriving traffic pattern as well as the configuration of the traffic conditioners. For example, if the EF PHB

>> I don't understand this first sentence. How do you mean 'straightforward'?
>> Is the real point of this that "The degree of conformance (or
>> non-conformance) of incoming traffic streams to the TCS determines the
>> degree to which performance attributes are affected by traffic
>> conditioning." In which case, most of this paragraph could be eliminated?
[RFC2598] and a strict policer of rate R are associated with the foo PDB, then the first part of characterizing the foo PDB is to write the relationship between the arriving target packets and the departing foo traffic aggregate. This would be formulated as the rate of the emerging foo traffic aggregate being less than or equal to the smaller of R and the arrival rate of the target group of packets, and additional temporal characteristics of the packets (e.g., burst) would be specified as desired. Thus, there is a "loss rate" that results to the original target group from sending too much traffic or the traffic with the wrong temporal characteristics that should be entirely preventable (or controllable) by the upstream sender conforming to the traffic conditioning associated with the PDB specification. A PDB might also apply traffic conditioning at egress at a DS boundary. This would be treated similarly to the ingress characteristics (the authors may develop more text on this in the future, but it does not materially affect the ideas presented in this document.) In section 4.3, we will revisit this discussion for PHB groups.

This aspect of "who is in control" of the loss (or demotion) rate helps to clearly delineate the first part of characterizing packet performance of a PDB from the second part.

>> So what you're really trying to say is that objectives apply only to
>> conforming traffic aggregates (or perhaps to an arbitrary portion of a
>> traffic aggregate equal to the TCS)?

Further, the relationship of the traffic aggregate to the arriving target packet group can usually be expressed more simply than the traffic aggregate's transit attributes and depends on different elements.

The second part is illustrated in figure 3 as the quantifiable metrics that can be used to characterize the transit of any packet of a particular traffic aggregate between any two edges of the DS domain boundary shown in figure 3, including those indicated with arrows.
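The strict policer in the EF example above might be realized as a token bucket, which makes the "emerging rate is at most min(R, arrival rate)" relationship concrete. A sketch under assumed units (bytes and seconds) and a made-up burst tolerance B — the quoted text specifies only the rate R:

```python
# Sketch: strict token-bucket policer. Packets exceeding rate R
# (with burst tolerance B) are dropped; the surviving aggregate's
# rate cannot exceed min(R, arrival rate). B is an assumption.

def police(arrivals, R, B):
    """arrivals: list of (time_s, size_bytes) in time order.
    R: policed rate in bytes/s; B: bucket depth in bytes.
    Returns the conforming (admitted) packets."""
    tokens, last_t = B, 0.0
    admitted = []
    for t, size in arrivals:
        tokens = min(B, tokens + (t - last_t) * R)   # replenish
        last_t = t
        if size <= tokens:
            tokens -= size
            admitted.append((t, size))               # conforms
        # else: dropped -- the "loss rate" the sender controls
    return admitted

# Hypothetical: two back-to-back 1500-byte packets, then one a second
# later, against R = 1500 B/s, B = 1500 B. The second packet is dropped.
out = police([(0.0, 1500), (0.0, 1500), (1.0, 1500)], R=1500, B=1500)
```

This also illustrates the "who is in control" point: a sender that shapes to (R, B) before the boundary sees no policer loss at all.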
Note that the DS domain boundary runs through the DS boundary routers since the traffic aggregate is generally formed in the boundary router before the packets are queued and scheduled for output. (In most cases, this distinction is expected to be important.)

Figure 3: Range of applicability of attributes of a traffic aggregate associated with a PDB

The traffic aggregate associated with a PDB is formed by the application of rules, through classification and traffic conditioning, to packets arriving at the DS boundary. Packets that conform to the rules are marked with a DSCP that maps to a particular PHB within a domain. DSCPs should not mutate in the interior of a DS domain as there are no rules being applied.

>> What is meant by "mutate"?

If it is necessary to reapply the kind of rules that could result in remarking, there should probably be a DS domain boundary at that point; an interior one that can have "lighter weight" rules. Thus, if measuring attributes between locations as indicated in figure 3, the DSCP at the egress side can be assumed to have held throughout the domain.

>> Why is this paragraph important?

Though a DS domain may be as small as a single node, more complex topologies are expected to be the norm, thus the PDB definition must hold as its traffic aggregate is split and merged on the interior links of a DS domain. Packet flow in a network is not part of the PDB definition; the application of rules as packets enter the DS domain and the consistent PHB through the DS domain must suffice. A PDB's definition does not have to hold for arbitrary topologies of networks, but the limits on the range of applicability for a specific PDB must be clearly specified.

>> How invariant is invariant? For example, if we take Anna and Jean-Yves' work
>> at face value, can we claim that kind of invariance for VW?

In general, though, a PDB operates between N ingress points and M egress points at the DS domain boundary.
Even in the degenerate case where N=M=1, PDB attributes are more complex than the definition of PHB attributes since the concatenation of the behavior of intermediate nodes affects the former. A complex case with N and M both greater than one involves splits and merges in the traffic path and is non-trivial to analyze. Analytic, simulation, and experimental work will all be necessary to understand even the simplest PDBs.

>> um... does this mean that we don't know that any of this will work? Did
>> somebody tell marketing? :-)

4.2 Constructing PDBs

A DS domain is configured to meet the network operator's traffic engineering goals for the domain independently of the performance goals for a particular flow of a traffic aggregate. Once the interior routers are configured for the number of distinct traffic aggregates that the network will handle, each PDB's allocation at the edge comes from meeting the desired performance goals for the PDB's traffic aggretae subject to that configuration of link schedulers and bandwidth.

>> aggregate

The rules at the edge may be altered by provisioning or admission control but the decision about which PDB to use and how to apply the rules comes from matching performance to goals.

>> run-on sentence
>> I don't quite understand how this paragraph fits here, how it fits together
>> or why it's not redundant.

For example, consider the diffserv domain of figure 3. A PDB with an attribute of an explicit bound on loss must have rules at the edge to ensure that on the average no more packets are admitted than can emerge. Though, queueing internal to the network may result in a difference between input and output traffic over some timescales, the averaging timescale should not exceed what might be expected for reasonably sized buffering inside the network.

>> X
>> input and output traffic rate
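The buffering argument above — average admitted rate matching the service rate is not enough if bursts exceed interior buffering — can be illustrated with a toy slotted-queue model. All numbers here are made up; the model is a sketch, not anything from the draft:

```python
# Sketch: a slotted FIFO queue with finite buffer. Two arrival
# patterns with the SAME average rate produce different loss,
# depending on burstiness relative to the buffer size.

def simulate_queue(arrivals, service_per_slot, buffer_limit):
    """arrivals: packets arriving in each time slot.
    Returns total packets lost to buffer overflow."""
    backlog, lost = 0, 0
    for a in arrivals:
        backlog += a
        if backlog > buffer_limit:          # burst exceeds buffering
            lost += backlog - buffer_limit
            backlog = buffer_limit
        backlog = max(0, backlog - service_per_slot)
    return lost

smooth = [2, 2, 2, 2]   # average 2 packets/slot, no burst
bursty = [8, 0, 0, 0]   # same average, one large burst

loss_smooth = simulate_queue(smooth, service_per_slot=2, buffer_limit=4)
loss_bursty = simulate_queue(bursty, service_per_slot=2, buffer_limit=4)
```

Both traces average 2 packets per slot, yet only the bursty one loses packets, which is why a loss-bounded PDB has to constrain burst at the edge and not just average rate.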
Thus if bursts are allowed to arrive into the interior of the network, there must be enough capacity to ensure that losses don't exceed the bound. Note that explicit bounds on the loss level can be particularly difficult as the exact way in which pack- ets merge inside the network affects the burstiness of the PDB's traffic aggregate and hence, loss. PHBs give explicit expressions of the treatment a traffic aggre- gate can expect at each hop. For a PDB, this behavior must apply > which behavior? to merging and diverging traffic aggregates, thus characterizing a > . T PDB requires exploring what happens to a PHB under aggrega- tion. Rules must be recursively applied to result in a known > Elsewhere in the draft, "rules" were edge rules, i.e., TCSs. Here it seems > to be interior rules, which the draft said elsewhere do not exist. behavior. As an example, since maximum burst sizes grow with the number of microflows or aggregate flows merged, a PDB specification must address this. A clear advantage of constructing behaviors that aggregate is the ease of concatenating PDBs so that > I thought PDB was a TYPE. Why would you concatenate different TYPES? > Do you mean Traffic Aggregates (INSTANCES of a PDB?) the associated traffi aggregate has known attributes that span inte- > c rior DS domains and, eventually, farther. For example, in figure 1 > run-on sentence. assume that we have configured the foo PDB on the interior DS domains of AS2. Then traffic aggregates associated with the foo PDB in each interior DS domain of AS2 can be merged at the shaded interior boundary routers. Using the same (or fewer) rules as were applied to create the traffic aggregates at the entrance to AS2, there should be confidence that the attributes of the foo PDB can continue to be used to quantify by the expected behav- >> does not parse unless you remove the word 'by' ior. 
Explicit expressions of what happens to the behavior under aggregation, possibly parameterized by node in-degrees or network diameters are necessary to determine what to do at the internal aggregation points. One approach might be to completely reapply the edge rules at these points. Another might employ
>> of course, the whole premise of Diffserv was supposed to be that TCBs
>> need only be present in the edges, avoiding the need for TCBs in high-speed
>> core nodes. So what we're doing here is modifying the goal to TCBs only
>> in some high speed core nodes.
some limited rate-based remarking only.

Multiple PDBs might use the same PHB. In the specification of a PDB, there might be a list of PHBs and their required configuration, all of which would result in the same characteristics. In
>> we don't have a current example of this, do we?
operation, though, it is expected that a single domain will use a single PHB to implement a particular PDB. A single PHB might beselected within a domain by a list of DSCPs.
>> ^
Multiple PDBs might use the same PHB in which case the transit performance of traffic aggregates of these PDBs will, of necessity, be the same.
>> Have we another TYPE/INSTANCE ambiguity here? For example, AF2x
>> and AF3x are two separate INSTANCES of the AF PHB group, but are
>> assigned different resources in each DS-node. If there are multiple
>> INSTANCES of a PHB, each with a different DSCP and different resources,
>> then the transit characteristics of each of these instances will differ.
>> If two PDBs use the same INSTANCE of a PHB, then the transit characteristics
>> of the two PDBs will be the same.
>> By the way, a similar type-instance relationship could apply to PDBs. For
>> example, a DS domain might support a very low jitter VW and a moderately low
>> jitter VW.
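On "a single PHB might be selected within a domain by a list of DSCPs": that selection amounts to a codepoint lookup table. A sketch, with the caveat that only the Default mapping is mandated by RFC 2474; the EF and AF1x entries use the recommended codepoints from RFC 2598 and RFC 2597, but the table as a whole is an illustrative, not a recommended, assignment:

```python
# Sketch: PHB selection by DSCP table lookup. Illustrative table only.

DSCP_TO_PHB = {
    0b000000: "Default",  # RFC 2474 default codepoint
    0b101110: "EF",       # RFC 2598 recommended codepoint
    0b001010: "AF11",     # RFC 2597: class 1, low drop precedence
    0b001100: "AF12",     # class 1, medium drop precedence
    0b001110: "AF13",     # class 1, high drop precedence
}

def phb_for(dscp):
    # codepoints with no local mapping fall back to Default, per RFC 2474
    return DSCP_TO_PHB.get(dscp, "Default")

print(phb_for(0b101110))  # EF
print(phb_for(0b110000))  # Default (CS6 has no mapping in this toy table)
```

Several PDBs could point at the same entry of such a table, which is exactly the "multiple PDBs, same PHB" case the draft describes.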
Yet, the particular characteristics that the PDB designer wishes to claim as attributes may vary, so two PDBs that use the same PHB might not be specified with the same list of attributes.

The specification of the transit expectations of behavior aggregates across domains both assists in the deployment of QoS
>> assists in offering
objectives within a DS domain and helps enable the composition of end-to-end, cross-domain services to proceed.
>> This last bit should be in the future tense.

4.3 PDBs using PHB Groups

When a set of related PDBs are defined using a PHB group, they should be defined in the same document. This would be particu-
>> Why did you choose to do it this way? It would make more sense to have
>> PDBs use a PHB or PHB group, and characterize the behavior of the PDB
>> according to whatever attribute differentiates the members of the PHB
>> group.
larly appropriate if the application of the edge rules that create the traffic aggregates associated with each PDB had some relationships and interdependencies, as one would expect for the AF PHB group [RFC2597]. Characterizing the traffic conditioning effects should then be described for these PDBs together. The transit attributes will depend on the PHB associated with the PDB and will not be the same for all PHBs in the group, thus each should have a clearly separate treatment, though there may be some parameterized interrelationship between the attributes of each of these PDBs.
>> run-on sentence

For example, if the traffic conditioner described in RFC 2698 is used to mark arriving packets for three different AF1x PHBs, then the most reasonable approach is to define and quantify the relationship between the arriving packets and the emerging traffic aggregates as they relate to one another. The transit characteristics of packets of each separate AF1x traffic aggregate should be described separately.
A set of PDBs might be defined using Class Selector Compliant PHBs [RFC2474] in such a way that the edge rules that create the traffic aggregates are not related, but the transit performance of each traffic aggregate has some parametric relationship to the other. If it makes sense to specify them in the same document, then the author(s) should do so.
>> wasn't class selector a PHB group?

4.4 Forwarding path vs. control plane

A PDB's associated PHB and edge traffic conditioners are in the packet forwarding path and operate at line rates while the configuration of the DS domain edge to enforce rules on who gets to use the PDB and how the PDB should behave temporally is done by
>> temporal behavior is also affected by configuration of interior nodes.
the control plane on a very different time scale. For example, con-
>> run-on sentence.
figuration of PHBs might only occur monthly or quarterly. The edge rules might be reconfigured at a few regular intervals during the day or might happen in response to signalling decisions thousands of times a day. Even at the shortest time scale, control plane actions are not expected to happen per-packet. Much of the control plane work is still evolving and is outside the charter of the Diffserv WG. We note that this is quite appropriate since the manner in which the configuration is done and the time scale at which it is done should not affect the PDB attributes.
>> assuming it's "done right". There is this hidden issue of resource
>> allocation in the interior of the DS domain and its relation to admission
>> control and policy. Clearly, the transit characteristics of a traffic
>> stream will not meet some of its objective attributes (or QoS objectives)
>> unless this configuration is done in a coordinated and timely fashion.

5.0 Format for Specification of Diffserv Per-Domain Behaviors

PDBs arise from a particular relationship between edge and interior (which may be parameterized). The quantifiable characteris-
>> between edge and interior what?
tics of a PDB must be independent of whether the network edge is configured statically or dynamically. The particular configuration of traffic conditioners at the DS domain edge is critical to how a PDB performs, but the act(s) of configuring the edge is a control
>> and the interior
plane action which can be separated from the specification of the PDB.

The following sections must be present in any specification of a Differentiated Services PDB. Of necessity, their length and content will vary greatly.

5.1 Applicability Statement

All PDB specs must have an applicability statement that outlines the intended use of this PDB and the limits to its use.

5.2 Rules

This section describes the rules to be followed in the creation of this PDB. Rules should be distinguished with "may", "must" and "should." The rules specify the edge behavior and configuration
>> ref RFC 2119
and the PHB (or PHBs) to be used and any additional requirements on their configuration beyond that contained in RFCs.
>> this is about TCSs and PHBs, isn't it? Can we create some kind of
>> tabular form for this?

5.3 Attributes

A PDB's attributes tell how it behaves under ideal conditions if configured in a specified manner (where the specification may be parameterized). These might include drop rate, throughput, delay bounds measured over some time period. They may be absolute bounds or statistical bounds (e.g., "90% of all packets measured over intervals of at least 5 minutes will cross the DS domain in less than 5 milliseconds"). A wide variety of characteristics may be used but they must be explicit, quantifiable, and defensible.
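A statistical attribute of the 90%/5-minute/5-millisecond form above can be checked mechanically against measured transit delays. A small sketch; the function name and the delay samples are made up for illustration:

```python
# Checking an attribute of the form "a fraction alpha of packets cross the
# DS domain in under D ms, measured over some interval". Values invented.

def meets_quantile_bound(delays_ms, alpha, bound_ms):
    # the attribute holds if at least a fraction alpha of samples beat the bound
    under = sum(1 for d in delays_ms if d < bound_ms)
    return under / len(delays_ms) >= alpha

sample = [1.2, 2.0, 3.1, 4.8, 4.9, 6.5, 2.2, 3.7, 1.9, 4.4]
print(meets_quantile_bound(sample, alpha=0.90, bound_ms=5.0))  # True: 9 of 10 under 5 ms
```

Note that whether such a check is statistically meaningful depends on the measurement interval and sample size, which is why the spec must say precisely how the statistic is measured.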
>> I presume that a PDB specification will not specify parameter values
>> but rather parameter definitions: e.g., "alpha-quantile delay bound D,
>> measured from the time that the last bit of each packet enters the DS
>> domain to the time the last bit exits the DS domain, over a sample
>> large enough to provide a gamma confidence interval", but not
>> "... and D MUST be 100 ms, alpha MUST be 1E-5 and gamma MUST be 0.8."

Where particular statistics are used, the document must be precise about how they are to be measured and about how the characteristics were derived.
>> This draft talks about QoS in various places. Is this a good place to
>> say something to the effect that the Attributes of a PHB are the QoS
>> objectives measured at the edges of the DS domain?

Advice to a network operator would be to use these as guidelines in creating a service specification rather than use them directly. For example, a "loss-free" PDB would probably not be sold as such, but rather as a service with a very small packet loss probability.

5.4 Parameters

The definition and characteristics of a PDB may be parameterized by network-specific features; for example, maximum number of hops, minimum bandwidth, total number of entry/exit points of the PDB to/from the diffserv network, maximum transit delay of network elements, minimum buffer size available for the PDB at a network node, etc.
>> There are two different kinds of parameters: those related to the
>> TCS, and those related to the internal configuration of the DS domain.
>> The former would be visible to the user of the PDB, and the latter only
>> to the network operator. Does this section mean the former or the latter
>> or both? Is there some kind of tabular form that we could create for this?

5.5 Assumptions

In most cases, PDBs will be specified assuming lossless links, no link failures, and relatively stable routing. This is reasonable since otherwise it would be very difficult to quantify behavior.
However, these assumptions must be clearly stated. Some PDBs may be developed without these assumptions, e.g., for high loss rate links, and these must also be made explicit. If additional restrictions, e.g., route pinning, are required, these must be stated.
>> Which doesn't exist except in MPLS.
Further, if any assumptions are made about the allocation of resources within a diffserv network in the creation of the PDB, these must be made explicit.

5.6 Example Uses

A PDB specification must give example uses to motivate the understanding of ways in which a diffserv network could make use of the PDB although these are not expected to be detailed. For example, "A bulk handling behavior aggregate may be used for all packets which should not take any resources from the network unless they would otherwise go unused. This might be useful for Netnews traffic or for traffic rejected from some other PDB due to violation of that PDB's rules."
>> How can traffic be rejected from a PDB due to a rules violation? The PDB
>> should explicitly state the treatment given to packets that violate the
>> TCS, and not just shift them to another PDB.

5.7 Environmental Concerns (media, topology, etc.)
>> how does the text in this section address media, topology etc?

Note that it is not necessary for a provider to expose which PDB (if a commonly defined one) is being used nor is it necessary for a provider to specify a service by the PDB's attributes. For exam-
>> Back to the problem of relationship between PDB and service. If
>> my understanding that the PDB is a service is correct, then this sentence
>> does not make much sense.
ple, a service provider might use a PDB with a "no queueing loss" characteristic in order to specify a "very low loss" service. This section is to inject realism into the characteristics described above. Detail the assumptions made there and what constraints that puts on topology or type of physical media or allocation.
6.0 PDB Attributes

Attributes are associated with each PDB: measurable, quantifiable, characteristics which can be used to describe what will happen to packets using that PDB as they cross the domain. These expectations result directly from the application of edge rules enforced during the creation of the PDB's traffic aggregate and/or its entry into the domain and the forwarding treatment (PHB) packets of that traffic aggregate get inside the domain. There are many ways in which traffic might be distributed, but creating a quantifiable, realizable service across the DS domain will limit the scenarios which can occur. There is a clear correlation between the strictness of the rules and the quality of the characterization of the PDB.

There are two ways to characterize PDBs with respect to time. First are its properties over "long" time periods, or average behaviors. In a PDB spec, these would be the rates or throughput seen over some specified time period. In addition, there are prop-
>> or sample space... the problem really being one of statistical significance
erties of "short" time behavior, usually expressed as the allowable burstiness in an aggregate. The short time behavior is important is
>> are we still talking attributes or are we talking about TCSs?
understanding the buffering (and associated loss characteristics) and in quantifying how packets using the PDB aggregate, either within a DS domain or at the boundaries. For short-time behavior,
>> This sentence doesn't parse. Also, these are characteristics visible
>> from inside the network -- in fact, they are probably characteristics of
>> PHBs rather than PDBs. They are not attributes of the PDB.
we are interested primarily in two things: 1) how many back-to-back packets of the PDB's traffic aggregate will we see at any point (this would be metered as a burst) and 2) how large a burst of packets of this PDB's traffic aggregate can appear in a queue at once (gives queue overflow and loss).
If other PDBs are using the same PHB within the domain, that must be taken into account.
>> It almost sounds like "attributes" needs to be split into a section
>> on attributes of the PDB and another on network and traffic engineering.

Put simply, a PDB specification should provide the answer to the question: Under what conditions can we join the output of this domain to another under the same rules and expectations?

6.1 Considerations in specifying long-term or average PDB attributes

To make this more concrete, consider the DS domain of figure 4 for which we will define the foo PDB. To characterize the average or long-term behavior that must be specified we must explore a number of questions, for instance: Can the DS domain handle the average foo traffic flow? Is that answer topology-dependent or are
>> what is the average foo traffic flow? where would you measure? How could
>> the answer not be topology dependent?
there some specific assumptions on routing which must hold for the foo PDB to preserve its "adequately provisioned" capability?
>> This gets into the routing area. I haven't had a chance to read RFC 2676
>> (QOSPF), so can't reasonably speculate whether the mechanisms exist in
>> the RFC series to support such a thing.
In other words, if the topology of D changes suddenly, will the foo PDB's attributes change? Will its loss rate dramatically increase?
>> How could attributes be guaranteed not to change? Also, loss rate is not the
>> only attribute. And a topology change could dramatically DECREASE the loss
>> rate as easily as it could increase it.

Figure 4: ISP and DS domain D connected in a ring and connected to DS domain E

Let figure 4 be an ISP ringing the U.S. with links of bandwidth B
>> I should leave this to one of our international colleagues, but isn't this
>> a bit of gratuitous flag waving?
and with N tails to various metropolitan areas.
If the link between the node connected to A and the node connected to Z goes down, all the foo traffic aggregate between the two nodes must transit the entire ring: Would the bounded behavior of the foo PDB change? If this outage results in some node of the ring now having a larger arrival rate to one of its links than the capacity of the link for foo's traffic aggregate, clearly the loss rate would change dramatically. In that case, there were topological assumptions made about the path of the traffic from A to Z that affected the characteristics of the foo PDB. Once these no longer hold, any assumptions on the loss rate of packets of the foo traffic aggregate transiting the domain would change; for example, a characteristic such as "loss rate no greater than 1% over any interval larger than 10 minutes" would no longer hold. A PDB specification should spell out the assumptions made on preserving the attributes.
>> This seems to get too deeply into network engineering issues. What you're
>> looking for is reasonable and reasonably abstract constraints. I think
>> that most of what would need to be said could be expressed in a sentence
>> to the effect of: "The attributes of this PDB will be adversely affected
>> if the topology changes in a way that causes the bottleneck bandwidth to be
>> exceeded by the sum of the TCSs at the bottleneck."

6.2 Considerations in specifying short-term or bursty PDB attributes

Next, consider the short-time behavior of the traffic aggregate associated with a PDB, specifically whether permitting the maximum bursts to add in the same manner as the average rates will lead to properties that aggregate or under what rules this will lead to properties that aggregate. In our example, if domain D allows each of the uplinks to burst p packets into the foo traffic aggregate, the bursts could accumulate as they transit the ring.
Packets headed for link L can come from both directions of the ring and back-to-back packets from foo's traffic aggregate can arrive at the same time. If the bandwidth of link L is the same as the links of the ring, this probably does not present a buffering problem. If there are two input links that can send packets to queue for L, at worst, two packets can arrive simultaneously for L. If the bandwidth of link L equals or exceeds twice B, the packets won't accumulate. Further, if p is limited to one, and the bandwidth of L exceeds the rate of arrival (over the longer term) of foo packets (required for bounding the loss) then the queue of foo packets for link L will empty before new packets arrive. If the bandwidth of L is equal to B, one foo packet must queue while the other is transmitted. This would result in N x p back-to-back packets of this traffic aggregate arriving over L during the same time scale as the bursts of p were permitted on the uplinks. Thus, configuring the PDB so that link L can handle the sum of the rates that ingress to the foo PDB doesn't guarantee that L can handle the sum of the N bursts into the foo PDB.
>> In other words, bursts accumulate. Accumulated bursts mean queue
>> buildup. Queue buildup causes delay and/or packet loss. Topology
>> affects burst accumulation. These are all things that the audience
>> for this document should know. Or if they need reminding, a few sentences,
>> not a belabored example should do.

If the bandwidth of L is less than B, then the link must buffer Nxpx(B-L)/B foo packets to avoid loss. If the PDB is getting less than the full bandwidth L, this number is larger. For probabilistic bounds, a smaller buffer might do if the probability of exceeding it can be bounded. More generally, for router indegree of d, bursts of foo packets might arrive on each input.
Then, in the absence of any additional rules, it is possible that dxpx(# of uplinks) back-to-back foo packets can be sent across link L to domain E. Thus the DS domain E must permit these much larger bursts into the foo PDB than domain D permits on the N uplinks or else the foo traffic aggregate must be made to conform to the rules for entering E (e.g., by shaping).

What conditions should be imposed on a PDB and on the associated PHB in order to ensure PDBs can be concatenated, as across the interior DS domains of figure 1? Edge rules for constructing a PDB that has certain attributes across a DS domain should apply independently of the origin of the packets. With reference to the example we've been exploring, the rules for the PDB's traffic aggregate entering link L into domain E should not depend on the number of uplinks into domain D.

6.3 Example
>> can we move this to an Appendix? It is belabored and breaks up the flow
>> of the normative body of the text.

In this example, we will make the above more concrete. We assume that only the foo PDB is using its associated traffic aggregate and we use "foo aggregate" interchangeably with "the traffic aggregate associated with the PDB foo." We also use "foo
>> This confuses matters. Try "Aggregate A is a traffic aggregate whose
>> PDB is "foo".
packets" interchangeably with "the packets marked for the PHB associated with PDB foo."
>> Again, foo is overloaded. Try "Aggregate A is mapped to PHB X."

Assume the topology of figure 4 and that all the uplinks have the same bandwidth B and link L has bandwidth L which is less than or equal to B. The foo traffic aggregates from the N uplinks each have average rate R and are destined to cross L. If only a fraction a of link L is allocated to foo, then R = axL/N fits the average rate constraint.
If each of the N flows can have a burst of p packets and half the flows transit the ring in each direction, then 2xp packets can arrive at the foo queue for link L in the time it took to transmit p packets on the ring, p/B. Although the link scheduler for link L might allow the burst of packets to be transmitted at the line rate L, after the burst allotment has been exceeded, the queue should be expected to clear at only rate axL. Then consider the packets that can accumulate. It takes 2xp/(axL) to clear the queue of the foo packets. In that time, bursts of p packets from the other uplinks can arrive from the ring, so the packets do not even have to be back-to-back. Even if the packets do not arrive back-to-back, but are spaced by less time than it takes to clear the queue of foo packets, either the required buffer size can become large or the burst size of foo packets entering E across L becomes large and is a function of N, the number of uplinks of domain D.

Let L = 1.5 Mbps, B = 45 Mbps, a = 1/3, N = 10, p = 3. Suppose that the bursts from two streams of foo packets arrive at the queue for link L very close together. Even if 3 of the packets are cleared at the line rate of 1.5 Mbps, there will be 3 packets remaining to be serviced at a 500 kbps rate. In the time allocated to send one of these, 9 packets can arrive on each of the inputs from the ring. If any non-zero number of these 18 packets are foo packets, the queue size will not reduce. If two more bursts (6 of the 18 packets) arrive, the queue increases to 8 packets. Thus, it's possible to build up quite a large queue, one likely to exceed the buffer allocated for foo. The rate bound means that each of the uplinks will be idle for the time to send three packets at 500 kbps, possibly by policing at the ring egress, and thus the queue would eventually decrease and clear; however, the queue at link L can still be very large.
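The bookkeeping in this example reduces to the buffer bound stated earlier in section 6.2, N x p x (B-L)/B. A quick sketch evaluating that bound with the example's numbers (illustrative arithmetic only, using the draft's own parameters):

```python
# Back-of-envelope check of the draft's worst-case buffer bound at link L,
# using the example's numbers (L = 1.5 Mbps, B = 45 Mbps, a = 1/3, N = 10, p = 3).

B = 45e6    # ring/uplink bandwidth, bits/s
L = 1.5e6   # egress link bandwidth, bits/s
a = 1 / 3   # fraction of L allocated to the foo aggregate
N = 10      # number of uplinks
p = 3       # permitted burst per uplink, packets

# Worst-case buffer to absorb N bursts of p packets arriving at rate B
# while draining at rate L (the N x p x (B-L)/B bound, in packets):
buffer_pkts = N * p * (B - L) / B
print(round(buffer_pkts, 1))        # 29.0

# If foo drains at only a*L (its allocated share of L), the bound grows:
buffer_pkts_share = N * p * (B - a * L) / B
print(round(buffer_pkts_share, 1))  # 29.7
```

Either way, the required buffer is a function of N, which is the draft's point: the bound does not survive aggregation unless the edge rules do.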
PDBs where the intention is to permit loss should be constructed so as to provide a probabilistic bound for the queue size to exceed a reasonable buffer size of one or two bandwidth-delay products. Alternatively or additionally, rules can be used that bound the amount of foo packets that queue by limiting the burst size at the ingress uplinks to one packet, resulting in a maximum queue of N or 10, or to impose additional rules on the PDB. One approach is to limit the domain over which the PDB applies so that interior boundaries are placed at merge points (or between every M merge points) so that a shaping edge conditioner can be reapplied. Another approach is to use a PHB defined such that it strictly limits the burstiness.

6.4 Remarks

This section has been provided to provide some motivational food for thought for PDB specifiers. It is by no means an exhaustive catalog of possible PDB attributes or what kind of analysis must be done. We expect this to be an interesting and evolutionary part of the work of understanding and deploying differentiated services in the Internet. There is a potential for much interesting research work. However, in submitting a PDB specification to the Diffserv WG, a PDB must also meet the test of being useful and relevant.
>> Which leads to another meta-question about where we're going. There seems
>> to be an industry expectation that diffserv is the solution to the world's
>> QoS problems. If our intent is to meet that expectation to some extent,
>> then this draft needs to be quite prescriptive about the content of PDB
>> specifications. If we're really doing research, then we need to reset
>> industry expectations (good luck!) Or we need an explicit way of dividing
>> the Diffserv world into a research track and a standard track.

7.0 Reference Per-Domain Behaviors
>> Shouldn't we put these into different drafts?

The intent of this section is to define one or a few "reference" PDBs; certainly a Best Effort PDB and perhaps others.
This section is very preliminary at this time and meant to be the starting point for discussion rather than its end. These are PDBs that have little in the way of rules or expectations.

7.1 Best Effort Behavior PDB

7.1.1 Applicability

A Best Effort (BE) PDB is for sending "normal internet traffic" across a diffserv network. That is, the definition and use of this PDB is to preserve, to a reasonable extent, the pre-diffserv delivery expectation for packets in a diffserv network that do not require any special differentiation.

7.1.2 Rules

There are no rules governing rate and bursts of packets beyond the limits imposed by the ingress link. The network edge ensures that packets using the PDB are marked for the Default PHB (as defined in [RFC2474]). Interior network nodes use the Default PHB on these packets.

7.1.3 Attributes of this PDB

"As much as possible as soon as possible". Packets of this PDB will not be completely starved and when resources are available (i.e., not required by packets from any other traffic aggregate), network elements should be configured to permit packets of this PDB to consume them. Although some network operators may bound the delay and loss rate for this aggregate given knowledge about their network, these attributes are not part of the definition.
>> how are these "measurable, quantifiable attributes"?

7.1.4 Parameters

None.

7.1.5 Assumptions

A properly functioning network, i.e. packets may be delivered from any ingress to any egress.

7.1.6 Example uses

1. For the normal Internet traffic connection of an organization.
2. For the "non-critical" Internet traffic of an organization.
3. For standard domestic consumer connections
> what is a standard domestic consumer connection?

7.2 Bulk Handling Behavior PDB

7.2.1 Applicability

A Bulk Handling (BH) PDB is for sending extremely non-critical traffic across a diffserv network. There should be an expectation that these packets may be delayed or dropped when other traffic is present.
>> Is this related to the less-than-best effort PHB that we discussed for
>> several meetings? I thought there was no consensus to proceed on that.

7.2.2 Rules

There are no rules governing rate and bursts of packets beyond the limits imposed by the ingress link. The network edge ensures that packets using this PDB are marked for either a CS or an AF PHB. Interior network nodes must have this PHB configured so that its packets may be starved when other traffic is present. For example, using the PHB for Class Selector 1 (DSCP=001000), all routers in the domain could be configured to queue such traffic behind all other traffic, subject to tail drop.

7.2.3 Attributes of the BH PHB

Packets are forwarded when there are idle resources.

7.2.4 Parameters

None.

7.2.5 Assumptions

A properly functioning network.

7.2.6 Example uses

1. For Netnews and other "bulk mail" of the Internet.
2. For "downgraded" traffic from some other PDB.
>> see earlier comment on downgrading from other PDBs.

8.0 Procedure for submitting PDB specifications to Diffserv WG
>> This is irrelevant to a large portion of the draft's ultimate audience. It
>> should be moved to an appendix.

1. Following the guidelines of this document, write a draft and submit it as an Internet Draft and bring it to the attention of the WG mailing list.

2. Initial discussion on the WG should focus primarily on the merits of a PDB, though comments and questions on the claimed attributes are reasonable. This is in line with our desire to put relevance before academic interest in spending WG time on PDBs. Academically interesting PDBs are encouraged, but not for submission to the diffserv WG.
>> This slur "academically interesting" has been used in two ways. One of them
>> is something along the lines of "would make a nice paper for a low-prestige
>> conference." The other is more along the lines of "might be valuable, but
>> too much work to put in my employer's router".
>> The criteria should be a value proposition, implementation-independent
>> feasibility, ability to work within the context of the various elements of
>> the Internet (e.g., does not assume routing features that don't exist),
>> reasonable scalability.

3. Once consensus has been reached on a version of a draft that it is a useful PDB and that the characteristics "appear" to be correct (i.e., not egregiously wrong) that version of the draft goes to a review panel the WG Co-chairs set up to audit and report on the characteristics. The review panel will be given a deadline for the review. The exact timing of the deadline will be set on a case-by-case basis by the co-chairs to reflect the complexity of the task and other constraints (IETF meetings, major holidays) but is expected to be in the 4-8 week range. During that time, the panel may correspond with the authors directly (cc'ing the WG co-chairs) to get clarifications. This process should result in a revised draft and/or a report to the WG from the panel that either endorses or disputes the claimed characteristics.

4. If/when endorsed by the panel, that draft goes to WG last call. If not endorsed, the author(s) can give an itemized response to the panel's report and ask for a WG Last Call.

5. If/when it passes Last Call, it goes to the ADs for publication as a WG Informational RFC in our "PDB series".
>> The industry expectations I keep seeing seem to militate toward standards
>> track. Although frankly, the distinction between informational and standards
>> track is missed by the outside world.

_______________________________________________
diffserv mailing list
diffserv@ietf.org
http://www1.ietf.org/mailman/listinfo/diffserv
Archive: http://www-nrg.ee.lbl.gov/diff-serv-arch/