RE: [NTDP] Purpose of NTDP

"Craig Brown \(crbrown\)" <> Wed, 02 August 2006 12:04 UTC

Received: from [] ( by with esmtp (Exim 4.43) id 1G8FSy-0005UO-DD; Wed, 02 Aug 2006 08:04:48 -0400
Received: from [] ( by with esmtp (Exim 4.43) id 1G8FSw-0005UJ-Av for; Wed, 02 Aug 2006 08:04:46 -0400
Received: from ([]) by with esmtp (Exim 4.43) id 1G8FSu-0000VJ-R9 for; Wed, 02 Aug 2006 08:04:46 -0400
Received: from ([]) by with ESMTP; 02 Aug 2006 17:52:35 -0700
X-IronPort-AV: i="4.07,204,1151910000"; d="scan'208"; a="72606576:sNHT29942502"
Received: from ( []) by (8.12.10/8.12.6) with ESMTP id k72C4e1L011892; Wed, 2 Aug 2006 20:04:40 +0800 (CST)
Received: from ([]) by with Microsoft SMTPSVC(6.0.3790.1830); Wed, 2 Aug 2006 20:04:39 +0800
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Subject: RE: [NTDP] Purpose of NTDP
Date: Wed, 2 Aug 2006 20:04:37 +0800
Message-ID: <>
Thread-Topic: [NTDP] Purpose of NTDP
Thread-Index: Aca0Rqlr8SP6mL+tQRyedpdrTHCrYwAdJlagACI2y4AAEIKMEAAgnZSA
From: "Craig Brown \(crbrown\)" <>
To: "Loki Jorgenson" <>
X-OriginalArrivalTime: 02 Aug 2006 12:04:39.0859 (UTC) FILETIME=[D4E9F830:01C6B62B]
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 140baa79ca42e6b0e2b4504291346186
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: define standards for the purpose of scripting of network testing equipment <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>


Thank you for the clear explanation. Let me reflect my understanding and
see if we are on the same page.

Using your explanation, I see the problem can be divided into two areas:
the characteristics/associations to configure a device, and
analysis/reporting of the data returned. 

For devices emulating traffic, the characteristics and associations can
relate to the type of clients/servers and the work flow/load; for
traffic generation, they can relate to the protocol and the work load of
that protocol; and, using your example, for devices monitoring traffic,
"there should be a set of constraints that define the nature and extent
of the gathering of data. How long? Until what is satisfied?". The
objectives of what is being configured differ, but the same method for
configuring a device can be used.
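The three configuration objectives above might be sketched as variants
handled by a single configuration method. This is purely an illustrative
sketch; all class and field names below are assumptions, not part of any
NTDP specification.

```python
# Hypothetical sketch: one configure() method serving three objectives.
# All names and fields are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class EmulationConfig:
    """Traffic emulation: client/server types and work flow/load."""
    client_type: str = "http-client"
    server_type: str = "http-server"
    workload: str = "steady"


@dataclass
class GenerationConfig:
    """Traffic generation: a protocol and the work load of that protocol."""
    protocol: str = "udp"
    packets_per_second: int = 1000


@dataclass
class MonitoringConfig:
    """Traffic monitoring: constraints on the extent of data gathering."""
    duration_seconds: int = 60
    stop_condition: str = "sample-count >= 10000"


def configure(device: str, config) -> dict:
    """The same configuration method, regardless of the objective."""
    return {"device": device, "config": config}
```

The point of the sketch is only that the method is shared while the
payload differs per objective.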

Similarly for the data set content: devices testing via traffic
generation review packet signatures for packet measurements, to name one
example, whereas in traffic monitoring the data being observed may not
have been artificially created, so the analysis and results may differ.

It therefore appears that there are similarities between the two
objectives: both can work over the same architecture and use a similar
method for data modeling/management, but each has a different set of
methodologies and analyses based on its defined purpose, i.e. traffic
emulation/generation versus traffic analysis.

The diagram below is a rough representation of these points over a
common architecture. 

   |   NMS    |   Test      |   Test      |
   |          |    Scripts  |     Appl.   |  Application
             ||                  /\
             ||                 /  \
            \  /                 ||
             \/                  ||
   |      Config       ||     Analysis    |  Methodologies &
   |       Data        ||       Data      |  Content (Data Modeling)
             ||                  /\
             ||                 /  \
            \  /                 ||
             \/                  ||
   |      Verbs        || Notifications / |
   |  <get|edit|query> ||  Events / Query |  Operations
             ||                  /\
             ||                 /  \
            \  /                 ||
             \/                  ||
   |              Transport               |  Transport
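The flow in the diagram might be sketched as follows: configuration
flows down through the verbs to the transport, and analysis data flows
back up as notifications/events. The layer names mirror the diagram;
everything else here is an assumption for illustration.

```python
# Illustrative sketch of the diagram's two paths. The verbs and layer
# names come from the diagram; function names are assumptions.

def transport_send(message: dict) -> dict:
    # Stand-in for the transport layer; a real system would use
    # whatever protocol the architecture eventually selects.
    return {"status": "ok", "echo": message}


def send_config(verb: str, config_data: dict) -> dict:
    """Downward path: Application -> Config Data -> Verbs -> Transport."""
    if verb not in {"get", "edit", "query"}:
        raise ValueError(f"unknown verb: {verb}")
    return transport_send({"verb": verb, "data": config_data})


def on_notification(event: dict) -> dict:
    """Upward path: Transport -> Notifications/Events -> Analysis Data."""
    return {"analysis": event.get("data", {})}
```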

To meet the objective of a common architecture, it must be well thought
out and flexible. Flexible means not only that the characteristics of a
protocol have their related associations, but also that the associations
of all protocols can co-exist in a message being sent to a device. That
is, any combination of data, voice, video, security, or storage may be
configurable on a single device, so it must match the overall
methodology of how messages are formed and forwarded to the device.
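The co-existence requirement might look like a single message carrying
sections for several protocol families at once. The section names below
come from the list in the paragraph; the message shape and function name
are hypothetical.

```python
# Hypothetical single message combining associations from multiple
# protocol families for one device. Keys and structure are illustrative.

def build_message(device_id: str, sections: dict) -> dict:
    allowed = {"data", "voice", "video", "security", "storage"}
    unknown = set(sections) - allowed
    if unknown:
        raise ValueError(f"unknown sections: {unknown}")
    return {"device": device_id, "sections": sections}


# One device configured for voice and data in the same message.
msg = build_message("edge-router-1", {
    "voice": {"codec": "g711", "rtcp_xr": True},
    "data": {"flows": 200},
})
```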

So, if I have represented your description correctly then the initial
charter could be adjusted to allow separate spin-off work in the traffic
monitoring area after an architecture and methodology were completed. 

The previous proposal was:

A proposed charter is to create a specification for the automatic
control of devices that generate and analyze traffic and/or emulate
network protocols for the testing of a network or a specific network
device. To perform either network-wide or device-specific testing, a
tester will use a product to generate traffic, emulate routing
protocols, and emulate network end points, such as a network host or an
HTTP server.

The updated proposal could be:

To create a specification for the automatic control of devices that
assist in the testing of a network or a specific network device. The
primary objective is to define an architecture that would allow for the
control of devices that generate and analyze traffic and/or emulate
network protocols, and/or tests [Loki, is tests the right word or
perhaps monitoring/measuring?] network traffic. This objective includes
the definition of how data will be modeled in both the management and
analysis of the devices. The secondary objective is to develop specific
data models that will co-exist using the modeling methodology.
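The two objectives in the updated proposal might be sketched as a common
modeling methodology (a shared base) with specific data models that
co-exist under it. All names here are assumptions for illustration only.

```python
# Sketch of the charter's two objectives: a common modeling methodology
# (the base class) and specific data models built on it. Illustrative only.

from dataclasses import dataclass, asdict


@dataclass
class BaseModel:
    """Common modeling methodology shared by all specific data models."""
    device: str

    def to_message(self) -> dict:
        return {"model": type(self).__name__, **asdict(self)}


@dataclass
class TrafficGenModel(BaseModel):
    """A specific data model for traffic generation."""
    protocol: str = "tcp"
    rate_pps: int = 500


@dataclass
class MonitoringModel(BaseModel):
    """A specific data model for traffic monitoring."""
    duration_s: int = 300
```

Because both models share the same base, their messages co-exist in one
management and analysis pipeline while each keeps its own fields.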


> -----Original Message-----
> From: Loki Jorgenson [] 
> Sent: Wednesday, August 02, 2006 3:02 AM
> To: Craig Brown (crbrown)
> Cc:
> Subject: RE: [NTDP] Purpose of NTDP
> Craig - let us suppose that "controlling testing devices" is 
> related to some methodology for detecting network conditions 
> (possibly distinct from the testing itself).  The testing 
> system is applied reactively in response to the detection of 
> some conditions.  For example, a VoIP management system 
> sampling RTCP-XR data from IP phones might discover a general 
> degradation and send out an alarm.
> Today, that is typically a user calling to complain.  
> However, with continuous monitoring, app-aware network 
> devices, adaptive applications, etc., it is more than likely 
> that either the "testing devices" themselves, or some 
> other automated external stimulus, triggers a "controlling of 
> test devices" and directs them to a specific task.  Some call 
> this "proactive" simply because such a system responds to 
> degraded conditions before the end user does, potentially 
> resulting in a break-fix before the user is aware.
> While we are barely beyond reactivity in the sense of the 
> user complaint, it is clear that the technologies are moving 
> toward such a proactive stance and we need to be able to 
> provide a framework for "controlling testing devices" that 
> accepts triggers or other feeds from non-human sources.  This 
> might look like a simple API of trigger/response but it 
> should at least preserve the relationship between the 
> stimulus, the reaction, and the outcome.
> In directing the testing devices, there should be a set of 
> constraints that define the nature and extent of the 
> gathering of data.  How long?
> Until what is satisfied?  Indefinitely until there is an 
> "off" trigger?
> Finally, once a specific set of constraints on data gathering is met
> (e.g. enough data), an analysis process should be associated 
> with the data.  Once again, the relationship between the 
> initial conditions and the analysis should be preserved.
> Once the analysis is complete, some conclusion or action may 
> be associated with it.  Typically, that amounts to a 
> particular signal, a threshold against which it is compared, 
> and an alarm sent to a human (or at least a management 
> system).  This is very anthro-centric insofar as it assumes 
> that a human (or possibly an NMS) will interpret the outcomes 
> of the analysis and determine appropriate actions to take.
> A device/configuration management system might be invoked to 
> remediate or provision against a particular outcome of the 
> analysis, with yet another "controlling of testing devices" 
> activity subsequently triggered to validate the 
> remediation/provisioning action.  Once a framework is capable of 
>    o preserving the association between 
> 		conditions/trigger --> testing --> analysis --> 
> 		remediation --> validation
>    o associating condition sets with analyses, analysis 
> 		outcomes with remediation, remediation with validation actions
>    o relating triggers/testing/analysis over time
>    o running without human intervention
> then it has the potential to support fully autonomic operation.
> The specifics of those associations would be what I hope we 
> would discuss and develop on this list.  I anticipate that 
> the differences between methodologies, data set content, 
> analyses, etc. and the challenges of associating dissimilar 
> data/analyses will be substantial.
> This parallels work undertaken by the Internet2 End-to-End 
> project (e.g.
> perfSONAR) attempting to abstract a range of well-defined 
> testing technologies sufficiently that they can be integrated 
> into a single framework.
> On the other hand, it may be that the "controlling test 
> devices/gathering data" aspect of this larger framework is 
> all that we are interested in with NTDP.  It would be a layer 
> that simply accepts testing requests of a certain form, 
> executes and returns the data.  The analysis and other levels 
> of behavior might be found in a separate system/layer.  That 
> would be less interesting (unless there were some 
> well-defined descriptions for those layers that we can reference).
> Hope that helps to clarify my point, as requested.
> Loki
> -----Original Message-----
> From: Craig Brown (crbrown) []
> Sent: Tuesday, August 01, 2006 1:27 AM
> To: Loki Jorgenson
> Cc:
> Subject: RE: [NTDP] Purpose of NTDP
> Loki,
> I can see the areas of controlling testing device and gathering data.
> Can you expand the description of the other areas?
> Cheers,
> Craig 

NTDP mailing list