Re: [vwrap] [wvrap] Simulation consistency

Morgaine <morgaine.dinova@googlemail.com> Sun, 10 April 2011 13:10 UTC


I think there's a bit of a misunderstanding about the nature of a "message",
Vaughn.  I'll see if I can describe it a little better further down.


On Sat, Apr 9, 2011 at 10:31 PM, Vaughn Deluca <vaughn.deluca@gmail.com> wrote:

>
> You can't eliminate those messages even when not checking credentials. The
> roundtrips are anyhow needed to make sure that the communication is actually
> working. Or can we really get away with a UDP style of send and pray?? I
> fail to see how you would be able to realise a substantial gain over what
> will be needed anyhow as bare-bones infrastructure and for error
> checking/reporting. But I might be wrong.
>

In the open content use cases that I described in my previous reply, we can
eliminate entirely the request-grant handshake messages which handle caps,
because the caps are only there as a roadblock against obtaining the item.
When an item is unrestricted, a roadblock against obtaining it is obviously
inappropriate.  What's more, once you have actually obtained the item, a
roadblock against obtaining it again is inappropriate as well, since you
will continue to have the item unless you choose to delete it, in our data
model.  What this *intrinsically* means is that caps are only meaningful as
a mechanism for obtaining the direct address of an item.  In the case of
open content, we already have that direct address. :-)

With regard to message transports, you mention "UDP style", but our
messaging is not related in any way to UDP (I am not sure what "style" means
here), as we are talking about reliable messaging running over TCP in all of
this.  A message will always be received reliably if it is sent successfully.
No return message confirming receipt is needed at our level, because the TCP
stream does all that under the hood.

A TCP session will be maintained between any two endpoints which exchange
one or more protocol messages.  In an efficient implementation, the same TCP
stream will carry the response message back too, since messages can be sent
over a TCP stream in both directions regardless of who initiated the
connection.
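
To make this concrete, here is a minimal Python sketch of the messaging
model just described.  It is purely illustrative: the length-prefix
framing and the function names are my own assumptions, not any agreed
VWRAP wire format.

    import socket
    import struct

    def send_message(sock: socket.socket, payload: bytes) -> None:
        # TCP already provides ordered, reliable delivery in both
        # directions, so no application-level acknowledgement is sent.
        # A 4-byte length prefix frames each message in the stream.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_message(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", _recv_exact(sock, 4))
        return _recv_exact(sock, length)

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        # Loop because TCP is a byte stream: a single recv() may
        # return only part of a message.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("stream closed mid-message")
            buf += chunk
        return buf

Either endpoint may call send_message() on the same socket, which is
exactly the point above about responses travelling back over the same
stream.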

When we speak of a message sent from A to B, we're referring to the message
payload sent by A and received by B across such a TCP stream, and we're
specifically *not* referring to the IP datagrams which are sent in both
directions between A and B to implement the single-direction A->B message
transfer.  Our protocol messaging is at one level of abstraction higher than
that of the packets which implement the TCP stream.

(It may actually be two levels higher if our message payload is carried by
HTTP over TCP.)

In numerous places in your protocol flows, whenever the agent wishes to
introduce an asset into the region, a capability for the asset is first
requested from the asset service, and such a capability must be granted
before the asset can be fetched by any party.  This entails a minimum of two
message transit times before the asset itself is even requested.  Even if both
request and grant messages are efficiently carried on the same TCP stream,
there is still the full request-grant round trip time involved here whenever
this operation is performed.

This is quite unnecessary when there are no distribution restrictions on the
item and you already have its *direct address*.  I'll explain this step in
detail because we have not really discussed the nature of inventories and
metadata much on this list before, and they are relevant to asset access.

First of all, just to be sure that we're on the same page, an inventory is
just a tree structure containing asset metadata at its leaves.  It is not,
for the purposes of this discussion, the graphical thing that appears in
viewers as "Inventory", even though the two are strongly related.
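
As a sketch of that data model (the field names here are illustrative
assumptions, since we have not agreed on a metadata schema):

    from dataclasses import dataclass, field

    @dataclass
    class AssetMetadata:
        name: str
        item_type: str        # e.g. "shirt", "shoe", "mesh"
        restricted: bool      # does the *data* carry access restrictions?
        data_refs: list[str]  # one or more direct data addresses

    @dataclass
    class InventoryFolder:
        # The inventory tree: folders nest, metadata sits at the leaves.
        name: str
        folders: list["InventoryFolder"] = field(default_factory=list)
        items: list[AssetMetadata] = field(default_factory=list)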

Assets that an agent wears are always assets that are held in an agent's
inventory.  When an asset is in inventory, this implicitly means that its
metadata has been fetched and received, and under normal operation this
asset metadata will be in the viewer's inventory cache.  After all, you
can't expect to wear an item when you don't know whether it is a shoe or a
shirt, so the metadata *must* have been received previously for you to be
able to even begin a request to wear it.

Because the asset's metadata is already known, both the agent service and
the client also already know whether the item has access restrictions on it
or not, because that information is an asset property present in the asset's
metadata.  Knowing that it is not burdened with access restrictions, a
viewer can perform a direct fetch of the asset from the asset service
without any need for credentials, because credentials are irrelevant for
this.  Caps are not needed at all in this case, and it is highly likely that
this will be the most common situation in open worlds, which makes it a very
important use case.
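
In code, the client-side decision is tiny.  A sketch, reusing the
AssetMetadata type above; asset_service stands for a hypothetical
interface with get() and request_capability() methods, not a real
VWRAP API:

    def fetch_asset(meta: AssetMetadata, asset_service, credentials=None):
        address = meta.data_refs[0]
        if not meta.restricted:
            # Open content: direct fetch, no request-grant round trip.
            return asset_service.get(address)
        # Restricted content: pay the round trip for a cap first.
        cap = asset_service.request_capability(address, credentials)
        return asset_service.get(cap)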

It is worth noting that while inventories are mostly a client-side
implementation detail which can be changed unilaterally, the *metadata* held
by inventories is a very important and separable information type in an
architecture that uses 3rd party asset services, so it needs detailed
examination.

The metadata has to be available to remote parties in the same way as the
data itself is, and the best way of handling this is to make metadata a
separate item normally stored in the same asset service as the data.  The
metadata item will naturally reference the corresponding data item (which is
an N:1 relationship when hash-based addressing is used), and in some cases
can reference more than one data item --- for example, the metadata of a
mesh will commonly reference not only the graphical data required for
rendering in the client but also the collision mesh or bounding box data
required by the region.  These two types of data are separate because it
would be inefficient to require clients or regions to download data that
they do not use.  Our "asset" singleton now turns into a metadata + data
pair.
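
A sketch of that pairing, again reusing the AssetMetadata type from
above (the sha256 scheme is just one plausible choice of digest):

    import hashlib

    def data_address(blob: bytes) -> str:
        # Hash-based addressing: a data item's address is derived from
        # its content, so identical blobs always share one address ---
        # the N:1 relationship between metadata and data.
        return "sha256:" + hashlib.sha256(blob).hexdigest()

    mesh_meta = AssetMetadata(
        name="zodiac-dress",
        item_type="mesh",
        restricted=False,
        data_refs=[
            data_address(b"...render mesh bytes..."),     # for clients
            data_address(b"...collision mesh bytes..."),  # for regions
        ],
    )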

As a final point about metadata versus data, it should be mentioned that
generally only the data ever has access restrictions placed on it, because
placing restrictions on metadata leads to unsearchable inventories, for which
the technical term is "annoying as hell" or, more seriously, poor
functionality and yet another source of lag.  I am going to assume that
nobody wants that and hence, that metadata is never accessed through
capabilities.  If anyone wants to suffer the burden of capabilities for
metadata then they are welcome to do so of course, just as long as the rest
of us do not have to:  "*the burden of a facility should be borne only by
those who need it*".  In the model that I am describing, metadata is
directly addressed and has the same protection as a cryptographic key:  the
address of metadata is an unguessable address, but once you know it, you
know it.
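
Minting such an address needs nothing more than a strong random token.
A one-line sketch (the URL layout is hypothetical):

    import secrets

    def mint_metadata_address(base_url: str) -> str:
        # A 256-bit random token is infeasible to guess, but once you
        # know the address, you know it -- just like a key.
        return f"{base_url}/meta/{secrets.token_urlsafe(32)}"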

Now that we understand metadata, let's look at how this affects regions.

Regions don't have access to your inventory, but they do have access to the
metadata of each item that is part of the region's simulation, for numerous
reasons.  The primary reason is of course that the metadata contains one or
more references to needed data, but there are other reasons too.  For
example, the region may wish to ensure that every item in its region space
has specific properties which are described in item metadata, such as lack
of access restrictions, or the opposite, requiring certain access
restrictions.  (Note that your requirement for simulation consistency can be
implemented very cleanly that way too --- just add something like
Boolean:RegionConsistency to the set of asset properties).  As another
example, the region may wish to ensure that every item in its space is
properly tagged to support accessibility, so the item type and item
descriptions typically held in metadata would then be important.

Now that we've progressed to this level of detail, we can at last state
precisely what happens when agents inform the region of the items that they
are wearing as avatars:  they are supplying to the region the addresses of
the relevant metadata items.  The region then fetches those metadata items
from the asset service (unless it already has them in cache), and then on
the basis of that metadata, the region begins its decision making about
whether to allow the asset to appear in its region or not.  If the metadata
indicates that no access restrictions are required, then the region can send
the metadata URIs directly to clients.  In contrast, if the metadata says
that the asset data has restricted access, then the region has to engage its
more onerous access control mechanisms, i.e. it first obtains an *
AssetAllowedInRegion* capability for the asset from the asset service, which
it then passes to all clients.
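
Here is that region-side flow as a sketch.  Every interface named below
(fetch_metadata, policy_allows, request_cap, send) is a hypothetical
placeholder for whatever we eventually standardize:

    def admit_worn_item(region, asset_service, meta_address, clients):
        meta = region.metadata_cache.get(meta_address)
        if meta is None:
            meta = asset_service.fetch_metadata(meta_address)
            region.metadata_cache[meta_address] = meta
        # Region policy runs against the metadata, e.g. a
        # Boolean:RegionConsistency property as suggested above.
        if not region.policy_allows(meta):
            return False
        if not meta.restricted:
            # Unrestricted: clients get the direct data address.
            announce = meta.data_refs[0]
        else:
            # Restricted: obtain an AssetAllowedInRegion cap and pass
            # that to clients instead.
            announce = asset_service.request_cap(
                "AssetAllowedInRegion", meta_address)
        for client in clients:
            client.send(announce)
        return True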

Note that this is only one particular design for our protocol flows, and no
doubt other designs can achieve the same functionality and efficiency.  I
expect you were trying to keep things simple for an initial diagram, but as
we get into the detail we have to start considering the role that metadata
plays in asset access because this is actually central to the effective
operation of external asset services.  Your diagram doesn't get down to the
level of metadata access, but we cannot avoid it when we have external asset
services.

A final point about metadata, before this email gets too long.  There may be
many thousands or even millions of instances of a common asset held in
people's inventories, such as those used by default avatars, or trees and
plants and other objects or textures common in the environment, and many
others.  Both our asset services and our caches need to store those data items
only once when we use hash-based addressing, instead of millions of times.
We can make this remarkable saving of space because the *
metadata* is stored as separate items and holds details such as "ownership"
separately, while in terms of storage all those different metadata items can
point at the same asset data.
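
The saving falls out of a content-addressed store almost for free.  A
toy sketch, reusing the data_address() helper from earlier:

    class ContentAddressedStore:
        def __init__(self):
            self._blobs: dict[str, bytes] = {}

        def put(self, blob: bytes) -> str:
            addr = data_address(blob)
            # A million metadata items may point here; the blob itself
            # is stored exactly once.
            self._blobs.setdefault(addr, blob)
            return addr

        def get(self, addr: str) -> bytes:
            return self._blobs[addr]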

That N:1 relationship between metadata and data is important as we design
our protocols, because it allows us to avoid those unhelpful and inefficient
designs that require us to download something just to determine whether we
wanted to download it.  By separating the two types of information, we gain
a lot, but it does mean that "asset" cannot be treated as a monolithic
entity as in your diagram.  As we hone our design further, metadata fetches
enter the picture.


Morgaine.




========================

On Sat, Apr 9, 2011 at 10:31 PM, Vaughn Deluca <vaughn.deluca@gmail.com> wrote:

> Morgaine,
>
> Thanks for the compliments and the extensive comments :)
> I am in a time crunch, so I will only respond to one aspect of your mail,
> and come back to some other points later.
>
> I am not sure I understand your emphasis on the (non)transfer of trust info
> to gain speed. When drafting the diagram I was thinking that, depending on
> the "domain" of the sender of the message (with domain defined in the
> classical sense of a range of IP addresses), the authorisation info might
> simply be dropped and the data used without further checking, if the
> receiver wishes to do so. Also, in the majority of cases only caps are
> passed, and those contain *implicit* trust. The checking of the validity of
> the cap is local to the service, and should be really fast.
>
> Regarding capabilities, I was viewing them not so much as credentials
> (although they are) but more
> as a convenient way to pass references to some bit of underlying data
> around. You can't eliminate those messages even when not checking
> credentials. The roundtrips are anyhow needed to make sure that the
> communication is actually working. Or can we really get away with a UDP
> style of send and pray?? I fail to see how you would be able to realise a
> substantial gain over what will be needed anyhow as bare-bones infrastructure
> and for error checking/reporting. But I might be wrong.
>
> One way to find out is to implement it, and I feel that we are getting
> closer to being able to do just that.
>
> -- Vaughn
>
>
> On Sat, Apr 9, 2011 at 1:18 AM, Morgaine <morgaine.dinova@googlemail.com> wrote:
>
>> Excellent work, Vaughn!
>>
>> You're right, I am working on something related to this, specifically a
>> design study for the Tourism use case.  It happens to end with a protocol
>> sequence diagram presented as a table of text, so I was very pleased to see
>> your highly relevant diagram.  Yours captures part of my use case, and it's
>> a lot prettier than my text format. :P
>>
>> I was particularly impressed by the way you start off with completely
>> separable services right from the start.  Since separable services can be
>> put together under a single administrative domain very easily, whereas
>> separating conjoined services is not easy at all, I think you've started
>> this off perfectly for the VWRAP approach to services.
>>
>> Because you have separated the services so well, your suggested protocol
>> flow could be said to be targeting some kind of "*superset of all
>> VWRAP deployments*" deployment. :-)  Although it's logically structured,
>> it's a sort of worst-case scenario (or lawyer's delight) in which everyone
>> has to ask everyone else for permission to do anything, regardless of
>> whether permission is actually required or not.
>>
>> That's viable, but less than optimal.  Specifically, you are working to
>> the principle of "The heaviest burden required by anyone must be borne by
>> everyone."  I have been trying to stick to the opposite principle of "A
>> burden should be borne only by those whose use case requires it", which is
>> both fairer and more efficient.
>>
>> To illustrate this, consider the case of an asset service which serves (by
>> choice) only Creative Commons licensed assets --- an extremely important use
>> case which could well become the dominant one in an open metaverse of
>> community worlds.  Who knows, it could be operated by Debian, or Blender, or
>> OSgrid, or even Google Warehouse. :P
>>
>> In such a scenario, because the license on all the assets in the asset
>> service permits unchecked distribution to all destinations in perpetuity,
>> the vast majority of all the request-grant protocol flows in your diagram
>> are superfluous when the assets come from this repository.  By using a
>> protocol which understands *WHEN* it needs to ask a question, a large
>> amount of cumulative round-trip time latency (and its resulting lag) can be
>> avoided entirely.  This is also true on a per-asset basis if an asset
>> service serves a mixture of encumbered and freely distributable assets,
>> except that then the difference would be seen per-asset instead of per asset
>> service.
>>
>> For such freely distributable assets, the agent service doesn't need to do
>> anything at all beyond recording the addresses of assets which are currently
>> being worn by the agent.  Since you start off your trip (sensibly) by
>> checking your clothing at home before you leave, you'll notice a broken
>> asset service locally anyway since your viewer will be trying to fetch it
>> for local viewing.  The agent service need do nothing at all, beyond record
>> the addresses of top level items.  (In my design study, I refer to a *Worn
>> Assets List* which is held by the user's agent service, and which is
>> entirely separate from any concept of inventory.)
>>
>> Likewise, region services don't need to fetch the graphic assets normally
>> either (unless they opt to proxy them), but only pass the addresses of those
>> assets around to all the other agents in the corresponding region, so the
>> request-grant exchanges between regions and asset services can be avoided in
>> this case.  (Regions will be requesting other server-side data though, for
>> example the bounding box information or collision mesh of an asset, which
>> typically the viewer would not be fetching.)
>>
>> That's not the end of the "*no unnecessary burden*" issue yet though,
>> because even if you removed all the unnecessary request-grant protocol
>> flows, you're still making the incorrect assumption that assets *ALL NEED
>> TO BE RESTRICTED* by the mere fact of asking for caps to everything.
>> This itself is wrong.  The logic needs to first determine whether a cap is
>> needed for fetching a given asset, and if it's not needed then the fetch can
>> be done by the viewer or region without this protocol burden at all.
>>
>> So you see, there is a fundamental assumption in your nicely laid out
>> flows that all assets must be tied up in heavy red tape by the needs of the
>> most burdensome use case.  I don't agree with that, neither on principle nor
>> on engineering grounds.
>>
>> Many of the flows you have shown are exactly what we need for securing
>> access to proprietary resources, but not all resources have that burden, and
>> I would want to elide a number of the flows away entirely when an asset
>> service allows it.
>>
>> To put it another way, the data is king, and protocol flows should reflect
>> the requirements imposed by data, not the other way around.
>>
>> I need this expressed in your flows, possibly as asset requirement
>> annotations.
>>
>>
>> PS.  Great work Vaughn, I think this gives us a wonderful launching point!
>>
>>
>> Morgaine.
>>
>>
>>
>>
>> ======================
>>
>>
>> On Fri, Apr 8, 2011 at 5:40 PM, Vaughn Deluca <vaughn.deluca@gmail.com> wrote:
>>
>>> VWRAP services high level message flow (preliminary diagram draft) see
>>>
>>>
>>> http://trac.tools.ietf.org/wg/vwrap/trac/attachment/wiki/Diagrams/VWRAP_FlowExample_VD1.pdf
>>>
>>> The main reason that I am submitting this in spite of my lack of formal
>>> expertise is that the group, in my view, badly needs a solid basis for
>>> discussion and for preventing endless repeating loops. This example is
>>> probably wrong in many ways, but it's better than what we have publicly
>>> available on interop now (although Morgaine is working on something along
>>> the lines of the recent discussions here).
>>>
>>> I hope this diagram will give us a base for discussion. I could have done
>>> my homework better by researching the old OGP stuff in more depth, and I
>>> probably will do so in the future, but for now I just tried to follow
>>> the general principles as far as I understand them, to see what response
>>> that yields from the group. In other words, I try to let the group educate
>>> me :p
>>>
>>> Note that in my view all services are equal; in principle it does not
>>> matter in what "domain" they run, since trust and policy are fully
>>> localized. It is however very possible to have internal shortcuts in the
>>> services to speed up processing.
>>>
>>> In the example I opted for an external Agent service, but I could as well
>>> have incorporated that in the set of local services. As indicated above,
>>> all services could also be run by different organisations, true to what
>>> VWRAP stands for. It's all up to the deployer, including a user at home who
>>> might want to run a full world for family and friends. Those friends might
>>> try to use that agent service to venture out in the virtual universe.
>>> I envision that the final identity provider is external, using OpenID
>>> and OAuth or whatever other magic exists out there that I do not yet fully
>>> understand.
>>>
>>> The example has 3 main purposes:
>>> - Provide a reference for discussion
>>> - Illustrate the use case of tourism, and *true* interop
>>> - Illustrate consistency problems along the lines discussed higher
>>> up in this thread, as well as the "slashdot" problem that Morgaine outlined
>>> so clearly
>>>
>>> The message flow assumes an avatar already present in some region (a
>>> small-scale local home region in this case, but that is by no means
>>> essential; it could be a built-in region in the viewer or a big commercial
>>> region). The user is preparing for a trip to immersive world, and after
>>> some outfit adjustments moves over.
>>>
>>> Finally, I apologize for the simplistic notation used here. I simply
>>> add the most relevant parameters passed in square brackets to a keyword
>>> specifying the nature of the message. Please improve on that where needed.
>>>
>>> So here we go, the avatar is preparing for a visit to "immersive world":
>>> 0)  Viewer, here is an update of the state of the world your agent is in,
>>> please render.
>>> 1)  Agent service, I will go in my Zodiac dress that I keep in the
>>>  "Amazing assets" service.
>>> 2)  Asset service A, please send a cap for Z, here are my credentials
>>> 3)  You're fine, here is the cap
>>> 4)  Local region, can you please put this on my agent, I included the
>>> cap.
>>> 5)  Hello asset service A, I need Z, here is the cap
>>> 6)  Cap is good, data coming up, have fun.
>>> 7)  Agent service, your agent is now wearing Z
>>> 8)  Viewer, your avatar is now wearing Z
>>>     User: Hmm, amazing inventory has not been *that* amazing lately. I'll
>>> make a backup, just in case
>>> 9)  Hello asset service A, please send me a cap for Z, here are my
>>> credentials
>>> 10) You're fine, here is the cap
>>> 11) Local asset storage, please store Z for me, here is the cap to get it
>>> 12) Hello asset service A, I want Z, here is the cap
>>> 13) Cap is good, data coming up, have fun.
>>> 14) Viewer, Z is now stored for you
>>>     User: I am ready! Let's try to get to immersive world!
>>> 15) Hello immersive world, can I get in? Here are my credentials, and a
>>> list of my stuff.
>>> 16) Asset service A, please send me a cap for X, here are my credentials
>>> (I want this cap for consistency)
>>> 17) You're fine, here is the cap
>>> 18) Asset service B, please send me a cap for Y, here are my credentials
>>> (I want this cap for consistency)
>>> 19) Very sorry, but you're not one of us, you can't have Y
>>> 20) Asset service B, please send me a cap for Z, here are my credentials
>>> (I want a cap for consistency)
>>> [Region service: Timeout... amazing inventory must be overloaded... oh
>>> well... ]
>>> 21) Agent service, you wanted to send somebody over, here are your
>>> permissions.
>>> 22) Viewer, you asked for a transfer try, here are your results
>>>      User thinks: Crap! Big asset service does not allow me to take my
>>> yellow stockings! And Amazing assets failed to deliver my zodiac dress. At
>>> least I made a backup of that dress!
>>> 23) I'll take the yellow stockings off...
>>> 24) ... done (I'll trash them here and now, forever, who needs stuff you
>>> can't use!)
>>> 25) The zodiac dress was not delivered by Big asset service, but I have
>>> a local copy!
>>> 26) Local asset service, please send me Z, here are my credentials
>>> 27) I don't know you, but I'll trust you, here is the cap, but you better
>>> store the data, it's single use, I need to protect myself.
>>> 28) Local region, can you please put this on my agent, I include the cap.
>>> 29) Local asset service, I need Z, here is the cap
>>> 30) Cap is good, data coming up, have fun.
>>> 31) Cap was only good for one time, I made a copy, but my policy is to
>>> only grant you fair use rights, at a later time I might even tell you to
>>> replace the dress.
>>> 32) Viewer, you can wear Z for now, but the asset service granted only
>>> fair use, I might ask you to replace the dress at a later time.
>>> 33) Ready at last! Off to immersive world! I hope it's not too crowded
>>> there or I'll lose my dress...
>>> 34) Hello immersive world, here are my credentials, and a list of stuff I
>>> want to bring
>>> 35) Hello asset service A, please send me a cap for X, here are my
>>> credentials
>>>     [darn, I should have kept that cap from last time...]
>>> 36) You're fine, here is the cap.
>>>    [Region service finds fair-use warning on Z and decides to make its
>>> own copy]
>>> 37) Hello Local region, can I still have Z? Here is the cap
>>> 38) Cap is still good, data coming up, have fun.
>>>    [Region service stores asset in private storage, providing a cap to
>>> replace the fair-use one]
>>> 39) Agent service, you wanted to send somebody over, here are your
>>> permissions & info.
>>> 40) Hello immersive world, just get me there, and use what you can
>>> 41) Placement done, Z is currently buffered by us as well, you need to get
>>> details for X, have fun.
>>> 42) You are now in immersive world, your dress is buffered there as well,
>>> but you need to get X!
>>> 43) Hello asset service A, I want X, here is the cap
>>> 44) Cap is good, data coming up, have fun.
>>> 45) Viewer, here is an update of the state of the world your agent is in,
>>> please render.
>>>
>>> As far as I can see this conforms fully to our charter, and I hope it is
>>> possible to use large portions of the existing code bases. However, as said
>>> above, I did not really try to capture the old thinking, and I also might
>>> have misconceptions about the way to do these things in general.
>>> Looking forward to constructive comments.
>>>
>>> -- Vaughn
>>>
>>> On Sun, Apr 3, 2011 at 8:38 PM, Vaughn Deluca <vaughn.deluca@gmail.com> wrote:
>>>
>>>> Thanks for the pointers.  I have a busy week in RL in front of me, so I
>>>> won't have too much time to respond the next few days; however, I intend to
>>>> start doing the following things:
>>>>
>>>> - Produce a visual that reflects my thinking, i.e. an illustration of my
>>>> response to Morgaine's item list above.
>>>> - Read up on the older notes, and do more reading in the list
>>>> archive
>>>> - Try to make a summary for the wiki
>>>>
>>>> Regarding the use of "domain", I think services are eventually what
>>>> counts, but it's all terminology. The way I read the AWG diagrams is that
>>>> the agent domain is actually a cluster of tightly integrated services. When
>>>> the functionality of each sub-service is described properly and with uniform
>>>> interfaces, the domain will slowly dissolve. But let's not get ahead of
>>>> ourselves. We should put up some clear descriptions on the wiki for our
>>>> views on this, and *after* that we can decide what we need and what can go.
>>>>
>>>> It's been a very useful and illuminating weekend for me, and I am a lot
>>>> more optimistic about the future of VWRAP than two weeks ago.
>>>>
>>>> -- Vaughn
>>>>
>>>>
>>>>
>>>> On Sun, Apr 3, 2011 at 7:20 PM, Dzonatas Sol <dzonatas@gmail.com> wrote:
>>>>
>>>>> Probably easy as suggested in other terms here on this list, as how the
>>>>> client contacts the asset services now in the regions. The newer issue is
>>>>> to unitize the asset services. Since there is proprietary (legacy) code, we
>>>>> can't expect that to change, and some form of proxy is needed. Whatever
>>>>> works best; I tried to narrow it down to suggestions here.
>>>>>
>>>>> Eventually, the agent domain is ideal to handle the direction of the
>>>>> asset services. This concept, unfortunately, ended support a while ago with
>>>>> changes in LL.
>>>>> Also see: http://wiki.secondlife.com/wiki/Agent_Domain
>>>>> And: http://wiki.secondlife.com/wiki/User:Dzonatas_Sol/AWG_Asset (warn: unstructured collaborative notes, dumped on me and I tried to fix)
>>>>>
>>>>> I tried to find previous visuals.
>>>>>
>>>>> I'd imagine the agent domain could grow out of unitized versions of
>>>>> asset services. Despite that, I think that concept helps view where we were
>>>>> at in discussion and what didn't happen.
>>>>>
>>>>> Vaughn Deluca wrote:
>>>>>
>>>>>> Hi Dzonatas,
>>>>>>
>>>>>> Can you expand on that, what would be needed for legacy support in
>>>>>> VWRAP terms?
>>>>>> If I want to read up on how the asset server may proxy the simulator,
>>>>>> what would you recommend me to read?
>>>>>>
>>>>>> -- Vaughn
>>>>>>
>>>>>> On Sun, Apr 3, 2011 at 5:51 AM, Dzonatas Sol <dzonatas@gmail.com> wrote:
>>>>>>
>>>>>>    Some stated the proxy-to-asset-server is built into the sim;
>>>>>>    however, keep in mind possible legacy support where the asset
>>>>>>    server may proxy the simulator.
>>>>>>
>>>>>>
>>>>>>    Dzonatas Sol wrote:
>>>>>>
>>>>>>        Somehow I feel the basic asset server being able to log in and
>>>>>>        download assets is now a priority, yet I also wonder about the best
>>>>>>        way to patch this into the current mode of viewers.
>>>>>>
>>>>>>        Maybe offer (1) by proxy (sim-side) and (2) by patch
>>>>>>        (viewer-side), so that either of these two is optional and
>>>>>>        neither is mandatory for now. Thoughts?
>>>>>>
>>>>>>        Israel Alanis wrote:
>>>>>>
>>>>>>
>>>>>>            > when designing for scalability, the model to bear in
>>>>>>            mind is ...
>>>>>>
>>>>>>            Well, there are a lot of different models to keep in mind,
>>>>>>            and many different use cases. One particular use case to
>>>>>>            keep in mind is: "User acquires new outfit, and wants to
>>>>>>            'show it off' in a highly populated region".
>>>>>>
>>>>>>            > Both worlds and asset services may include commercial,
>>>>>>            community, and personal services
>>>>>>
>>>>>>            Yes, yes and yes. I'm particularly concerned about how the
>>>>>>            model affects the ability to host personal asset services.
>>>>>>
>>>>>>            > a proxying region, which would get slammed for every
>>>>>>            asset worn by every avatar present.
>>>>>>
>>>>>>            Granted the collection of services that are provided by
>>>>>>            the region need to be scaled to meet the demands of that
>>>>>>            region. That's all part of capacity planning.
>>>>>>
>>>>>>            > regions run many different CPU-intensive tasks,
>>>>>>            including physics simulation and server-side scripting,
>>>>>>            and absolutely cannot afford to serve assets too
>>>>>>
>>>>>>            Well... who said the same CPU's have to do proxying,
>>>>>>            physics simulation and server-side scripting? Asset
>>>>>>            proxying is a different service than physics simulation
>>>>>>            and can be on separate hardware, could make use of
>>>>>>            geographically distributed caching, and in certain
>>>>>>            deployment patterns, the same caching services could be
>>>>>>            shared by different regions. (Server-side scripting is a
>>>>>>            discussion for another day).
>>>>>>
>>>>>>            > This is why we have to go parallel...
>>>>>>
>>>>>>            Totally agree, and a proxying model could and should also
>>>>>>            take advantage of parallelism.
>>>>>>
>>>>>>            > I think you're wrong that it has to cost much money.
>>>>>>            vs.
>>>>>>            > It costs money to host a high performance and scalable
>>>>>>            asset service and a high bandwidth network to handle the
>>>>>>            traffic.  A *lot* of money.
>>>>>>            I think what you're saying is: "It costs a lot of money to
>>>>>>            build a scalable asset service, but if assets are spread
>>>>>>            throughout the internet they don't have to be scalable."
>>>>>>            But that's not quite right. You're opening up every asset
>>>>>>            server to the VW equivalent of being slashdotted, so are
>>>>>>            you sure you're not forcing *every* asset service to be
>>>>>>            scalable and handle a lot of bandwidth and network traffic?
>>>>>>            It's the exact opposite of your intention, but I think
>>>>>>            that's the result, all the same.
>>>>>>
>>>>>>            This particular design decision has a big effect on the
>>>>>>            economics of the VW infrastructure. I'd rather the
>>>>>>            economics work out such that a region provider who
>>>>>>            wishes to build a region that supports a small population
>>>>>>            can do so economically. A region that wants to host a
>>>>>>            *large* population has to bear that cost of providing that
>>>>>>            scalable asset service.
>>>>>>            I want the economics of hosting a small asset service to
>>>>>>            be a non-issue (as to best promote creation and
>>>>>>            creativity). Creating a high bar to provide asset services
>>>>>>            will mean that service will cost money and people
>>>>>>            shouldn't have to pay money just to create or own VW
>>>>>>            objects (I'm using 'own' here to refer to maintaining
>>>>>>            their existence, I'm not trying to make a
>>>>>>            'leftist'/'communist' statement about ownership ;)
>>>>>>
>>>>>>            - Izzy
>>>>>>
>>>>>>
>>>>>>            On Apr 2, 2011, at 3:58 PM, Morgaine wrote:
>>>>>>
>>>>>>                Izzy, when designing for scalability, the model to
>>>>>>                bear in mind is that of seasoned virtual world
>>>>>>                travelers whose inventories contain assets from many
>>>>>>                different worlds, those assets being served by many
>>>>>>                different asset services.  Both worlds and asset
>>>>>>                services may include commercial, community, and
>>>>>>                personal services, and as the metaverse grows, that
>>>>>>                set is highly likely to become progressively less
>>>>>>                clustered and more diverse.
>>>>>>
>>>>>>                When those seasoned travelers click on an advertised
>>>>>>                VW link and perform an inter-world teleport to one
>>>>>>                particular world's region to share an experience,
>>>>>>                their "worn" assets (the only ones of interest to the
>>>>>>                region) will contain references to asset services
>>>>>>                spread widely across the Internet.  The fetches by the
>>>>>>                travelers' clients occur over many parallel paths from
>>>>>>                clients to asset services, so one can reasonably
>>>>>>                expect reduced network contention and reduced asset
>>>>>>                server loading because they are both spread out over
>>>>>>                however many asset services are being referenced by
>>>>>>                the overall set of assets in the region.
>>>>>>
>>>>>>                This is very different to the case of a proxying
>>>>>>                region, which would get slammed for every asset worn
>>>>>>                by every avatar present.  In our current architecture,
>>>>>>                regions run many different CPU-intensive tasks,
>>>>>>                including physics simulation and server-side
>>>>>>                scripting, and absolutely cannot afford to serve
>>>>>>                assets too unless your scalability requirements are
>>>>>>                very low indeed, i.e. just a few dozen avatars of
>>>>>>                today's kind.  We've hit the ceiling already on region
>>>>>>                scalability done that way.  There is nowhere to go in
>>>>>>                that direction at all beyond improving the code like
>>>>>>                Intel demonstrated, and that work is subject to a law
>>>>>>                of diminishing returns.
>>>>>>
>>>>>>                This is why we have to go parallel, and I think you're
>>>>>>                wrong that it has to cost much money.  As we spread
>>>>>>                the load across more and more asset services, we are
>>>>>>                simply better utilizing all the hardware that's
>>>>>>                already out there on the Internet, at least in respect
>>>>>>                of community and private resources.  But add to the
>>>>>>                community and private resources the commercial asset
>>>>>>                services that are likely to appear to exploit this
>>>>>>                opportunity, and not only will the number of asset
>>>>>>                services leap, but the power of each one will rocket
>>>>>>                too, because, after all, these businesses will be
>>>>>>                heavily optimized for the job.
>>>>>>
>>>>>>                As to why a world would want clients to access
>>>>>>                external asset services instead of providing its own
>>>>>>                implementation, that's an easy question.  It costs
>>>>>>                money to host a high performance and scalable asset
>>>>>>                service and a high bandwidth network to handle the
>>>>>>                traffic.  A *lot* of money.  In contrast, it costs a
>>>>>>                world nothing to let others serve the assets to
>>>>>>                clients.  And that matters to the bottom line.
>>>>>>
>>>>>>
>>>>>>                Morgaine.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>                ======================
>>>>>>
>>>>>>                On Sat, Apr 2, 2011 at 7:05 PM, Izzy Alanis
>>>>>>                <izzyalanis@gmail.com> wrote:
>>>>>>
>>>>>>                > As always though, it's a trade-off, since the
>>>>>>                > proxied design has very poor scalability compared to
>>>>>>                > the distributed one.
>>>>>>
>>>>>>                I don't agree with that... If a user enters a highly
>>>>>>                populated region, every other client is going to (could
>>>>>>                and should be trying to) hit the asset server(s) for the
>>>>>>                assets that the user is wearing (assuming they're not
>>>>>>                cached locally). Every asset server has to be scaled up
>>>>>>                to the point that it can handle that load from all
>>>>>>                over...
>>>>>>
>>>>>>                If I'm hosting a region that supports 10s of thousands
>>>>>>                of simultaneous users (thinking of the future), I
>>>>>>                already have to scale to meet that demand. If the region
>>>>>>                is proxying the assets, then, yes, the region has to be
>>>>>>                scaled to meet that asset demand too, but it already has
>>>>>>                to be scaled to meet other demands of being a region
>>>>>>                server... and why is scaling those asset proxy services
>>>>>>                hard? It's going to cost $, but is not technically
>>>>>>                challenging. So, if I want to host a region like
>>>>>>                that... sure it will cost me, but the simulation will be
>>>>>>                consistent and users will be able to participate
>>>>>>                equally, regardless of the capabilities of their
>>>>>>                individual asset services.
>>>>>>
>>>>>>
>>>>>>
>>>>>>                On Fri, Apr 1, 2011 at 11:55 PM, Morgaine
>>>>>>                <morgaine.dinova@googlemail.com> wrote:
>>>>>>                > Every design choice results in a trade-off, Vaughn,
>>>>>>                > improving one thing at the expense of something else.
>>>>>>                > If every time we offered a service we had to inform
>>>>>>                > its users about the downsides of all the trade-offs we
>>>>>>                > have made, they would have an awful lot to read. ;-)
>>>>>>                >
>>>>>>                > The specific trade-off that you are discussing is no
>>>>>>                > different. A region that proxies all content has the
>>>>>>                > "benefit" of acquiring control from the asset servers
>>>>>>                > over the items in the region, so that it can ensure
>>>>>>                > that everyone in the region not only obtains the items
>>>>>>                > but obtains the same items as everyone else. That does
>>>>>>                > indeed provide a greater guarantee of consistency than
>>>>>>                > a deployment in which the region only passes asset
>>>>>>                > URIs to clients who then fetch the items from asset
>>>>>>                > services separately. As always though, it's a
>>>>>>                > trade-off, since the proxied design has very poor
>>>>>>                > scalability compared to the distributed one.
>>>>>>                >
>>>>>>                > If we're going to warn users of the potential for
>>>>>>                > inconsistency in the distributed deployment as you
>>>>>>                > suggest, are we also going to warn them of
>>>>>>                > non-scalability in the proxied one? I really don't see
>>>>>>                > much merit in the idea of warning about design
>>>>>>                > choices. Many such choices are technical, and the
>>>>>>                > issues are quite likely to be of little interest to
>>>>>>                > non-technical users anyway. In any case, the better
>>>>>>                > services are likely to provide such information in
>>>>>>                > their online documentation, I would guess.
>>>>>>                >
>>>>>>                > You mentioned users "voting with their feet" or
>>>>>>                > choosing to accept the risk of inconsistency. Well
>>>>>>                > that will happen anyway, when services fail and users
>>>>>>                > get annoyed. If some asset services refuse to send the
>>>>>>                > requested items to some users, those services will get
>>>>>>                > a bad reputation and people will choose different
>>>>>>                > asset services instead. Likewise, if a world service
>>>>>>                > proxies everything and so it can't handle a large
>>>>>>                > number of assets or of people, users will get annoyed
>>>>>>                > at the lag and will go elsewhere. This user evaluation
>>>>>>                > and "voting with their feet" happens already with
>>>>>>                > online services all over the Internet, and I am sure
>>>>>>                > that this human process will continue to work when the
>>>>>>                > services are asset and region services.
>>>>>>                >
>>>>>>                > Back in September 2010, I wrote this post which
>>>>>>                > proposes that we use in VWRAP a form of asset
>>>>>>                > addressing that provides massive scalability at the
>>>>>>                > same time as a very high degree of resilience --
>>>>>>                > http://www.ietf.org/mail-archive/web/vwrap/current/msg00463.html
>>>>>>                > It is based on the concept of the URI containing a
>>>>>>                > host part and a hash part, where the hash is generated
>>>>>>                > (once, at the time of storage to the asset service)
>>>>>>                > using a specified digest algorithm over the content of
>>>>>>                > the asset being referenced. You may wish to note that
>>>>>>                > if this design were used, the failure of an asset
>>>>>>                > service to deliver a requested item would result in a
>>>>>>                > failover request for the item to one or more backup
>>>>>>                > services, using the same hash part but with a
>>>>>>                > different host address.
>>>>>>                >
>>>>>>                > This can go some way towards overcoming the problem
>>>>>>                > that you think might occur when assets are fetched by
>>>>>>                > clients from asset services directly. Although it
>>>>>>                > won't help when the missing item is available from
>>>>>>                > only a single asset service, it will help in many
>>>>>>                > other cases, and it will compensate for service
>>>>>>                > failures and network outages automatically at the same
>>>>>>                > time.
>>>>>>                >
>>>>>>                > PS. This design for hash-based asset addressing is
>>>>>>                > already being tested by Mojito Sorbet in her
>>>>>>                > experimental world and client. It would give
>>>>>>                > VWRAP-based worlds an improved level of service
>>>>>>                > availability, so I think it should be a core feature
>>>>>>                > of our protocol.
>>>>>>                >
>>>>>>                >
>>>>>>                > Morgaine.
>>>>>>                >
>>>>>>                >
>>>>>>                >
>>>>>>                >
>>>>>>                > ===========================
>>>>>>                >
>>>>>>                > On Fri, Apr 1, 2011 at 11:17 PM, Vaughn Deluca
>>>>>>                > <vaughn.deluca@gmail.com> wrote:
>>>>>>                >>
>>>>>>                >> This is a question I discussed with Morgaine off-list
>>>>>>                >> a while ago (I intended to send it to the list but
>>>>>>                >> pushed the wrong button...) I think we need to
>>>>>>                >> address this problem, and decide how to deal with it.
>>>>>>                >>
>>>>>>                >> In David's deployment draft, section 7.3.1.1, an
>>>>>>                >> overview is given of ways to deliver content to the
>>>>>>                >> region. One way is only passing a capability that
>>>>>>                >> allows access to (part of) the resource:
>>>>>>                >>
>>>>>>                >>   7.3.1.1. Content delivery models
>>>>>>                >>   A range of possible representations can be passed
>>>>>>                >>   to a region for simulation. [...] The other end of
>>>>>>                >>   the delivery spectrum involves passing only a URI
>>>>>>                >>   or capability used to access the rendering
>>>>>>                >>   information and a collision mesh, and related data
>>>>>>                >>   for physical simulation. In such a model, the
>>>>>>                >>   client is responsible for fetching the additional
>>>>>>                >>   information needed to render the item's visual
>>>>>>                >>   presence from a separate service. This fetch can
>>>>>>                >>   be done *under the credentials of the end user*
>>>>>>                >>   viewing the material [my emphasis--VD], and
>>>>>>                >>   divorces the simulation from the trust chain
>>>>>>                >>   needed to manage content. Any automation is done
>>>>>>                >>   on a separate host which the content creator or
>>>>>>                >>   owner trusts, interacting with the object through
>>>>>>                >>   remoted interfaces.
>>>>>>                >>
>>>>>>                >> I can see the need for such a setup; however, I feel
>>>>>>                >> we are unpleasantly close to a situation where the
>>>>>>                >> coherence of the simulation falls apart.
>>>>>>                >> In this deployment pattern the region advertises the
>>>>>>                >> presence of the asset, and *some* clients will be
>>>>>>                >> able to get it as expected, while -based on the
>>>>>>                >> arbitrary whims of the asset service- others might
>>>>>>                >> not.
>>>>>>                >>
>>>>>>                >> My hope would be that after the asset server provides
>>>>>>                >> the region with the capability to get the asset, it
>>>>>>                >> gives up control. That would mean that if the client
>>>>>>                >> finds the inventory server is unwilling to serve the
>>>>>>                >> content -in spite of the region saying it is
>>>>>>                >> present-, the client should be able to turn around
>>>>>>                >> and ask the *region* for the asset (and get it after
>>>>>>                >> all).
>>>>>>                >>
>>>>>>                >> If that is not the case -and there are probably good
>>>>>>                >> reasons for the deployment pattern as described-
>>>>>>                >> shouldn't we *warn* clients that the region might be
>>>>>>                >> inconsistent, so the users behind the client can vote
>>>>>>                >> with their feet (or take the risk)?
>>>>>>                >>
>>>>>>                >> --Vaughn
>>>>>>                >> _______________________________________________
>>>>>>                >> vwrap mailing list
>>>>>>                >> vwrap@ietf.org
>>>>>>                >> https://www.ietf.org/mailman/listinfo/vwrap
>>>>>>                >
>>>>>>                >
>>>>>>                > _______________________________________________
>>>>>>                > vwrap mailing list
>>>>>>                > vwrap@ietf.org
>>>>>>                > https://www.ietf.org/mailman/listinfo/vwrap
>>>>>>                >
>>>>>>                >
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>  ------------------------------------------------------------------------
>>>>>>
>>>>>>            _______________________________________________
>>>>>>            vwrap mailing list
>>>>>>            vwrap@ietf.org
>>>>>>            https://www.ietf.org/mailman/listinfo/vwrap
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>    --     --- https://twitter.com/Dzonatas_Sol ---
>>>>>>    Web Development, Software Engineering, Virtual Reality, Consultant
>>>>>>
>>>>>>    _______________________________________________
>>>>>>    vwrap mailing list
>>>>>>    vwrap@ietf.org
>>>>>>    https://www.ietf.org/mailman/listinfo/vwrap
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> --- https://twitter.com/Dzonatas_Sol ---
>>>>> Web Development, Software Engineering, Virtual Reality, Consultant
>>>>>
>>>>>
>>>>
>>>
>>> _______________________________________________
>>>
>>> vwrap mailing list
>>> vwrap@ietf.org
>>> https://www.ietf.org/mailman/listinfo/vwrap
>>>
>>>
>>
>