Re: [vwrap] [wvrap] Simulation consistency

Vaughn Deluca <vaughn.deluca@gmail.com> Fri, 08 April 2011 18:56 UTC

Date: Fri, 08 Apr 2011 20:57:41 +0200
Message-ID: <BANLkTinW=mZZpSy_h8_0BAe1POwBm4-MHw@mail.gmail.com>
From: Vaughn Deluca <vaughn.deluca@gmail.com>
To: vwrap@ietf.org
Subject: Re: [vwrap] [wvrap] Simulation consistency

Drat! I already found two editing mistakes. Step 20 should address A, not B,
and should read:
20) Asset service A, please send me a cap for Z, here are my credentials (I
want a cap for consistency)

And the same mix-up in step 25: "Amazing Assets" should be mentioned instead
of "Big assets":
25) The zodiac dress was not delivered by Amazing Assets, but I have a local
copy!

I will add an updated version of the description on the vwrap wiki.

-- Vaughn

On Fri, Apr 8, 2011 at 6:40 PM, Vaughn Deluca <vaughn.deluca@gmail.com> wrote:

> VWRAP services high level message flow (preliminary diagram draft) see
>
>
> http://trac.tools.ietf.org/wg/vwrap/trac/attachment/wiki/Diagrams/VWRAP_FlowExample_VD1.pdf
>
> The main reason I am submitting this in spite of my lack of formal
> expertise is that, in my view, the group badly needs a solid basis for
> discussion to prevent endlessly repeating loops. This example is probably
> wrong in many ways, but it is better than what we have publicly available on
> interop now (although Morgaine is working on something along the lines of
> the recent discussions here).
>
> I hope this diagram will give us a base for discussion. I could have done
> my homework better by researching the old OGP stuff in more depth, and I
> probably will do so in the future, but for now I just tried to follow
> the general principles as far as I understand them, to see what response
> that yields from the group. In other words, I am trying to let the group
> educate me :p
>
> Note that in my view all services are equal; in principle it does not
> matter in what "domain" they run, since trust and policy are fully
> localized. It is, however, quite possible to have internal shortcuts in the
> services to speed up processing.
>
> In the example I opted for an external agent service, but I could just as
> well have incorporated it into the set of local services. As indicated
> above, all services could also be run by different organisations, true to
> what VWRAP stands for. It is all up to the deployer, including a user at
> home who might want to run a full world for family and friends. Those
> friends might try to use that agent service to venture out into the virtual
> universe.
> I envision that the final identity provider is external, using OpenID and
> OAuth, or whatever other magic exists out there that I do not yet fully
> understand.
>
> The example has 3 main purposes:
> - Provide a reference for discussion
> - Illustrate the use case of tourism, and *true* interop
> - Illustrate consistency problems along the lines discussed higher up
> in this thread, as well as the "slashdot" problem that Morgaine outlined so
> clearly.
>
> The message flow assumes an avatar already present in some region (a
> small-scale local home region in this case, but that is by no means
> essential; it could be a built-in region in the viewer or a big commercial
> region). The user is preparing for a trip to immersive world, and after
> some outfit adjustments moves over.
>
> Finally, I apologize for the simplistic notation used here. I simply add
> the most relevant parameters, passed in square brackets, to a keyword
> specifying the nature of the message. Please improve on that where needed.
>
> So here we go, the avatar is prepared for a visit to "immersive world":
> 0)  Viewer, here is an update of the state of the world your agent is in,
> please render.
> 1)  Agent service, I will go in my Zodiac dress that I keep in the
>  "Amazing assets" service.
> 2)  Asset service A, please send a cap for Z, here are my credentials
> 3)  You're fine, here is the cap
> 4)  Local region, can you please put this on my agent, I included the cap.
> 5)  Hello asset service A, I need Z, here is the cap
> 6)  Cap is good, data coming up, have fun.
> 7)  Agent service, your agent is now wearing Z
> 8)  Viewer, your avatar is now wearing Z
>     User: Hmm, Amazing inventory has not been *that* amazing lately. I'll
> make a backup, just in case
> 9)  Hello asset service A, please send me a cap for Z, here are my
> credentials
> 10) You're fine, here is the cap
> 11) Local asset storage, please store Z for me, here is the cap to get it
> 12) Hello asset service A, I want Z, here is the cap
> 13) Cap is good, data coming up, have fun.
> 14) Viewer, Z is now stored for you
>     User: I am ready! Let's try to get to immersive world!
> 15) Hello immersive world, can I get in? Here are my credentials, and a
> list of my stuff.
> 16) Asset service A, please send me a cap for X, here are my credentials (I
> want this cap for consistency)
> 17) You're fine, here is the cap
> 18) Asset service B, please send me a cap for Y, here are my credentials (I
> want this cap for consistency)
> 19) Very sorry, but you're not one of us, you can't have Y
> 20) Asset service B, please send me a cap for Z, here are my credentials (I
> want a cap for consistency)
> [Region service: Timeout... Amazing inventory must be overloaded... oh
> well...]
> 21) Agent service, you wanted to send somebody over, here are your
> permissions.
> 22) Viewer, you asked for a transfer try, here are your results
>      User thinks: Crap! Big asset service does not allow me to take my
> yellow stockings! And Amazing assets failed to deliver my zodiac dress. At
> least I made a backup of that dress!
> 23) I'll take the yellow stockings off...
> 24) ... done (I'll trash them here and now, forever, who needs stuff you
> can't use!)
> 25) The zodiac dress was not delivered by Big assets service, but I have a
> local copy!
> 26) Local asset service, please send me Z, here are my credentials
> 27) I don't know you, but I'll trust you, here is the cap, but you had
> better store the data, it's single-use, I need to protect myself.
> 28) Local region, can you please put this on my agent, I included the cap.
> 29) Local asset service, I need Z, here is the cap
> 30) Cap is good, data coming up, have fun.
> 31) Cap was only good for one use, I made a copy, but my policy is to only
> grant you fair-use rights, at a later time I might even tell you to replace
> the dress.
> 32) Viewer, you can wear Z for now, but the asset service granted only fair
> use, I might ask you to replace the dress at a later time.
> 33) Ready at last! Off to immersive world! I hope it's not too crowded
> there or I'll lose my dress...
> 34) Hello immersive world, here are my credentials, and a list of stuff I
> want to bring
> 35) Hello asset service A, please send me a cap for X, here are my
> credentials
>     [darn, I should have kept that cap from last time...]
> 36) You're fine, here is the cap.
>    [Region service finds fair-use warning on Z and decides to make its own
> copy]
> 37) Hello local region, can I still have Z? Here is the cap
> 38) Cap is still good, data coming up, have fun.
>    [Region service stores asset in private storage, providing a cap to
> replace the fair-use one]
> 39) Agent service, you wanted to send somebody over, here are your
> permissions & info.
> 40) Hello immersive world, just get me there, and use what you can
> 41) Placement done, Z is currently buffered by us as well, you need to get
> details for X, have fun.
> 42) You are now in immersive world, your dress is buffered there as well,
> but you need to get X!
> 43) Hello asset service A, I want X, here is the cap
> 44) Cap is good, data coming up, have fun.
> 45) Viewer, here is an update of the state of the world your agent is in,
> please render.
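The cap exchanges in the flow above (e.g. steps 2-6, and again in 9-13) can be sketched in code. This is a hedged, hypothetical in-memory model, not a VWRAP-defined API: the class name, method names, and token format are all invented for illustration.

```python
import secrets

# Hypothetical in-memory asset service illustrating the cap exchange in
# steps 2-6: the agent presents credentials to get a capability, then any
# holder of that cap (here, the region) can fetch the asset without
# credentials. All names are illustrative assumptions.
class AssetService:
    def __init__(self, assets, trusted_users):
        self.assets = assets          # asset id -> data
        self.trusted = trusted_users  # credentials we accept
        self.caps = {}                # cap token -> asset id

    def grant_cap(self, credentials, asset_id):
        """Steps 2/3: check credentials, hand back an opaque cap."""
        if credentials not in self.trusted or asset_id not in self.assets:
            return None  # step 19: "you're not one of us"
        cap = secrets.token_urlsafe(16)
        self.caps[cap] = asset_id
        return cap

    def fetch(self, cap):
        """Steps 5/6: redeem the cap; no credentials needed."""
        asset_id = self.caps.get(cap)
        return self.assets.get(asset_id) if asset_id else None

amazing = AssetService({"Z": b"zodiac dress mesh"}, {"vaughn-creds"})
cap = amazing.grant_cap("vaughn-creds", "Z")  # steps 2-3 (the agent)
data = amazing.fetch(cap)                     # steps 5-6 (the region)
print(data)  # b'zodiac dress mesh'
```

A single-use cap, as in step 27, would simply `pop` the token from `self.caps` on fetch instead of leaving it valid.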
>
> As far as I can see this conforms fully to our charter, and I hope it is
> possible to use large portions of the existing code bases. However, as said
> above, I did not really try to capture the old thinking, and I also might
> have misconceptions about the way to do these things in general.
> Looking forward to constructive comments.
>
> -- Vaughn
>
> On Sun, Apr 3, 2011 at 8:38 PM, Vaughn Deluca <vaughn.deluca@gmail.com> wrote:
>
>> Thanks for the pointers. I have a busy week in RL in front of me, so I
>> won't have too much time to respond in the next few days; however, I
>> intend to start doing the following things:
>>
>> - Produce a visual that reflects my thinking, i.e. an illustration of my
>> response to Morgaine's item list above.
>> - Read up on the older notes, as well as do more reading in the list archive
>> - Try to make a summary for the wiki
>>
>> Regarding the use of "domain", I think services are eventually what
>> counts, but it is all terminology. The way I read the AWG diagrams is that
>> the agent domain is actually a cluster of tightly integrated services.
>> When the functionality of each sub-service is described properly and with
>> uniform interfaces, the domain will slowly dissolve. But let's not get
>> ahead of ourselves. We should put up some clear descriptions of our views
>> on this on the wiki, and *after* that we can decide what we need and what
>> can go.
>>
>> It has been a very useful and illuminating weekend for me, and I am a lot
>> more optimistic about the future of vwrap than two weeks ago.
>>
>> -- Vaughn
>>
>>
>>
>> On Sun, Apr 3, 2011 at 7:20 PM, Dzonatas Sol <dzonatas@gmail.com> wrote:
>>
>>> Probably easy, as suggested in other terms here on this list, given how
>>> the client contacts the asset services now in the regions. The newer
>>> issue is to unitize the asset services. Since there is proprietary
>>> (legacy) code, we can't expect that to change, and some form of proxy is
>>> needed. Whatever works best, I tried to narrow it down to suggestions here.
>>>
>>> Eventually, the agent domain is ideal to handle the direction of the
>>> asset services. This concept, unfortunately, lost support a while ago
>>> with changes at LL.
>>> Also see: http://wiki.secondlife.com/wiki/Agent_Domain
>>> And: http://wiki.secondlife.com/wiki/User:Dzonatas_Sol/AWG_Asset (warning:
>>> unstructured collaborative notes, dumped on me, which I tried to fix)
>>>
>>> I tried to find previous visuals.
>>>
>>> I'd imagine the agent domain could grow out of unitized versions of the
>>> asset services. Despite that, I think the concept helps show where we
>>> were in the discussion and what didn't happen.
>>>
>>> Vaughn Deluca wrote:
>>>
>>>> Hi Dzonatas,
>>>>
>>>> Can you expand on that? What would be needed for legacy support in VWRAP
>>>> terms?
>>>> If I want to read up on how the asset server may proxy the simulator,
>>>> what would you recommend me to read?
>>>>
>>>> -- Vaughn
>>>>
>>>> On Sun, Apr 3, 2011 at 5:51 AM, Dzonatas Sol <dzonatas@gmail.com> wrote:
>>>>
>>>>    Some stated the proxy-to-asset-server is built into the sim;
>>>>    however, keep in mind possible legacy support where the asset
>>>>    server may proxy the simulator.
>>>>
>>>>
>>>>    Dzonatas Sol wrote:
>>>>
>>>>        Somehow I feel the basic asset server being able to login and
>>>>        download assets is now priority, yet I also wondered the best
>>>>        way to patch this into the current mode of viewers.
>>>>
>>>>        Maybe offer (1) by proxy (sim-side) and (2) by patch
>>>>        (viewer-side) that either of these two are optional and
>>>>        neither are mandatory for now. Thoughts?
>>>>
>>>>        Israel Alanis wrote:
>>>>
>>>>
>>>>            > when designing for scalability, the model to bear in
>>>>            > mind is ...
>>>>
>>>>            Well, there are a lot of different models to keep in mind,
>>>>            and many different use cases. One particular use case to
>>>>            keep in mind is: "User acquires new outfit, and wants to
>>>>            'show it off' in a highly populated region".
>>>>
>>>>            > Both worlds and asset services may include commercial,
>>>>            > community, and personal services
>>>>
>>>>            Yes, yes and yes. I'm particularly concerned about how the
>>>>            model affects the ability to host personal asset services.
>>>>
>>>>            > a proxying region, which would get slammed for every
>>>>            > asset worn by every avatar present.
>>>>
>>>>            Granted the collection of services that are provided by
>>>>            the region need to be scaled to meet the demands of that
>>>>            region. That's all part of capacity planning.
>>>>
>>>>            > regions run many different CPU-intensive tasks,
>>>>            > including physics simulation and server-side scripting,
>>>>            > and absolutely cannot afford to serve assets too
>>>>
>>>>            Well... who said the same CPUs have to do proxying,
>>>>            physics simulation and server-side scripting? Asset
>>>>            proxying is a different service than physics simulation
>>>>            and can be on separate hardware, could make use of
>>>>            geographically distributed caching, and in certain
>>>>            deployment patterns, the same caching services could be
>>>>            shared by different regions. (Server-side scripting is a
>>>>            discussion for another day).
>>>>
>>>>            > This is why we have to go parallel...
>>>>
>>>>            Totally agree, and a proxying model could and should also
>>>>            take advantage of parallelism.
>>>>
>>>>            > I think you're wrong that it has to cost much money.
>>>>            vs.
>>>>            > It costs money to host a high performance and scalable
>>>>            > asset service and a high bandwidth network to handle the
>>>>            > traffic. A *lot* of money.
>>>>
>>>>            I think what you're saying is: "It costs a lot of money to
>>>>            build a scalable asset service, but if assets are spread
>>>>            throughout the internet they don't have to be scalable."
>>>>            But that's not quite right. You're opening up every asset
>>>>            server to the VW equivalent of being slashdotted, so are
>>>>            you sure you're not forcing *every* asset service to be
>>>>            scalable and handle a lot of bandwidth and network traffic?
>>>>            It's the exact opposite of your intention, but I think
>>>>            that's the result, all the same.
>>>>
>>>>            This particular design decision has a big effect on the
>>>>            economics of the VW infrastructure. I'd rather the
>>>>            economics to work out such that a region provider who
>>>>            wishes to build a region that supports a small population,
>>>>            can do so economically. A region that wants to host a
>>>>            *large* population has to bear that cost of providing that
>>>>            scalable asset service.
>>>>            I want the economics of hosting a small asset service to
>>>>            be a non-issue (as to best promote creation and
>>>>            creativity). Creating a high bar to provide asset services
>>>>            will mean that service will cost money and people
>>>>            shouldn't have to pay money just to create or own VW
>>>>            objects (I'm using 'own' here to refer to maintaining
>>>>            their existence, I'm not trying to make a
>>>>            'leftist'/'communist' statement about ownership ;)
>>>>
>>>>            - Izzy
>>>>
>>>>
>>>>            On Apr 2, 2011, at 3:58 PM, Morgaine wrote:
>>>>
>>>>                Izzy, when designing for scalability, the model to
>>>>                bear in mind is that of seasoned virtual world
>>>>                travelers whose inventories contain assets from many
>>>>                different worlds, those assets being served by many
>>>>                different asset services. Both worlds and asset
>>>>                services may include commercial, community, and
>>>>                personal services, and as the metaverse grows, that
>>>>                set is highly likely to become progressively less
>>>>                clustered and more diverse.
>>>>
>>>>                When those seasoned travelers click on an advertised
>>>>                VW link and perform an inter-world teleport to one
>>>>                particular world's region to share an experience,
>>>>                their "worn" assets (the only ones of interest to the
>>>>                region) will contain references to asset services
>>>>                spread widely across the Internet. The fetches by the
>>>>                travelers' clients occur over many parallel paths from
>>>>                clients to asset services, so one can reasonably
>>>>                expect reduced network contention and reduced asset
>>>>                server loading, because they are both spread out over
>>>>                however many asset services are being referenced by
>>>>                the overall set of assets in the region.
>>>>
>>>>                This is very different to the case of a proxying
>>>>                region, which would get slammed for every asset worn
>>>>                by every avatar present. In our current architecture,
>>>>                regions run many different CPU-intensive tasks,
>>>>                including physics simulation and server-side
>>>>                scripting, and absolutely cannot afford to serve
>>>>                assets too, unless your scalability requirements are
>>>>                very low indeed, i.e. just a few dozen avatars of
>>>>                today's kind. We've hit the ceiling already on region
>>>>                scalability done that way. There is nowhere to go in
>>>>                that direction at all beyond improving the code like
>>>>                Intel demonstrated, and that work is subject to a law
>>>>                of diminishing returns.
>>>>
>>>>                This is why we have to go parallel, and I think you're
>>>>                wrong that it has to cost much money. As we spread
>>>>                the load across more and more asset services, we are
>>>>                simply better utilizing all the hardware that's
>>>>                already out there on the Internet, at least in respect
>>>>                of community and private resources. But add to the
>>>>                community and private resources the commercial asset
>>>>                services that are likely to appear to exploit this
>>>>                opportunity, and not only will the number of asset
>>>>                services leap, but the power of each one will rocket
>>>>                too, because, after all, these businesses will be
>>>>                heavily optimized for the job.
>>>>
>>>>                As to why a world would want clients to access
>>>>                external asset services instead of providing its own
>>>>                implementation, that's an easy question. It costs
>>>>                money to host a high performance and scalable asset
>>>>                service and a high bandwidth network to handle the
>>>>                traffic. A *lot* of money. In contrast, it costs a
>>>>                world nothing to let others serve the assets to
>>>>                clients. And that matters to the bottom line.
>>>>
>>>>
>>>>                Morgaine.
>>>>
>>>>
>>>>
>>>>
>>>>                ======================
>>>>
>>>>                On Sat, Apr 2, 2011 at 7:05 PM, Izzy Alanis
>>>>                <izzyalanis@gmail.com> wrote:
>>>>
>>>>                   > As always though, it's a trade-off, since the
>>>>                   > proxied design has very poor scalability
>>>>                   > compared to the distributed one.
>>>>
>>>>                   I don't agree with that... If a user enters a
>>>>                   highly populated region, every other client is
>>>>                   going to (could and should be trying to) hit the
>>>>                   asset server(s) for the assets that the user is
>>>>                   wearing (assuming they're not cached locally).
>>>>                   Every asset server has to be scaled up to the
>>>>                   point that it can handle that load from all over...
>>>>
>>>>                   If I'm hosting a region that supports 10s of
>>>>                   thousands of simultaneous users (thinking of the
>>>>                   future), I already have to scale to meet that
>>>>                   demand. If the region is proxying the assets,
>>>>                   then, yes, the region has to be scaled to meet
>>>>                   that asset demand too, but it already has to be
>>>>                   scaled to meet other demands of being a region
>>>>                   server... and why is scaling those asset proxy
>>>>                   services hard? It's going to cost $, but it is
>>>>                   not technically challenging. So, if I want to
>>>>                   host a region like that... sure it will cost me,
>>>>                   but the simulation will be consistent and users
>>>>                   will be able to participate equally, regardless
>>>>                   of the capabilities of their individual asset
>>>>                   services.
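The load argument in this exchange can be put in rough numbers. This is a hedged back-of-envelope sketch; the avatar, asset, and service counts are illustrative assumptions, not figures from the thread.

```python
# Back-of-envelope comparison of the two delivery models discussed above:
# a region proxying every asset vs. clients fetching from distributed
# asset services directly. Numbers are illustrative assumptions.
def proxied_requests(avatars, worn_assets):
    # Every client fetches every other avatar's worn assets through the
    # single region proxy: roughly N * (N - 1) * A requests on the region.
    return avatars * (avatars - 1) * worn_assets

def distributed_requests_per_service(avatars, worn_assets, services):
    # The same fetches go straight to asset services; if the references
    # are spread evenly, each service sees only a 1/services share.
    return avatars * (avatars - 1) * worn_assets // services

N, A, S = 100, 20, 50  # avatars, worn assets each, distinct asset services
print(proxied_requests(N, A))                     # 198000 on one region
print(distributed_requests_per_service(N, A, S))  # 3960 per asset service
```

The quadratic N * (N - 1) term is why both sides agree the "highly populated region" case is the stress test; they differ on who should bear it.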
>>>>
>>>>
>>>>
>>>>
>>>>                   On Fri, Apr 1, 2011 at 11:55 PM, Morgaine
>>>>                   <morgaine.dinova@googlemail.com> wrote:
>>>>                   > Every design choice results in a trade-off,
>>>>                   > Vaughn, improving one thing at the expense of
>>>>                   > something else. If every time we offered a
>>>>                   > service we had to inform its users about the
>>>>                   > downsides of all the trade-offs we have made,
>>>>                   > they would have an awful lot to read. ;-)
>>>>                   >
>>>>                   > The specific trade-off that you are discussing
>>>>                   > is no different. A region that proxies all
>>>>                   > content has the "benefit" of acquiring control
>>>>                   > from the asset servers over the items in the
>>>>                   > region, so that it can ensure that everyone in
>>>>                   > the region not only obtains the items but
>>>>                   > obtains the same items as everyone else. That
>>>>                   > does indeed provide a greater guarantee of
>>>>                   > consistency than a deployment in which the
>>>>                   > region only passes asset URIs to clients who
>>>>                   > then fetch the items from asset services
>>>>                   > separately. As always though, it's a trade-off,
>>>>                   > since the proxied design has very poor
>>>>                   > scalability compared to the distributed one.
>>>>                � �>
>>>>                   > If we're going to warn users of the potential
>>>>                   > for inconsistency in the distributed deployment
>>>>                   > as you suggest, are we also going to warn them
>>>>                   > of non-scalability in the proxied one? I really
>>>>                   > don't see much merit in the idea of warning
>>>>                   > about design choices. Many such choices are
>>>>                   > technical, and the issues are quite likely to
>>>>                   > be of little interest to non-technical users
>>>>                   > anyway. In any case, the better services are
>>>>                   > likely to provide such information in their
>>>>                   > online documentation, I would guess.
>>>>                   >
>>>>                   > You mentioned users "voting with their feet"
>>>>                   > or choosing to accept the risk of
>>>>                   > inconsistency. Well that will happen anyway,
>>>>                   > when services fail and users get annoyed. If
>>>>                   > some asset services refuse to send the
>>>>                   > requested items to some users, those services
>>>>                   > will get a bad reputation and people will
>>>>                   > choose different asset services instead.
>>>>                   > Likewise, if a world service proxies everything
>>>>                   > and so can't handle a large number of assets or
>>>>                   > of people, users will get annoyed at the lag
>>>>                   > and will go elsewhere. This user evaluation and
>>>>                   > "voting with their feet" happens already with
>>>>                   > online services all over the Internet, and I am
>>>>                   > sure that this human process will continue to
>>>>                   > work when the services are asset and region
>>>>                   > services.
>>>>                   >
>>>>                   > Back in September 2010, I wrote this post
>>>>                   > which proposes that we use in VWRAP a form of
>>>>                   > asset addressing that provides massive
>>>>                   > scalability at the same time as a very high
>>>>                   > degree of resilience --
>>>>                   > http://www.ietf.org/mail-archive/web/vwrap/current/msg00463.html
>>>>                   > It is based on the concept of the URI
>>>>                   > containing a host part and a hash part, where
>>>>                   > the hash is generated (once, at the time of
>>>>                   > storage to the asset service) using a specified
>>>>                   > digest algorithm over the content of the asset
>>>>                   > being referenced. You may wish to note that if
>>>>                   > this design were used, the failure of an asset
>>>>                   > service to deliver a requested item would
>>>>                   > result in a failover request for the item to
>>>>                   > one or more backup services, using the same
>>>>                   > hash part but with a different host address.
>>>>                   >
>>>>                   > This can go some way towards overcoming the
>>>>                   > problem that you think might occur when assets
>>>>                   > are fetched by clients from asset services
>>>>                   > directly. Although it won't help when the
>>>>                   > missing item is available from only a single
>>>>                   > asset service, it will help in many other
>>>>                   > cases, and it will compensate for service
>>>>                   > failures and network outages automatically at
>>>>                   > the same time.
>>>>                   >
>>>>                   > PS. This design for hash-based asset addressing
>>>>                   > is already being tested by Mojito Sorbet in her
>>>>                   > experimental world and client. It would give
>>>>                   > VWRAP-based worlds an improved level of service
>>>>                   > availability, so I think it should be a core
>>>>                   > feature of our protocol.
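Morgaine's hash-plus-host addressing with failover can be sketched in code. This is a hedged illustration of the idea only: the `vwrap://` URI layout, the SHA-256 choice, and the store/fetch API are all assumptions, not anything the proposal specifies.

```python
import hashlib

# Sketch of hash-based asset addressing: the URI carries a host part and
# a content-hash part, so a failed fetch can be retried against a mirror
# using the same hash, and the bytes can be verified against the hash.
def asset_uri(host, data):
    digest = hashlib.sha256(data).hexdigest()  # digest algorithm assumed
    return f"vwrap://{host}/asset/{digest}", digest

def fetch_with_failover(digest, hosts, stores):
    """Try each host in turn; accept only content matching the hash."""
    for host in hosts:
        data = stores.get(host, {}).get(digest)
        if data is not None and hashlib.sha256(data).hexdigest() == digest:
            return data, host
    return None, None

dress = b"zodiac dress mesh"
uri, digest = asset_uri("assets.amazing.example", dress)
stores = {
    "assets.amazing.example": {},       # primary is down / lost the asset
    "backup.example": {digest: dress},  # mirror holds the same bytes
}
data, served_by = fetch_with_failover(
    digest, ["assets.amazing.example", "backup.example"], stores)
print(served_by)  # backup.example
```

Because the hash names the content rather than its location, any mirror can serve the asset and the client can detect a corrupted or substituted copy.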
>>>>                   >
>>>>                   > Morgaine.
>>>>                   >
>>>>                   > ===========================
>>>>                   >
>>>> > On Fri, Apr 1, 2011 at 11:17 PM, Vaughn Deluca
>>>> > <vaughn.deluca@gmail.com> wrote:
>>>> >>
>>>> >> This is a question I discussed with Morgaine off-list a
>>>> >> while ago (I intended to send it to the list but pushed
>>>> >> the wrong button...). I think we need to address this
>>>> >> problem and decide how to deal with it.
>>>> >>
>>>> >> In David's deployment draft, section 7.3.1.1, an overview
>>>> >> is given of ways to deliver content to the region. One way
>>>> >> is to pass only a capability that allows access to (part
>>>> >> of) the resource:
>>>> >>
>>>> >>     7.3.1.1. Content delivery models
>>>> >>     A range of possible representations can be passed to
>>>> >>     a region for simulation. [...] The other end of the
>>>> >>     delivery spectrum involves passing only a URI or
>>>> >>     capability used to access the rendering information
>>>> >>     and a collision mesh, and related data for physical
>>>> >>     simulation. In such a model, the client is responsible
>>>> >>     for fetching the additional information needed to
>>>> >>     render the item's visual presence from a separate
>>>> >>     service. This fetch can be done *under the credentials
>>>> >>     of the end user* viewing the material [my emphasis
>>>> >>     --VD], and divorces the simulation from the trust
>>>> >>     chain needed to manage content. Any automation is done
>>>> >>     on a separate host which the content creator or owner
>>>> >>     trusts, interacting with the object through remoted
>>>> >>     interfaces.
>>>> >>
>>>> >> I can see the need for such a setup; however, I feel we
>>>> >> are unpleasantly close to a situation where the coherence
>>>> >> of the simulation falls apart. In this deployment pattern
>>>> >> the region advertises the presence of the asset, and
>>>> >> *some* clients will be able to get it as expected, while
>>>> >> others - based on the arbitrary whims of the asset
>>>> >> service - might not.
>>>> >>
>>>> >> My hope would be that after the asset server provides the
>>>> >> region with the capability to get the asset, it gives up
>>>> >> control. That would mean that if the client finds the
>>>> >> inventory server unwilling to serve the content - in spite
>>>> >> of the region saying it is present - the client should be
>>>> >> able to turn around and ask the *region* for the asset
>>>> >> (and get it after all).
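[Vaughn's proposed fallback could be sketched roughly as follows. The two capability parameters and the caller-supplied `fetch` callable are illustrative only; nothing in the deployment draft names them.]

```python
def fetch_with_region_fallback(asset_cap, region_cap, fetch):
    """Try the asset service's capability first; if it refuses,
    ask the region itself for the asset.

    `fetch` maps a capability URL to the asset bytes and raises
    OSError when the service refuses or fails.
    """
    try:
        return fetch(asset_cap)
    except OSError:
        # The region advertised the asset, so under this proposal
        # it stays answerable for it when the asset service balks.
        return fetch(region_cap)
```

[Usage: a client would pass the capability it got from the region's object description as `asset_cap` and a region-served endpoint as `region_cap`, so the region's advertisement and the deliverable content cannot silently diverge.]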
>>>>                � �>>
>>>>                � �>> �If that is not the case, -and there are
>>>>                probably good reasons
>>>>                � �for the
>>>>                � �>> deployment pattern as described- �shouldn't we
>>>>                *warn* clients
>>>>                � �that the
>>>>                � �>> region might be inconsistent, so the users
>>>>                behind the client
>>>>                � �can vote
>>>>                � �>> with their feet, (or take the risk)?
>>>> >>
>>>> >> --Vaughn
>>>> >> _______________________________________________
>>>> >> vwrap mailing list
>>>> >> vwrap@ietf.org
>>>> >> https://www.ietf.org/mailman/listinfo/vwrap
>>>> --
>>>> --- https://twitter.com/Dzonatas_Sol ---
>>>> Web Development, Software Engineering, Virtual Reality, Consultant
>>>>