Re: [vwrap] [wvrap] Simulation consistency

Dzonatas Sol <dzonatas@gmail.com> Sun, 03 April 2011 17:18 UTC

Message-ID: <4D98AC5F.70501@gmail.com>
Date: Sun, 03 Apr 2011 10:20:31 -0700
From: Dzonatas Sol <dzonatas@gmail.com>
User-Agent: Mozilla-Thunderbird 2.0.0.24 (X11/20100329)
MIME-Version: 1.0
To: Vaughn Deluca <vaughn.deluca@gmail.com>
References: <BANLkTint6CiMRZWj59sEYM2j7VoKgz4-Bw@mail.gmail.com> <AANLkTimuVubm5Becx8cg_Uq2Gdj8EjHL7maMyqWOeYCJ@mail.gmail.com> <AANLkTi=0iBKxo0_yv2LWsExzrKUjJLqP5Ua2uHB=M_7d@mail.gmail.com> <AANLkTi=QH+c-19PvavnXU+pgWyaqpAA0F5G5SMd6h4JR@mail.gmail.com> <5365485D-FFAE-46CA-B04E-D413E85FB1D1@gmail.com> <4D97E7FE.7010104@gmail.com> <4D97EEC1.7020207@gmail.com> <BANLkTi=9CXCtb=ryFtMuyG2w9ifb-2urkA@mail.gmail.com>
In-Reply-To: <BANLkTi=9CXCtb=ryFtMuyG2w9ifb-2urkA@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"; format="flowed"
Content-Transfer-Encoding: 8bit
Cc: vwrap@ietf.org
Subject: Re: [vwrap] [wvrap] Simulation consistency

This is probably easy, as suggested in other terms here on this list: much 
like how the client contacts the asset services in the regions now. The newer 
issue is how to unitize those asset services. Since there is proprietary 
(legacy) code that we can't expect to change, some form of proxy is needed. 
Whatever works best, I have tried to narrow it down to the suggestions here.
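
To make "some form of proxy" concrete, here is a minimal sketch of the kind of 
thin shim I mean: a small HTTP front end that forwards asset requests to a 
legacy asset service unchanged. The host name, port, and URL layout are 
illustrative assumptions, not anything taken from the drafts.

    # Sketch only: a thin proxy in front of a legacy asset service.
    # LEGACY_ASSET_BASE and the request path layout are assumptions.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    LEGACY_ASSET_BASE = "http://legacy.example.org"

    class AssetProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the request path to the legacy service and relay the reply.
            with urlopen(LEGACY_ASSET_BASE + self.path) as resp:
                body = resp.read()
                ctype = resp.headers.get("Content-Type", "application/octet-stream")
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), AssetProxy).serve_forever()

The same shape works in the other direction (an asset service proxying the 
simulator); only the upstream base URL changes.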

Eventually, the agent domain would be the ideal place to handle the direction 
of the asset services. Unfortunately, support for this concept ended a while 
ago with changes at LL.
Also see: http://wiki.secondlife.com/wiki/Agent_Domain
And: http://wiki.secondlife.com/wiki/User:Dzonatas_Sol/AWG_Asset (warning: 
unstructured collaborative notes that were dumped on me and that I tried to fix)

I tried to find previous visuals.

I'd imagine the agent domain could grow out of unitized versions of the 
asset services. Even so, I think the concept helps show where we were in 
the discussion and what didn't happen.

Vaughn Deluca wrote:
> Hi Dzonatas,
>
> Can you expand on that? What would be needed for legacy support in 
> VWRAP terms? If I want to read up on how the asset server may proxy 
> the simulator, what would you recommend I read?
>
> -- Vaughn
>
> On Sun, Apr 3, 2011 at 5:51 AM, Dzonatas Sol <dzonatas@gmail.com> wrote:
>
>     Some stated the proxy-to-asset-server is built into the sim;
>     however, keep in mind possible legacy support where the asset
>     server may proxy the simulator.
>
>
>     Dzonatas Sol wrote:
>
>         Somehow I feel that a basic asset server being able to log in and
>         download assets is now the priority, yet I also wonder about the
>         best way to patch this into the current mode of viewers.
>
>         Maybe offer (1) a proxy (sim-side) and (2) a patch (viewer-side),
>         where either of the two is optional and neither is mandatory for
>         now. Thoughts?
>
>         Israel Alanis wrote:
>
>
>             > when designing for scalability, the model to bear in
>             mind is ...
>
>             Well, there are a lot of different models to keep in mind,
>             and many different use cases. One particular use case to
>             keep in mind is: "User acquires new outfit, and wants to
>             'show it off' in a highly populated region".
>
>             > Both worlds and asset services may include commercial,
>             community, and personal services
>
>             Yes, yes and yes. I'm particularly concerned about how the
>             model affects the ability to host personal asset services.
>
>             > a proxying region, which would get slammed for every
>             asset worn by every avatar present.
>
>             Granted the collection of services that are provided by
>             the region need to be scaled to meet the demands of that
>             region. That's all part of capacity planning.
>
>             > regions run many different CPU-intensive tasks,
>             including physics simulation and server-side scripting,
>             and absolutely cannot afford to serve assets too
>
>             Well... who said the same CPUs have to do proxying,
>             physics simulation and server-side scripting? Asset
>             proxying is a different service than physics simulation
>             and can be on separate hardware, could make use of
>             geographically distributed caching, and in certain
>             deployment patterns, the same caching services could be
>             shared by different regions. (Server-side scripting is a
>             discussion for another day).
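
To illustrate the shared-cache idea above: a region-side asset proxy can use a 
plain cache-aside lookup, so only cache misses ever reach the origin asset 
service, and the same cache tier could be shared by several regions. The cache 
class and URIs below are stand-ins, not anything from the VWRAP drafts.

    # Sketch: cache-aside asset fetching for a region-side proxy.
    # SharedAssetCache stands in for a cache tier (memcached, CDN, ...)
    # that several regions could share.
    import hashlib
    from urllib.request import urlopen

    class SharedAssetCache:
        def __init__(self):
            self._store = {}

        def get(self, key):
            return self._store.get(key)

        def put(self, key, value):
            self._store[key] = value

    def fetch_asset(asset_uri, cache):
        key = hashlib.sha256(asset_uri.encode()).hexdigest()
        data = cache.get(key)
        if data is None:                 # only misses hit the origin service
            with urlopen(asset_uri) as resp:
                data = resp.read()
            cache.put(key, data)
        return data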
>
>             > This is why we have to go parallel...
>
>             Totally agree, and a proxying model could and should also
>             take advantage of parallelism.
>
>             > I think you're wrong that it has to cost much money.
>             vs.
>             > It costs money to host a high performance and scalable
>             asset service and a high bandwidth network to handle the
>             traffic. A *lot* of money.
>
>             I think what you're saying is: "It costs a lot of money to
>             build a scalable asset service, but if assets are spread
>             throughout the internet they don't have to be scalable."
>             But that's not quite right. You're opening up every asset
>             server to the VW equivalent of being slashdotted, so are
>             you sure you're not forcing *every* asset service to be
>             scalable and handle a lot of bandwidth and network traffic?
>             It's the exact opposite of your intention, but I think
>             that's the result, all the same.
>
>             This particular design decision has a big effect on the
>             economics of the VW infrastructure. I'd rather the
>             economics work out such that a region provider who
>             wishes to build a region that supports a small population
>             can do so economically. A region that wants to host a
>             *large* population has to bear the cost of providing that
>             scalable asset service.
>             I want the economics of hosting a small asset service to
>             be a non-issue (so as to best promote creation and
>             creativity). Creating a high bar for providing asset
>             services will mean that the service will cost money, and
>             people shouldn't have to pay just to create or own VW
>             objects (I'm using 'own' here to refer to maintaining
>             their existence; I'm not trying to make a
>             'leftist'/'communist' statement about ownership ;)
>
>             - Izzy
>
>
>             On Apr 2, 2011, at 3:58 PM, Morgaine wrote:
>
>                 Izzy, when designing for scalability, the model to
>                 bear in mind is that of seasoned virtual world
>                 travelers whose inventories contain assets from many
>                 different worlds, those assets being served by many
>                 different asset services. Both worlds and asset
>                 services may include commercial, community, and
>                 personal services, and as the metaverse grows, that
>                 set is highly likely to become progressively less
>                 clustered and more diverse.
>
>                 When those seasoned travelers click on an advertised
>                 VW link and perform an inter-world teleport to one
>                 particular world's region to share an experience,
>                 their "worn" assets (the only ones of interest to the
>                 region) will contain references to asset services
>                 spread widely across the Internet. The fetches by the
>                 travelers' clients occur over many parallel paths from
>                 clients to asset services, so one can reasonably
>                 expect reduced network contention and reduced asset
>                 server loading because they are both spread out over
>                 however many asset services are being referenced by
>                 the overall set of assets in the region.
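
A minimal sketch of that client behaviour: the worn-asset references point at 
many different asset services, and the client simply fetches them in parallel. 
The worker count and URIs are illustrative only.

    # Sketch: a client fetching worn-asset references in parallel, so the load
    # is spread across whichever asset services the references point at.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    def _fetch(uri):
        with urlopen(uri) as resp:
            return uri, resp.read()

    def fetch_worn_assets(asset_uris, workers=8):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return dict(pool.map(_fetch, asset_uris))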
>
>                 This is very different to the case of a proxying
>                 region, which would get slammed for every asset worn
>                 by every avatar present. In our current architecture,
>                 regions run many different CPU-intensive tasks,
>                 including physics simulation and server-side
>                 scripting, and absolutely cannot afford to serve
>                 assets too unless your scalability requirements are
>                 very low indeed, i.e. just a few dozen avatars of
>                 today's kind. We've hit the ceiling already on region
>                 scalability done that way. There is nowhere to go in
>                 that direction at all beyond improving the code like
>                 Intel demonstrated, and that work is subject to a law
>                 of diminishing returns.
>
>                 This is why we have to go parallel, and I think you're
>                 wrong that it has to cost much money. As we spread
>                 the load across more and more asset services, we are
>                 simply better utilizing all the hardware that's
>                 already out there on the Internet, at least in respect
>                 of community and private resources. But add to the
>                 community and private resources the commercial asset
>                 services that are likely to appear to exploit this
>                 opportunity, and not only will the number of asset
>                 services leap, but the power of each one will rocket
>                 too, because, after all, these businesses will be
>                 heavily optimized for the job.
>
>                 As to why a world would want clients to access
>                 external asset services instead of providing its own
>                 implementation, that's an easy question. It costs
>                 money to host a high performance and scalable asset
>                 service and a high bandwidth network to handle the
>                 traffic. A *lot* of money. In contrast, it costs a
>                 world nothing to let others serve the assets to
>                 clients. And that matters to the bottom line.
>
>
>                 Morgaine.
>
>
>
>
>                 ======================
>
>                 On Sat, Apr 2, 2011 at 7:05 PM, Izzy Alanis
>                 <izzyalanis@gmail.com> wrote:
>
>                    > As always though, it's a trade-off, since the
>                    > proxied design has very poor scalability
>                    > compared to the distributed one.
>
>                    I don't agree with that... If a user enters a
>                    highly populated region, every other client is
>                    going to (could and should be trying to) hit the
>                    asset server(s) for the assets that the user is
>                    wearing (assuming they're not cached locally).
>                    Every asset server has to be scaled up to the
>                    point that it can handle that load from all
>                    over...
>
>                    If I'm hosting a region that supports 10s of
>                    thousands of simultaneous users (thinking of the
>                    future), I already have to scale to meet that
>                    demand. If the region is proxying the assets,
>                    then, yes the region has to be scaled to meet that
>                    asset demand too, but it already has to be scaled
>                    to meet other demands of being a region server...
>                    and why is scaling those asset proxy services
>                    hard? It's going to cost $, but is not technically
>                    challenging. So, if I want to host a region like
>                    that... sure it will cost me, but the simulation
>                    will be consistent and users will be able to
>                    participate equally, regardless of the
>                    capabilities of their individual asset services.
>
>
>
>
>                    On Fri, Apr 1, 2011 at 11:55 PM, Morgaine
>                    <morgaine.dinova@googlemail.com> wrote:
>                    > Every design choice results in a trade-off,
>                    > Vaughn, improving one thing at the expense of
>                    > something else. If every time we offered a
>                    > service we had to inform its users about the
>                    > downsides of all the trade-offs we have made,
>                    > they would have an awful lot to read. ;-)
>                    >
>                    > The specific trade-off that you are discussing
>                    > is no different. A region that proxies all
>                    > content has the "benefit" of acquiring control
>                    > from the asset servers over the items in the
>                    > region, so that it can ensure that everyone in
>                    > the region not only obtains the items but
>                    > obtains the same items as everyone else. That
>                    > does indeed provide a greater guarantee of
>                    > consistency than a deployment in which the
>                    > region only passes asset URIs to clients who
>                    > then fetch the items from asset services
>                    > separately. As always though, it's a trade-off,
>                    > since the proxied design has very poor
>                    > scalability compared to the distributed one.
>                    >
>                    > If we're going to warn users of the potential
>                    > for inconsistency in the distributed deployment
>                    > as you suggest, are we also going to warn them
>                    > of non-scalability in the proxied one? I really
>                    > don't see much merit in the idea of warning
>                    > about design choices. Many such choices are
>                    > technical, and the issues are quite likely to be
>                    > of little interest to non-technical users
>                    > anyway. In any case, the better services are
>                    > likely to provide such information in their
>                    > online documentation, I would guess.
>                    >
>                    > You mentioned users "voting with their feet" or
>                    > choosing to accept the risk of inconsistency.
>                    > Well that will happen anyway, when services fail
>                    > and users get annoyed. If some asset services
>                    > refuse to send the requested items to some
>                    > users, those services will get a bad reputation
>                    > and people will choose different asset services
>                    > instead. Likewise, if a world service proxies
>                    > everything and so it can't handle a large number
>                    > of assets or of people, users will get annoyed
>                    > at the lag and will go elsewhere. This user
>                    > evaluation and "voting with their feet" happens
>                    > already with online services all over the
>                    > Internet, and I am sure that this human process
>                    > will continue to work when the services are
>                    > asset and region services.
>                    >
>                    > Back in September 2010, I wrote this post which
>                    > proposes that we use in VWRAP a form of asset
>                    > addressing that provides massive scalability at
>                    > the same time as a very high degree of
>                    > resilience --
>                    > http://www.ietf.org/mail-archive/web/vwrap/current/msg00463.html
>                    > It is based on the concept of the URI containing
>                    > a host part and a hash part, where the hash is
>                    > generated (once, at the time of storage to the
>                    > asset service) using a specified digest
>                    > algorithm over the content of the asset being
>                    > referenced. You may wish to note that if this
>                    > design were used, the failure of an asset
>                    > service to deliver a requested item would result
>                    > in a failover request for the item to one or
>                    > more backup services, using the same hash part
>                    > but with a different host address.
>                    >
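
A rough sketch of the hash-addressed failover described above: the hash part is 
computed once over the asset content, and on a fetch failure the client retries 
the same hash against other hosts. The URI layout, digest choice, and host list 
are illustrative assumptions, not something fixed by the drafts or by the post 
referenced above.

    # Sketch: content-addressed asset URIs with host failover.
    # "https://<host>/asset/<sha256-hex>" is an assumed layout.
    import hashlib
    from urllib.request import urlopen
    from urllib.error import URLError

    def asset_hash(content):
        # Computed once, at the time the asset is stored.
        return hashlib.sha256(content).hexdigest()

    def fetch_by_hash(hash_hex, hosts):
        # Try each host in turn; the hash part stays the same, only the
        # host part of the URI changes.
        for host in hosts:
            try:
                with urlopen("https://%s/asset/%s" % (host, hash_hex)) as resp:
                    return resp.read()
            except URLError:
                continue  # fail over to the next backup service
        raise LookupError("asset %s not available from any known host" % hash_hex)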
>                    > This can go some way towards overcoming the
>                    > problem that you think might occur when assets
>                    > are fetched by clients from asset services
>                    > directly. Although it won't help when the
>                    > missing item is available from only a single
>                    > asset service, it will help in many other cases,
>                    > and it will compensate for service failures and
>                    > network outages automatically at the same time.
>                    >
>                    > PS. This design for hash-based asset addressing
>                    > is already being tested by Mojito Sorbet in her
>                    > experimental world and client. It would give
>                    > VWRAP-based worlds an improved level of service
>                    > availability, so I think it should be a core
>                    > feature of our protocol.
>                    >
>                    > Morgaine.
>                    >
>                    > ===========================
>                    >
>                    > On Fri, Apr 1, 2011 at 11:17 PM, Vaughn Deluca
>                    > <vaughn.deluca@gmail.com> wrote:
>                    >>
>                    >> This is a question I discussed with Morgaine
>                    >> off-list a while ago (I intended to send it to
>                    >> the list but pushed the wrong button...). I
>                    >> think we need to address this problem, and
>                    >> decide how to deal with it.
>                    >>
>                    >> In David's deployment draft, section 7.3.1.1,
>                    >> an overview is given of ways to deliver content
>                    >> to the region. One way is only passing a
>                    >> capability that allows access to (part of) the
>                    >> resource:
>                    >>
>                    >>     7.3.1.1. Content delivery models
>                    >>     A range of possible representations can be
>                    >>     passed to a region for simulation. [...] The
>                    >>     other end of the delivery spectrum involves
>                    >>     passing only a URI or capability used to
>                    >>     access the rendering information and a
>                    >>     collision mesh, and related data for
>                    >>     physical simulation. In such a model, the
>                    >>     client is responsible for fetching the
>                    >>     additional information needed to render the
>                    >>     item's visual presence from a separate
>                    >>     service. This fetch can be done *under the
>                    >>     credentials of the end user* viewing the
>                    >>     material [my emphasis--VD], and divorces the
>                    >>     simulation from the trust chain needed to
>                    >>     manage content. Any automation is done on a
>                    >>     separate host which the content creator or
>                    >>     owner trusts, interacting with the object
>                    >>     through remoted interfaces.
>                    >>
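
To make the "URI or capability only" end of that spectrum concrete, here is a 
small sketch of the split: the region hands the client only a reference plus 
the physics data it needs itself, and the client fetches the render data 
directly, under the end user's own credentials. All field names, the capability 
URL, and the auth header are made up for illustration; they are not from the 
deployment draft.

    # Sketch of the "pass only a URI or capability" delivery model quoted above.
    # Field names, the capability URL, and the auth scheme are assumptions.
    from urllib.request import Request, urlopen

    # What the region passes to the client for one item: no textures or render
    # meshes, just a reference to them plus what the region needs for physics.
    item_announcement = {
        "item_id": "c0ffee00-1234-5678-9abc-def012345678",
        "render_capability": "https://assets.example.org/cap/4f2a17",
        "collision_mesh": b"...physics-only data kept by the region...",
    }

    def fetch_render_data(announcement, user_token):
        # The client, not the region, dereferences the capability, presenting
        # the end user's credentials to the asset service.
        req = Request(announcement["render_capability"],
                      headers={"Authorization": "Bearer " + user_token})
        with urlopen(req) as resp:
            return resp.read()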
>                    >> I can see the need for such a setup; however, I
>                    >> feel we are unpleasantly close to a situation
>                    >> where the coherence of the simulation falls
>                    >> apart. In this deployment pattern the region
>                    >> advertises the presence of the asset, and
>                    >> *some* clients will be able to get it as
>                    >> expected, while - based on the arbitrary whims
>                    >> of the asset service - others might not.
>                    >>
>                    >> My hope would be that after the asset server
>                    >> provides the region with the capability to get
>                    >> the asset, it gives up control. That would mean
>                    >> that if the client finds the inventory server
>                    >> is unwilling to serve the content - in spite of
>                    >> the region saying it is present - the client
>                    >> should be able to turn around and ask the
>                    >> *region* for the asset (and get it after all).
>                    >>
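
A tiny sketch of the fallback Vaughn is hoping for here: the client tries the 
asset service named in the capability first and, if that refuses or fails, asks 
the region that advertised the item. Both endpoints are placeholders; no such 
fallback path is defined in the drafts.

    # Sketch: fall back to the advertising region when the asset service
    # refuses to serve an item it told the region was present.
    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError

    def get_asset(asset_capability, region_url, item_id):
        try:
            with urlopen(asset_capability) as resp:
                return resp.read()
        except (HTTPError, URLError):
            # The asset service declined or failed; ask the region instead.
            with urlopen("%s/asset-fallback/%s" % (region_url, item_id)) as resp:
                return resp.read()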
>                    >> If that is not the case - and there are
>                    >> probably good reasons for the deployment
>                    >> pattern as described - shouldn't we *warn*
>                    >> clients that the region might be inconsistent,
>                    >> so the users behind the client can vote with
>                    >> their feet (or take the risk)?
>                    >>
>                    >> --Vaughn
>                    >> _______________________________________________
>                    >> vwrap mailing list
>                    >> vwrap@ietf.org
>                    >> https://www.ietf.org/mailman/listinfo/vwrap
>                    >
>                    > _______________________________________________
>                    > vwrap mailing list
>                    > vwrap@ietf.org
>                    > https://www.ietf.org/mailman/listinfo/vwrap
>
>
>
>             ------------------------------------------------------------------------
>
>             _______________________________________________
>             vwrap mailing list
>             vwrap@ietf.org
>             https://www.ietf.org/mailman/listinfo/vwrap
>
>
>
>
>
>
>     -- 
>     --- https://twitter.com/Dzonatas_Sol ---
>     Web Development, Software Engineering, Virtual Reality, Consultant
>
>     _______________________________________________
>     vwrap mailing list
>     vwrap@ietf.org
>     https://www.ietf.org/mailman/listinfo/vwrap
>
>


-- 
--- https://twitter.com/Dzonatas_Sol ---
Web Development, Software Engineering, Virtual Reality, Consultant