Re: [vwrap] [wvrap] Simulation consistency

Vaughn Deluca <vaughn.deluca@gmail.com> Sun, 03 April 2011 06:54 UTC

In-Reply-To: <4D97EEC1.7020207@gmail.com>
References: <BANLkTint6CiMRZWj59sEYM2j7VoKgz4-Bw@mail.gmail.com> <AANLkTimuVubm5Becx8cg_Uq2Gdj8EjHL7maMyqWOeYCJ@mail.gmail.com> <AANLkTi=0iBKxo0_yv2LWsExzrKUjJLqP5Ua2uHB=M_7d@mail.gmail.com> <AANLkTi=QH+c-19PvavnXU+pgWyaqpAA0F5G5SMd6h4JR@mail.gmail.com> <5365485D-FFAE-46CA-B04E-D413E85FB1D1@gmail.com> <4D97E7FE.7010104@gmail.com> <4D97EEC1.7020207@gmail.com>
Date: Sun, 03 Apr 2011 08:56:15 +0200
Message-ID: <BANLkTi=9CXCtb=ryFtMuyG2w9ifb-2urkA@mail.gmail.com>
From: Vaughn Deluca <vaughn.deluca@gmail.com>
To: Dzonatas Sol <dzonatas@gmail.com>
Content-Type: multipart/alternative; boundary="0015174c177ca4d0a0049ffe23a4"
Cc: vwrap@ietf.org
Subject: Re: [vwrap] [wvrap] Simulation consistency
X-BeenThere: vwrap@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Virtual World Region Agent Protocol - IETF working group <vwrap.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/vwrap>, <mailto:vwrap-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/vwrap>
List-Post: <mailto:vwrap@ietf.org>
List-Help: <mailto:vwrap-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/vwrap>, <mailto:vwrap-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 03 Apr 2011 06:54:40 -0000

Hi Dzonatas,

Can you expand on that? What would be needed for legacy support in VWRAP
terms?
If I want to read up on how the asset server may proxy the simulator, what
would you recommend I read?

-- Vaughn

On Sun, Apr 3, 2011 at 5:51 AM, Dzonatas Sol <dzonatas@gmail.com> wrote:

> Some stated that the proxy-to-asset-server is built into the sim; however, keep
> in mind possible legacy support where the asset server may proxy the
> simulator.
>
>
> Dzonatas Sol wrote:
>
>> Somehow I feel that the basic asset server being able to log in and download
>> assets is now the priority, yet I also wonder about the best way to patch this
>> into the current model of viewers.
>>
>> Maybe offer (1) by proxy (sim-side) and (2) by patch (viewer-side), where
>> either of the two is optional and neither is mandatory for now.
>> Thoughts?
>>
>> Israel Alanis wrote:
>>
>>>
>>> > when designing for scalability, the model to bear in mind is ...
>>>
>>> Well, there are a lot of different models to keep in mind, and many
>>> different use cases. One particular use case to keep in mind is: "User
>>> acquires new outfit, and wants to 'show it off' in a highly populated
>>> region".
>>>
>>> > Both worlds and asset services may include commercial, community, and
>>> personal services
>>>
>>> Yes, yes and yes. I'm particularly concerned about how the model affects
>>> the ability to host personal asset services.
>>>
>>> > a proxying region, which would get slammed for every asset worn by
>>> every avatar present.
>>>
>>> Granted the collection of services that are provided by the region need
>>> to be scaled to meet the demands of that region. That's all part of capacity
>>> planning.
>>>
>>> > regions run many different CPU-intensive tasks, including physics
>>> simulation and server-side scripting, and absolutely cannot afford to serve
>>> assets too
>>> Well... who said the same CPUs have to do proxying, physics simulation
>>> and server-side scripting? Asset proxying is a different service than
>>> physics simulation and can be on separate hardware, could make use of
>>> geographically distributed caching, and in certain deployment patterns, the
>>> same caching services could be shared by different regions. (Server-side
>>> scripting is a discussion for another day).
>>>
>>> > This is why we have to go parallel...
>>>
>>> Totally agree, and a proxying model could and should also take advantage
>>> of parallelism.
>>>
>>> > I think you're wrong that it has to cost much money.
>>> vs.
>>> > It costs money to host a high performance and scalable asset service
>>> > and a high bandwidth network to handle the traffic.  A *lot* of money.
>>> I think what you're saying is: "It costs a lot of money to build a
>>> scalable asset service, but if assets are spread throughout the internet
>>> they don't have to be scalable." But that's not quite right. You're opening
>>> up every asset server to the VW equivalent of being slashdotted, so are you
>>> sure you're not forcing *every* asset service to be scalable and handle a
>>> lot of bandwidth and network traffic? It's the exact opposite of your
>>> intention, but I think that's the result, all the same.
>>>
>>> This particular design decision has a big effect on the economics of the
>>> VW infrastructure. I'd rather the economics work out such that a region
>>> provider who wishes to build a region that supports a small population, can
>>> do so economically. A region that wants to host a *large* population has to
>>> bear the cost of providing that scalable asset service.
>>> I want the economics of hosting a small asset service to be a non-issue
>>> (as to best promote creation and creativity). Setting a high bar for providing
>>> asset services would mean those services cost money, and people shouldn't
>>> have to pay money just to create or own VW objects (I'm using 'own' here to
>>> refer to maintaining their existence, I'm not trying to make a
>>> 'leftist'/'communist' statement about ownership ;)
>>>
>>> - Izzy
>>>
>>>
>>> On Apr 2, 2011, at 3:58 PM, Morgaine wrote:
>>>
>>>> Izzy, when designing for scalability, the model to bear in mind is that
>>>> of seasoned virtual world travelers whose inventories contain assets from
>>>> many different worlds, those assets being served by many different asset
>>>> services.  Both worlds and asset services may include commercial, community,
>>>> and personal services, and as the metaverse grows, that set is highly likely
>>>> to become progressively less clustered and more diverse.
>>>>
>>>> When those seasoned travelers click on an advertised VW link and perform
>>>> an inter-world teleport to one particular world's region to share an
>>>> experience, their "worn" assets (the only ones of interest to the region)
>>>> will contain references to asset services spread widely across the Internet.
>>>>  The fetches by the travelers' clients occur over many parallel paths from
>>>> clients to asset services, so one can reasonably expect reduced network
>>>> contention and reduced asset server loading because they are both spread out
>>>> over however many asset services are being referenced by the overall set of
>>>> assets in the region.
>>>>
>>>> This is very different to the case of a proxying region, which would get
>>>> slammed for every asset worn by every avatar present.  In our current
>>>> architecture, regions run many different CPU-intensive tasks, including
>>>> physics simulation and server-side scripting, and absolutely cannot afford
>>>> to serve assets too unless your scalability requirements are very low
>>>> indeed, i.e. just a few dozen avatars of today's kind.  We've hit the ceiling
>>>> already on region scalability done that way.  There is nowhere to go in that
>>>> direction at all beyond improving the code like Intel demonstrated, and that
>>>> work is subject to a law of diminishing returns.
>>>>
>>>> This is why we have to go parallel, and I think you're wrong that it has
>>>> to cost much money.  As we spread the load across more and more asset
>>>> services, we are simply better utilizing all the hardware that's already out
>>>> there on the Internet, at least in respect of community and private
>>>> resources.  But add to the community and private resources the commercial
>>>> asset services that are likely to appear to exploit this opportunity, and
>>>> not only will the number of asset services leap, but the power of each one
>>>> will rocket too, because, after all, these businesses will be heavily
>>>> optimized for the job.
>>>>
>>>> As to why a world would want clients to access external asset services
>>>> instead of providing its own implementation, that's an easy question.  It
>>>> costs money to host a high performance and scalable asset service and a high
>>>> bandwidth network to handle the traffic.  A *lot* of money.  In contrast, it
>>>> costs a world nothing to let others serve the assets to clients.  And that
>>>> matters to the bottom line.
>>>>
>>>>
>>>> Morgaine.
>>>>
>>>>
>>>>
>>>>
>>>> ======================
>>>>
>>>> On Sat, Apr 2, 2011 at 7:05 PM, Izzy Alanis <izzyalanis@gmail.com> wrote:
>>>>
>>>>    > As always though, it's a trade-off, since the proxied design
>>>>    > has very poor scalability compared to the distributed one.
>>>>
>>>>    I don't agree with that... If a user enters a highly populated
>>>>    region, every other client is going to (could and should be trying
>>>>    to) hit the asset server(s) for the assets that the user is wearing
>>>>    (assuming they're not cached locally).  Every asset server has to be
>>>>    scaled up to the point that it can handle that load from all over...
>>>>
>>>>    If I'm hosting a region that supports tens of thousands of
>>>>    simultaneous users (thinking of the future), I already have to scale
>>>>    to meet that demand. If the region is proxying the assets, then yes,
>>>>    the region has to be scaled to meet that asset demand too, but it
>>>>    already has to be scaled to meet the other demands of being a region
>>>>    server... and why is scaling those asset proxy services hard?  It's
>>>>    going to cost $, but it is not technically challenging. So, if I
>>>>    want to host a region like that... sure it will cost me, but the
>>>>    simulation will be consistent and users will be able to participate
>>>>    equally, regardless of the capabilities of their individual asset
>>>>    services.
>>>>
>>>>
>>>>
>>>>
>>>>    On Fri, Apr 1, 2011 at 11:55 PM, Morgaine
>>>>    <morgaine.dinova@googlemail.com> wrote:
>>>>    > Every design choice results in a trade-off, Vaughn, improving
>>>>    > one thing at the expense of something else.  If every time we
>>>>    > offered a service we had to inform its users about the downsides
>>>>    > of all the trade-offs we have made, they would have an awful
>>>>    > lot to read. ;-)
>>>>    >
>>>>    > The specific trade-off that you are discussing is no different.
>>>>    > A region that proxies all content has the "benefit" of acquiring
>>>>    > control from the asset servers over the items in the region, so
>>>>    > that it can ensure that everyone in the region not only obtains
>>>>    > the items but obtains the same items as everyone else.  That does
>>>>    > indeed provide a greater guarantee of consistency than a
>>>>    > deployment in which the region only passes asset URIs to clients
>>>>    > who then fetch the items from asset services separately.  As
>>>>    > always though, it's a trade-off, since the proxied design has
>>>>    > very poor scalability compared to the distributed one.
>>>>    >
>>>>    > If we're going to warn users of the potential for inconsistency
>>>>    > in the distributed deployment as you suggest, are we also going
>>>>    > to warn them of non-scalability in the proxied one?  I really
>>>>    > don't see much merit in the idea of warning about design choices.
>>>>    > Many such choices are technical, and the issues are quite likely
>>>>    > to be of little interest to non-technical users anyway.  In any
>>>>    > case, the better services are likely to provide such information
>>>>    > in their online documentation, I would guess.
>>>>    >
>>>>    > You mentioned users "voting with their feet" or choosing to
>>>>    > accept the risk of inconsistency.  Well, that will happen anyway,
>>>>    > when services fail and users get annoyed.  If some asset services
>>>>    > refuse to send the requested items to some users, those services
>>>>    > will get a bad reputation and people will choose different asset
>>>>    > services instead.  Likewise, if a world service proxies everything
>>>>    > and so can't handle a large number of assets or of people, users
>>>>    > will get annoyed at the lag and will go elsewhere.  This user
>>>>    > evaluation and "voting with their feet" happens already with
>>>>    > online services all over the Internet, and I am sure that this
>>>>    > human process will continue to work when the services are asset
>>>>    > and region services.
>>>>    >
>>>>    > Back in September 2010, I wrote a post proposing that we use in
>>>>    > VWRAP a form of asset addressing that provides massive
>>>>    > scalability together with a very high degree of resilience:
>>>>    > http://www.ietf.org/mail-archive/web/vwrap/current/msg00463.html
>>>>    > It is based on the concept of the URI containing a host part and
>>>>    > a hash part, where the hash is generated (once, at the time of
>>>>    > storage to the asset service) using a specified digest algorithm
>>>>    > over the content of the asset being referenced.  You may wish to
>>>>    > note that if this design were used, the failure of an asset
>>>>    > service to deliver a requested item would result in a failover
>>>>    > request for the item to one or more backup services, using the
>>>>    > same hash part but with a different host address.
>>>>    >
>>>>    > This can go some way towards overcoming the problem that you
>>>>    > think might occur when assets are fetched by clients from asset
>>>>    > services directly.  Although it won't help when the missing item
>>>>    > is available from only a single asset service, it will help in
>>>>    > many other cases, and it will compensate for service failures
>>>>    > and network outages automatically at the same time.
>>>>    >
>>>>    > PS. This design for hash-based asset addressing is already being
>>>>    > tested by Mojito Sorbet in her experimental world and client.  It
>>>>    > would give VWRAP-based worlds an improved level of service
>>>>    > availability, so I think it should be a core feature of our
>>>>    > protocol.
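The hash-based addressing and failover behaviour described above can be sketched roughly as follows. This is a hypothetical Python illustration only: SHA-256, the `/assets/<hash>` URL layout, and the list of backup hosts are assumptions, since the proposal leaves the digest algorithm and exact URI syntax to be specified.

```python
import hashlib
import urllib.request

def asset_uri(host, content):
    """Build a content-addressed URI: a host part plus a hash part.

    The hash is computed once, at the time of storage to the asset
    service, over the content of the asset being referenced.
    """
    digest = hashlib.sha256(content).hexdigest()
    return f"https://{host}/assets/{digest}", digest

def fetch_asset(digest, hosts):
    """Fetch by hash, failing over across services.

    Each attempt uses the same hash part with a different host part,
    so a service failure or network outage is compensated automatically.
    """
    for host in hosts:
        try:
            with urllib.request.urlopen(
                    f"https://{host}/assets/{digest}", timeout=5) as resp:
                data = resp.read()
        except OSError:
            continue  # this service failed: try the next backup host
        # Content addressing lets the client verify what it received.
        if hashlib.sha256(data).hexdigest() == digest:
            return data
    raise LookupError("asset unavailable from all known services")
```

Note the side benefit: because the URI's hash part is derived from the content, the client can detect a corrupted or substituted asset regardless of which host served it.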
>>>>    >
>>>>    >
>>>>    > Morgaine.
>>>>    >
>>>>    >
>>>>    >
>>>>    >
>>>>    > ===========================
>>>>    >
>>>>    > On Fri, Apr 1, 2011 at 11:17 PM, Vaughn Deluca
>>>>    > <vaughn.deluca@gmail.com> wrote:
>>>>    >>
>>>>    >> This is a question I discussed with Morgaine off-list a while
>>>>    >> ago (I intended to send it to the list but pushed the wrong
>>>>    >> button...). I think we need to address this problem and decide
>>>>    >> how to deal with it.
>>>>    >>
>>>>    >>  In David's deployment draft, section 7.3.1.1, an overview is
>>>>    >> given of ways to deliver content to the region. One way is only passing a
>>>>    >> capability that allows access to (part of) the resource:
>>>>    >>
>>>>    >>           7.3.1.1.  Content delivery models
>>>>    >>           A range of possible representations can be passed
>>>>    >>           to a region for simulation. [...] The other end of
>>>>    >>           the delivery spectrum involves passing only a URI or
>>>>    >>           capability used to access the rendering information
>>>>    >>           and a collision mesh, and related data for physical
>>>>    >>           simulation.  In such a model, the client is
>>>>    >>           responsible for fetching the additional information
>>>>    >>           needed to render the item's visual presence from a
>>>>    >>           separate service.  This fetch can be done *under the
>>>>    >>           credentials of the end user* viewing the material
>>>>    >>           [my emphasis --VD], and divorces the simulation from
>>>>    >>           the trust chain needed to manage content.  Any
>>>>    >>           automation is done on a separate host which the
>>>>    >>           content creator or owner trusts, interacting with
>>>>    >>           the object through remoted interfaces.
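The delivery model quoted above, where the region passes only a capability and the client fetches the rendering data itself under the end user's credentials, might look roughly like this. A hypothetical Python sketch: the bearer-token authorization scheme and JSON payload are assumptions, not anything specified by the draft.

```python
import json
import urllib.request

def capability_request(capability_url, user_token):
    # The request carries the *end user's* credentials, not the region's,
    # divorcing the simulation from the content trust chain.
    return urllib.request.Request(
        capability_url,
        headers={"Authorization": f"Bearer {user_token}"},
    )

def fetch_render_info(capability_url, user_token):
    # The region advertised only the capability URL; the client fetches
    # the rendering information itself from the separate asset service.
    req = capability_request(capability_url, user_token)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

The consistency concern raised below follows directly from this shape: the asset service sees each user's credentials individually and may answer some clients and refuse others, while the region never observes the refusal.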
>>>>    >>
>>>>    >>  I can see the need for such a setup; however, I feel we are
>>>>    >> unpleasantly close to a situation where the coherence of the
>>>>    >> simulation falls apart.
>>>>    >> In this deployment pattern the region advertises the presence
>>>>    >> of the asset, and *some* clients will be able to get it as
>>>>    >> expected, while - based on the arbitrary whims of the asset
>>>>    >> service - others might not.
>>>>    >>
>>>>    >> My hope would be that after the asset server provides the
>>>>    >> region with the capability to get the asset, it gives up
>>>>    >> control. That would mean that if the client finds the inventory
>>>>    >> server is unwilling to serve the content - in spite of the
>>>>    >> region saying it is present - the client should be able to turn
>>>>    >> around and ask the *region* for the asset (and get it after all).
>>>>    >>
>>>>    >>  If that is not the case - and there are probably good reasons
>>>>    >> for the deployment pattern as described - shouldn't we *warn*
>>>>    >> clients that the region might be inconsistent, so the users
>>>>    >> behind the client can vote with their feet (or take the risk)?
>>>>    >>
>>>>    >> --Vaughn
>>>>    >> _______________________________________________
>>>>    >> vwrap mailing list
>>>>    >> vwrap@ietf.org
>>>>    >> https://www.ietf.org/mailman/listinfo/vwrap
>>>>    >
>>>>    >
>>>>    > _______________________________________________
>>>>    > vwrap mailing list
>>>>    > vwrap@ietf.org
>>>>    > https://www.ietf.org/mailman/listinfo/vwrap
>>>>    >
>>>>    >
>>>>
>>>>
>>>>
>>> ------------------------------------------------------------------------
>>>
>>> _______________________________________________
>>> vwrap mailing list
>>> vwrap@ietf.org
>>> https://www.ietf.org/mailman/listinfo/vwrap
>>>
>>>
>>
>>
>>
>
> --
> --- https://twitter.com/Dzonatas_Sol ---
> Web Development, Software Engineering, Virtual Reality, Consultant
>
> _______________________________________________
> vwrap mailing list
> vwrap@ietf.org
> https://www.ietf.org/mailman/listinfo/vwrap
>