Re: [vwrap] Consensus? What exactly should be in the protocol

Meadhbh Hamrick <ohmeadhbh@gmail.com> Wed, 22 September 2010 18:50 UTC

Return-Path: <ohmeadhbh@gmail.com>
X-Original-To: vwrap@core3.amsl.com
Delivered-To: vwrap@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 8B60E3A6A84; Wed, 22 Sep 2010 11:50:27 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.387
X-Spam-Level:
X-Spam-Status: No, score=-1.387 tagged_above=-999 required=5 tests=[AWL=0.612, BAYES_00=-2.599, J_CHICKENPOX_43=0.6]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id PFlu7-hiSVnS; Wed, 22 Sep 2010 11:50:26 -0700 (PDT)
Received: from mail-ww0-f42.google.com (mail-ww0-f42.google.com [74.125.82.42]) by core3.amsl.com (Postfix) with ESMTP id 746953A6A80; Wed, 22 Sep 2010 11:50:25 -0700 (PDT)
Received: by wwb18 with SMTP id 18so490684wwb.1 for <multiple recipients>; Wed, 22 Sep 2010 11:50:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:mime-version:received:in-reply-to :references:from:date:message-id:subject:to:cc:content-type; bh=3fnS+2l5QAmg9YCFPXNLoobbIOsDnUH28UW3MPmG4DQ=; b=F8L6JFoVMQ/0mci6H7fV0VnrDKUS+xMVdHQ8q2A6ZxJcU+aqlFQouwQN7WiYJ4lAUs bQaaafkDtjpvQtWaBBCZet+VYj08qGl89sG2X75oXSIMdELQ+adeckw2Ihvc/6kjxqyC KMEQtfGPc3VU+30jfD9JSbdK945cEHWIkP5ZE=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; b=DcY1YOO4BDd//Qh91cekjyulk5bjz4C33EM1Me90VGOKZ898vxj3iM+61TEyPjadas kuYXhrlZOzky5/4O8DEPIHz53lLg7Ntbvv6uk5vKa8tEBRjsN7wiPGv88zlCoi/+HBlI ZlYqxmHi6rOwmz0Uj9/xZKjYTSk6kPBtkOZLU=
Received: by 10.216.50.19 with SMTP id y19mr7380676web.52.1285181445910; Wed, 22 Sep 2010 11:50:45 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.216.170.82 with HTTP; Wed, 22 Sep 2010 11:50:25 -0700 (PDT)
In-Reply-To: <4C9A45FC.6030709@ics.uci.edu>
References: <AANLkTinxpGRZ9PEWQx=KvaBNGBba4Z+P+SaP4N80VGV1@mail.gmail.com> <E2109887-F5B2-4742-B4F7-1C4655A2DD8B@ics.uci.edu> <62BFE5680C037E4DA0B0A08946C0933D012670D0C9@rrsmsx506.amr.corp.intel.com> <4C9A070B.3070202@hp.com> <AANLkTinVX6Uo2S+7ocdTiVfiTFa9wxM=x1Cncyi5ij86@mail.gmail.com> <4C9A17FC.9090308@ics.uci.edu> <OF98CA2B26.9D4927A8-ON852577A6.00572945-852577A6.0060FB5D@us.ibm.com> <4C9A45FC.6030709@ics.uci.edu>
From: Meadhbh Hamrick <ohmeadhbh@gmail.com>
Date: Wed, 22 Sep 2010 11:50:25 -0700
Message-ID: <AANLkTin5OEf4Len1L8jXmqEMF_v_MWBoRTqkdft_9myL@mail.gmail.com>
To: lopes@ics.uci.edu
Content-Type: text/plain; charset=ISO-8859-1
Cc: vwrap@ietf.org, vwrap-bounces@ietf.org
Subject: Re: [vwrap] Consensus? What exactly should be in the protocol
X-BeenThere: vwrap@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Virtual World Region Agent Protocol - IETF working group <vwrap.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/vwrap>, <mailto:vwrap-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/vwrap>
List-Post: <mailto:vwrap@ietf.org>
List-Help: <mailto:vwrap-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/vwrap>, <mailto:vwrap-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 22 Sep 2010 18:50:27 -0000

christina,

i think you may be missing the point of capabilities a bit. they're in
the mix here so we can avoid the confused deputy problem and have
"loosely coupled" collections of services. CORS does allow you to do
mashups in a user agent, but CORS in and of itself says nothing about
trust with respect to the server side.

in other words, if i want to create an application that is a composite
of two services, each of which touches "sensitive" data, i have to
have a way for the client to authorize itself to both services.

you can do this with something like SAML and you can use OAuth as a
way to carry proof of authorization. but you would still need trust
relationships between the two services for this to work. capabilities
allow us to use a "transitive trust" model that reduces the
requirement for a federated identity / trust model.
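to make the transitive trust idea concrete, here's a toy sketch in
python (all names and flows invented for illustration; none of this is
VWRAP wire format): the auth service is the only party that
authenticates the client, and it hands back an unguessable capability
URL minted by the asset service. the asset service honors the cap
without ever needing a trust relationship with the client, and without
a federated identity layer between the two services.

```python
import secrets

class AssetService:
    """downstream service; it trusts caps it minted, not clients."""
    def __init__(self):
        self._caps = {}  # cap token -> (operation, subject)

    def mint_cap(self, operation, subject):
        # an unguessable token is the whole secret; possession == authority
        token = secrets.token_urlsafe(16)
        self._caps[token] = (operation, subject)
        return f"https://assets.example/cap/{token}"

    def invoke(self, cap_url):
        token = cap_url.rsplit("/", 1)[-1]
        if token not in self._caps:
            raise PermissionError("unknown capability")
        operation, subject = self._caps[token]
        return f"performed {operation} on {subject}"

class AuthService:
    """the only party that authenticates the client directly."""
    def __init__(self, asset_service):
        self._assets = asset_service

    def login(self, user, password):
        # a real deployment would verify credentials here; elided
        return self._assets.mint_cap("read-inventory", user)

assets = AssetService()
auth = AuthService(assets)

cap = auth.login("somebody", "hunter2")
# the client presents only the cap; it never authenticates to AssetService
print(assets.invoke(cap))
```

note that revocation falls out for free: the asset service just forgets
the token. no cross-service SAML assertions or OAuth trust registration
needed.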

and i like that idea.

note, though, that you can compose something like SAML + OAuth with
capabilities, but you can't go the other way.

SAML and OAuth are GREAT technologies to use for certain problems, but
the world is more than a browser or client. some use cases require
services to either trust each other (independently of trust stitched
together by the client) or to trust the people they trust without a
pre-existing direct-trust relationship.

so it's all about whom you trust. my take on the tourist model is that
it's the client that decides whom to trust, and my take on the
HyperGrid model is that it's the simulator that decides whom to trust.
what we want to build with capabilities is the general case: an auth
service that decides whom to trust, even when there is no existing
trust relationship between the client and the non-auth service.

i wonder if it's possible to define RESTful VWRAP resources
independently of whether they sit behind capabilities. if you're
really down with using traditional web auth, this might be an answer.

we could define a cap-based service establishment protocol flow, but
make it independent of the use of other defined resources. so if
service establishment were defined in RFC XXXX and some other service
were defined in RFC YYYY, you could deploy an RFC YYYY service without
the benefits of RFC XXXX. it should even be straightforward to code
this.
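as a toy illustration of that separation (hypothetical names, paths,
and values throughout; not draft text): the resource handler below
knows nothing about capabilities, and the very same handler can be
deployed either behind a cap table (RFC-XXXX-style establishment) or
at a plain well-known path behind ordinary web auth (RFC-YYYY-only).

```python
def region_info(request_path):
    """the resource itself; it knows nothing about capabilities."""
    return {"region": "SomeRegion", "agents": 12}

# deployment A: behind a capability table (cap-based establishment)
CAPS = {"aB3xY9": region_info}  # token minted during service establishment

def cap_dispatch(token):
    handler = CAPS.get(token)
    if handler is None:
        raise PermissionError("unknown capability")
    return handler(f"/cap/{token}")

# deployment B: plain well-known path guarded by conventional web auth
def plain_dispatch(path, authorized):
    if not authorized:  # e.g. a cookie or HTTP auth check already happened
        raise PermissionError("login required")
    return region_info(path)

print(cap_dispatch("aB3xY9"))
print(plain_dispatch("/region/info", authorized=True))
```

either way the client gets the same representation; only the way the
URL was obtained and guarded differs.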

if you really have a use case for direct-trust only, laissez les bons
temps rouler!

does this make sense?

-cheers
-meadhbh
--
meadhbh hamrick * it's pronounced "maeve"
@OhMeadhbh * http://meadhbh.org/ * OhMeadhbh@gmail.com



On Wed, Sep 22, 2010 at 11:07 AM, Cristina Videira Lopes
<lopes@ics.uci.edu> wrote:
> All the limitations that you mention about the Web architecture not being
> enough to support virtual world applications have been rendered moot by HTML5.
> Additionally, CORS now allows for true client-side mashups.
> But even without these two things, you can build non-web-browser clients
> that follow the general principles but that do special things for the
> real-time updates -- basically, the general concept of JavaScript+WebSockets
> done in whatever other way you like: different programming language,
> different protocol,...
>
> The really important architectural principle, though, and one that is
> unlikely to be let go, is that the use of WebSockets, the data formats that
> flow through them, and the use of CORS, are decisions that pertain to *each*
> virtual world application; it's not something that is imposed on all VWs by
> web standards -- it comes as JavaScript sent by the server! They are
> *implementation options* -- very valid options, I must add, but options
> nevertheless. What you are trying to do here is to dictate that all virtual
> world applications MUST use some protocol for renderer -- server
> interactions, down to the data formats, and MUST use capabilities for
> mashing things up, or else... they can't interoperate.
>
> You can dictate that. But then this will be completely irrelevant in a
> couple of years when WebGL is actually usable or when Google finishes their
> virtual machine for running safe native code on browsers.
>
>
>
> David W Levine wrote:
>>
>> So, of course we're building in the web space. I hope nobody is denying
>> that. In fact, if you look at everything described in VWRAP, it starts with
>> an assumption that most services are delivered as REST or REST-like services.
>> I think it's safe to say that the people who have been discussing this for
>> over two years are aware of Roy's work, and have thought about how REST applies
>> to virtual worlds. REST represents a lot of thinking about how the web
>> delivers content, and in particular why not to turn the web into a
>> distributed object model, or a shared state model, but rather to leverage
>> the observed successful patterns of the web in managing distributed
>> programming problems.
>>
>> But.. (There is always a but) The very core thing that a virtual world
>> does doesn't fit terribly well into the mainstream web model. The heart of a
>> virtual world is delivering (and Morgaine's phrase serves very well here) a
>> visual mashup of things to users 30-60 times a second, updating continually
>> to reflect the input of the physical simulation, any user
>> inputs, and any scripted inputs. Our core problem is taking in the inputs,
>> deriving the new state and sharing it out to the users. This isn't really
>> what the web has historically done. The fact that it isn't, that there are
>> some really interesting distributed system challenges at the very heart of
>> this, is part of its technical appeal to me.
>>
>> Life is made harder by the fact that the virtual space is being constantly
>> asked to accept new things to deal with. Every time an avatar arrives, it
>> brings a set of stuff which has to be melded into the scenegraph. Again, we
>> all know this. Rezzing an avatar means adding a bunch of new content to the
>> virtual space, and it means pushing it back out to all the observers.
>>
>> In the traditional web you go to a URL, you do a GET, and you get handed a
>> huge slab of stuff to render (some of which may require fetches, plugins,
>> etc.). In the more dynamic 2.0-style stuff, what you get may include
>> dynamic elements which fetch and update more stuff. In the virtual worlds
>> space, we bring this to a fever pitch: we take inputs from all the present
>> users and from a simulation, including the scripted changes within the
>> simulation. We then turn around and want to show this to the user.
>>
>> How do we present this to the user? Well, we currently use Linden's
>> UDP/http/longpoll tangle. Fine. But how could we do it?
>>
>> We could create a video stream and stream it. (Which isn't very web-page
>> like at all, but has some nice properties.)
>> We could do something like OnLive, where we would create a very tailored
>> stream and deliver it to a client with very specialized coupled inputs
>> (which lives with a lot of constraints, and again isn't very web-page like).
>> We could send a stateless update every frame for the client to render
>> (well, given unlimited bandwidth and processor power).
>> Or.. we could do what we currently do, just cleaner: which, roughly
>> speaking, is send down initial state and then send down a series of updates
>> to that state. Woah, not exactly a traditional web page. Worse still..
>> where do we post the inputs from the client to the world?
>>
>> At the same time, we also get to ask: how do we get all the "stuff" into
>> the region? In Linden's world, the answer is easy: they use a proprietary
>> protocol and fetch it from their creaking central servers. In OpenSim, a
>> similar answer obtains. And for added pain, which we have all shared, the
>> current set of clients push all the stuff related to the user via the
>> region.
>>
>> VWRAP attempts to describe nothing more than a set of REST web services
>> which represent the region and the services. It attempts to leverage what's
>> been learned from REST, and Linden's system, and in fact OpenSim, to
>> describe a simple, extensible set of services which can describe: Regions,
>> Auth services, how to rez and unrez avatars, how to (when we get some
>> writing down) fetch and manipulate assets, inventory lists and so on.
>>
>> What you end up with is built deeply on web principles, but it is not a
>> web page -- mostly because a virtual world is not, at its heart, a web
>> page, but a set of services collaborating to share state in a pretty
>> unusual way.
>>
>> - David
>> ~Zha
>>
>
>