Re: [vwrap] Technical basis for VW client in a web browser?
Morgaine <morgaine.dinova@googlemail.com> Tue, 21 December 2010 04:37 UTC
Return-Path: <morgaine.dinova@googlemail.com>
X-Original-To: vwrap@core3.amsl.com
Delivered-To: vwrap@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix)
with ESMTP id 479853A6997 for <vwrap@core3.amsl.com>;
Mon, 20 Dec 2010 20:37:31 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.257
X-Spam-Level:
X-Spam-Status: No, score=-2.257 tagged_above=-999 required=5 tests=[AWL=-0.481,
BAYES_00=-2.599, FM_FORGED_GMAIL=0.622, HTML_MESSAGE=0.001,
J_CHICKENPOX_48=0.6, J_CHICKENPOX_51=0.6, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com
[127.0.0.1]) (amavisd-new, port 10024) with ESMTP id VTZ1G36+mFZr for
<vwrap@core3.amsl.com>; Mon, 20 Dec 2010 20:37:27 -0800 (PST)
Received: from mail-qw0-f44.google.com (mail-qw0-f44.google.com
[209.85.216.44]) by core3.amsl.com (Postfix) with ESMTP id 7D0B73A67EE for
<vwrap@ietf.org>; Mon, 20 Dec 2010 20:37:27 -0800 (PST)
Received: by qwg5 with SMTP id 5so3607515qwg.31 for <vwrap@ietf.org>;
Mon, 20 Dec 2010 20:39:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=gamma;
h=domainkey-signature:mime-version:received:received:in-reply-to
:references:date:message-id:subject:from:to:content-type;
bh=uBfRLFSfwsifLE+K64ygDzNdsy3u1SIZYU+6UUrvJZw=;
b=UKJ6REMMl0LH/d4pps7w6CSchYbEHohMmPqFqe7K9RAHIHjckkDUqw9gNfDYkZlYx8
kV1E0j7CyM8RS8USRfS+0D1JeypPTLiJwI55XCtlIArEMzHIj5npiQA7KxwYwEq8jp65
K7jcL8SdLZn0k7RJPnCiUh2Ueo0pnnsRIvC4Q=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=googlemail.com; s=gamma;
h=mime-version:in-reply-to:references:date:message-id:subject:from:to
:content-type;
b=HjWYeYRJZFq2XULNBYxLZ7ktfFv9k0rxOXJNJVpacAWgXYvcWBqOkx7Z3pzgJre621
W3xCz0LbbStcl7SiWb7bBUYxEq+9YVx7Q2+2+Xru2j2ucXE5ls9vb0UYskmgJs/Otho2
h747PmPW3//gXWKUEfP6mEJMC1bR4WhGt+Sp4=
MIME-Version: 1.0
Received: by 10.229.213.202 with SMTP id gx10mr4441314qcb.96.1292906362213;
Mon, 20 Dec 2010 20:39:22 -0800 (PST)
Received: by 10.229.91.67 with HTTP; Mon, 20 Dec 2010 20:39:22 -0800 (PST)
In-Reply-To: <AANLkTi=tBq6YELYi8KP4WGiy9dTTApwpddoJnqUJRZfC@mail.gmail.com>
References: <AANLkTintjQdAS=EWfiRu3oWenB42LKsNzJPDJ+5ofBRO@mail.gmail.com>
<AANLkTinhWObg6Te2VtGYKXsxBG5=gVDS5szmjtLeOgnm@mail.gmail.com>
<AANLkTikYn-iA7osXT_oW8rL61GhK57pp7uJVmTSGVvj7@mail.gmail.com>
<AANLkTikFWUxQyT9aNFBk7-Fdb5bNdFT9Bj-dehqVP0WN@mail.gmail.com>
<6.2.5.6.2.20101219141829.0a381da8@resistor.net>
<AANLkTik-1m=4OOeQN=D3w2t-G-f6DNKwDOmhT5_bNkmb@mail.gmail.com>
<AANLkTinMstkDv5iq6usxbe1djK7GkPrOAjpKYANyMxcy@mail.gmail.com>
<AANLkTimHUOwSMCWxAOyMH1O6XwiebOfep2AfN898pETR@mail.gmail.com>
<AANLkTi=LZ9s-dMmOUz79RrKbHcgMS-OU452qr4MS1ex+@mail.gmail.com>
<AANLkTinhzO0=hyxUy++g21uB00tkKqtbMuCEMP00nkki@mail.gmail.com>
<AANLkTinH+Ym6oQyKRAfsRiQ_LFjWLUFxtYjGQ0WNaDxf@mail.gmail.com>
<AANLkTimOjLygVOuoNbssjwUJ01Ma5nPj4BNRnTZX7cQ-@mail.gmail.com>
<AANLkTi=tBq6YELYi8KP4WGiy9dTTApwpddoJnqUJRZfC@mail.gmail.com>
Date: Tue, 21 Dec 2010 04:39:22 +0000
Message-ID: <AANLkTi=q2N76dbRqCsixrg-TC2_PY1DiZE49fUo+_KYQ@mail.gmail.com>
From: Morgaine <morgaine.dinova@googlemail.com>
To: vwrap@ietf.org
Content-Type: multipart/alternative; boundary=00163630f8357298550497e4385e
Subject: Re: [vwrap] Technical basis for VW client in a web browser?
X-BeenThere: vwrap@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Virtual World Region Agent Protocol - IETF working group
<vwrap.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/vwrap>,
<mailto:vwrap-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/vwrap>
List-Post: <mailto:vwrap@ietf.org>
List-Help: <mailto:vwrap-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/vwrap>,
<mailto:vwrap-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 21 Dec 2010 04:37:31 -0000
Although I understand what you meant by "TCP has its share of problems as
well", I'm going to put my foot down a little here, because the phrase almost
suggests that TCP is not doing its job well enough, or is introducing
additional problems of its own. I know you didn't mean that, but I want to
make it clear that this is not the issue. :-)

The often-mentioned "problem" of TCP choking message deliveries when a piece
of the sequence is missing is not a problem at all if you want a reliable
data stream. It's the only possible thing that can occur at that point in
time. TCP is doing the job which it was designed to do, and which a user of
TCP should expect and require of it. If TCP didn't block then it would have
failed, because the required data for the stream hasn't yet arrived. It
*MUST* block.

If TCP is doing its job correctly by blocking, then why does the perception
exist that it is introducing problems by blocking? The reason is that the
developer has incorrectly mapped his or her desired application semantic of
reliable transport of multiple *independent* messages onto a single reliable
data stream, in which the messages have incorrectly become *sequentially
dependent on each other* by being made part of a single serial stream of
delivery. This is an application design error, not a TCP failure.

If the developer does not want independent messages to become sequentially
dependent on delivery, then he or she should not use a single TCP stream,
because reliable sequential delivery is TCP's defined semantic. To do so and
then to complain about TCP blocking independent messages simply makes no
sense. It's only blocking them because the developer has made them
sequentially dependent.

This incorrect apportioning of blame is made so frequently that it deserves
highlighting. TCP will not block delivery of independent messages when it is
used correctly, i.e. independent messages are given separate TCP streams.
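The point can be sketched in a few lines of Python, with socket.socketpair()
standing in as a local stand-in for two independent TCP connections (the
payloads here are purely illustrative):

```python
import socket

# Two independent "streams"; socketpair() stands in for two TCP connections.
a_tx, a_rx = socket.socketpair()
b_tx, b_rx = socket.socketpair()

# Stream A has data in flight that its reader never drains -- a holdup.
a_tx.sendall(b"asset chunk 1")

# Stream B is entirely unaffected by A's stall.
b_tx.sendall(b"chat: hello")
b_rx.settimeout(1.0)
msg = b_rx.recv(64)
print(msg)  # b'chat: hello'
```

Only messages placed on the *same* stream become sequentially dependent;
separate streams never block one another.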
Temporary holdups in the transport of one TCP stream will not block transport
of a second TCP stream; they are entirely independent.

I know I've belabored a point which is almost certainly obvious to everyone
here, but it needs underlining to leave no doubt where the "problem" actually
lies. It lies not with TCP but with inappropriate application of TCP
channels.

Of course, that leaves us with the interesting question of how to program
TCP-based apps that preserve our desired independent message delivery
semantic without using an infinite number of TCP streams. It's a great
topic, but it doesn't even reach the agenda for discussion until people
understand that the alleged delivery choking "problem" stemmed from them
writing code that implemented the wrong delivery semantic in the first place.

TCP is completely blameless! :-)

Morgaine.

================================

On Mon, Dec 20, 2010 at 9:25 PM, Dahlia Trimble <dahliatrimble@gmail.com> wrote:

> I'm not certain that TCP mitigates all of this; it merely moves it out of
> process space. TCP also has additional end-point requirements; depending on
> API design, it may require additional listener threads and associated
> message queueing mechanisms and lock management in order to manage multiple
> asynchronous streams. Streams are still buffered, and while the buffering
> may not exist in process space and may even be shared with mid-stream
> hosts, there are still costs involved with these buffers. Applications
> cannot simply assume they can safely dump as much information as they like
> into a TCP stream and let someone else worry about it; if they did, then
> any congestion that may occur in the network (or even the receiving
> application) would increase buffer memory requirements. One way around this
> may be to limit how much data can be pushed into a stream without some form
> of readiness signal from the other end, but might that be considered
> application level flow control?
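The readiness-signal idea raised above is essentially credit-based flow
control. A toy sketch (the class and method names are invented for
illustration, not any VWRAP API): the sender buffers locally once its credits
run out, and resumes only when the receiver signals it has drained messages:

```python
from collections import deque

class CreditSender:
    """Toy credit-based sender: never more than `credits` unacked messages."""

    def __init__(self, credits=4):
        self.credits = credits
        self.pending = deque()   # back-pressure buffer
        self.sent = []           # stands in for "put on the wire"

    def send(self, msg):
        if self.credits > 0:
            self.credits -= 1
            self.sent.append(msg)      # would go on the wire here
        else:
            self.pending.append(msg)   # out of credit: buffer locally

    def on_ready(self, n=1):
        # Receiver signals it drained n messages; spend the new credit.
        self.credits += n
        while self.credits > 0 and self.pending:
            self.send(self.pending.popleft())

s = CreditSender(credits=2)
for m in ["a", "b", "c", "d"]:
    s.send(m)
first = list(s.sent)
print(first)    # ['a', 'b'] -- the rest wait for credit
s.on_ready(2)
print(s.sent)   # ['a', 'b', 'c', 'd']
```

This is application level flow control in the plainest sense: the sender's
buffer growth is bounded regardless of how congested the path is.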
>
> If it sounds like I'm advocating UDP for reliable transport, I'm not. I'm
> rather pointing out that TCP has its share of problems as well.
>
> On Mon, Dec 20, 2010 at 12:31 PM, Dan Olivares <dcolivares@gmail.com> wrote:
>
>> While OpenSimulator's UDP stack was a learning ground, I'm talking in
>> a more general sense about what UDP is good at while evaluating its
>> fitness as a technical base for virtual worlds in a web browser.
>>
>> Using UDP with flow control puts the flow control cost squarely on the
>> server managing the connection. Using TCP, when flow control is
>> required, spreads the cost of the flow control over the devices on the
>> internet. It also ensures that connections with bad links are
>> segmented. Regardless of the point of view here (business or
>> technical), UDP with flow control isn't a great option.
>>
>> From a business standpoint, with UDP, the organization hosting the
>> server bears all of the cost of the flow control. Servers are a
>> costly, limited resource. It makes more sense to spread the load out
>> onto the network and dedicate the servers to more important tasks, like
>> hosting an additional simulator that can be rented to increase the
>> bottom line.
>>
>> From a technical standpoint, spreading it out over the network makes
>> it more scalable and fault tolerant.
>>
>> That's not to say that we should throw UDP out the window. It's good
>> at sending large amounts of quickly expiring data.
>>
>> Regards
>>
>> Daniel Olivares
>>
>> On Mon, Dec 20, 2010 at 2:37 PM, Dahlia Trimble <dahliatrimble@gmail.com>
>> wrote:
>> > I believe part of the problem Dan refers to with CPU load is with the
>> > implementation of LLUDP in OpenSimulator ( http://opensimulator.org ).
>> > In this implementation, the scenegraph simulation server communicates
>> > with multiple clients, and much (most?) information is transmitted via
>> > UDP, including many assets which could be retrieved from other sources
>> > and using other protocols. In this case the choice of UDP for *all*
>> > communications may have been a poor one, at least from a scalability
>> > perspective. Linden Lab has recently moved some server<->client
>> > information transfer to other processes outside the main simulation
>> > server, and via HTTP protocol. I suspect that as the benefits of this
>> > are realized there may be more non-realtime messaging moving out of
>> > Linden's simulator process, and the OpenSimulator implementation of
>> > this protocol will follow.
>> >
>> > On Mon, Dec 20, 2010 at 11:11 AM, Dan Olivares <dcolivares@gmail.com> wrote:
>> >>
>> >> I think one thing that we've learned by exploring the leading virtual
>> >> world technologies that exist today is that TCP should be used for any
>> >> transmission that requires ordered and reliable delivery. Doing so
>> >> shares the 'flow control' load across the devices on the internet (for
>> >> free).
>> >>
>> >> The user level UDP flow control may /seem/ negligible. With a small
>> >> number of users and high quality connections, it is negligible.
>> >> However, as the load increases and the quality of the network links
>> >> decreases, there's a considerable spike in CPU time required to
>> >> maintain reliable delivery. Generally, with UDP, flow control, and
>> >> bad quality or overloaded connections, the load is felt on the CPU of
>> >> the server managing the flow control. *The quality to CPU usage
>> >> curve is precipitous.* As the quality of the connection decreases a
>> >> little, the CPU load increases a LOT. At a certain point, the server
>> >> falls off the edge of the cliff and can no longer keep up. Sending
>> >> reliable data in any but the lowest quantity is far worse for the
>> >> overall user experience over UDP than TCP as the load increases.
>> >> Network congestion leading to single server CPU overload doesn't just
>> >> affect the experience of one user with a low quality connection. It
>> >> affects the quality of the experience for every user connected.
>> >> With ISPs de-prioritizing UDP packets, using it for a large amount of
>> >> reliable communication isn't a good option.
>> >>
>> >> UDP is of most benefit when there is a significant amount of
>> >> unreliable traffic that can be dropped if it comes in out of order.
>> >> Real time spatial information, for example, benefits from this
>> >> approach.
>> >>
>> >> The flow control, coupled with the quantity of traffic that results
>> >> from the server relaying reliable updates over UDP, leaves fewer CPU
>> >> resources for other things on the server and is one of the reasons for
>> >> the 'SecondLife 100 user limit'. Intel reached 1000 users by
>> >> separating the client manager from the world space managing
>> >> server (simulator). (#0) (There are other reasons for the 100 user
>> >> limit, but they're beyond the scope of this document.)
>> >>
>> >> TCP has guaranteed delivery of every packet. In the content space,
>> >> TCP is the most effective delivery mechanism for everything except a
>> >> small subset of technologies.
>> >>
>> >> That doesn't mean that we should throw UDP out. TCP has its own
>> >> problems. With TCP, the load isn't felt on the CPU; it's felt in the
>> >> speed of delivery of the content.
>> >>
>> >> As Dahlia said, every packet must arrive before the next one can begin
>> >> to be sent. It isn't effective to send large quantities of quickly
>> >> expiring data over TCP. With multiple types of data requiring
>> >> different speeds of delivery, it also isn't effective to stuff
>> >> everything into the same connection.
>> >>
>> >> Browsers overcome this problem by making multiple temporary
>> >> connections, downloading referenced resources immediately after
>> >> downloading a full 'index' (html) content page.
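The browser pattern described above is easy to mimic: fetch the index first,
then pull its referenced resources over parallel connections. In this sketch
the fetch is simulated with a fixed delay and the resource names are
stand-ins, not real requests:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(resource):
    # Stand-in for an HTTP GET on its own temporary connection.
    time.sleep(0.1)
    return resource, len(resource)

# Resources referenced by the already-downloaded 'index' page.
resources = ["mesh1.dat", "tex1.jpg", "tex2.jpg", "anim1.dat"]
start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(resources)) as pool:
    results = dict(pool.map(fetch, resources))
elapsed = time.monotonic() - start
print(sorted(results))  # all four resources arrived
print(elapsed < 0.35)   # ~one fetch's latency, not four in series
```

A stall on any one connection delays only that resource, not the other three:
the head-of-line blocking is confined per stream.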
>> >>
>> >> Some people might think that if browsers have overcome this problem,
>> >> then it might just be better to use the same technology for virtual
>> >> worlds (HTTP). There's still an issue with that reasoning. The
>> >> issue is that HTTP was really designed to transmit files and not a
>> >> 'data stream of multiple objects'. There is overhead, and there are
>> >> restrictions designed with files in mind, built into HTTP, that make
>> >> it not quite a perfect fit. You can overlay 'streaming compatible
>> >> files' on top of HTTP, but it usually involves establishing several
>> >> connections over time for the same content element.
>> >>
>> >> This is where web sockets find their sweet spot. They're the middle
>> >> ground, getting the benefits of TCP while not encountering the
>> >> restrictions of HTTP. You can create multiple TCP connections for
>> >> different types of data, allowing you to get around the TCP queue bog.
>> >> They still don't really solve the 'mass quantities of quickly
>> >> expiring data' problem though, so we still can't throw UDP out the
>> >> window.
>> >>
>> >> In conclusion, at the moment, none of the existing underlying
>> >> protocols are optimal for the complex nature of the data of virtual
>> >> worlds. However, with some ingenuity and 'out of the box' thinking,
>> >> we can utilize what's there to 'almost' get there.
>> >>
>> >> We do need a few things:
>> >> 1. Web Sockets standard in browsers (we can make do without UDP
>> >> support, but we really, really want it)
>> >> 2. A solid and accepted standard for 3d rendering in the browser.
>> >> (Problems: WebGL + Internet Explorer?, Flash + 3d rendering speed +
>> >> closed source renderer?, Unity3d + plugin distribution + closed source
>> >> renderer?)
>> >> 3. Faster JavaScript (each WebGL frame fully processed with
>> >> JavaScript is a bit much when you consider the problem space of
>> >> thousands of user created objects being displayed)
>> >>
>> >> Regards
>> >>
>> >> Daniel Olivares
>> >>
>> >> References:
>> >> (0) : http://nwn.blogs.com/nwn/2010/09/intel-science-sim.html
>> >>
>> >> On Mon, Dec 20, 2010 at 11:44 AM, Morgaine
>> >> <morgaine.dinova@googlemail.com> wrote:
>> >> > On Mon, Dec 20, 2010 at 12:56 AM, Dahlia Trimble
>> >> > <dahliatrimble@gmail.com> wrote:
>> >> >>
>> >> >> If anyone has any evidence of internet pathways that selectively
>> >> >> favor TCP over other traffic, I'd be interested in seeing it.
>> >> >
>> >> > I have first-hand knowledge of this. I worked for several years at
>> >> > the Network Operations Centre of a top-tier ISP, and one of my duties
>> >> > was looking after service routers and firewalls. Policy-based routing
>> >> > and firewall access lists are configured with rulesets which are
>> >> > processed sequentially and eat up router CPU, which is a finite
>> >> > resource.
>> >> >
>> >> > Under off-peak conditions, packet loss is quite rare in the absence
>> >> > of interface or line faults, and router CPUs are scaled to handle the
>> >> > expected load, so packets are never dropped willfully. When networks
>> >> > are congested, however, which unfortunately is not uncommon during
>> >> > peak hours owing to the common practice of oversubscribing capacity
>> >> > (or poor scalability planning), CPU load often reaches critical
>> >> > levels, and routers are configured to prioritize certain payload
>> >> > types in favor of others when this happens.
>> >> >
>> >> > TCP always gets top priority because it carries HTTP, which is most
>> >> > closely tied to business revenue. In contrast, UDP (and also ICMP
>> >> > Echo) are normally configured right down the bottom end of the
>> >> > priority list, so under peak load, when the CPU has to make a choice
>> >> > what to drop to stay within safe operating limits, UDP gets the chop
>> >> > first. This was business as usual at the ISP, and that's how the
>> >> > network designers wanted the traffic priorities configured. (I was
>> >> > merely implementing policy, not creating it.)
>> >> >
>> >> > Gaming fans sometimes complain that ISPs are reducing the quality of
>> >> > service of their UDP traffic, and in some sense it's true. From my
>> >> > experience it's not done maliciously, nor as a conscious policy of
>> >> > network non-neutrality, but simply as a means of protecting the more
>> >> > prized resource of TCP payloads when operating conditions mandate
>> >> > that something has to be thrown away.
>> >> >
>> >> > On the positive side, I've never known UDP packets (nor any other
>> >> > kind) to be dropped willfully for the above reason when all equipment
>> >> > is working within safe design limits and there is enough capacity to
>> >> > carry them. If it happens off-peak, then something is very wrong with
>> >> > network sizing, or the equipment is faulty.
>> >> >
>> >> > Unfortunately, that's not the end of the saga with UDP. It's just the
>> >> > beginning, because there is another big reason for dropped packets,
>> >> > and this one occurs even when equipment is working within its
>> >> > designed operating limits: traffic shaping.
>> >> >
>> >> > IP traffic shaping is performed by packet queuing as a first resort,
>> >> > to slow down traffic in the hope that the source notices and adapts.
>> >> > If the packet rate doesn't slow down, then the method of last resort
>> >> > is to drop excess packets when queue buffers hit their configured
>> >> > highwater marks. TCP implements a lot of things to mitigate packet
>> >> > loss, such as slow start, exponential backoff and transmit pacing,
>> >> > with the purpose of adapting transmit rate to receipt rate across
>> >> > paths of limited bandwidth so that traffic-shaping routers don't
>> >> > enter their drop state.
>> >> >
>> >> > UDP has no such flow control, so the onus falls upon the UDP
>> >> > application endpoints to carry out adaptive flow control themselves.
>> >> > The likelihood of this being done by UDP applications as effectively
>> >> > as it is done in today's finely honed TCP stacks is very low. It may
>> >> > not even be done at all.
>> >> >
>> >> > As a result, UDP packets can get dropped for being bad network
>> >> > citizens and not slowing down in response to packet queueing and
>> >> > transmit pacing. UDP applications may think that they're free of the
>> >> > shackles of TCP flow control, but they're not. They either slow down
>> >> > when given the hint, or their traffic gets the chop. As a result, a
>> >> > UDP application that is oblivious of end-to-end timing should expect
>> >> > packet loss when the network acts to protect itself against
>> >> > congestion. See RFC 2309 for more details about this issue, in
>> >> > particular "MANAGING AGGRESSIVE FLOWS". There is no free lunch for
>> >> > UDP packets.
>> >> >
>> >> > This is why the best advice to give prospective users of UDP is
>> >> > "Don't, unless your application is tolerant of packet loss, packet
>> >> > duplication, packet delay, corrupted packets, and out of order
>> >> > delivery."
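The adaptive flow control that TCP performs, and that a well-behaved UDP
application would have to reimplement itself, reduces at its core to
additive-increase/multiplicative-decrease (AIMD). A toy pacing loop shows
the shape; congestion here is simulated with a hard-coded threshold, where a
real sender would infer it from observed loss or delay:

```python
# Toy AIMD pacing: ramp the send rate while the path looks clean,
# halve it on any sign of congestion. The 40 pkt/s "saturation point"
# is an invented stand-in for real congestion feedback.
rate = 10.0          # packets/sec, arbitrary starting point
history = []
for tick in range(10):
    congested = rate > 40.0   # pretend the path saturates at 40 pkt/s
    if congested:
        rate *= 0.5           # multiplicative decrease on congestion
    else:
        rate += 10.0          # additive increase while clean
    history.append(rate)
print(history)  # classic sawtooth: ramp up, halve, ramp again
```

The sawtooth is what keeps traffic-shaping routers out of their drop state; a
UDP sender that never backs off simply gets its excess packets discarded.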
>> >> > To try to work around these properties of UDP and make it reliable,
>> >> > while simultaneously not impacting on the congestion controls of TCP
>> >> > over the shared path, is highly unlikely to be successful, unless you
>> >> > reimplement the clever control features of TCP, and do it compatibly.
>> >> > Bulldozing your UDP packets through a shared network is not a
>> >> > solution, and will fail.
>> >> >
>> >> > [PS. The problems don't even stop there, as there are further causes
>> >> > of UDP packet loss. One is the effect of TCP flow-control
>> >> > synchronization on UDP loss rate over congested paths, which
>> >> > counter-intuitively negates any benefit that could result from
>> >> > reducing UDP traffic rates, because TCP synchronization picks up any
>> >> > bandwidth slack that reducing UDP traffic has freed. As a result,
>> >> > TCP congestion actually increases UDP packet drop rates. (This has
>> >> > been a topic of research.) Network protocols have very complex
>> >> > behaviors.]
>> >> >
>> >> > Morgaine.
>> >> >
>> >> > ======================================
>> >> >
>> >> > On Mon, Dec 20, 2010 at 12:56 AM, Dahlia Trimble
>> >> > <dahliatrimble@gmail.com> wrote:
>> >> >>
>> >> >> I have used both TCP and UDP in VW applications. I've found that TCP
>> >> >> has acceptable latency and is not really any worse than UDP when
>> >> >> either is tried over a clean, highly functional connection. I've not
>> >> >> seen any routers which drop UDP packets in favor of TCP, and I've
>> >> >> not seen any evidence of better quality TCP connections than UDP in
>> >> >> any of my tests. To the contrary, I've seen UDP perform much better
>> >> >> when network conditions are less than optimal, as small messages can
>> >> >> be sent immediately and repeated as needed without waiting for prior
>> >> >> message acknowledgement or waiting for a TCP stream to recover in
>> >> >> the event of dropped packets.
>> >> >>
>> >> >> TCP seems to be favorable when latency is not critical, as it's
>> >> >> generally (but not always) easier to use. UDP seems favorable when
>> >> >> latency is critical, as it allows the programmer to control network
>> >> >> traffic and tailor it to the application requirements.
>> >> >>
>> >> >> If anyone has any evidence of internet pathways that selectively
>> >> >> favor TCP over other traffic, I'd be interested in seeing it.
>> >> >>
>> >> >> On Sun, Dec 19, 2010 at 3:26 PM, Meadhbh Hamrick
>> >> >> <ohmeadhbh@gmail.com> wrote:
>> >> >>>
>> >> >>> do we really know that UDP is what we want, even for low latency?
>> >> >>> if you're multiplexing messages over a websocket connection, it's
>> >> >>> highly likely it'll be an existing connection (i.e. it's likely one
>> >> >>> tcp/ip connection will carry several multiplexed websockets
>> >> >>> messages.)
>> >> >>>
>> >> >>> in my tests, UDP doesn't do much better than TCP if you're near the
>> >> >>> network rate, as it seems a lot of routers tend to dump UDP packets
>> >> >>> first.
>> >> >>>
>> >> >>> most modern OSes now have api calls to let you disable TCP
>> >> >>> slow-start.
>> >> >>>
>> >> >>> i guess what i'm saying is it might be a good idea to define
>> >> >>> messages in a way so they're transport agnostic. that and I would
>> >> >>> wager that any latency improvements from UDP are dwarfed by latency
>> >> >>> introduced by application layer mechanisms to replace TCP's flow
>> >> >>> control & resend semantics.
>> >> >>>
>> >> >>> just my $0.02.
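Meadhbh's suggestion of transport-agnostic messages can be illustrated with
simple length-prefixed JSON framing: the same bytes can ride a TCP stream, a
websocket frame, or a UDP datagram. The field names below are invented for
the example, not any VWRAP wire format:

```python
import json
import struct

def encode(msg: dict) -> bytes:
    # 4-byte big-endian length prefix + UTF-8 JSON body.
    body = json.dumps(msg).encode("utf-8")
    return struct.pack("!I", len(body)) + body

def decode(buf: bytes):
    # Returns the first framed message and any remaining bytes.
    (n,) = struct.unpack("!I", buf[:4])
    return json.loads(buf[4:4 + n].decode("utf-8")), buf[4 + n:]

wire = encode({"type": "chat", "text": "hello"}) + encode({"type": "move", "x": 1})
first, rest = decode(wire)
second, leftover = decode(rest)
print(first, second)  # both messages recovered, whatever pipe carried them
```

Because framing lives above the transport, the choice of TCP, websockets, or
UDP per message type becomes a deployment decision rather than a protocol one.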
>> >> >>> On Dec 19, 2010 2:34 PM, "SM" <sm@resistor.net> wrote:
>> >> >>>
>> >> >>> _______________________________________________
>> >> >>> vwrap mailing list
>> >> >>> vwrap@ietf.org
>> >> >>> https://www.ietf.org/mailman/listinfo/vwrap