Re: [hybi] Anti-patterns (was Re: I-D Action:draft-ietf-hybi-thewebsocketprotocol-01.txt)

"Shelby Moore" <shelby@coolpage.com> Fri, 03 September 2010 00:45 UTC

Message-ID: <e4d27f72fc2f748051b28d5b35c98d4a.squirrel@sm.webmail.pair.com>
In-Reply-To: <3ECFC088-CAE9-4082-8B62-EBED891AC767@gbiv.com>
References: <e0b4f16de694ec610c00bd55b1dd7ad1.squirrel@sm.webmail.pair.com> <3ECFC088-CAE9-4082-8B62-EBED891AC767@gbiv.com>
Date: Thu, 02 Sep 2010 20:45:58 -0400
From: Shelby Moore <shelby@coolpage.com>
To: "Roy T. Fielding" <fielding@gbiv.com>
Cc: Hybi HTTP <hybi@ietf.org>
Subject: Re: [hybi] Anti-patterns (was Re: I-D Action:draft-ietf-hybi-thewebsocketprotocol-01.txt)

>> This topic has again diverged from the work of hybi.
>
Joe Hildebrand wrote:
> Agree that this has nothing to do with the work at hand.

I will demonstrate otherwise below...


======================
Roy T. Fielding wrote:
[snip]
> think of a way to turn your argument into proposed text for the draft.

I will make specific points below as to how this applies to the question of
whether the SOP and cross-protocol language should be in the draft.

[snip]
> I am not the current VP, but you are welcome in any case.

Thank you and also for clarifying.

[snip]
> to save, in an attempt at design by committee.  Treating everyone around
> you as combatants does not help your arguments.


"Imagined" is the operative word: I didn't treat anyone as a combatant.
Factual technical debate (without pedantic obfuscation) applies. So I think
we can agree not to discuss these imaginary emotions further.


>> In my logic (feel free to retort/clarify), your comparison of
>> client-server and P2P architecture in your dissertation contains
>> egregious
>> errors which enlighten our designs to degenerative evolution[4].
>
> No, you just haven't read it carefully.


Or perhaps you can entertain the possibility that I did read it carefully,
and that we simply have different perspectives on reality.


>> We know from nature[1], that systems self-organize via evolution and
>> that
>> the reason evolution converges so fast amongst the frightening chaos of
>> astronomical random possibilities, is because the number of mutations
>> (the
>> population of offspring) is very large for each generational step. With
>> small populations that do not grow exponentially, evolution loses its
>> efficacy[2] and Raison d'être.
>
> Natural systems self-organize.  Software systems do not.


First, it is not correct to imply that running software cannot
self-organize without a change to its programming. There are genetic and
generative algorithms that can be programmed to do exactly that.
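As a toy illustration (my own sketch, nothing from the draft; the bit-string fitness function and all parameters are made up for the example), here is a minimal genetic algorithm: the program itself never changes, yet the running population self-organizes toward higher fitness through selection, crossover, and mutation:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def evolve(fitness, length=16, pop_size=50, generations=100):
    """Toy genetic algorithm: evolve a population of bit-strings."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep top half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # single-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(length)] ^= 1   # point mutation (flip a bit)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "ones-max" toy problem: fitness is simply the number of 1 bits
best = evolve(fitness=sum)
```

No human redesigns anything between generations; the population converges on its own, which is the narrow sense of "self-organizing" meant above.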

Second, I understood even before your reply that you are making a
distinction between the programming of software and the executing
software: the Cathedral model.

I suggest to you that it has already been proven that lowering the cycle
cost of the feedback loop between what software is doing and its
programming (i.e. the Bazaar, peer-programmer model) squashes the
Cathedral model in the reality of the free market:

http://www.catb.org/~esr/writings/cathedral-bazaar/

And WebSockets is about applications, application programmers, and end
users. That is an evolutionary feedback system.


>> In your "desireable architecture properties" comparison tables to P2P,
>> you
>> claim[4] that client-server is extremely more scalable and equivalently
>> evolvable. Granted you do assert that P2P is more extensible, but I
>> think
>> that misses the critical distinction below.
>
> No, I claimed that client-server improves scalability and evolvability
> by deliberately (by design) separating concerns.  Those are a well-known
> properties of applying a well-known software engineering principle.


Improves scalability and evolvability within specific chosen limitations,
namely the inability to scale and evolve beyond a centralized server. And
the users don't have atomic control of the server, so it is not as
generally scalable and evolvable as a system where users have atomic
control over their interactions with each other.

I understand that you could make the point that users cannot interact
with each other until someone programs the feature to do so, and thus
conclude (incorrectly, I think) that servers can scale and evolve as well
as P2P.  However, the basic presumption of the WWW is that the user is
the programmer.  HTML was specifically designed for the amateur
programmer. And WebSockets opens a whole new world of possibilities,
especially if we don't force users to interact through a server. Rather,
users could spontaneously start interacting with each other.
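To make the point concrete, here is a hedged sketch (plain Python stdlib; the loopback address, port choice, and message strings are purely illustrative) of two peers exchanging messages directly, with no application server relaying the data between them:

```python
import socket
import threading

# One peer binds a listening socket; the other dials it directly.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

inbox = []

def accept_peer():
    """First peer: accept a direct connection and answer it."""
    conn, _ = listener.accept()
    inbox.append(conn.recv(1024).decode())  # receive the dialing peer's message
    conn.sendall(b"hello back")             # reply directly, no middleman
    conn.close()

t = threading.Thread(target=accept_peer)
t.start()

# Second peer connects straight to the first.
dialer = socket.create_connection(("127.0.0.1", port))
dialer.sendall(b"hello peer")
reply = dialer.recv(1024).decode()
dialer.close()
t.join()
listener.close()
```

The bytes never touch a third machine; that is the E2E-style interaction being argued for, which NATs and firewalls make hard between real hosts on today's internet.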

Users already do so via social networks, and they get quite frustrated
when the central server does not let them do what they want to do. I was
apparently the first to reveal to disgruntled users the Facebook privacy
issue they were complaining about (I can provide a link for proof). Users
thought they could hide their friends from their other friends, but the
feature wasn't doing what they thought it was doing.  Facebook had so many
different layers and ways to obfuscate (I theorize because their
multi-billion-dollar valuation model depends on making sure all your
friends find each other, whether you like it or not). A few months later,
the groundswell of discontent reached the news.

Servers suck: they are anti-freedom. Period. We need servers, but as few
of them as possible. The sooner we flatten the internet back out, the
faster it can evolve.


> There is no single P2P style.


That is a good thing for evolution.  I hope there will be a quadrillion
styles.


> Peer-to-peer network-based applications
> is just the absence of a (client-server) constraint.


I guess I agree with that, and I assume you agree that P2P can cooperate
with a server.


>  There are some
> peer-to-peer styles that do have other added constraints for layering
> and evolvability, which are briefly described in my survey.  However,
> they were not applicable to the Web and thus not applicable to the
> rest of my dissertation.  You can believe anything you want about the
> quality of my work, but you simply cannot genuinely claim that the Web
> is not a client-server system or that the clear separation afforded by
> using standard transfer protocols with defined semantics has not enabled
> Web clients to evolve independently of servers (and of each other).


If your work was only intended to classify what was, and not include what
could be, then I agree you probably had limited P2P data to work with.
And I agree the client-server model predominates now.

But think: what is fundamentally driving WebSockets? It is not just the
need for real-time push; fundamentally, the client is growing more
important than the server! The code in the client is now becoming
sophisticated enough that the server must treat the client as a peer!

Ah ha!!!

Hmmm. The web is moving towards P2P, and we had better get out of the way,
because it is going to squash us whether we like it or not.

Nature (evolution) won't put up with any rigor mortis.

And, folks, that is why we need to remove the SOP (same-origin policy) and
cross-protocol language from the WebSocket draft.  Factor in everything
I write below, too.


>> Perhaps one could claim client-server is more scalable because the
>> current
>> network is asymmetrically skewed to more download than upload bandwidth,
>> but I understand this is an artifact of client-server, not a cause of
>> it.
>
> It depends on the applications for which the system is designed.
> There is no such thing as an optimum system in the absence of
> requirements.


I agree, but please realize the fundamental requirement that is driving
the need for WebSockets.  "Push" is myopic; the generative essence is that
the client is becoming fatter (running more code) and becoming a peer.


>> The client-server architecture's monopolistic dominance (and the
>> incorrect
>> security models[3] that promulgate because of the myopia towards others
>> architectures) violates the E2E (end-to-end) principle of the original
>> internet (i.e. IPv4 before NATs and firewalls). This has eliminated 99%
>> of
>> the possible permutations of connections between peers (forcing the
>> coding
>> of programs with same patterns/footprint as viruses, employing
>> unreliable
>> tunneling through firewalls and NATs, artificial barriers that were
>> never
>> supposed to exist in IPv4).
>
> You are simply wrong here.  The E2E principle only applies to transport
> of bits.  No successful Internet application adheres to the E2E principle
> because of the multi-organizational reality of the open Internet.


Baloney, I am right.

Are Gnutella, Kazaa, etc. not successful?  Millions of users use them.

What about Skype and Yahoo Messenger? I think Skype has something like 100
million users, and it is apparently being bought by the company that
employs our Chair.


>  People
> want their firewalls, need to scan for viruses,


Yeah, because for at least 11 years we have refused to do real security
and encrypt data on the client side:

https://bugzilla.mozilla.org/show_bug.cgi?id=19184
http://www.ietf.org/mail-archive/web/http-state/current/msg00939.html

Instead we give them the half-baked insecurity of perimeter walls and
vacuum cleaners.


> and have no desire to
> support
> illegal copying across their network infrastructure.


There are legitimate uses for sharing files.  Skype is a legitimate
application.  Yahoo Messenger is a legitimate application.  Between those
you've probably touched 30 - 50% of the users of the internet.


> That is why we have
> MX hosts, http proxies, netnews hubs, etc.  Any application protocol that
> does not support intermediation will be blocked by major networks as
> soon as it is perceived as a threat (or just by default until it has
> proven itself not to be a threat).  That's life.


A threat to some vested interests, yes, but not a threat to the users.

And guess who will win eventually? The users and nature. Every time.
<joke>The Georgia Guidestones notwithstanding.</joke>

I should tell you about an irrefutable mathematical proof I came across
recently, but I am afraid it would be too shocking, so I will never
mention it on this list.


> This in no way detracts from the very real benefits of adhering to
> the E2E principle at the transport layer and out on the wider Internet
> between organizations.


It does detract from what nature wants, which is unfettered E2E
interaction.  Vested interests can stick dikes into it for a while, but
the leaks are going to overcome them, or we will shut the whole thing down
in the process of hardening the perimeter.  Eventually the hackers are
going to attack your whitelisted intermediation until you have to
whitelist everything; that is where the current model ends up.


>  Nevertheless, it should be recognized that
> rigid belief in that principle has long hindered deployment of IPv6.
> NATs and other network barriers are a desirable thing for users,
> regardless of how many problems they cause for our protocols, and
> we need to learn to work with them instead of decrying their faults.


I have no problem with NATs and firewalls, if they expose an API so that
the user can tunnel his/her application through, unfettered, with a single
click of the mouse. But I know that won't happen, because no one wants to
give users control over themselves. They are just sheep to be controlled
by the middlemen. The internet itself is evolving into a MITM attack model
in which the attackers are the very people who claim to be protecting us.


>> I assert that the population of mutations (possibilities) for network
>> interaction has been dangerously reduced, so that the evolution of the
>> web
>> is threatened to degenerate. So I can not see how you claim that a
>> centralized architecture evolves equally well to P2P, not even close by
>> orders-of-magnitude.
>
> None of the browser software that existed when I started that work was
> still
> in use by the time I finished it.  In contrast, I don't know of a single
> P2P system that has evolved past the first version written by a single
> developer.  They tend to be tightly coupled (not necessarily because
> of the interaction style -- most CS systems are that way too).


You are unaware, then, of Skype, Yahoo Messenger, etc.


> One way to solve that problem is self-descriptive messaging within
> stateless protocols across multiple transport mechanisms with
> message-based (not connection-based) encryption applied only when
> necessary.


Okay sure, I can support that. Anything that gets the ball rolling towards
what nature and users want.



>> As for scalability, an architecture which can not evolve in real-time,
>> can
>> not scale to real-time change[1]. So I find your claim regarding
>> superior
>> scalability for client-server to be completely opposite of the facts of
>> preponderance of evolutionary science.
>>
>>
>> [1] http://esr.ibiblio.org/?p=2491#comment-276759
>
> Again, your assertion does not match reality.


I assert, then, that you are not knowledgeable about the architecture of Skype.


>  The architecture does not
> need to evolve in real-time if it has already been designed to be
> scalable.


And P2P is more scalable, because multiple atomic processes interacting
are always more scalable than one centrally managed process.


> That is the whole point of scalable architectures, load balancing,
> MapReduce, etc.


Those are attempts to granularize the processes into more atomic actors.

But they still funnel through a single trunk of control (only a few people
can program them, i.e. a lack of the diversity/mutations that evolution
requires).
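For what it's worth, the map/shuffle/reduce pattern Roy mentions can be sketched in a few lines (a toy word count of my own devising, not any real MapReduce API): each phase is an independent, atomic step, which is exactly what makes the pattern easy to granularize.

```python
from collections import defaultdict
from functools import reduce

def map_phase(docs):
    """Map: each document independently emits (word, 1) pairs."""
    return [(word, 1) for doc in docs for word in doc.split()]

def shuffle_phase(pairs):
    """Shuffle: group the intermediate pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: fold each key's values independently."""
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

docs = ["the web is p2p", "the web is client server"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
# counts["the"] == 2, counts["p2p"] == 1
```

The map and reduce steps have no shared state, so a framework can run them on as many machines as it likes; the "trunk of control" objection above is about who gets to write and deploy those functions, not about the dataflow itself.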


>  Meta-architectures and introspection are an interesting
> technique in the lab, but generally fail to evolve fast enough for the
> kind of load spikes seen on the Internet.


I am not thinking only about load scaling.  I am thinking about fitness
scaling of any attribute, i.e. whatever is important to users, e.g.
features we can't even imagine and some we can.  The whole point of
evolution is that we don't yet know all the ways it will evolve.


>> [2] Evolution works via annealing in the N-dimensional solution space of
>> fitness, which is why stochastically large populations are critical for
>> evolution to not get (gradient search, i.e. Newton's method) stuck in
>> local minima (valley) of the global solutions space. We know the brain
>> learns via stochastic annealing also. It is also why when ice forms via
>> slow cooling, it has less cracks. Etc, etc, etc.
>
> Evolution of software happens by choice as organizations and people
> evolve,


Exactly!


> not as a natural mutation effect like that in biology.


Wrong.  The network is alive. The ends contain intelligence, a.k.a. humans.


>  There are distinct
> differences between the social sciences and the biological sciences.
>
> This topic has again diverged from the work of hybi.  Please stop
> hijacking
> the threads.


We were having a great technical debate, and then you added that slander
at the end, ostensibly to try to coerce/embarrass me into not making
future technical input.