Re: [hybi] Anti-patterns (was Re: I-D Action:draft-ietf-hybi-thewebsocketprotocol-01.txt)

"Shelby Moore" <> Thu, 02 September 2010 20:22 UTC


I mean that as no disrespect to Dr. Fielding, who has imho done an
enviably awesome job of organizing the network architecture concepts we
need to categorize. But unfortunately, imo he made that critical error in
the specific conclusions he drew within his otherwise (afaics, so far)
excellent framework.

And the problem is that much of the work being done here and elsewhere
is influenced by errors of that genre.

Specifically, imo the entire cross-protocol and same-origin policy (SOP)
language needs to be stripped out of the draft. Imo, it is worse than
useless.

I have no disagreement that SOP does apply to client-side resources (in
client-server architecture). All client-side resources loaded by a page
should only be available to that page (i.e. that origin). Do not think
that I am against SOP for that specific purpose.

However, SOP has no applicability to network requests because the origin
server is completely in control of the network requests it will initiate
from the client. The only exceptions are bad scripts that are injected at
the discretion of the origin server (or bugs in the browser). The argument
others make is that users cannot reason about which origins allow bad
network requests, so we have to protect users from themselves.
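
The point that the origin server, not the browser, is in control of network
access can be sketched with a hypothetical server-side check (the names and
allowlist here are my own illustration, not from any draft):

```python
# Hypothetical server-side access control: the origin server itself decides
# which Origins may use a resource, regardless of what the browser enforces.
# The allowlist and function names are illustrative only.

ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def authorize_request(headers: dict) -> bool:
    """Return True if the request's Origin header is on the allowlist.

    The origin server is in full control here: it can accept, reject,
    or ignore any request, no matter what the client permits.
    """
    origin = headers.get("Origin")
    return origin in ALLOWED_ORIGINS

print(authorize_request({"Origin": "https://app.example.com"}))  # True
print(authorize_request({"Origin": "https://evil.example"}))     # False
```

In other words, a server that cares about which origins reach it can already
refuse them at the point of service, without any browser-side blocking.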

First, that is a centralized model of security, so I suggest you read what
I wrote about evolvability and scalability under dynamic change. Hackers
invoke dynamic change. As Adam said, we don't find out about new attacks
until (even up to years) after they have occurred. So the hackers route
around the useless barriers that we call security (Coase's theorem), while
the useful features are not available to the good programmers. The hackers
will always work harder than the useful programmers, so if we apply these
incorrect security models, then we trend towards no programming features
at all (no evolution, death). I wrote about the real security we should
be doing that can really stop the hackers without blocking any useful
features.

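One server-side shape such security might take is validating every request
on its own merits rather than by its origin. The token scheme below is my
own illustration, not a proposal from this thread:

```python
# Sketch of the alternative the text argues for: validate every request at
# the server itself, instead of relying on browser-side origin blocking.
# The HMAC token scheme below is hypothetical, purely for illustration.

import hashlib
import hmac

SECRET = b"server-side-secret"  # known only to the origin server

def issue_token(session_id: str) -> str:
    """Server mints an unforgeable per-session token."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def validate_request(session_id: str, token: str) -> bool:
    """Reject any request without a valid token, no matter which origin,
    protocol, or client it arrived from."""
    return hmac.compare_digest(issue_token(session_id), token)

tok = issue_token("session-42")
print(validate_request("session-42", tok))       # True: legitimate request
print(validate_request("session-42", "forged"))  # False: rejected server-side
```

The design point is that the check lives at the server, so it stops forged
requests without removing any client-side capability.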
Forcing browser features to be crippled by the need to protect against
every server programming mistake ever made is the direction the
cross-protocol security model leads[6].

Neither SOP blocking of network requests nor cross-protocol blocking
offers any real security[5]. What they do is obfuscate the real bugs, so
that evolution is not able to fix them. For example, if SOP stops an
origin from loading resources over the network from another origin, all
this does is remove features. It doesn't add any security, because if the
page is doing bad things, the user can clearly say "this is a bad page,
don't load it again" (word gets around, bad things lose popularity and
wither). Ditto if a bad page is attacking a protocol or a different origin,
which as I explained is not security at all, because the bad page could
even attack its own protocol or origin server[6].

Afaiu, the Cathedral and the Bazaar model of "more eyeballs = shallower
bugs" is simply a statement of the way evolution works.  Read what I wrote
about Dr. Fielding's critical mistake below...



>>> Please don't add versioning to the protocol.  Versioning is an
>>> anti-pattern for the web.
>> FTR, that statement is false.  There is a very small group of
>> inexperienced developers that think client-side rendering of
>> uncontrolled content somehow defines universal principles for
>> all software engineering.  The rest of us use whatever techniques
>> we can to reduce entropy.
> Hi Dr. Fielding,
> I respectfully note the relevant fact that you are apparently a VP for
> what is afaik the most popular web server on the planet. And I use Apache
> and appreciate
> its existence. Thank you for your efforts.
> Let's talk about anti-patterns, and I assure you this isn't "Quixote
> ripostes".
> In my logic (feel free to retort/clarify), your comparison of
> client-server and P2P architecture in your dissertation contains egregious
> errors which steer our designs toward degenerative evolution[4].
> We know from nature[1], that systems self-organize via evolution and that
> the reason evolution converges so fast amongst the frightening chaos of
> astronomical random possibilities, is because the number of mutations (the
> population of offspring) is very large for each generational step. With
> small populations that do not grow exponentially, evolution loses its
> efficacy[2] and raison d'être.
> In your "desirable architecture properties" comparison tables to P2P, you
> claim[4] that client-server is extremely more scalable and equivalently
> evolvable. Granted you do assert that P2P is more extensible, but I think
> that misses the critical distinction below.
> Perhaps one could claim client-server is more scalable because the current
> network is asymmetrically skewed to more download than upload bandwidth,
> but I understand this is an artifact of client-server, not a cause of it.
> The client-server architecture's monopolistic dominance (and the incorrect
> security models[3] that propagate because of the myopia towards other
> architectures) violates the E2E (end-to-end) principle of the original
> internet (i.e. IPv4 before NATs and firewalls). This has eliminated 99% of
> the possible permutations of connections between peers (forcing the coding
> of programs with the same patterns/footprint as viruses, employing
> unreliable tunneling through firewalls and NATs, artificial barriers that
> were never supposed to exist in IPv4).
> I assert that the population of mutations (possibilities) for network
> interaction has been dangerously reduced, so that the evolution of the web
> is threatened with degeneration. So I cannot see how you can claim that a
> centralized architecture evolves as well as P2P, when it is not even close
> by orders-of-magnitude.
> As for scalability, an architecture which cannot evolve in real-time
> cannot scale to real-time change[1]. So I find your claim of superior
> scalability for client-server to be completely opposite to the
> preponderance of the facts of evolutionary science.
> [1]
> [2] Evolution works via annealing in the N-dimensional solution space of
> fitness, which is why stochastically large populations are critical for
> evolution to not get stuck (as gradient search, i.e. Newton's method,
> does) in local minima (valleys) of the global solution space. We know the
> brain learns via stochastic annealing also. It is also why ice that forms
> via slow cooling has fewer cracks. Etc, etc, etc.
> [3]
> [4] Roy Thomas Fielding, Architectural Styles and the Design of
> Network-based Software Architectures, 2000
> 3.6 Peer-to-Peer Styles:
> 3.4 Hierarchical Styles:
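
Footnote [2]'s contrast between gradient search and stochastic annealing can
be illustrated numerically on a function with one local and one global
minimum. The function and every parameter below are my own, chosen only for
illustration:

```python
# Numerical illustration of footnote [2]: plain gradient descent gets stuck
# in a local minimum, while stochastic annealing can escape it by sometimes
# accepting uphill (worse) mutations while the "temperature" is high.

import math
import random

def f(x):
    """Double well: local minimum near x = +1, global minimum near x = -1."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def gradient_descent(x, lr=0.01, steps=1000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def anneal(x, t0=2.0, cooling=0.999, steps=5000, seed=0):
    rng = random.Random(seed)
    best, t = x, t0
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)           # random mutation
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                                # accept, sometimes uphill
        if f(x) < f(best):
            best = x
        t *= cooling                                # cool slowly
    return best

x_gd = gradient_descent(1.0)   # starts in the wrong valley, stays there
x_sa = anneal(1.0)             # same start, but can cross the barrier
print(f"gradient descent: x ~ {x_gd:.2f}, f = {f(x_gd):.3f}")
print(f"annealing:        x ~ {x_sa:.2f}, f = {f(x_sa):.3f}")
```

With these settings the deterministic descent stays near the local valley at
x = +1, while the annealer's random mutations let it cross the barrier and
settle in the deeper valley near x = -1.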