Re: [hybi] Why not just use ssh?

"Shelby Moore" <> Wed, 01 September 2010 01:36 UTC

Date: Tue, 31 Aug 2010 21:36:42 -0400
From: "Shelby Moore" <>
To: "Eric Rescorla" <>
Subject: Re: [hybi] Why not just use ssh?

Adam Barth wrote:
> Right.  We're talking about what APIs a browser exposes to arbitrary
> web content.  I don't want to expose the non-TLS version as an API
> inside a web browser because I'm not convinced it can't be used by
> attackers to mount a cross-protocol attack.

Seems to me that is a weak argument relative to the amount of damage it
will do if we burden the browser with perfect defense against
cross-protocol attacks, as explained below in my response to Eric.

> Cross-protocol attacks are quite subtle and generally take years to
> uncover.  Your statement is roughly equivalent to "protocol XYZ uses
> encryption, it might be secure against man-in-the-middle attacks,
> right?"

HTTP isn't secure against MITM, so by your logic, we should turn off HTTP?

I tried to resolve this in private with Eric, but he said he did not have
time to fully understand and engage my points privately.

> Shelby Moore wrote:
>> >> I think we need to deliver on HTTP Upgrade.
>> >
>> > TLS works over any port.  The point of using TLS alone is to block
>> > cross-protocol attacks.  If we provide both TLS and non-TLS options,
>> > the attackers will choose the non-TLS option for their attacks,
>> > whereas the folks who actually want to connect to the server more
>> > than 60-some percent of the time will use the TLS option.  Offering
>> > both is a lose-lose.
>>
>> Notwithstanding that I think cross-protocol attacks are the fault of the
>> target protocol, do not forget that browsers can allow users to turn
>> off or opt in to certain features.

Eric Rescorla wrote:
> I don't see the relevance of this.

You chopped off the part of my prior message that explained the relevance
as follows:

I am reminded of my "don't go chasing shadows" post about security:

Blaming security holes on the messenger instead of on the actual hole
(injection point), or not hardening behind the firewall, is analogous to
hardening a castle by making the walls ever higher and the doors ever
smaller or more difficult to open.

The end result is starvation.

The most security is to break yourself into a million parts and distribute
them.

So moving the security farther from the center is the answer.

We don't want to lock ourselves inside with the security holes, we expose
the holes so we can be outside and prosper, and so the holes will get
identified and fixed.

It goes back to "more eyeballs = shallower bugs" (the Cathedral and Bazaar
model). It is all about maximizing the number of mutations per evolutionary
generation. Wasn't there a math book that said "thou shalt go forth and
multiply"?
Conflation (blaming something for something else, or trying to
implement one thing by implementing multiple things) is worse than
wasteful; it is actually classified as evil in some ancient math books
(Parable of the Talents).

> The extant protocols have whatever set of security properties they have.

If someone builds a popular way to send email by requesting an image URL
with GET params, does that mean it is a protocol attack against HTTP, so we
should apply SOP to images because they could be used to send spam?
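
The image-URL pattern described above can be sketched concretely. This is a
hedged Python sketch, not any real service: the endpoint name, the query
parameters, and the mail queue are all invented for illustration. It shows
why the server cannot distinguish an embedded <img> fetch from a deliberate
request:

```python
# Hypothetical sketch of the bad-server pattern: a server that performs a
# side effect (sending mail) in response to a plain GET. All names here
# are invented for illustration.
from urllib.parse import urlparse, parse_qs

sent_mail = []  # stands in for a real mail queue

def handle_get(url):
    """A badly designed handler: a GET request causes a side effect."""
    parts = urlparse(url)
    params = parse_qs(parts.query)
    if parts.path == "/banner.gif" and "to" in params:
        # The server cannot tell an <img> tag from a deliberate request.
        sent_mail.append((params["to"][0], params.get("body", [""])[0]))
    return b"GIF89a..."  # returns something image-like either way

# Any page embedding
#   <img src="http://mail.example/banner.gif?to=...&body=...">
# makes every visitor's browser issue this request automatically:
handle_get("http://mail.example/banner.gif?to=victim@example.com&body=spam")
print(len(sent_mail))  # the side effect fired without the user's intent
```

The hole is the server acting on an unauthenticated GET, which no amount of
perimeter filtering in the browser can fully paper over.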

How many examples of bad server programming do you need me to give before
you realize it is insanity to trap ourselves inside the prior analogy of
the castle walls by forcing the perimeter clients to defend the bad
servers?
I did not denounce SOP for client side resources.

I said SOP is not the security hole for server requests. I have explained
at the sub-links that the security hole for cross-protocol attacks is the
bad servers and protocols. And the security hole for CSRF payloads is the
injection points of the bad scripting, which has nothing to do with
cross-origin. Refer to the links I provided (I can't retype them all here).
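
As a concrete illustration of the claim that the fix belongs at the server,
here is a hedged sketch (all names invented) of one common server-side
defense, a session-bound CSRF token, which needs no help from the browser's
perimeter:

```python
# Hypothetical sketch of a server protecting itself: requests must carry
# a secret token the server issued with the legitimate form, so a forged
# cross-site request cannot supply a valid one. Names are illustrative.
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)

def issue_token(session_id):
    # Token bound to the session, handed out with the legitimate form.
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def handle_post(session_id, token, action):
    expected = issue_token(session_id)
    if not hmac.compare_digest(expected, token):  # timing-safe comparison
        return "403 rejected"
    return "200 performed " + action

sid = "session-abc"
good = handle_post(sid, issue_token(sid), "send_mail")
forged = handle_post(sid, "guessed-token", "send_mail")
print(good, forged)
```

The point is that the check lives entirely on the server; no browser-side
perimeter rule is involved.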

We can't have a situation where the browser becomes surety for the servers
of the world. That will only encourage servers and protocols to become
more lax and never be fixed. This means the morass of badness will
compound until we end up in abject failure.  We can't design things
where we are handcuffed by every bad thing anyone ever did in history. It
is the epitome of socialism. Capitalism is to let each entity be
responsible for itself.  Do you know where socialism ends every time in
history? In horrific failure and misery. I am begging you: please think this
over carefully. Don't reply quickly.  Go reflect on it for a week.

> The browser security model is designed to prevent Web sites from exploiting
> them.

It should apply to the client side resources only.  The servers must
protect themselves. If the castle analogy of starvation is not sufficient
to convince you, I will mathematically justify below.

> I don't consider (and as far as I can tell, pretty much nobody else does)
> shipping a solution that threatens the security of those protocols to be an
> option.

I know some experts would prefer the starvation (central control) model,
aka Almost-Better-Than-Nothing security, but I understand that
self-organization is the way nature solves chaos:

The science is that convergence to self-organization (aka evolution)
adapts faster the more mutations there are per generation.  Thus larger
populations are more convergent.  Thus hiding the real holes (so that only
a few mutations per generation can reveal them) is **LESS** secure.  It is
the analogous point made for open source.  Security holes are bugs; we must
reveal them, not obscure them in conflated security castle models.
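
The mutations-per-generation claim can be illustrated with a toy
simulation. This is a hedged sketch, not anything from the thread: a
(1+lambda) evolutionary search on a bitstring, where lambda is the number
of mutations tried per generation, with all parameters invented:

```python
# Toy model: larger lambda (more mutations per generation, a "larger
# population") reaches the target in fewer generations. Illustrative only.
import random

def generations_to_converge(n_bits=32, lam=1, seed=0):
    rng = random.Random(seed)
    target = [1] * n_bits
    current = [0] * n_bits
    gens = 0
    while current != target:
        gens += 1
        # Try lam single-bit mutations of current; keep the best offspring.
        best = current
        for _ in range(lam):
            child = list(current)
            i = rng.randrange(n_bits)
            child[i] ^= 1
            if sum(child) > sum(best):
                best = child
        current = best
    return gens

few = generations_to_converge(lam=1)    # one mutation per generation
many = generations_to_converge(lam=16)  # many mutations per generation
print(few, many)  # the lam=16 run converges in fewer generations
```

On any reasonable seed the high-mutation run converges in far fewer
generations, which is the quantitative shape of the argument above.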

I am thinking far, far ahead of you.  I am thinking of the ramifications
of the castle model and where it ends up, analogous to how I warned before
and was ignored but proven correct:
(I am not gloating, just at peace with our success)

The bottom line is that the security problem is the injection hole at the
bad protocols or servers.  If you try to fix that by applying filters at a
perimeter defense line, then Coase's Theorem ensures that the viruses
(cockroaches over the castle walls) will go around you, and the good
programs (the guys inside the castle) will lose features (food).

When you conflate the outer perimeter (i.e. the browser sending a request)
with the actual hole (i.e. the bad protocol), then you are in the castle
model with ever higher walls and smaller doors.  It is a model from which
you never escape; it only gets worse and worse, because it cannot
mathematically converge to fixes and can only converge to ever-increasing
(higher) barriers to functionality.

> As for turning features on or off, I have no idea what you propose there.
> If some feature needs to be turned on to make legitimate sites work, then
> users turn it on for every site. If it's made too difficult to turn on,
> then sites won't be able to count on it and it will go unused. Extensive
> HCI research indicates that users simply can't discriminate effectively
> in these matters.

If we create the draft on HTTP initially, it doesn't mean the world is
required to implement it.  We can create a TLS spec too, or whatever. If
we design the layers orthogonally, as we should, we don't need to be
talking about what protocol we run on top of right now.
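
The orthogonal-layering point can be sketched in code: a framing layer
written against any byte stream runs unchanged over plain TCP, TLS, or an
in-memory buffer. The length-prefixed format below is invented purely for
illustration:

```python
# A framing layer that never mentions its transport: it only needs an
# object with read()/write(), so plain sockets, ssl-wrapped sockets, and
# in-memory buffers are interchangeable. Format is illustrative.
import io
import struct

def send_frame(stream, payload: bytes):
    # 4-byte big-endian length prefix, then the payload bytes.
    stream.write(struct.pack(">I", len(payload)) + payload)

def recv_frame(stream) -> bytes:
    (length,) = struct.unpack(">I", stream.read(4))
    return stream.read(length)

# An in-memory buffer works exactly like a socket.makefile() or a
# TLS-wrapped stream would; the layer above does not care.
buf = io.BytesIO()
send_frame(buf, b"hello")
buf.seek(0)
print(recv_frame(buf))  # b'hello'
```

Because the framing layer is transport-agnostic, the choice of running it
over TLS or not can be deferred, which is the design separation argued for
above.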

My understanding of the mathematical model of the ramifications is that
we shouldn't be deciding for the users. It is an abomination of nature
when a few guys at the IETF decide for billions of people what their
priorities are. The HCI research apparently proves that the protocols had
better fix themselves or die. If we try to conflate an outer perimeter
with an inner vulnerability, what we do is gradually close the web's doors
until one day there will be no light, nothing. It is an extremely serious
problem, but if you don't look far, far ahead then you don't see the
creep. Programmers (and humans in general) are notoriously incapable of
perceiving exponential (proportional rate of) change until it is too late
to do anything about the bad ramifications (because in exponential creep
the nominal change is small until the very end, where it accelerates too