Re: [Pearg] descriptive censorship work: draft-hall-censorship-tech

Vittorio Bertola <> Fri, 17 May 2019 10:00 UTC

List-Id: Privacy Enhancements and Assessment Proposed RG <>

> On 17 May 2019 at 09:58, Stephen Farrell <> wrote:
> > We can discuss
> > whether we want to keep this a regularly-updated-draft or something else in
> > the RG meeting at IETF 105 (we can do a short presentation, if you're
> > interested).
> Personally, I'd prefer to see the thing just finished as
> soon as possible. A presentation for that is no harm, but
> not really needed IMO. My argument for that is that an
> RFC on this topic could be informative for protocol
> designers, whereas the draft hasn't been so far (at least
> afaik). Telling someone to go read and think about RFCnnnn
> is more likely to be effective than explaining the status
> of draft-hall-* I reckon.

First of all, thanks for bringing up this document again. I had read it quickly and even promised the authors some feedback on the DNS section, which, however, I never provided (apologies).

Now that I have had the time to go through it in a little more depth, I must say I have broader issues with it.

To begin with, the document assumes that any content control is censorship. By the definition in section 1, even blocking the connection from a bot to its command-and-control (C&C) server is censorship. That's way too broad.

The basic problem is that many of these techniques are also used for positive reasons, which makes it difficult to determine how to deal with them at the technical level: any counter-technique that makes them impossible is bound to also disrupt those positive uses, creating problems for their users (see the DoH debate).
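The dual-use point is easy to see in the DNS case: the same resolver-side blocklist mechanism implements both a malware filter and state-mandated blocking, and only the list contents differ. A minimal sketch (all names here are hypothetical, not from the draft):

```python
# Resolver-side DNS filtering sketch: one mechanism, two uses.
# Whether this is "protection" or "censorship" depends entirely
# on what gets put into BLOCKLIST, not on the technique itself.

BLOCKLIST = {
    "botnet-c2.example",   # "positive" use: cut a bot off from its C&C server
    "news-site.example",   # censorship use: block disfavoured content
}

def resolve(qname, upstream_lookup):
    """Answer a query; return None (NXDOMAIN-style) for blocked names."""
    name = qname.rstrip(".").lower()
    labels = name.split(".")
    # Block the name itself and any subdomain of a listed name.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None
    return upstream_lookup(name)

# Stand-in upstream resolver for illustration.
fake_upstream = {"example.com": "93.184.216.34"}.get

print(resolve("example.com.", fake_upstream))            # resolves normally
print(resolve("sub.botnet-c2.example", fake_upstream))   # blocked -> None
```

A counter-technique that bypasses the resolver (encrypted DNS to a third party, for instance) defeats both entries in the list at once, which is exactly the tradeoff at issue.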

If this is meant as possible guidance to protocol designers, it should be clear that there are policy impact evaluations to be made when designing anything that affects these points of control in one way or another, possibly involving non-technical stakeholders in the discussion. There is a whole set of nuances and complex tradeoffs in this field, yet I don't see this in the document - it looks like a simplified, biased view of the problem that considers any content control technique negative in itself.

Also, some wording seems to assume that only public authorities are censors. In the description in section 3.1, there are statements like "Application service providers can be pressured, coerced, or legally required to censor specific content or flows of data.", or in section 3.2.3, "In addition to censorship by the state, many governments pressure content providers to censor themselves." 

But there are also cases in which Internet platforms decide to block content on their own, for their own business interests or company views (example: I don't think Facebook is censoring breastfeeding images due to any legal requirement or governmental pressure). In democratic countries, this is the kind of censorship that currently creates the most concern: legally mandated censorship comes out of the democratic process and usually gets at least some degree of scrutiny and transparency, while citizens have no way of controlling or even influencing the policies of a private company operating in another jurisdiction, and those policies are usually quite opaque. These kinds of practices do not seem to be considered in section 3 (and by the way, there is one more point of control, client applications, which is not even mentioned).

I also find section 3 confusing in itself, since 3.1 is a list of actors and use cases, while the other sections are about technical points where information on content could be found. Perhaps 3.1 should become a section of its own, and a much more nuanced discussion of the problem should come before the technicalities of how control can be applied.

Also, I'm not sure about section 6 - the document is supposed to be about technical mechanisms, so what is a "non-technical interference" section doing there? But if you want to keep it, then there's a lot of material missing, starting with, for example, Internet platforms shutting down the accounts of people they don't like.

A couple of weeks ago, for example, Facebook blocked the account of a European elections candidate who happened to be a descendant of Mussolini. A couple of days ago, Facebook shut down 23 pages in Italy that supported the government parties, for spreading "fake news" that, according to those who were posting it, was not fake at all. I'm not arguing that this was bad behaviour on Facebook's part, but these are indeed cases of silencing people for their political views, yet I don't see where this kind of thing is covered in the document.

So, in the end: IMHO either this document is made strictly technical, only describing technologies that can be used to identify and block content without assuming whether such activity is good or bad, and dropping all the non-technical parts and judgements; or it needs to broaden the viewpoints and the cases it covers.

Vittorio Bertola | Head of Policy & Innovation, Open-Xchange 
Office @ Via Treviso 12, 10144 Torino, Italy