Re: [HR-rt] [hrpc] UN Special Rapporteur analyses AI’s impact on human rights

Corinne Cath <corinnecath@gmail.com> Thu, 08 November 2018 09:39 UTC

MIME-Version: 1.0
References: <1850578816.785963.1541654760579.ref@mail.yahoo.com> <1850578816.785963.1541654760579@mail.yahoo.com>
In-Reply-To: <1850578816.785963.1541654760579@mail.yahoo.com>
From: Corinne Cath <corinnecath@gmail.com>
Date: Thu, 8 Nov 2018 16:38:41 +0700
Message-ID: <CAD499e+cCztcQcXAY1E8oMQtaL1QehOJyXknsjdZCn2uH_j_NQ@mail.gmail.com>
To: Mark Perkins <marknoumea=40yahoo.com@dmarc.ietf.org>
Cc: Hrpc <hrpc@irtf.org>, hr-rt@irtf.org
Content-Type: multipart/alternative; boundary="0000000000005aaab7057a24024c"
Archived-At: <https://mailarchive.ietf.org/arch/msg/hr-rt/9EPmstO8rUuk4qN57GVulGnpCcw>
Subject: Re: [HR-rt] [hrpc] UN Special Rapporteur analyses AI’s impact on human rights
X-BeenThere: hr-rt@irtf.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Human Rights Protocol Considerations Review Team <hr-rt.irtf.org>
List-Unsubscribe: <https://www.irtf.org/mailman/options/hr-rt>, <mailto:hr-rt-request@irtf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/hr-rt/>
List-Post: <mailto:hr-rt@irtf.org>
List-Help: <mailto:hr-rt-request@irtf.org?subject=help>
List-Subscribe: <https://www.irtf.org/mailman/listinfo/hr-rt>, <mailto:hr-rt-request@irtf.org?subject=subscribe>
X-List-Received-Date: Thu, 08 Nov 2018 09:39:26 -0000

EDRi also did a good short(ish) write-up of the report:


=======================================================================
4. UN Special Rapporteur analyses AI’s impact on human rights
=======================================================================

In October 2018, the United Nations (UN) Special Rapporteur for the
promotion and protection of the right to freedom of opinion and
expression, David Kaye, released his report on the implications of
artificial intelligence (AI) technologies for human rights. The report
was submitted to the UN General Assembly on 29 August 2018 but has only
been published recently. The text focuses in particular on freedom of
expression and opinion, privacy, and non-discrimination. In the report,
David Kaye first clarifies what he understands by artificial
intelligence and what the use of AI entails for the current digital
environment, debunking several myths. He then provides an overview of
the human rights potentially affected by these technological
developments, before laying out a framework for a human rights-based
approach to them.

1. Artificial intelligence is not a neutral technology

David Kaye defines artificial intelligence as a “constellation of
processes and technologies enabling computers to complement or replace
specific tasks otherwise performed by humans” through “computer code
[...] carrying instructions to translate data into conclusions,
information or outputs.” He states that AI is still highly dependent on
human intervention, as humans need to design the systems, define their
objectives and organise the datasets for the algorithms to function
properly. The report points out that AI is therefore not a neutral
technology, as the use of its outputs remains in the hands of humans.

Current forms of AI systems are far from flawless: they demand human
scrutiny and sometimes even correction. The report identifies AI
systems’ automated character, the quality of their data analysis, and
their adaptability as sources of bias. Automated decisions may produce
discriminatory effects because they rely exclusively on specific
criteria, without necessarily balancing them, and they undermine
scrutiny of and transparency over the outcomes. AI systems also rely on
huge amounts of data of questionable origin and accuracy. Furthermore,
AI can identify correlations that are easily mistaken for causation.
David Kaye points to adaptability as the main problem once human
supervision is lost: it poses challenges to ensuring transparency and
accountability.

2. Current uses of artificial intelligence interfere with human rights

David Kaye describes three main applications of AI technology that pose
important threats to several human rights.

The first problem raised is AI’s effect on freedom of expression and
opinion. On the one hand, “artificial intelligence shapes the world of
information in a way that is opaque to the user” and conceals its role
in determining what the user sees and consumes. On the other hand, the
personalisation of information display has been shown to reinforce
biases and “incentivize the promotion and recommendation of inflammatory
content or disinformation in order to sustain users’ online engagement”.
These practices impact individuals’ self-determination and autonomy to
form and develop personal opinions based on factual and varied
information, therefore threatening freedom of expression and opinion.

Secondly, similar concerns can be raised in relation to our right to
privacy, in particular with regard to AI-enabled micro-targeting for
advertisement purposes. As David Kaye states, profiling and targeting
users foster mass collection of personal data, and lead to inferring
“sensitive information about people that they have not provided or
confirmed”. The limited means of controlling the personal data
collected and generated by AI systems call the respect for privacy into
question.

Third, the Special Rapporteur highlights AI as an important threat to
the rights to freedom of expression and non-discrimination, owing to
the role AI is increasingly given in the moderation and filtering of
online content. Despite some companies’ claims that artificial
intelligence can step in where human capacities are exceeded, the
report sees the recourse to automated moderation as impeding the
exercise of human rights. Artificial intelligence is unable to resist
discriminatory assumptions or to grasp the sarcasm and cultural context
of each piece of content published. As a result, freedom of expression
and the right not to be discriminated against can be severely hampered
by delegating complex censorship decisions to AI and private actors.

3. A set of recommendations for both companies and States

Recalling that “ethics” is not a cover for companies and public
authorities to neglect binding and enforceable human rights-based
regulation, the UN Special Rapporteur recommends that “any efforts to
develop State policy or regulation in the field of artificial
intelligence should ensure consideration of human rights concerns”.

David Kaye suggests that human rights should guide the development of
business practices, AI design, and deployment, and calls for enhanced
transparency, disclosure obligations, and robust data protection
legislation, including effective means of remedy. Online service
providers should make clear which decisions are made with human review
and which by artificial intelligence systems alone. This information
should be accompanied by explanations of the decision-making logic used
by the algorithms. Further, the “existence, purpose, constitution and
impact” of AI systems should be disclosed in an effort to improve
individual users’ education on the topic. The report also recommends
making available and publicising data on the “frequency at which AI
systems are subject to complaints and requests for remedies, as well as
the types and effectiveness of remedies available”.

States are identified as the key actors responsible for creating a
legislative framework that is hospitable to a pluralistic information
landscape, prevents technology monopolies, and supports network and
device neutrality.

Lastly, the Special Rapporteur provides useful tools to oversee AI
development: (1) human rights impact assessments performed before,
during, and after the use of AI systems; (2) external audits and
consultations with human rights organisations; (3) individual choice
enabled through notice and consent; and (4) effective remedy processes
to end human rights violations.

UN Special Rapporteur on Freedom of Expression and Opinion Report on AI
and Freedom of Expression (29.08.2018)
https://freedex.org/wp-content/blogs.dir/2015/files/2018/10/AI-and-FOE-GA.pdf

Civil society calls for evidence-based solutions to disinformation
(19.10.2018)
https://edri.org/civil-society-calls-for-evidence-based-solutions-to-disinformation/

(Contribution by Chloé Berthélémy, EDRi intern)

On Thu, Nov 8, 2018 at 12:26 PM Mark Perkins
<marknoumea=40yahoo.com@dmarc.ietf.org> wrote:

> UN Special Rapporteur analyses AI’s impact on human rights
>
> https://edri.org/un-special-rapporteur-report-artificial-intelligence-impact-human-rights/
>
> Interesting reading
>
> Mark P.
> _______________________________________________
> hrpc mailing list
> hrpc@irtf.org
> https://www.irtf.org/mailman/listinfo/hrpc
>


-- 
Corinne Cath - Speth
Ph.D. Candidate, Oxford Internet Institute & Alan Turing Institute

Web: www.oii.ox.ac.uk/people/corinne-cath
Email: ccath@turing.ac.uk & corinnecath@gmail.com
Twitter: @C_Cath