Re: [Idnet] A few ideas/suggestions to get us going

David Meyer <> Thu, 23 March 2017 14:16 UTC

List-Id: The IDNet (Intelligence-Defined Network) list

Interestingly, Andrew also points out the need for data sets and the
problem with talent pools (among many other things):


On Wed, Mar 22, 2017 at 10:29 AM, David Meyer <> wrote:

> Folks,
> I thought I'd try to get some discussion going by outlining some of my
> views as to why networking is lagging other areas in the development and
> application of Machine Learning (ML). In particular, networking is way
> behind what we might call the "perceptual tasks" (vision, NLP, robotics,
> etc) as well as other areas (medicine, finance, ...). The attached slide
> from one of my decks tries to summarize the situation, but I'll give a bit
> of an outline below.
> So why is networking lagging many other fields when it comes to the
> application of machine learning? There are several reasons which I'll try
> to outline here (I was fortunate enough to discuss this with the
> packetpushers crew a few weeks ago, see [0]). These are in no particular
> order.
> First, we don't have a "useful" theory of networking (UTON). One way to
> think about what such a theory would look like is by analogy to what we see
> with the success of convolutional neural networks (CNNs) not only for
> vision but now for many other tasks. In that case there is a theory of how
> vision works, built up from concepts like receptive fields, shared weights,
> simple and complex cells, etc. For example, the input layer of a CNN isn't
> fully connected; rather, its connections reflect local receptive fields, in a
> way that is "inspired" by biological vision (being careful, as always, with
> "biological inspiration"). Same with the
> alternation of convolutional and pooling layers; these loosely model the
> alternation of simple and complex cells in the primary visual cortex (V1),
> the secondary visual cortex (V2), and the third visual area (V3). BTW, such a
> theory seems to be required for transfer learning [1], which we'll need if
> we don't want every network to be analyzed in an ad-hoc, one-off style
> (like we see today).
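[As a rough illustration of the receptive-field and simple/complex-cell ideas above, here is a toy NumPy sketch of one convolution + pooling stage; the kernel and sizes are purely illustrative, not from any real CNN.]

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: each output unit sees only a local
    receptive field, and all units share the same kernel (weights)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max pooling: loosely analogous to complex cells, giving a
    degree of local translation invariance."""
    h, w = fmap.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[i * size:(i + 1) * size,
                             j * size:(j + 1) * size].max()
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # toy "simple cell"
pooled = max_pool(conv2d(image, edge_kernel))       # "complex cell" stage
print(pooled.shape)  # (3, 3): 8x8 -> conv -> 7x7 -> pool -> 3x3
```

[Note the point being made in the text: the connectivity pattern (local windows, shared weights) is the theory-derived prior; nothing here is fully connected.]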
> The second thing that we need to think about is publicly available
> standardized data sets. Examples here include MNIST, ImageNet, and many
> others. The result of having these data sets has been the steady ratcheting
> down of error rates on tasks such as object and scene recognition, NLP, and
> others to super-human levels. Suffice it to say we have nothing like these
> data sets for networking. Networking data sets today are largely
> proprietary, and because there is no UTON, there is no real way to compare
> results between them.
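[To make the comparability point concrete: what a shared data set buys you is a frozen split plus a common yardstick. A toy sketch, with an entirely synthetic stand-in for a networking data set and illustrative "models":]

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a shared, standardized data set: a fixed synthetic
# flow-classification task with a frozen train/test split.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

def error_rate(predict, X, y):
    """Fraction of misclassified examples: the common yardstick that a
    shared data set makes comparable across groups."""
    return float(np.mean(predict(X) != y))

# Two toy "models", evaluated on the same frozen split.
def majority_class(X):
    return np.full(len(X), int(round(float(y_train.mean()))))

w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]  # least-squares fit
def linear_model(X):
    return (X @ w > 0.5).astype(int)

for name, model in [("majority", majority_class), ("linear", linear_model)]:
    print(f"{name}: test error {error_rate(model, X_test, y_test):.2f}")
```

[Without the frozen split (and without a UTON telling us which features even matter), two groups reporting error rates on their own proprietary traces simply aren't measuring the same thing.]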
> Third, there is a large skill set gap. Network engineers (us!) typically
> don't have the mathematical background required to build effective machine
> learning at scale. See [2] for an outline of some of the mathematical
> skills that are essential for effective ML. There is a lot more to this,
> involving how progress is made in ML (open data, open source, open models,
> in general open science and associated communities, see e.g., OpenAi [3],
> Distill [4], and many others). In any event, we need to build community and
> gain new skills if we want to be able to develop and apply state of the art
> machine learning algorithms to network data, at scale. The bottom line is
> that it will be difficult if not impossible to be effective in the ML space
> if we ourselves don't understand how it works and, further, if we can't build
> explainable systems (noting that explaining what the individual neurons in
> a deep neural network are doing is notoriously difficult; that said much
> progress is being made). So we want to build explainable, end-to-end
> trained systems, and to accomplish this we ourselves need to understand how
> these algorithms work, both in training and in inference.
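[One simple, widely used explainability technique is input-gradient saliency: ask which input features most move the model's output. A toy sketch for a logistic model over flow features; the feature names and weights are entirely made up for illustration.]

```python
import numpy as np

# A tiny trained-model stand-in: logistic regression over flow features.
# Feature names and weights are purely illustrative.
features = ["pkts/s", "bytes/pkt", "syn_ratio", "dst_entropy"]
w = np.array([0.2, -1.3, 2.1, 0.7])   # assumed "learned" weights
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

def saliency(x):
    """Gradient of the output w.r.t. the input. For this model it is
    p*(1-p)*w, i.e. which features most move the prediction at x."""
    p = predict(x)
    return p * (1 - p) * w

x = np.array([0.5, 1.0, 0.9, 0.2])
for name, s in sorted(zip(features, saliency(x)), key=lambda t: -abs(t[1])):
    print(f"{name:12s} {s:+.3f}")
```

[For a one-layer model the attribution is exact and readable; the difficulty flagged above is that nothing this clean falls out of a deep network's hidden units.]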
> This email is already TL;DR but I'll add one more here: We need to learn
> control, not just prediction. Since we live in an inherently adversarial
> environment we need to take advantage of Reinforcement Learning as well as
> the various attacks being formulated against ML; [5] gives one interesting
> example of attacks against policy networks using adversarial examples. See
> also slides 31 and 32 of [6] for some more on this topic.
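[The attack style [5] discusses is easy to sketch: the fast gradient sign method perturbs an input by at most epsilon per feature in the direction that most changes the model's output. A toy version against a made-up logistic "policy" (all numbers illustrative):]

```python
import numpy as np

# A toy "policy": a logistic scorer deciding whether to rate-limit a flow.
w = np.array([1.5, -2.0, 0.7])        # assumed learned weights

def score(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def fgsm(x, eps=0.25):
    """Fast gradient sign method: move each feature by eps in the
    direction that most *lowers* the score (i.e., evades the policy)."""
    p = score(x)
    grad = p * (1 - p) * w            # d(score)/dx for the logistic scorer
    return x - eps * np.sign(grad)

x = np.array([2.0, -1.0, 0.5])        # a flow the policy flags (score > 0.5)
x_adv = fgsm(x)
print(score(x), score(x_adv))         # the bounded perturbation lowers the score
```

[The point for networking: an adversary who can nudge observable features (rates, sizes, timings) by small amounts can systematically steer a learned policy, which is why we need to study these attacks, not just prediction accuracy.]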
> I hope some of this gets us thinking about the problems we need to solve
> in order to be successful in the ML space. There's plenty more of this on
> and
> I'm looking forward to the discussion.
> Thanks,
> --dmm
> [0] applicability-machine-learning-networking/
> [1]
> [2]
> [3]
> [4]
> [5]
> [6]