Re: [Idnet] A few ideas/suggestions to get us going

João Paulo S. Medeiros <jpsm1985@gmail.com> Wed, 29 March 2017 18:54 UTC

From: "João Paulo S. Medeiros" <jpsm1985@gmail.com>
Date: Wed, 29 Mar 2017 15:53:20 -0300
Message-ID: <CA+64pfvVO_UFdQ2qGQyMH4Lf5vRvqErCoaTYqEioqz3YrPcS-g@mail.gmail.com>
To: David Meyer <dmm@1-4-5.net>
Cc: idnet@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/idnet/ruU58WD0W1AJ2iJsraEXigSbQhk>
Subject: Re: [Idnet] A few ideas/suggestions to get us going

Dear David Meyer and all,

I would like to contribute some ideas and published works from my
academic research.

I have been working with Machine Learning (ML) and Computer Networks since
2007. My research initially focused on the use of neural networks to improve
the classification and characterization of remote computer fingerprints
(e.g. [0] [1], and most recently [2] [3]). Although this is no longer my main
line of research, my experience agrees with David's comment that ``we need to
think about publicly available standardized data sets''. This is probably one
of the main problems for researchers trying to advance or reproduce
state-of-the-art research on Intrusion Detection Systems (and other feature
extraction + pattern recognition tasks) using ML.

My current main line of research relates to two of David's concerns, namely
(i) the UTON and (ii) the controllability of computer networks. My PhD thesis
was about using models to minimize the overhead of network monitoring; my
most recent published work on this topic is [4]. I used the theory of Complex
Networks Controllability [5] to achieve that goal. However, I realized that
it is far more practical to use this theory to build observable (the dual
problem) network monitoring systems with a minimal number of sensor nodes,
since controllability requires directly changing (or inducing) the state of
network devices. In this sense, the theory of Adaptive Filtering (e.g. the
Kalman Filter) is important too. Even so, the controllability of computer
networks is still a very interesting and challenging problem, involving not
only Complex Networks theory but also, probably, Markov processes and ML.
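
To make the driver-node idea in [5] concrete, here is a minimal sketch (my
own illustration, not code from my thesis or from [5]): the minimum number
of driver nodes of a directed network equals the number of nodes left
unmatched by a maximum matching between ``out'' and ``in'' copies of the
nodes. The helper name driver_nodes and the use of networkx are my own
choices for illustration.

import networkx as nx

def driver_nodes(digraph):
    """Driver nodes under structural controllability (Liu et al. [5])."""
    # Bipartite representation: an "out" copy and an "in" copy of every
    # node; each edge u -> v becomes (out::u, in::v).
    bip = nx.Graph()
    out_side = {f"out::{u}" for u in digraph.nodes}
    bip.add_nodes_from(out_side)
    bip.add_nodes_from(f"in::{v}" for v in digraph.nodes)
    bip.add_edges_from((f"out::{u}", f"in::{v}") for u, v in digraph.edges)

    # Maximum matching (Hopcroft-Karp). An "in" copy left unmatched means
    # that node cannot be driven through the network and needs an external
    # input; if every node is matched, a single driver node suffices.
    matching = nx.bipartite.hopcroft_karp_matching(bip, top_nodes=out_side)
    unmatched = {v for v in digraph.nodes if f"in::{v}" not in matching}
    return unmatched or {next(iter(digraph.nodes))}

# Example: a directed chain 0 -> 1 -> 2 needs a single driver node.
print(driver_nodes(nx.DiGraph([(0, 1), (1, 2)])))   # {0}

In the linear setting, the observability (dual) problem can be handled by
the same computation on the graph with every edge reversed, which is what
makes minimal-sensor monitoring attractive.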

Still regarding the UTON, the network topology almost always plays an
important role in ML system design. For many reasons, the topology is often
not available, and its estimation is another important problem we could
approach using ML [6].
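
As a toy illustration of what ``estimating the topology with ML'' can look
like (my own sketch, not the method of [6]): end hosts behind the same branch
of an unknown tree share a common delay component, so even a plain clustering
of end-to-end delay profiles recovers coarse structure. The numbers below are
synthetic.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Rows: six end hosts probed from one vantage point; columns: 50 delay
# samples. Hosts 0-2 share a branch (10 ms base), hosts 3-5 another (25 ms).
base = np.repeat([[10.0], [25.0]], 3, axis=0)
delays = base + rng.normal(0.0, 1.0, size=(6, 50))

# Agglomerative clustering of the delay profiles groups hosts by branch.
dendrogram = linkage(delays, method="average", metric="euclidean")
print(fcluster(dendrogram, t=2, criterion="maxclust"))  # e.g. [1 1 1 2 2 2]

Real topology inference of course has to cope with shared queues, routing
changes and missing measurements, which is where learned models come in.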

Finally, I would like to share an inspiring paper entitled ``Mathematics
and the Internet: A Source of Enormous Confusion and Great Potential'' [7].

Best regards!

[0] http://dx.doi.org/10.1109/EFTA.2007.4416854
[1] http://dx.doi.org/10.1007/978-3-540-89173-4_20
[2] http://dx.doi.org/10.1007/978-3-319-05885-6_12
[3] http://dx.doi.org/10.1201/b17333-10
[4] http://dx.doi.org/10.1109/CIT/IUCC/DASC/PICOM.2015.15
[5] http://dx.doi.org/10.1038/nature10011
[6] http://dx.doi.org/10.1109/TNET.2011.2175747
[7] http://www.ams.org/notices/200905/tx090500586p.pdf

-- Prof. João Paulo Souza Medeiros

On Wed, Mar 22, 2017 at 2:29 PM, David Meyer <dmm@1-4-5.net> wrote:

> Folks,
>
> I thought I'd try to get some discussion going by outlining some of my
> views as to why networking is lagging other areas in the development and
> application of Machine Learning (ML). In particular, networking is way
> behind what we might call the "perceptual tasks" (vision, NLP, robotics,
> etc) as well as other areas (medicine, finance, ...). The attached slide
> from one of my decks tries to summarize the situation, but I'll give a bit
> of an outline below.
>
> So why is networking lagging many other fields when it comes to the
> application of machine learning? There are several reasons which I'll try
> to outline here (I was fortunate enough to discuss this with the
> packetpushers crew a few weeks ago, see [0]). These are in no particular
> order.
>
> First, we don't have a "useful" theory of networking (UTON). One way to
> think about what such a theory would look like is by analogy to what we see
> with the success of convolutional neural networks (CNNs) not only for
> vision but now for many other tasks. In that case there is a theory of how
> vision works, built up from concepts like receptive fields, shared weights,
> simple and complex cells, etc. For example, the input layer of a CNN isn't
> fully connected; rather, connections reflect the receptive field of the
> input layer, in a way that is "inspired" by biological vision (being very
> careful with "biological inspiration"). Same with the
> alternation of convolutional and pooling layers; these loosely model the
> alternation of simple and complex cells in the primary visual cortex (V1),
> the secondary visual cortex (V2) and the Brodmann area (V3). BTW, such a
> theory seems to be required for transfer learning [1], which we'll need if
> we don't want every network to be analyzed in an ad-hoc, one-off style
> (like we see today).
>
> The second thing that we need to think about is publicly available
> standardized data sets. Examples here include MNIST, ImageNet, and many
> others. The result of having these data sets has been the steady ratcheting
> down of error rates on tasks such as object and scene recognition, NLP, and
> others to super-human levels. Suffice it to say we have nothing like these
> data sets for networking. Networking data sets today are largely
> proprietary, and because there is no UTON, there is no real way to compare
> results between them.
>
> Third, there is a large skill set gap. Network engineers (us!) typically
> don't have the mathematical background required to build effective machine
> learning at scale. See [2] for an outline of some of the mathematical
> skills that are essential for effective ML. There is a lot more to this,
> involving how progress is made in ML (open data, open source, open models,
> in general open science and associated communities, see e.g., OpenAI [3],
> Distill [4], and many others). In any event we need to build community and
> gain new skills if we want to be able to develop and apply state-of-the-art
> machine learning algorithms to network data, at scale. The bottom line is
> that it will be difficult if not impossible to be effective in the ML space
> if we ourselves don't understand how it works and, further, if we can't build
> explainable systems (noting that explaining what the individual neurons in
> a deep neural network are doing is notoriously difficult; that said, much
> progress is being made). So we want to build explainable, end-to-end
> trained systems, and to accomplish this we ourselves need to understand how
> these algorithms work, both in training and in inference.
>
> This email is already TL;DR but I'll add one more here: We need to learn
> control, not just prediction. Since we live in an inherently adversarial
> environment we need to take advantage of Reinforcement Learning, and also
> understand the various attacks being formulated against ML; [5] gives one interesting
> example of attacks against policy networks using adversarial examples. See
> also slides 31 and 32 of [6] for some more on this topic.
>
> I hope some of this gets us thinking about the problems we need to solve
> in order to be successful in the ML space. There's plenty more of this on
> http://www.1-4-5.net/~dmm/ml and http://www.1-4-5.net/~dmm/vita.html.
> I'm looking forward to the discussion.
>
> Thanks,
>
> --dmm
>
>
>
>
> [0] http://packetpushers.net/podcast/podcasts/pq-show-107-applicability-machine-learning-networking/
>
> [1]  http://sebastianruder.com/transfer-learning/index.html
> [2]  http://datascience.ibm.com/blog/the-mathematics-of-machine-learning/
> [3] https://openai.com/blog/
> [4] http://distill.pub/
> [5] http://rll.berkeley.edu/adversarial/arXiv2017_AdversarialAttacks.pdf
> [6]  http://www.1-4-5.net/~dmm/ml/talks/2016/cor_ml4networking.pptx