Re: [Idnet] Intelligence-Defined Network Architecture and Call for Interests

David Meyer <dmm@1-4-5.net> Wed, 29 March 2017 17:12 UTC

From: David Meyer <dmm@1-4-5.net>
Date: Wed, 29 Mar 2017 10:12:06 -0700
Message-ID: <CAHiKxWiOGa8RB_BjJyL9qGjYkVXdmZRcPW2LEzztP416Me59Ag@mail.gmail.com>
To: 김민석 <mskim16@etri.re.kr>
Cc: Brian Njenga <iambrianmuhia@gmail.com>, Jérôme François <jerome.francois@inria.fr>, Oscar Mauricio Caicedo Rendon <omcaicedo@unicauca.edu.co>, Sheng Jiang <jiangsheng@huawei.com>, "idnet@ietf.org" <idnet@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/idnet/EjjOpxTB0KqD8CMHf9ZG9uST8LQ>

Hey Min-Suk,


On Wed, Mar 29, 2017 at 8:29 AM, 김민석 <mskim16@etri.re.kr> wrote:

> Hi Dave,
>
>
> Thank you for the great information.
>
> I fully agree with your point that we need real ML data prepared through
> pre-processing, so we have already been trying to make ML data available in
> several ways, such as clustering and classification (using datasets of
> contents and URLs).
>
> These are very challenging steps to complete before applying an adaptive ML
> algorithm to the networking field.
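As a toy illustration of the clustering-based pre-processing mentioned above, here is a plain-NumPy k-means sketch; the flow features, values, and cluster structure are all invented for illustration and are not the ETRI pipeline:

```python
import numpy as np

# Hypothetical flow features: [mean packet size (bytes), mean inter-arrival (s)].
rng = np.random.default_rng(0)
flows = np.vstack([
    rng.normal([100.0, 0.01], [10.0, 0.005], (50, 2)),   # small, frequent flows
    rng.normal([1400.0, 0.5], [100.0, 0.1], (50, 2)),    # large, bursty flows
])

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-center assignment and center update."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Keep the old center if a cluster ever goes empty.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(flows, k=2)
```

The resulting cluster labels could then serve as weak class labels for a downstream supervised model.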
>
> As you mentioned, RL is a classical ML algorithm, but it is developing
> rapidly and producing great results with TensorFlow in many fields,
> unfortunately not yet in networking.
>
> For our tutorial, I have attached some practical TensorFlow examples
> below:
>
> https://github.com/tensorflow/models
>
>
> Additionally, I submitted a personal draft to NMLRG, even though it was
> closed after the last meeting. It is about collaborative, distributed
> multi-agent reinforcement learning, and we are trying to apply it to a real
> network architecture.
>
> The draft is attached to this email. I would really appreciate even a small
> piece of feedback or comment if you have a chance.
>

Thanks. I will try to read/comment later today.

Thanks again,

Dave


>
> Sincerely,
>
>
> Min-Suk Kim
>
> Senior Researcher / Ph.D.
> Intelligent IoE Network Research Section,
> ETRI
>
>
>
>
>
>
> ------------------------------
> *From:* "David Meyer" <dmm@1-4-5.net>
> *Sent:* 2017-03-29 23:25:48 (+09:00)
> *To:* 김민석 <mskim16@etri.re.kr>
> *Cc:* Brian Njenga <iambrianmuhia@gmail.com>, Jérôme François <
> jerome.francois@inria.fr>, Oscar Mauricio Caicedo Rendon <
> omcaicedo@unicauca.edu.co>, Sheng Jiang <jiangsheng@huawei.com>,
> idnet@ietf.org <idnet@ietf.org>
> *Subject:* Re: [Idnet] Intelligence-Defined Network Architecture and Call
> for Interests
>
>
> Apparently you can't attach a .pptx. The attachment is here (pptx and
> pdf):
>
>
> http://www.1-4-5.net/~dmm/ml/misc/musings.pptx
> http://www.1-4-5.net/~dmm/ml/misc/musings.pdf
>
>
>
>
> Thx,
>
>
> Dave
>
>
>
>
> On Wed, Mar 29, 2017 at 7:17 AM, David Meyer <dmm@1-4-5.net> wrote:
>
>
>
>>
>> Hey Min-Suk,
>>
>>
>> Totally agree we need to learn from our environment, and RL is a natural
>> approach. After all, the network is always changing, has adversaries, etc.
>> All of this means, among other things, that we can't make simplifying
>> assumptions like stationary distributions, iid data, and so on. So RL is one
>> way to attack these problems, and the classic algorithms you mention below
>> are certainly a reasonable approach (I've been working with policy gradients
>> [0], trying to model/adapt the two-player game approach of AlphaGo to
>> networking; the problem there is that we don't have a source of labeled
>> expert data like the KGS Go server (https://www.gokgs.com/) to build the
>> supervised policy network...).
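For concreteness, here is a tiny REINFORCE-style sketch of the policy-gradient idea in [0]: a softmax policy choosing between two hypothetical network paths. The rewards, learning rate, and baseline are all invented for illustration, and this is nowhere near the full AlphaGo pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(2)                 # one logit per candidate path
p_success = np.array([0.2, 0.8])    # hypothetical: path 1 succeeds more often

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)               # sample an action (pick a path)
    r = 1.0 if rng.random() < p_success[a] else 0.0
    grad = -probs.copy()
    grad[a] += 1.0                           # gradient of log pi(a | theta)
    theta += 0.1 * (r - 0.5) * grad          # REINFORCE with a constant baseline
```

After training, the policy puts most of its probability on the better path; the constant 0.5 baseline is just a crude variance reducer.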
>>
>>
>> You might also want to check out the recent "reboot" of evolution
>> strategies as a black-box approach to RL (in particular, no gradients). See
>> [1], [2], [3]. There is also a ton of code around if you want to try some of
>> this out (see, e.g., https://github.com/dennybritz/reinforcement-learning;
>> this one is in TensorFlow). Finally, I've attached a few summary slides with
>> some of my musings on this topic from past talks.
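A bare-bones version of the ES estimator from [1]/[2], run on a toy quadratic rather than a networking task (all constants invented): sample Gaussian perturbations of the parameters, score each one, and step along the reward-weighted noise, never differentiating the objective itself:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([3.0, -1.0])

def reward(w):
    # Toy black-box objective: higher is better, peak at `target`.
    return -np.sum((w - target) ** 2)

w = np.zeros(2)
sigma, alpha, pop = 0.1, 0.02, 50        # noise scale, step size, population
for _ in range(300):
    noise = rng.standard_normal((pop, 2))
    R = np.array([reward(w + sigma * n) for n in noise])
    R = (R - R.mean()) / (R.std() + 1e-8)     # normalize returns
    w = w + alpha / (pop * sigma) * noise.T @ R   # ES gradient estimate
```

The same loop works unchanged if `reward` is an opaque simulator or a live measurement, which is exactly the appeal for networking.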
>>
>>
>> Thanks,
>>
>>
>> Dave
>>
>>
>> [BTW, two-player minimax games seem to be popping up everywhere: AlphaGo,
>> variational autoencoders [4], GANs [5], and many others; something to think
>> about for our domain]
>>
>>
>> [0] https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf
>> [1] https://blog.openai.com/evolution-strategies/
>> [2] https://arxiv.org/pdf/1703.03864.pdf
>> [3] http://jmlr.csail.mit.edu/papers/volume15/wierstra14a/wierstra14a.pdf
>> [4] http://www.1-4-5.net/~dmm/ml/vae.pdf
>> [5] https://arxiv.org/pdf/1406.2661.pdf
>>
>>
>>
>>
>> On Tue, Mar 28, 2017 at 4:04 PM, 김민석 <mskim16@etri.re.kr> wrote:
>>
>>
>>
>>> Hi Brian,
>>>
>>>
>>>
>>> As you mentioned in your previous email, anticipating network DDoS
>>> attacks is a timely problem to address with ML techniques.
>>>
>>> We are also working on how to avoid fragile nodes through trustworthy
>>> communication, that is, quantifying the trustworthiness of a node by
>>> normalizing various requirements such as security function, bandwidth,
>>> and so on.
>>>
>>> We have recently started approaching this at the routing layer with
>>> confidence values based on our own requirements, TPD (Trust Policy
>>> Distribution) and TD (Trust Degree). We expect these requirements can be
>>> handled with Reinforcement Learning (RL), one of the ML algorithm
>>> families. RL is useful for controlling network policy over specific
>>> actions and states through reinforced and punished rewards (+/-), but the
>>> problem is that it is too slow to reach satisfactory performance. Put
>>> another way, anomaly detection and regression analysis might both be
>>> efficient methods for the issues Dave mentioned.
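To make the RL framing above concrete, here is a toy single-state tabular sketch; TPD/TD are only borrowed as names from the text, and the trust degrees, reward scheme, and update rule are invented for illustration. The learner comes to prefer the next hop with the higher trust degree via +/- rewards:

```python
import numpy as np

rng = np.random.default_rng(0)
trust_degree = [0.9, 0.2]      # hypothetical TD of two candidate next hops
Q = np.zeros(2)                # single-state action values, one per next hop
eps, alpha = 0.1, 0.1          # exploration rate, learning rate

for _ in range(1000):
    # Epsilon-greedy choice of next hop.
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q))
    # Reinforced (+1) when the node behaves, punished (-1) when it fails.
    r = 1.0 if rng.random() < trust_degree[a] else -1.0
    Q[a] += alpha * (r - Q[a])  # running-average TD update
```

Even this trivial two-node case needs many sampled interactions before the estimates settle, which illustrates the slow-convergence concern raised above.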
>>>
>>>
>>>
>>> Best Regards,
>>>
>>>
>>> Min-Suk Kim
>>>
>>> Senior Researcher / Ph.D.
>>> Intelligent IoE Network Research Section,
>>> Electronics and Telecommunications Research Institute (ETRI)
>>> e-mail: mskim16@etri.re.kr
>>> http://www.etri.re.kr/
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>