Re: [Idnet] Intelligence-Defined Network Architecture and Call for Interests

David Meyer <dmm@1-4-5.net> Wed, 29 March 2017 18:18 UTC

From: David Meyer <dmm@1-4-5.net>
Date: Wed, 29 Mar 2017 11:18:33 -0700
Message-ID: <CAHiKxWhjjaGCNpcC_GsuHeD7NKfxT2xerTG9QZb5KjDgOJJmzA@mail.gmail.com>
To: 김민석 <mskim16@etri.re.kr>
Cc: Brian Njenga <iambrianmuhia@gmail.com>, Jérôme François <jerome.francois@inria.fr>, Oscar Mauricio Caicedo Rendon <omcaicedo@unicauca.edu.co>, Sheng Jiang <jiangsheng@huawei.com>, "idnet@ietf.org" <idnet@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/idnet/kTshnzQhUg7wFNN-04on1iyQJIU>
Subject: Re: [Idnet] Intelligence-Defined Network Architecture and Call for Interests

Minsuk,

Attached are a few quick comments. I'll read more carefully this afternoon.
I also have to read the references.

Thanks,

Dave


On Wed, Mar 29, 2017 at 10:37 AM, 김민석 <mskim16@etri.re.kr> wrote:

> Thank you so much =:)
>
> -Minsuk Kim
>
> Sent from my iPhone
>
> On 29 Mar 2017, at 12:12 PM, David Meyer <dmm@1-4-5.net> wrote:
>
>
> Hey Min-Suk,
>
>
> On Wed, Mar 29, 2017 at 8:29 AM, 김민석 <mskim16@etri.re.kr> wrote:
>
>> Hi Dave,
>>
>>
>> Thank you for the great information.
>>
>> I absolutely agree with your point that we need real ML data prepared by
>> data pre-processing, and we have already been trying to make such ML data
>> available in several ways, for example through clustering and classification
>> (using datasets of contents and URLs).
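>>
>> As a rough illustration (using scikit-learn and a few made-up URLs instead
>> of our real datasets), the pre-processing step could look something like
>> this:
>>
>> from sklearn.feature_extraction.text import TfidfVectorizer
>> from sklearn.cluster import KMeans
>>
>> # Toy stand-ins for our content/URL datasets.
>> urls = [
>>     "http://example.com/video/stream?id=1",
>>     "http://example.com/video/stream?id=2",
>>     "http://cdn.example.net/img/logo.png",
>>     "http://cdn.example.net/img/banner.png",
>>     "http://bad.example.org/exploit?x=../../etc/passwd",
>> ]
>>
>> # Character n-gram TF-IDF turns raw URLs into numeric features.
>> features = TfidfVectorizer(analyzer="char", ngram_range=(3, 5)).fit_transform(urls)
>>
>> # Unsupervised clustering groups similar URLs; the clusters can then seed
>> # a classifier once we inspect and name them.
>> labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
>> print(list(zip(labels, urls)))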
>>
>> These are quite challenging steps to get through before adaptive ML
>> algorithms can be applied to the networking field.
>>
>> As you mentioned, RL is a classical ML approach, but it is developing
>> rapidly and producing great results with TensorFlow in many fields,
>> unfortunately not yet networking.
>>
>> For our tutorial, I include some practical examples with TensorFlow
>> below:
>>
>> https://github.com/tensorflow/models
>>
>>
>> Additionally, I submitted a personal draft to NMLRG, even though it was
>> closed after the last meeting. It is about collaborative distributed
>> multi-agent reinforcement learning, and we are trying to apply it to a real
>> network architecture.
>>
>> The draft is attached to this email. I would really appreciate your
>> feedback and comments if you have a chance.
>>
>
> Thanks. I will try to read/comment later today.
>
> Thanks again,
>
> Dave
>
>
>>
>> Sincerely,
>>
>>
>> Min-Suk Kim
>>
>> Senior Researcher / Ph.D.
>> Intelligent IoE Network Research Section,
>> ETRI
>>
>>
>>
>>
>>
>>
>> ------------------------------
>> *From:* "David Meyer" <dmm@1-4-5.net>
>> *Sent:* 2017-03-29 23:25:48 ( +09:00 )
>> *To:* 김민석 <mskim16@etri.re.kr>
>> *Cc:* Brian Njenga <iambrianmuhia@gmail.com>, Jérôme François
>> <jerome.francois@inria.fr>, Oscar Mauricio Caicedo Rendon
>> <omcaicedo@unicauca.edu.co>, Sheng Jiang <jiangsheng@huawei.com>,
>> idnet@ietf.org <idnet@ietf.org>
>> *Subject:* Re: [Idnet] Intelligence-Defined Network Architecture and Call
>> for Interests
>>
>>
>> Apparently you can't attach a .pptx. The attachment is here (pptx and
>> pdf):
>>
>>
>> http://www.1-4-5.net/~dmm/ml/misc/musings.pptx
>> http://www.1-4-5.net/~dmm/ml/misc/musings.pdf
>>
>>
>>
>>
>> Thx,
>>
>>
>> Dave
>>
>>
>>
>>
>> On Wed, Mar 29, 2017 at 7:17 AM, David Meyer <dmm@1-4-5.net> wrote:
>>
>>
>>
>>>
>>> Hey Min-Suk,
>>>
>>>
>>> Totally agree we need to learn from our environment, and RL is a natural
>>> approach. After all, the network is always changing, has adversaries, etc.
>>> All of this means, among other things, that we can't make simplifying
>>> assumptions like stationary distributions, iid data, .... So RL is one way
>>> to attack these problems, and the classic algorithms you mention below are
>>> certainly a reasonable approach (I've been working with policy gradients
>>> [0], trying to model/adapt the two-player game approach of AlphaGo to
>>> networking; the problem there is that we don't have a source of labeled
>>> expert data like the KGS Go server (https://www.gokgs.com/) to build
>>> the supervised policy network....).
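>>>
>>> For anyone who wants to play with the basic idea, here is a rough numpy
>>> sketch of vanilla REINFORCE on a toy two-armed bandit; it only shows the
>>> shape of the policy-gradient update, none of the AlphaGo machinery:
>>>
>>> import numpy as np
>>>
>>> rng = np.random.default_rng(0)
>>> theta = np.zeros(2)                 # logits of a 2-armed bandit policy
>>> true_means = np.array([0.2, 0.8])   # toy environment: expected rewards
>>>
>>> def softmax(x):
>>>     z = np.exp(x - x.max())
>>>     return z / z.sum()
>>>
>>> alpha = 0.1
>>> for step in range(2000):
>>>     pi = softmax(theta)
>>>     a = rng.choice(2, p=pi)              # sample an action from the policy
>>>     r = rng.normal(true_means[a], 0.1)   # stochastic reward from the env
>>>     grad_logpi = -pi                     # d log pi(a) / d theta ...
>>>     grad_logpi[a] += 1.0                 # ... = one_hot(a) - pi
>>>     theta += alpha * r * grad_logpi      # REINFORCE update
>>>
>>> print(softmax(theta))   # probability mass should concentrate on the better arm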
>>>
>>>
>>> You might also want to check out the recent "reboot" of evolution
>>> strategies as a black-box approach to RL (in particular, no gradients). See
>>> [1], [2], [3]. There is also a ton of code around if you want to try some
>>> of this out (see e.g., https://github.com/dennybritz/reinforcement-learning;
>>> this one is in TensorFlow). Finally, I've attached a few summary slides
>>> with some of my musings on this topic from past talks.
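>>>
>>> In the same spirit, the core of the ES estimator from [1]/[2] fits in a few
>>> lines of numpy; this toy version optimizes a simple quadratic rather than a
>>> network policy:
>>>
>>> import numpy as np
>>>
>>> rng = np.random.default_rng(0)
>>>
>>> def fitness(w):
>>>     # Stand-in objective; in the RL setting this would be the return of
>>>     # an episode run with parameters w.
>>>     return -np.sum((w - 3.0) ** 2)
>>>
>>> w = np.zeros(5)
>>> npop, sigma, alpha = 50, 0.1, 0.02
>>> for it in range(300):
>>>     eps = rng.standard_normal((npop, w.size))   # population of perturbations
>>>     returns = np.array([fitness(w + sigma * e) for e in eps])
>>>     adv = (returns - returns.mean()) / (returns.std() + 1e-8)
>>>     # Gradient estimate: weight each perturbation by its normalized return.
>>>     w += alpha / (npop * sigma) * eps.T @ adv
>>>
>>> print(w)   # should end up close to the optimum at 3.0 in every coordinate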
>>>
>>>
>>> Thanks,
>>>
>>>
>>> Dave
>>>
>>>
>>> [BTW, two-player minimax games seem to be popping up everywhere:
>>> AlphaGo, variational autoencoders [4], GANs [5], and many others; something
>>> to think about for our domain]
>>>
>>>
>>> [0] https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf
>>> [1] https://blog.openai.com/evolution-strategies/
>>> [2] https://arxiv.org/pdf/1703.03864.pdf
>>> [3] http://jmlr.csail.mit.edu/papers/volume15/wierstra14a/wierstra14a.pdf
>>> [4] http://www.1-4-5.net/~dmm/ml/vae.pdf
>>> [5] https://arxiv.org/pdf/1406.2661.pdf
>>>
>>>
>>>
>>>
>>> On Tue, Mar 28, 2017 at 4:04 PM, 김민석 <mskim16@etri.re.kr> wrote:
>>>
>>>
>>>
>>>> Hi Brian,
>>>>
>>>>
>>>>
>>>> As you mentioned in your prior email, anticipating network DDoS
>>>> attacks is a really trendy problem to solve with ML techniques.
>>>>
>>>> We are also making efforts to avoid fragile nodes through trustworthy
>>>> communication, which means quantifying the trustworthiness of a node by
>>>> normalizing various requirements such as security functions, bandwidth,
>>>> and so on.
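>>>>
>>>> As a rough illustration of the normalization idea (the attribute names and
>>>> weights below are placeholders, not our actual TPD/TD definitions), the
>>>> trust degree of a node could be a weighted sum of min-max normalized
>>>> attributes:
>>>>
>>>> # Hypothetical per-node attributes; higher is "better" for each one.
>>>> nodes = {
>>>>     "n1": {"security": 0.9, "bandwidth_mbps": 100.0, "uptime": 0.99},
>>>>     "n2": {"security": 0.4, "bandwidth_mbps": 900.0, "uptime": 0.90},
>>>>     "n3": {"security": 0.7, "bandwidth_mbps": 300.0, "uptime": 0.97},
>>>> }
>>>> weights = {"security": 0.5, "bandwidth_mbps": 0.3, "uptime": 0.2}
>>>>
>>>> def trust_degree(nodes, weights):
>>>>     tds = {}
>>>>     for name, attrs in nodes.items():
>>>>         td = 0.0
>>>>         for key, w in weights.items():
>>>>             vals = [n[key] for n in nodes.values()]
>>>>             lo, hi = min(vals), max(vals)
>>>>             norm = (attrs[key] - lo) / (hi - lo) if hi > lo else 1.0
>>>>             td += w * norm   # min-max normalize, then weight
>>>>         tds[name] = td
>>>>     return tds
>>>>
>>>> print(trust_degree(nodes, weights))   # e.g. used to rank candidate next hops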
>>>>
>>>> We are taking a fresh approach at the routing layer, with confidence built
>>>> on our own requirements, TPD (Trust Policy Distribution) and TD (Trust
>>>> Degree). We expect these requirements can be addressed with Reinforcement
>>>> Learning (RL), one of the ML approaches. RL is useful for controlling
>>>> network policy over specific actions and states via positive and negative
>>>> (+/-) rewards, but the problem is that it is too slow to reach satisfactory
>>>> performance. Put another way, anomaly detection and regression analysis
>>>> might both be efficient methods for solving the issues Dave mentioned.
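>>>>
>>>> To make the RL part a bit more concrete, here is a toy tabular Q-learning
>>>> loop where states are nodes, actions are next hops, and the reward carries
>>>> the +/- trust signal; the topology and reward values are invented purely
>>>> for illustration:
>>>>
>>>> import random
>>>>
>>>> # Toy topology: node -> candidate next hops; "D" is the destination.
>>>> hops = {"S": ["A", "B"], "A": ["D"], "B": ["D"]}
>>>> trust = {("S", "A"): +1.0, ("S", "B"): -1.0,   # punish the less trusted hop
>>>>          ("A", "D"): +1.0, ("B", "D"): +1.0}
>>>>
>>>> Q = {(s, a): 0.0 for s, acts in hops.items() for a in acts}
>>>> alpha, gamma, eps = 0.5, 0.9, 0.1
>>>> random.seed(0)
>>>>
>>>> for episode in range(500):
>>>>     s = "S"
>>>>     while s != "D":
>>>>         acts = hops[s]
>>>>         if random.random() < eps:
>>>>             a = random.choice(acts)                 # explore
>>>>         else:
>>>>             a = max(acts, key=lambda x: Q[(s, x)])  # exploit
>>>>         r = trust[(s, a)]
>>>>         nxt = max(Q[(a, x)] for x in hops[a]) if a in hops else 0.0
>>>>         Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])
>>>>         s = a
>>>>
>>>> print(Q)   # Q[("S", "A")] should end up well above Q[("S", "B")]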
>>>>
>>>>
>>>>
>>>> Best Regards,
>>>>
>>>>
>>>> Min-Suk Kim
>>>>
>>>> Senior Researcher / Ph.D.
>>>> Intelligent IoE Network Research Section,
>>>> Electronics and Telecommunications Research Institute (ETRI)
>>>> e-mail: mskim16@etri.re.kr
>>>> http://www.etri.re.kr/
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>
>
>