Re: [Idnet] Intelligence-Defined Network Architecture and Call for Interests

Min-Suk Kim (김민석) <> Wed, 29 March 2017 17:38 UTC

From: Min-Suk Kim <>
To: David Meyer <>
CC: Brian Njenga <>, Jérôme François <>, "Oscar Mauricio Caicedo Rendon" <>, Sheng Jiang <>, "" <>
Thread-Topic: [Idnet] Intelligence-Defined Network Architecture and Call for Interests
Date: Wed, 29 Mar 2017 17:37:59 +0000
Message-ID: <>
References: <> <> <> <> <> <> <> <> <>, <>
In-Reply-To: <>
Accept-Language: ko-KR, en-US
Content-Language: en-US
Subject: Re: [Idnet] Intelligence-Defined Network Architecture and Call for Interests
List-Id: "The IDNet \(Intelligence-Defined Network\) " <>

Thank you so much :)

-Minsuk Kim

Sent from my iPhone

On 29 Mar 2017, at 12:12 PM, David Meyer <<>> wrote:

Hey Min-Suk,

On Wed, Mar 29, 2017 at 8:29 AM, Min-Suk Kim <<>> wrote:

Hi Dave,

Thank you for the helpful information.

I fully agree with your point that we need real, pre-processed ML data, so we have already been trying to prepare usable ML data in several ways, such as clustering and classification (using content and URL datasets).
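As a minimal, self-contained sketch of the kind of clustering step described here (a naive k-means pass over toy 2-D feature vectors; the data, k, and deterministic initialization are invented for illustration and are not from our actual datasets):

```python
# Minimal k-means over toy 2-D feature vectors (e.g., features extracted
# from content/URL datasets). Data and k are invented for illustration.
points = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),
          (0.90, 0.80), (0.80, 0.90), (0.85, 0.75)]

def kmeans(points, k=2, iters=20):
    centers = points[:k]  # naive deterministic initialization
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster
        # (keep the old center if a cluster ever ends up empty).
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(points)
```

On this toy data the two obvious groups separate after a couple of iterations; real content/URL features would of course need proper feature extraction and initialization first.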

These are challenging steps before adaptive ML algorithms can be applied to the networking field.

As you mentioned, RL is a classical ML algorithm, but it is developing rapidly and producing great results with TensorFlow in many fields, though unfortunately not yet in networking.

For our tutorial, I attach some practical examples with TensorFlow below.
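In the same spirit as those examples, here is a minimal sketch of an RL loop (tabular Q-learning on a toy "chain" environment, in plain Python rather than TensorFlow; the environment and hyperparameters are invented for illustration and are not from the attached tutorial material):

```python
import random

# Toy chain environment: states 0..4, actions 0 = left, 1 = right;
# reaching state 4 yields reward +1, every other step yields 0.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def choose(q, s, eps):
    # Epsilon-greedy action selection with random tie-breaking.
    if random.random() < eps or q[s][0] == q[s][1]:
        return random.randrange(2)
    return 0 if q[s][0] > q[s][1] else 1

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = choose(q, s, eps)
            nxt, r, done = step(s, a)
            # Q-learning update: the (+) reward propagates backward.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES)]
```

After training, the greedy policy moves right from every non-terminal state; the same update rule is what a TensorFlow deep-RL variant approximates with a network instead of a table.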

Additionally, I submitted a personal draft to NMLRG, even though it was closed after the last meeting. It is about collaborative distributed multi-agent reinforcement learning, and we are trying to apply it to a real network architecture.

The attachment is on the email. I would really appreciate your feedback and comments if you have a chance.

Thanks. I will try to read/comment later today.

Thanks again,



Min-Suk Kim

Senior Researcher / Ph.D.
Intelligent IoE Network Research Section,

From: "David Meyer" <<>>
Sent: 2017-03-29 23:25:48 (+09:00)
To: Min-Suk Kim <<>>
Cc: Brian Njenga <<>>, Jérôme François <<>>, Oscar Mauricio Caicedo Rendon <<>>, Sheng Jiang <<>>, <> <<>>
Subject: Re: [Idnet] Intelligence-Defined Network Architecture and Call for Interests

Apparently you can't attach a .pptx. The attachment is here (pptx and pdf):



On Wed, Mar 29, 2017 at 7:17 AM, David Meyer <<>> wrote:

Hey Min-Suk,

Totally agree we need to learn from our environment, and RL is a natural approach. After all, the network is always changing, has adversaries, etc. All of this means, among other things, that we can't make simplifying assumptions like stationary distributions, iid data, .... So RL is one way to attack these problems, and the classic algorithms you mention below are certainly a reasonable approach (I've been working with policy gradients [0], trying to model/adapt the two-player game approach of AlphaGo to networking; the problem there is that we don't have a source of labeled expert data, like the KGS Go server () provided for Go, to build the supervised policy network...).
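As a minimal illustration of the policy-gradient idea mentioned above, here is a REINFORCE-style update on a two-armed bandit (pure Python; the bandit, softmax policy, and hyperparameters are invented for illustration and are far simpler than the AlphaGo-style two-player setting):

```python
import math
import random

random.seed(0)

# Two-armed bandit: arm 1 pays ~1.0 on average, arm 0 pays ~0.2.
def pull(arm):
    return random.gauss(1.0 if arm == 1 else 0.2, 0.1)

# Softmax (sigmoid) policy over two arms, one preference parameter.
theta = 0.0
def p_arm1():
    return 1.0 / (1.0 + math.exp(-theta))

lr, baseline = 0.1, 0.0
for _ in range(2000):
    p1 = p_arm1()
    arm = 1 if random.random() < p1 else 0
    r = pull(arm)
    # REINFORCE: theta += lr * grad log pi(arm) * (reward - baseline).
    grad_logp = (1 - p1) if arm == 1 else -p1
    theta += lr * grad_logp * (r - baseline)
    baseline += 0.01 * (r - baseline)  # running-average baseline
```

After a couple of thousand pulls the policy concentrates on the better arm; the missing ingredient in the networking case, as noted above, is the expert data to bootstrap a supervised policy first.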

You might also want to check out the recent "reboot" of evolution strategies as a black-box approach to RL (in particular, no gradients). See [1], [2], [3]. There is also a ton of code around if you want to try some of this out (see e.g., ; this one is in TensorFlow). Finally, I've attached a few summary slides with some of my musings on this topic from past talks.
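The black-box update those references describe can be sketched in a few lines; the toy quadratic "reward" below (maximized at a known target vector) and all hyperparameters are invented for illustration:

```python
import numpy as np

# Black-box "reward" to maximize; the quadratic objective and its
# target vector are invented for illustration only.
target = np.array([0.5, -0.3, 0.8])

def reward(theta):
    return -np.sum((theta - target) ** 2)

def evolution_strategies(steps=200, pop=50, sigma=0.1, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros_like(target)
    for _ in range(steps):
        # Sample a population of Gaussian perturbations of theta.
        noise = rng.standard_normal((pop, target.size))
        rewards = np.array([reward(theta + sigma * n) for n in noise])
        rewards -= rewards.mean()  # baseline to reduce variance
        # Gradient-free update: weight each perturbation by its reward.
        theta = theta + lr / (pop * sigma) * noise.T @ rewards
    return theta

theta = evolution_strategies()
```

Note that only reward evaluations are used, never gradients of the objective, which is what makes the approach attractive when the "reward" comes from a running network rather than a differentiable model.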



[BTW, two-player minimax games seem to be popping up everywhere: AlphaGo, variational autoencoders [4], GANs [5], and many others; something to think about for our domain]


On Tue, Mar 28, 2017 at 4:04 PM, Min-Suk Kim <<>> wrote:

Hi Brian,

As you mentioned in your prior email, anticipating network DDoS attacks is a trending problem well suited to ML techniques.

We are also making efforts to avoid fragile nodes through trustworthy communication, which means quantifying the trustworthiness of a node by normalizing various requirements such as security function, bandwidth, etc.

We are taking a fresh approach at the routing layer, with confidence derived from our own requirements: TPD (Trust Policy Distribution) and TD (Trust Degree). We consider these requirements a good fit for Reinforcement Learning (RL), one of the ML algorithm families. RL is useful for controlling network policy over specific actions and states with reinforced and punished rewards (+/-), but the problem is that it is too slow to reach satisfactory performance. Put another way, anomaly detection and regression analysis might both be efficient methods for solving the issues Dave mentioned.
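As a hypothetical illustration of the Trust Degree idea (min-max normalize each per-node metric, then combine with weights and pick the most trustworthy next hop): the metric names, values, and weights below are invented for illustration and are not the actual TPD/TD definitions from our draft.

```python
def normalize(values):
    # Min-max normalize a list of metric values to [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

def trust_degrees(nodes, weights):
    # nodes: {name: {metric: value}}; weights: {metric: weight}.
    names = list(nodes)
    norm = {m: normalize([nodes[n][m] for n in names]) for m in weights}
    return {n: sum(weights[m] * norm[m][i] for m in weights)
            for i, n in enumerate(names)}

# Invented example: three candidate next-hop nodes and three metrics.
nodes = {
    "A": {"security": 0.9, "bandwidth": 100, "uptime": 0.99},
    "B": {"security": 0.4, "bandwidth": 300, "uptime": 0.95},
    "C": {"security": 0.7, "bandwidth": 200, "uptime": 0.90},
}
weights = {"security": 0.5, "bandwidth": 0.3, "uptime": 0.2}

td = trust_degrees(nodes, weights)
best = max(td, key=td.get)  # most trustworthy next hop
```

With security weighted most heavily, node "A" wins despite its lower bandwidth; an RL agent could then learn such weights from (+/-) rewards instead of fixing them by hand.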

Best Regards,

Min-Suk Kim

Senior Researcher / Ph.D.
Intelligent IoE Network Research Section,
Electronics and Telecommunications Research Institute (ETRI)
e-mail          :<>