Re: Should we merge the draft-ietf-rtgwg-net2cloud-problem-statement-03 with draft-ietf-rtgwg-net2cloud-gap-analysis

Chris Bowers <chrisbowers.ietf@gmail.com> Tue, 29 October 2019 17:13 UTC

From: Chris Bowers <chrisbowers.ietf@gmail.com>
Date: Tue, 29 Oct 2019 12:12:27 -0500
Message-ID: <CAHzoHbsVLqgg8s3B=84HkH7V8FPfc8p815XbP65h2Q4m=N6YVg@mail.gmail.com>
Subject: Re: Should we merge the draft-ietf-rtgwg-net2cloud-problem-statement-03 with draft-ietf-rtgwg-net2cloud-gap-analysis
To: Linda Dunbar <linda.dunbar@futurewei.com>
Cc: rtgwg-chairs <rtgwg-chairs@ietf.org>, RTGWG <rtgwg@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/rtgwg/kATtTgbVhFOovySOaudsHbvrDHQ>
List-Id: Routing Area Working Group <rtgwg.ietf.org>

Linda,

I think the primary goal should be improving the clarity of
draft-ietf-rtgwg-net2cloud-problem-statement. Revision 03 helped clarify
some of the details. However, in my opinion, it still needs more work. The
authors received feedback on the RTGWG list that the overall objective of
the draft is unclear. The draft was then updated as revision 04 with text
mainly related to multi-cloud.  However, in my opinion, the overall
objective of the draft remains unclear.

Regarding the question below of whether the authors should merge
draft-ietf-rtgwg-net2cloud-gap-analysis with
draft-ietf-rtgwg-net2cloud-problem-statement: looking at the actual
content of draft-ietf-rtgwg-net2cloud-gap-analysis, I don't think that
merging the two drafts will help to improve the clarity of
draft-ietf-rtgwg-net2cloud-problem-statement.

Thanks,
Chris


On Fri, Oct 25, 2019 at 2:05 PM Linda Dunbar <linda.dunbar@futurewei.com>
wrote:

> Chris and Jeff,
>
>
>
> I remember that during IETF 105 someone mentioned that it would be better to
> merge the Problem Statement with the Gap Analysis, because the IESG doesn't
> like to approve Gap Analysis drafts. Is that a formal request? Do we have to
> do this merge in order to move the document to WGLC?
>
>
>
> Thank you very much.
>
>
>
> Linda
>
>
>
> *From:* Linda Dunbar
> *Sent:* Monday, October 21, 2019 4:05 PM
> *To:* 'Alia Atlas' <akatlas@gmail.com>; 刘 鹏 <liupengyjy@outlook.com>
> *Cc:* Robert Raszuk <rraszuk@gmail.com>; Andrew G. Malis <
> agmalis@gmail.com>; draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org;
> RTGWG <rtgwg@ietf.org>
> *Subject:* RE: RE: solicit feedback for adding Interconnection among
> different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03
>
>
>
> Alia,
>
>
>
> Thanks for the feedback.
>
>
>
> Here are the updates from last week's ONUG on Cloud: every ONUG has its IT
> user communities vote for the use cases they care about most. The
> highest-voted use case from ONUG Fall 2019 was *Hybrid multi-cloud
> deployments incorporating VPCs/VNETs hosted by multiple public cloud
> providers, access to many SaaS applications & integration with SD-WAN
> overlays*.
>
>
>
> I don't mean that SDWAN is "the unifying API for the network fabric";
> rather, the network fabric would be a combination of legacy VPNs (AWS's
> DirectConnect, Azure's ExpressRoute) and over-the-top SDWAN.
>
>
>
>
>
>
>
> Linda
>
>
>
> *From:* Alia Atlas <akatlas@gmail.com>
> *Sent:* Monday, October 14, 2019 4:57 PM
> *To:* 刘 鹏 <liupengyjy@outlook.com>
> *Cc:* Linda Dunbar <linda.dunbar@futurewei.com>; Robert Raszuk <
> rraszuk@gmail.com>; Andrew G. Malis <agmalis@gmail.com>;
> draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org; RTGWG <
> rtgwg@ietf.org>
> *Subject:* Re: RE: solicit feedback for adding Interconnection among
> different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03
>
>
>
> Hi Linda,
>
>
>
> I think this is a good start.  I am not clear on how or why you see SDWAN
> as the unifying API for the network fabric.
>
> Places where I see interesting requirements on multicloud are around
> authorization, how to indicate which VPC, consistent APIs or abstractions,
> how it connects to other details like NAT or DNS, which DCs are connected,
> and so on.
>
>
>
> Thanks for adding it,
>
> Alia
>
>
>
> On Sun, Oct 13, 2019 at 2:51 AM 刘 鹏 <liupengyjy@outlook.com> wrote:
>
> Hi Linda,
>
>
>
> The added section on hybrid cloud interconnection is meaningful. I think
> there will be many new business models in the future, such as edge
> computing, that will involve interconnection between hybrid clouds, so it
> is a topic well worth discussing!
>
>
>
> Thanks,
>
> Peng
>
>
> ------------------------------
>
> liupengyjy@outlook.com
>
>
>
> *From:* Linda Dunbar <linda.dunbar@futurewei.com>
>
> *Date:* 2019-09-23 23:32
>
> *To:* Robert Raszuk <rraszuk@gmail.com>; Andrew G. Malis
> <agmalis@gmail.com>
>
> *CC:* draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org; RTGWG
> <rtgwg@ietf.org>
>
> *Subject:* RE: solicit feedback for adding Interconnection among
> different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03
>
> Robert,
>
>
>
> Very good question.
>
>
>
> All Cloud connectivity options are between Cloud providers' DCs and the
> Enterprises, but not between cloud DCs (unless you instantiate your own
> virtual routers in different Cloud DCs and administer the IPsec tunnels
> among them yourself, which is itself a task).
>
>
>
> AWS's DirectConnect allows enterprises to use a 3rd-party-provided private
> Layer 2 path from the enterprise's GW to the AWS DirectConnect GW.
> Microsoft's ExpressRoute allows extension of a private network to any of
> the Microsoft cloud services, including Azure and Office 365. ExpressRoute
> is configured using Layer 3 routing.
>
> Therefore, to connect applications in the AWS Cloud to applications in the
> Azure Cloud, there must be a third-party gateway (physical or virtual) to
> interconnect AWS's Layer 2 DirectConnect path with Azure's Layer 3
> ExpressRoute, or the traffic must be hairpinned through the customer's own
> GW.
>
>
>
> Google's Cloud Dedicated Interconnect offers network connectivity options
> similar to those of AWS and Microsoft. One distinct difference, however, is
> that Google's service gives customers access to the entire global cloud
> network by default. It does this by connecting the on-premises network to
> Google Cloud using BGP and Google Cloud Routers, providing optimal paths to
> the different regions of the global cloud infrastructure.
>
>
>
> The purpose of adding this section (as requested by the RTGWG Chair and
> Alia) is to make this problem clear, as the document is about the network
> interconnecting Clouds. Many Enterprises today can instantiate their
> workloads or applications in Cloud DCs owned by different Cloud providers.
> Interconnecting those workloads involves three parties: the Enterprise, its
> network service providers, and the Cloud providers.
>
>
>
> Hope my explanation is clear.
>
>
>
> Thank you.
>
>
>
> Linda
>
>
>
> *From:* Robert Raszuk <rraszuk@gmail.com>
> *Sent:* Saturday, September 21, 2019 11:12 AM
> *To:* Andrew G. Malis <agmalis@gmail.com>
> *Cc:* Linda Dunbar <linda.dunbar@futurewei.com>;
> draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org; RTGWG <
> rtgwg@ietf.org>
> *Subject:* Re: solicit feedback for adding Interconnection among
> different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03
>
>
>
> Hi Andrew,
>
>
>
> Having been directly involved in interconnecting global enterprises to
> public clouds for over 2.5 years, and having read your document, I must
> ask: what is the objective here?
>
>
>
> Take Direct Connect: it works seamlessly, both with direct attachment of my
> routers to the cloud edge and via private peering offered as a service by
> most if not all carriers to your doors or DCs.
>
>
>
> Take SDWAN, which I have been personally using for a few years now, as a
> day-one product focused on seamless interconnect to the public cloud. It
> has multi-tenancy built into the root of the controller design from the
> very start. That offers flexible interconnect at scale, with not only fixed
> sites but also mobile users.
>
>
>
> IPsec to a vGW also works pretty solidly.
>
>
>
> So I am not even asking what is missing ... but first, what are you really
> trying to accomplish with this draft?
>
>
>
> Many thx,
>
> Robert
>
>
>
>
>
> On Sat, Sep 21, 2019 at 3:02 PM Andrew G. Malis <agmalis@gmail.com> wrote:
>
> RTGwg,
>
>
>
> As a co-author, may we interpret no responses as "quiet consensus" for the
> update?
>
>
>
> Thanks,
>
> Andy
>
>
>
>
>
> On Tue, Sep 17, 2019 at 7:24 PM Linda Dunbar <linda.dunbar@futurewei.com>
> wrote:
>
> RTGwg,
>
>
>
> During IETF 105, we got comments that
> draft-ietf-rtgwg-net2cloud-problem-statement-03 should be expanded to cover
> interconnection between Cloud DCs owned and operated by different Cloud
> Operators, in addition to the current focus on interconnecting
> Enterprises <-> Cloud DCs.
>
>
>
> Here is what we would like to add to the draft; we want to get some
> feedback on the mailing list. Thank you.
>
> Linda
>
>
> 4.  Multiple Clouds Interconnection
>
> 4.1. Multi-Cloud Interconnection
>
> Enterprises today can instantiate their workloads or applications in Cloud
> DCs owned by different Cloud providers, e.g., AWS, Azure, Google Cloud, or
> Oracle. Interconnecting those workloads involves three parties: the
> Enterprise, its network service providers, and the Cloud providers.
>
> All Cloud Operators offer secure ways to connect enterprises' on-prem
> sites/DCs with their Cloud DCs. For example, Google Cloud has Cloud VPN,
> AWS has VPN CloudHub, and Azure has VPN Gateway.
>
> Some Cloud Operators allow enterprises to connect via private networks.
> For example, AWS's DirectConnect allows enterprises to use a
> 3rd-party-provided private Layer 2 path from the enterprise's GW to the AWS
> DirectConnect GW. Microsoft's ExpressRoute allows extension of a private
> network to any of the Microsoft cloud services, including Azure and Office
> 365. ExpressRoute is configured using Layer 3 routing. Customers can opt
> for redundancy by provisioning dual links from their location to two
> Microsoft Enterprise Edge routers (MSEEs) located within a third-party
> ExpressRoute peering location. BGP is then set up over the WAN links to
> provide redundancy to the cloud. This redundancy is maintained from the
> peering data center into Microsoft's cloud network.
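>
> The dual-link BGP setup described above can be sketched with an
> illustrative FRRouting-style configuration. The link addresses and the
> customer ASN (65010) here are assumptions for illustration only; AS 12076
> is Microsoft's published ExpressRoute peering ASN:

```text
! Illustrative sketch only: two eBGP sessions, one per ExpressRoute link,
! so that either MSEE can carry traffic if the other link fails.
router bgp 65010
 neighbor 192.168.1.2 remote-as 12076   ! primary link (/30 to MSEE 1)
 neighbor 192.168.2.2 remote-as 12076   ! secondary link (/30 to MSEE 2)
 address-family ipv4 unicast
  neighbor 192.168.1.2 activate
  neighbor 192.168.2.2 activate
 exit-address-family
```

> With both sessions up, the same prefixes are learned over both links, so
> failover is handled by ordinary BGP route selection.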
>
> Google's Cloud Dedicated Interconnect offers network connectivity options
> similar to those of AWS and Microsoft. One distinct difference, however, is
> that Google's service gives customers access to the entire global cloud
> network by default. It does this by connecting the on-premises network to
> Google Cloud using BGP and Google Cloud Routers, providing optimal paths to
> the different regions of the global cloud infrastructure.
>
> All those connectivity options are between Cloud providers' DCs and the
> Enterprises, but not between cloud DCs.  For example, to connect
> applications in the AWS Cloud to applications in the Azure Cloud, there
> must be a third-party gateway (physical or virtual) to interconnect AWS's
> Layer 2 DirectConnect path with Azure's Layer 3 ExpressRoute.
>
> It is possible to establish IPsec tunnels between different Cloud DCs. For
> example, by leveraging open-source VPN software such as strongSwan, you can
> create an IPsec connection to the Azure gateway using a shared key. The
> strongSwan instance within AWS can not only connect to Azure but can also
> be used to facilitate traffic to other nodes within the AWS VPC by
> configuring forwarding and using appropriate routing rules for the VPC.
> Most Cloud operators' virtual networks, such as AWS VPCs or Azure VNETs,
> use non-globally-routable CIDR blocks from the private IPv4 address ranges
> specified by RFC 1918. To establish an IPsec tunnel between two Cloud DCs,
> it is therefore necessary to exchange publicly routable addresses for the
> applications in the different Cloud DCs. [BGP-SDWAN] describes one method;
> other methods are worth exploring.
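>
> To make the addressing point concrete, a minimal Python sketch using only
> the standard library checks whether a VPC/VNET prefix falls inside the
> RFC 1918 private ranges (the example prefixes are illustrative assumptions,
> not real deployments):

```python
import ipaddress

# The three RFC 1918 private ranges that AWS VPCs and Azure VNETs
# typically draw their CIDR blocks from.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(cidr: str) -> bool:
    """Return True if the whole CIDR block lies within an RFC 1918 range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(r) for r in RFC1918)

# A typical VPC prefix is private, hence not routable between clouds, so
# a publicly routable endpoint must be exchanged for the IPsec tunnel:
print(is_rfc1918("10.0.0.0/16"))     # True
print(is_rfc1918("203.0.113.0/24"))  # False (a globally scoped range)
```

> This is exactly why the tunnel endpoints themselves must be publicly
> routable addresses, even though the traffic inside the tunnel stays in
> private address space.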
>
> In summary, here are some approaches available today (which might change in
> the future) for interconnecting workloads among different Cloud DCs:
>
> a.      Utilize the Cloud provider's inter/intra-cloud connectivity
> services (e.g., AWS Transit Gateway) to connect workloads instantiated in
> multiple VPCs. Such services are provided together with the cloud gateway
> that connects to external networks (e.g., AWS DirectConnect Gateway).
>
> b.      Hairpin all traffic through the customer gateway, meaning all
> workloads are directly connected to the customer gateway, so that
> communications among workloads even within one Cloud DC must traverse the
> customer gateway.
>
> c.      Establish direct tunnels among different VPCs (AWS Virtual Private
> Clouds) and VNETs (Azure Virtual Networks) via the client's own virtual
> routers instantiated within the Cloud DCs. DMVPN (Dynamic Multipoint
> Virtual Private Network) or DSVPN (Dynamic Smart VPN) techniques can be
> used to establish direct point-to-point or multipoint-to-multipoint tunnels
> among those virtual routers.
>
>
>
> Approach a) usually does not work if the Cloud DCs are owned and managed by
> different Cloud providers.
>
> Approach b) adds transmission delay and incurs extra cost when traffic
> exits the Cloud DCs.
>
> For Approach c), DMVPN and DSVPN use NHRP (Next Hop Resolution Protocol)
> [RFC2332] so that spoke nodes can register their IP addresses and WAN ports
> with the hub node. The IETF ION (Internetworking Over NBMA (non-broadcast
> multiple access)) WG standardized NHRP for network address resolution in
> connection-oriented NBMA networks (such as ATM) more than two decades ago.
>
> There are many differences between virtual routers in public Cloud DCs and
> the nodes in an NBMA network. NHRP cannot be used to register virtual
> routers in Cloud DCs unless an extension of the protocol is developed for
> that purpose, e.g., one taking NAT or dynamic addresses into consideration.
> Therefore, DMVPN and/or DSVPN cannot be used directly for connecting
> workloads across hybrid Cloud DCs.
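>
> The registration gap can be illustrated with a small Python sketch. This is
> not NHRP wire format, and the field names are illustrative assumptions; it
> only shows the information a spoke behind NAT would need to report to a hub
> before direct tunnels could be built:

```python
from dataclasses import dataclass, asdict

@dataclass
class SpokeRegistration:
    """What a virtual router in a Cloud DC would need to register (sketch)."""
    private_ip: str     # VPC/VNET-internal address (RFC 1918 space)
    nat_public_ip: str  # address that peers actually see after NAT
    wan_port: int       # UDP port held open by the NAT binding
    site_id: str        # illustrative identifier for the spoke

# The hub keeps a table keyed by site. Classic NHRP has no way to learn
# the NAT'd address/port pair, which is the kind of extension the text
# says would be needed before DMVPN/DSVPN could work across Cloud DCs.
reg = SpokeRegistration("10.0.1.1", "198.51.100.7", 4500, "aws-vpc-east")
hub_table = {reg.site_id: asdict(reg)}
print(hub_table["aws-vpc-east"]["nat_public_ip"])  # prints 198.51.100.7
```

> A spoke whose NAT binding changes would have to re-register, which is the
> "dynamic addresses" consideration mentioned above.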
>
> Other protocols such as BGP can be used, as described in [BGP-SDWAN].
>
>
> 4.2. Desired Properties for Multi-Cloud Interconnection
>
> Different Cloud Operators have different APIs for accessing their Cloud
> resources, and it is difficult to move applications built against one Cloud
> operator's APIs to another. It is therefore highly desirable to have a
> single, consistent way to manage the networks and the respective security
> policies for interconnecting applications hosted in different Cloud DCs.
>
> The desired property would be a single network fabric to which different
> Cloud DCs and the enterprise's multiple sites can be attached or detached,
> with a common interface for setting the desired policies. SDWAN is
> positioned to become that network fabric, enabling Cloud DCs to be
> dynamically attached or detached. But the reality is that different Cloud
> Operators have different access methods, and Cloud DCs might be
> geographically far apart. More Cloud connectivity problems are described in
> the subsequent sections.
>
>
>
> The difficulty of connecting applications in different Clouds may stem from
> the fact that the Cloud providers are direct competitors. Traffic flowing
> out of a Cloud DC usually incurs charges; therefore, direct communication
> between applications in different Cloud DCs can be more expensive than
> intra-Cloud communication.
>
>
>
> _______________________________________________
> rtgwg mailing list
> rtgwg@ietf.org
> https://www.ietf.org/mailman/listinfo/rtgwg
>
>