Re: solicit feedback for adding Interconnection among different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03

"Andrew G. Malis" <agmalis@gmail.com> Sat, 21 September 2019 13:02 UTC

From: "Andrew G. Malis" <agmalis@gmail.com>
Date: Sat, 21 Sep 2019 09:02:17 -0400
Subject: Re: solicit feedback for adding Interconnection among different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03
To: Linda Dunbar <linda.dunbar@futurewei.com>
Cc: RTGWG <rtgwg@ietf.org>, "draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org" <draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/rtgwg/dYyDWBjqkKv9WasxU8zVy2Xjpzw>

RTGwg,

As a co-author, may we interpret the lack of responses as "quiet consensus" for
the update?

Thanks,
Andy


On Tue, Sep 17, 2019 at 7:24 PM Linda Dunbar <linda.dunbar@futurewei.com>
wrote:

> RTGwg,
>
>
>
> During IETF 105, we received comments that
> draft-ietf-rtgwg-net2cloud-problem-statement-03 should be expanded to cover
> interconnection between Cloud DCs owned and operated by different Cloud
> Operators, in addition to its current focus on interconnecting
> Enterprises <-> Cloud DCs.
>
>
>
> Here is what we would like to add to the draft. We would like to get some
> feedback on the mailing list. Thank you.
>
> Linda
>
>
> 4. Multiple Clouds Interconnection
>
> 4.1. Multi-Cloud Interconnection
>
> Enterprises today can instantiate their workloads or applications in Cloud
> DCs owned by different Cloud providers, e.g. AWS, Azure, Google Cloud, and
> Oracle. Interconnecting those workloads involves three parties: the
> enterprise, its network service providers, and the Cloud providers.
>
> All Cloud Operators offer secure ways to connect enterprises’ on-prem
> sites/DCs with their Cloud DCs. For example, Google Cloud has Cloud VPN,
> AWS has VPN/CloudHub, and Azure has VPN Gateway.
>
> Some Cloud Operators allow enterprises to connect via private networks.
> For example, AWS’s DirectConnect allows enterprises to use a third-party
> provided private Layer 2 path from the enterprise’s GW to the AWS
> DirectConnect GW. Microsoft’s ExpressRoute allows extension of a private
> network to any of the Microsoft cloud services, including Azure and
> Office 365. ExpressRoute is configured using Layer 3 routing. Customers can
> opt for redundancy by provisioning dual links from their location to two
> Microsoft Enterprise Edge routers (MSEEs) located within a third-party
> ExpressRoute peering location. The BGP routing protocol is then set up over
> the WAN links to provide redundancy to the cloud. This redundancy is
> maintained from the peering data center into Microsoft’s cloud network.
>
> Google’s Cloud Dedicated Interconnect offers network connectivity options
> similar to those of AWS and Microsoft. One distinct difference, however, is
> that Google’s service gives customers access to the entire global cloud
> network by default. It does this by connecting the customer’s on-premises
> network to Google Cloud using BGP and Google Cloud Routers, providing
> optimal paths to the different regions of the global cloud infrastructure.
>
> All of those connectivity options are between a Cloud provider’s DCs and
> the enterprises, not between the DCs of different Cloud providers. For
> example, to connect applications in the AWS Cloud to applications in the
> Azure Cloud, there must be a third-party gateway (physical or virtual) to
> interconnect AWS’s Layer 2 DirectConnect path with Azure’s Layer 3
> ExpressRoute.
>
> It is possible to establish IPsec tunnels between different Cloud DCs, for
> example by leveraging open source VPN software such as strongSwan: an IPsec
> connection is created to the Azure gateway using a shared key. The
> strongSwan instance within AWS not only can connect to Azure but can also
> be used to forward traffic to other nodes within the AWS VPC by configuring
> forwarding and appropriate routing rules for the VPC. Most Cloud operators’
> virtual networks, such as AWS VPCs or Azure VNETs, use non-globally
> routable CIDRs from the private IPv4 address ranges specified by RFC 1918.
> To establish an IPsec tunnel between two Cloud DCs, it is therefore
> necessary to exchange publicly routable addresses for applications in
> different Cloud DCs. [BGP-SDWAN] describes one method; other methods are
> worth exploring.
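>
> As a rough illustration only (not proposed as draft text), the following
> Python sketch shows the minimum information the two gateways would have to
> exchange before such a tunnel can be configured: publicly routable tunnel
> endpoints plus the RFC 1918 prefixes behind each of them. All names and
> addresses below are hypothetical placeholders.
>
>    # Sketch: data two cloud gateways must exchange before an IPsec tunnel
>    # can be built between them; also checks for overlapping private
>    # prefixes, which would break simple routing over the tunnel.
>    from ipaddress import ip_network
>
>    # Hypothetical gateway descriptions (RFC 5737 documentation addresses).
>    aws_gw = {
>        "public_endpoint": "192.0.2.10",
>        "private_subnets": ["10.0.0.0/16"],
>    }
>    azure_gw = {
>        "public_endpoint": "198.51.100.20",
>        "private_subnets": ["10.1.0.0/16"],
>    }
>
>    def overlapping_prefixes(gw_a, gw_b):
>        """Return pairs of private prefixes that overlap between the sides;
>        overlaps would require NAT or renumbering."""
>        return [(a, b)
>                for a in gw_a["private_subnets"]
>                for b in gw_b["private_subnets"]
>                if ip_network(a).overlaps(ip_network(b))]
>
>    conflicts = overlapping_prefixes(aws_gw, azure_gw)
>    if conflicts:
>        print("Overlapping prefixes, cannot route directly:", conflicts)
>    else:
>        print("Tunnel:", aws_gw["public_endpoint"], "<->",
>              azure_gw["public_endpoint"])
>        print("Traffic selectors:", aws_gw["private_subnets"],
>              "<->", azure_gw["private_subnets"])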
>
> In summary, here are some approaches available today (which might change
> in the future) to interconnect workloads among different Cloud DCs:
>
>    1. Utilize the Cloud DC provided inter/intra-cloud connectivity services
>    (e.g., AWS Transit Gateway) to connect workloads instantiated in multiple
>    VPCs. Such services are provided together with the cloud gateway used to
>    connect to external networks (e.g., AWS DirectConnect Gateway).
>    2. Hairpin all traffic through the customer gateway, meaning all
>    workloads are directly connected to the customer gateway, so that
>    communications among workloads within one Cloud DC must traverse
>    the customer gateway.
>    3. Establish direct tunnels among different VPCs (AWS’ Virtual Private
>    Clouds) and VNETs (Azure’s Virtual Networks) via the client’s own virtual
>    routers instantiated within the Cloud DCs. DMVPN (Dynamic Multipoint
>    Virtual Private Network) or DSVPN (Dynamic Smart VPN) techniques can be
>    used to establish direct multipoint-to-point or multipoint-to-multipoint
>    tunnels among those client-owned virtual routers.
>
>
>
> Approach 1) usually does not work if the Cloud DCs are owned and managed by
> different Cloud providers.
>
> Approach 2) adds transmission delay and incurs extra cost when traffic
> exits the Cloud DCs.
>
> For Approach 3), DMVPN or DSVPN use NHRP (Next Hop Resolution
> Protocol) [RFC2735] so that spoke nodes can register their IP addresses and
> WAN ports with the hub node. The IETF ION (Internetworking Over NBMA
> (non-broadcast multiple access)) WG standardized NHRP for network address
> resolution in connection-oriented NBMA networks (such as ATM) more than two
> decades ago.
>
> There are many differences between virtual routers in public Cloud DCs and
> the nodes in an NBMA network. NHRP cannot be used to register virtual
> routers in Cloud DCs unless the protocol is extended for that purpose, e.g.
> to take NAT or dynamic addresses into consideration. Therefore, DMVPN
> and/or DSVPN cannot be used directly for connecting workloads in hybrid
> Cloud DCs.
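>
> As a conceptual sketch only (not NHRP and not a protocol proposal), the
> following Python fragment shows the kind of hub-side registration such an
> extension would need: each spoke (a virtual router in a Cloud DC) registers
> the public address and port actually observed on the WAN side of its NAT,
> and other spokes resolve that binding before building a direct tunnel. All
> identifiers and addresses are hypothetical.
>
>    import time
>
>    class SpokeRegistry:
>        """Hub-side table: private (VPC/VNET) prefix -> the public address
>        and port observed for the spoke behind NAT."""
>
>        def __init__(self, holdtime=300):
>            self.holdtime = holdtime   # seconds before an entry goes stale
>            self.entries = {}          # prefix -> (public_ip, port, timestamp)
>
>        def register(self, private_prefix, observed_ip, observed_port):
>            # Record what the hub actually sees on the wire, which may differ
>            # from the spoke's own (dynamic, NATed) view of its address.
>            self.entries[private_prefix] = (observed_ip, observed_port,
>                                            time.time())
>
>        def resolve(self, private_prefix):
>            # Spoke-to-spoke tunnel setup needs the current public endpoint.
>            entry = self.entries.get(private_prefix)
>            if entry is None or time.time() - entry[2] > self.holdtime:
>                return None            # unknown or stale; must re-register
>            return entry[0], entry[1]
>
>    hub = SpokeRegistry()
>    hub.register("10.1.0.0/16", "203.0.113.7", 4500)  # RFC 5737 doc address
>    print(hub.resolve("10.1.0.0/16"))                  # ('203.0.113.7', 4500)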
>
> Other protocols such as BGP can be used, as described in [BGP-SDWAN].
>
>
> 4.2. Desired Properties for Multi-Cloud Interconnection
>
> Different Cloud Operators have different APIs for accessing their Cloud
> resources, which makes it difficult to move applications built against one
> Cloud operator’s APIs to another. However, it is highly desirable to have a
> single and consistent way to manage the networks and the respective
> security policies for interconnecting applications hosted in different
> Cloud DCs.
>
> The desired property would be a single network fabric to which different
> Cloud DCs and an enterprise’s multiple sites can be attached or detached,
> with a common interface for setting the desired policies. SDWAN is
> positioned to become that network fabric, enabling Cloud DCs to be
> dynamically attached or detached. But the reality is that different Cloud
> Operators have different access methods, and Cloud DCs might be
> geographically far apart. More Cloud connectivity problems are described in
> the subsequent sections.
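>
> As a purely hypothetical illustration of that desired property (no such
> common API exists today, and every identifier below is made up), a uniform
> attach/detach and policy interface over otherwise operator-specific access
> methods might look like the following Python sketch:
>
>    from dataclasses import dataclass, field
>
>    @dataclass
>    class Attachment:
>        name: str          # e.g. "aws-us-east-vpc1" (hypothetical)
>        operator: str      # "aws", "azure", "gcp", or "enterprise-site"
>        access: str        # operator-specific access method, hidden below
>        policies: list = field(default_factory=list)
>
>    class MultiCloudFabric:
>        """One logical fabric: attach/detach and policy calls are uniform
>        even though each operator's underlying access method differs."""
>
>        def __init__(self):
>            self.attachments = {}
>
>        def attach(self, attachment: Attachment):
>            self.attachments[attachment.name] = attachment
>
>        def detach(self, name: str):
>            self.attachments.pop(name, None)
>
>        def set_policy(self, name: str, policy: str):
>            # Same call regardless of which Cloud operator hosts the workload.
>            self.attachments[name].policies.append(policy)
>
>    fabric = MultiCloudFabric()
>    fabric.attach(Attachment("aws-us-east-vpc1", "aws", "directconnect"))
>    fabric.attach(Attachment("azure-eu-vnet3", "azure", "expressroute"))
>    fabric.set_policy("aws-us-east-vpc1", "permit tcp/443 to azure-eu-vnet3")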
>
>
>
> The difficulty of connecting applications in different Clouds might stem
> from the fact that the Cloud providers are direct competitors. Traffic
> flowing out of a Cloud DC usually incurs charges. Therefore, direct
> communication between applications in different Cloud DCs can be more
> expensive than intra-Cloud communication.
>
>
>