Re: solicit feedback for adding Interconnection among different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03

Robert Raszuk <rraszuk@gmail.com> Sat, 21 September 2019 16:12 UTC

From: Robert Raszuk <rraszuk@gmail.com>
Date: Sat, 21 Sep 2019 18:12:24 +0200
Message-ID: <CA+b+ERnSrkNK+jv3DrG559vvSd-WxEm5sAjSa=M_-Z2OSRh+Ug@mail.gmail.com>
Subject: Re: solicit feedback for adding Interconnection among different Cloud DCs to draft-ietf-rtgwg-net2cloud-problem-statement-03
To: "Andrew G. Malis" <agmalis@gmail.com>
Cc: Linda Dunbar <linda.dunbar@futurewei.com>, "draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org" <draft-ietf-rtgwg-net2cloud-problem-statement@ietf.org>, RTGWG <rtgwg@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/rtgwg/8URXXuAmz97bhjMKin3S7YqQwh0>

Hi Andrew,

Having been directly involved in interconnecting a global enterprise to
public clouds for over 2.5 years, and having read your document, I must
ask: what is the objective here?

Take Direct Connect: it works seamlessly, both from direct attachment of
my routers to the cloud edge and via private peering offered as a service
by most if not all carriers to your doors or DCs.

Take SDWAN - the one I have personally been using for a few years now -
as a day-one product focused on seamless interconnect to public clouds.
It has multi-tenancy built into the root of the controller design from
the very start. That offers flexible interconnect at scale, not only with
fixed sites but also with mobile users.

IPsec to a vGW also works pretty solidly.

So I am not even asking what is missing ... but first: what are you
really trying to accomplish with this draft?

Many thx,
Robert


On Sat, Sep 21, 2019 at 3:02 PM Andrew G. Malis <agmalis@gmail.com> wrote:

> RTGwg,
>
> As a co-author, may we interpret no responses as "quiet consensus" for the
> update?
>
> Thanks,
> Andy
>
>
> On Tue, Sep 17, 2019 at 7:24 PM Linda Dunbar <linda.dunbar@futurewei.com>
> wrote:
>
>> RTGwg,
>>
>>
>>
>> During IETF 105, we got comments that
>> draft-ietf-rtgwg-net2cloud-problem-statement-03 should be expanded to
>> cover interconnection between Cloud DCs owned and operated by different
>> Cloud Operators, in addition to its current focus on interconnecting
>> Enterprises <-> Cloud DCs.
>>
>>
>>
>> Here is what we would like to add to the draft. We want to get some
>> feedback on the mailing list. Thank you.
>>
>> Linda
>>
>>
>> 4. Multiple Clouds Interconnection
>>
>> 4.1. Multi-Cloud Interconnection
>>
>> Enterprises today can instantiate their workloads or applications in
>> Cloud DCs owned by different Cloud providers, e.g., AWS, Azure, Google
>> Cloud, Oracle, etc. Interconnecting those workloads involves three
>> parties: the Enterprise, its network service providers, and the Cloud
>> providers.
>>
>> All Cloud Operators offer secure ways to connect enterprises’ on-prem
>> sites/DCs with their Cloud DCs. For example, Google Cloud has Cloud VPN,
>> AWS has VPN/CloudHub, and Azure has VPN gateway.
>>
>> Some Cloud Operators allow enterprises to connect via private networks.
>> For example, AWS’s DirectConnect allows enterprises to use a
>> third-party-provided private Layer 2 path from the enterprise’s GW to
>> the AWS DirectConnect GW. Microsoft’s ExpressRoute allows extension of
>> a private network to any of the Microsoft cloud services, including
>> Azure and Office 365. ExpressRoute is configured using Layer 3 routing.
>> Customers can opt for redundancy by provisioning dual links from their
>> location to two Microsoft Enterprise Edge routers (MSEEs) located
>> within a third-party ExpressRoute peering location. BGP is then set up
>> over the WAN links to provide redundancy to the cloud. This redundancy
>> is maintained from the peering data center into Microsoft’s cloud
>> network.
>>
>> Google’s Cloud Dedicated Interconnect offers network connectivity
>> options similar to those of AWS and Microsoft. One distinct difference,
>> however, is that Google’s service gives customers access to the entire
>> global cloud network by default. It does this by connecting the
>> customer’s on-premises network to Google Cloud using BGP and Google
>> Cloud Routers, which provide optimal paths to the different regions of
>> the global cloud infrastructure.
>>
>> All those connectivity options are between the Cloud providers’ DCs
>> and the Enterprises, not between cloud DCs. For example, to connect
>> applications in the AWS Cloud to applications in the Azure Cloud, there
>> must be a third-party gateway (physical or virtual) to interconnect
>> AWS’s Layer 2 DirectConnect path with Azure’s Layer 3 ExpressRoute.
>>
>> It is possible to establish IPsec tunnels between different Cloud DCs.
>> For example, by leveraging open-source VPN software such as strongSwan,
>> one can create an IPsec connection to the Azure gateway using a shared
>> key. The strongSwan instance within AWS can not only connect to Azure
>> but can also be used to forward traffic to other nodes within the AWS
>> VPC, by enabling forwarding and configuring appropriate routing rules
>> for the VPC. Most Cloud operators’ virtual networks, such as AWS VPCs
>> or Azure VNETs, use non-globally-routable CIDRs from the private IPv4
>> address ranges specified by RFC 1918. To establish an IPsec tunnel
>> between two Cloud DCs, it is therefore necessary to exchange publicly
>> routable addresses for the applications in the different Cloud DCs.
>> [BGP-SDWAN] describes one method; other methods are worth exploring.
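>> As an illustration of the addressing point above, Python’s standard
>> ipaddress module can check whether two VPC/VNET CIDRs sit inside the
>> RFC 1918 private ranges and whether they overlap (the CIDR values below
>> are hypothetical examples, not taken from the draft):

```python
import ipaddress

# The three RFC 1918 private IPv4 ranges
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr: str) -> bool:
    """True if the CIDR lies entirely within an RFC 1918 private range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918)

# Hypothetical CIDRs for an AWS VPC and an Azure VNET
aws_vpc = ipaddress.ip_network("10.0.0.0/16")
azure_vnet = ipaddress.ip_network("10.0.0.0/24")

print(is_rfc1918("10.0.0.0/16"))     # True: not routable between clouds
print(aws_vpc.overlaps(azure_vnet))  # True: the two address spaces clash
```

>> Overlapping private CIDRs are exactly why publicly routable tunnel
>> endpoints must be exchanged out-of-band before the IPsec tunnel can
>> come up.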
>>
>> In summary, here are some approaches, available now (which might change
>> in the future), to interconnect workloads among different Cloud DCs:
>>
>>    a. Utilize Cloud-provided inter/intra-cloud connectivity services
>>    (e.g., AWS Transit Gateway) to connect workloads instantiated in
>>    multiple VPCs. Such services are provided together with the cloud
>>    gateway that connects to external networks (e.g., AWS DirectConnect
>>    Gateway).
>>    b. Hairpin all traffic through the customer gateway, meaning all
>>    workloads are directly connected to the customer gateway, so that
>>    communications among workloads, even within one Cloud DC, must
>>    traverse the customer gateway.
>>    c. Establish direct tunnels among different VPCs (AWS’s Virtual
>>    Private Clouds) and VNETs (Azure’s Virtual Networks) via the
>>    client’s own virtual routers instantiated within the Cloud DCs.
>>    DMVPN (Dynamic Multipoint Virtual Private Network) or DSVPN (Dynamic
>>    Smart VPN) techniques can be used to establish direct point-to-point
>>    or multipoint-to-multipoint tunnels among those virtual routers.
>>
>>
>>
>> Approach a) usually does not work if the Cloud DCs are owned and
>> managed by different Cloud providers.
>>
>> Approach b) adds transmission delay and incurs egress charges when
>> traffic exits the Cloud DCs.
>>
>> For Approach c), DMVPN and DSVPN use NHRP (Next Hop Resolution
>> Protocol) [RFC2332] so that spoke nodes can register their IP addresses
>> and WAN ports with the hub node. The IETF ION (Internetworking Over
>> NBMA (non-broadcast multiple access)) WG standardized NHRP for address
>> resolution on connection-oriented NBMA networks (such as ATM) more than
>> two decades ago.
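>> Conceptually, the registration function NHRP provides can be sketched
>> as a hub-side table mapping each spoke’s private prefix to its public
>> WAN address. Below is a toy Python sketch with hypothetical addresses,
>> not real NHRP (which is a wire protocol defined in RFC 2332):

```python
# Minimal hub-side registry mimicking what NHRP registration provides:
# each spoke reports which public WAN address reaches its private prefix.
class HubRegistry:
    def __init__(self):
        self._bindings = {}  # private prefix -> public WAN address

    def register(self, private_prefix: str, wan_address: str) -> None:
        """A spoke registers its reachability binding with the hub."""
        self._bindings[private_prefix] = wan_address

    def resolve(self, private_prefix: str):
        """Another spoke asks: which WAN address do I tunnel to?"""
        return self._bindings.get(private_prefix)

# Hypothetical virtual routers in two different cloud DCs
hub = HubRegistry()
hub.register("10.1.0.0/16", "203.0.113.10")   # spoke in cloud DC 1
hub.register("10.2.0.0/16", "198.51.100.20")  # spoke in cloud DC 2

# Spoke 1 resolves the WAN endpoint for a direct tunnel to spoke 2
print(hub.resolve("10.2.0.0/16"))
```

>> When a spoke sits behind a NAT or has a dynamically assigned WAN
>> address, the address it would register may not be the one its peers can
>> actually reach, which is the gap the paragraph above points at.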
>>
>> There are many differences between virtual routers in Public Cloud DCs
>> and the nodes in an NBMA network. NHRP cannot be used for registering
>> virtual routers in Cloud DCs unless the protocol is extended for that
>> purpose, e.g., to take NAT or dynamic addresses into consideration.
>> Therefore, DMVPN and/or DSVPN cannot be used directly for connecting
>> workloads in hybrid Cloud DCs.
>>
>> Other protocols such as BGP can be used, as described in [BGP-SDWAN].
>>
>>
>> 4.2. Desired Properties for Multi-Cloud Interconnection
>>
>> Different Cloud Operators have different APIs for accessing their
>> Cloud resources, which makes it difficult to move applications built
>> with one Cloud operator’s APIs to another. It is therefore highly
>> desirable to have a single, consistent way to manage the networks and
>> the respective security policies for interconnecting applications
>> hosted in different Cloud DCs.
>>
>> The desired property would be a single network fabric to which
>> different Cloud DCs and the enterprise’s multiple sites can be attached
>> or detached, with a common interface for setting the desired policies.
>> SDWAN is positioned to become that network fabric, enabling Cloud DCs
>> to be dynamically attached or detached. But the reality is that
>> different Cloud Operators have different access methods, and Cloud DCs
>> might be geographically far apart. More Cloud connectivity problems are
>> described in the subsequent sections.
>>
>>
>>
>> The difficulty of connecting applications in different Clouds may stem
>> from the fact that the Cloud providers are direct competitors. Traffic
>> flowing out of a Cloud DC usually incurs charges; therefore, direct
>> communications between applications in different Cloud DCs can be more
>> expensive than intra-Cloud communications.
>>
>>
>>
> _______________________________________________
> rtgwg mailing list
> rtgwg@ietf.org
> https://www.ietf.org/mailman/listinfo/rtgwg
>