Re: [MBONED] Updating charter / milestones?

Greg Shepherd <gjshep@gmail.com> Tue, 20 December 2016 18:50 UTC

From: Greg Shepherd <gjshep@gmail.com>
Date: Tue, 20 Dec 2016 10:50:38 -0800
Message-ID: <CABFReBq-Z2NRPSCYsHWG+UdZ9PuhbxCefeBxiZgGDdL1riT0wQ@mail.gmail.com>
To: "Holland, Jake" <jholland@akamai.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/mboned/1F5BtplfHqy3rOrKTlcIYm7IbPw>
Cc: "mboned@ietf.org" <mboned@ietf.org>
Subject: Re: [MBONED] Updating charter / milestones?

Inline:

On Thu, Dec 8, 2016 at 3:10 PM, Holland, Jake <jholland@akamai.com> wrote:

> Hi Lenny,
>
> Thanks for reading and responding.
>
> To make sure we’re not talking past each other:
> It seemed your email might have been talking about the CBACC draft I
> submitted, from the “solutions that open the door to other DoS attacks”
> quote near the end of your email.
>
> But this discussion is about the idea of working on an informational draft
> about overjoining (NOT the attempt at a solution that I submitted before).
> There’s a good chance I’ll end up abandoning the originally proposed
> solution at some point, but regardless, if there’s a solution to the
> problem that ends up adopted somewhere, I assume it’ll be outside mboned.
>
>
> To respond to other points you raised:
>
> 1. not unique to multicast:
> I agree that multicast overjoining is only one example from a general
> category of data plane risks inherent to IP.
>
> I also agree that “the opposite direction” is one of the key differences
> between the multicast and most of the unicast cases.
>
> I would argue that even on its own, that’s enough of a difference that
> it’s worth documenting and discussing, in the same way that several
> unicast-specific documents have discussed examples in specific protocols in
> the same general category of risk.
>

GS - Documenting for discussion is encouraged. But let's not couch it as if
the sky is falling. :)


> But multicast also brings in a few other key differences which are also
> relevant and would separately justify writing a document outlining
> the unique issues, such as the unknown number of receivers, and the
> difficulties in applying some of the traditional unicast countermeasures
> like ACLs.


GS - ACLs are applied to multicast every day. Many IPTV deployments using
multicast use ACLs for IGMP join limits. I believe what you're pointing out
is that the applicability of ACLs ends once you move at least one hop up
from the host-router interaction. Of course. So SOP today is ACLs at the
edge. The same can be applied at an AMT relay.
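
As a concrete illustration of the kind of edge policy I mean, here's a
minimal sketch of per-port join limiting. The limit, group range, and names
are made up for the example; this is not any particular vendor's ACL syntax:

# Hypothetical sketch of per-subscriber IGMP/MLD join limiting at the edge.
# The policy is what matters; the numbers and names are illustrative only.

MAX_GROUPS_PER_PORT = 8          # per-service-profile join limit
ALLOWED_GROUP_PREFIX = "232."    # e.g. accept only SSM-range groups

subscriptions = {}               # subscriber port -> set of joined groups

def handle_join(port, group):
    """Return True if the join is accepted, False if the edge filters it."""
    if not group.startswith(ALLOWED_GROUP_PREFIX):
        return False                         # group outside the allowed range
    joined = subscriptions.setdefault(port, set())
    if group not in joined and len(joined) >= MAX_GROUPS_PER_PORT:
        return False                         # per-port join limit reached
    joined.add(group)
    return True

def handle_leave(port, group):
    subscriptions.get(port, set()).discard(group)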


> 2. not a barrier to deployment:
> I agree that we can get away with some limited multicast deployment
> without solving this. I hear the BBC has some radio channels up, and I’m
> moving forward with deployment (slowly).


There are many other examples of ongoing multicast services.


> As long as there is not enough total traffic in the world, there is
> not a real problem.
>
> However, one of the objections I’ve heard multiple times to my attempts to
> deploy a video service is some form of “multicast congestion control is not
> a solved problem”.
>

It IS a solved problem. But because of the one-to-many nature, mcast CC is
handled at the application and service-profile level, and is often
proprietary.
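
To make "handled at the application" concrete, one common pattern (the same
one Lenny describes below: different bitrates on different groups, and the
receiver joins the one that suits it) is receiver-driven rate selection. A
minimal sketch, with made-up group addresses, rates, and thresholds:

# Sketch of receiver-driven rate adaptation across simulcast mcast groups.
# Group addresses, rates, and loss thresholds are illustrative only.

RATE_LADDER = [                  # (group, bitrate in kbps), lowest first
    ("232.1.1.1", 500),
    ("232.1.1.2", 1500),
    ("232.1.1.3", 4000),
]
UPSHIFT_LOSS = 0.001             # sustained loss below this: try next rate up
DOWNSHIFT_LOSS = 0.02            # loss above this: drop to the next rate down

def adapt(current, measured_loss):
    """Return the index into RATE_LADDER the receiver should be joined to."""
    if measured_loss > DOWNSHIFT_LOSS and current > 0:
        return current - 1       # leave the high-rate group, join a lower one
    if measured_loss < UPSHIFT_LOSS and current < len(RATE_LADDER) - 1:
        return current + 1       # probe the next rate up
    return current               # stay put

# The app then issues an IGMP/MLD leave for the old group and a join for the
# new one, so the network only carries the rate the receiver asked for.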


> When I respond with “yes, but we can roll out for a while before it
> becomes a real problem in practice, and we have some schemes that work with
> well-behaved receivers, and I’m working on tightening up the standards so
> that receiving networks will have an answer by the time it becomes a
> problem with evil or broken receivers”, then I get a skeptical “I guess so,
> as long as there are checks to make sure we don’t start actually using this
> in a way we depend on before it’s solved for real”.
>
> With that answer, I can move forward with treating it as an optimization
> that can help us whenever it happens not to be broken, and we can start
> measuring how much it can save us on bandwidth.
>
> I think it would be very effective at killing the project if I instead
> tried to defend “the plan is to wait until it actually turns into a problem
> and try to solve it then”. Likewise, if I tried to navigate an argument
> like “Me: this isn’t our problem, it’s the peering network’s problem. Them:
> how is the peering network supposed to solve it besides turning it off and
> yelling at us for convincing them to turn on multicast? Me: I’m sure
> they’ll think of something. And don’t worry, it’s only theory, malicious or
> broken receivers won’t really be an issue, and if they are it’ll probably
> be different from what we now think is the most obvious problem, so we
> don’t need to solve it”, I think the project would similarly be killed
> before large-scale deployment.
>

GS - Let's be sure we separate the discussion of a congestion-controlled
mcast service from the ocean of evil a compromised host can inflict.


> And to take that a little further: I think the open issues with multicast
> congestion control are an even better objection to a multicast backbone
> deployment than to an individual operator's deployment with
> negotiated peering points like we’re starting with, which will probably
> have reasonably tight bandwidth caps at receiver ingress.


GS - It's been my experience that on very large backbones, the efficiencies
of mcast result in the overall unicast/multicast bandwidth ratio being huge,
leaving mcast just above the noise.


> After the history from unicast, I’m surprised there isn’t more activity on
> the general issue of cleaning up multicast congestion control under mboned.
>

Most mission-critical mcast deployments today are within walled gardens, or
at least for on-net customers. If you need capacity to service your
customers, you'd better deploy that capacity. CC is there to allow for
individual optimization and as a safety net during unforeseen events.

As far as mcast across the Internet at large, there are many opinions as to
why it hasn't become the end-to-end service many of us envisioned 20 years
ago. This is just my opinion.
1) It required end-to-end deployment.
     Fix: AMT is now available as a way to benefit from mcast on your
network, then go OTT to customers (a rough sketch of the gateway/relay
exchange follows below).
2) IPMcast (PIM) requires global flow state and is very complex, and unless
turning it on brings the operating network revenue, nobody is going to do it
to bring you revenue alone.
    Fix: BIER allows for a stateless transport service to carry multipoint
traffic, and radically simplifies all aspects of running a multicast
service. But BIER is still experimental, and rumors of implementations are
only just coming to the surface.
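
For anyone who hasn't looked at AMT, the gateway/relay exchange referenced
in 1) is roughly the following. This is my paraphrase of RFC 7450, an
outline rather than an implementation, with placeholder group/source values:

# Rough outline of the AMT gateway <-> relay exchange (paraphrasing RFC
# 7450). Message names come from the RFC; group/source values are examples.

AMT_UDP_PORT = 2268              # IANA-assigned UDP port for AMT
RELAY_ANYCAST = "192.52.193.1"   # address in the AMT relay anycast prefix

def amt_join_outline(group="232.1.1.1", source="198.51.100.10"):
    return [
        # 1. Gateway discovers a relay via the anycast address; the nearest
        #    relay answers with its unicast address.
        ("gateway -> relay (anycast)", "Relay Discovery"),
        ("relay -> gateway",           "Relay Advertisement (relay address)"),
        # 2. Gateway sends a Request; the relay answers with a Membership
        #    Query carrying an IGMP/MLD query plus a MAC/nonce.
        ("gateway -> relay",           "Request"),
        ("relay -> gateway",           "Membership Query (IGMP/MLD query)"),
        # 3. Gateway answers with a Membership Update carrying its IGMP/MLD
        #    report for (source, group); the relay joins the native tree.
        ("gateway -> relay",           "Membership Update (IGMP/MLD report)"),
        # 4. The relay forwards the native stream to the gateway as
        #    UDP-encapsulated Multicast Data messages.
        ("relay -> gateway",           "Multicast Data (encapsulated stream)"),
    ]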

Mcast CC isn't even in the picture for the above issues. But I'm glad it's
of concern now. Many examples of application-level solutions exist. It's far
from unsolved.

> If your response is meant to suggest that the informational draft should
> expand scope and talk more generally about issues in multicast congestion
> control instead of the specific overjoining issue, that’s an interesting
> idea, but it seems like that would have more unknowns and I’d expect a
> harder time with consensus than I’d get from just explaining a specific and
> demonstrable issue. It also might be a more useful document if done right,
> but I think I’d rather put that off to a separate document.
>

I would prefer if both issues were addressed in one document, even if you
focus on over-joining (which, again, I believe is only possible with a
broken or compromised host). Trying to think like a bot-net operator here:
if I had access to a large number of compromised hosts with the intent of
dishing out evil, using those hosts to hurt themselves would not even be on
my list.


> 3. imagined problems are different from real problems:
> I understand and agree that there might be issues which will not be
> addressed by attempts to solve this problem.
>
> However, having set this one up in lab, I’m confident that this one is not
> hard. I am skeptical of claims that nobody will do it after there’s enough
> live traffic around. And I think a proof of concept attack is enough to
> start with. It’s usually not considered a good practice to wait until it’s
> taking down services before starting to work on it, once you know of the
> existence of an issue (even though it turns out that way sometimes).
>
> And maybe I shouldn’t emphasize malicious receivers to the point it might
> suggest that’s the only problem, because a service with enough channels can
> get this by accident pretty easily, I think. It’s not like the techniques
> for picking the right bitrate are well-defined either, and when you leave
> it up to the application, then when the application updates with a buggy
> lookup table on flow bandwidths, you’ll probably see it take things down
> from time to time (including the competing traffic from somebody else, if
> that exists).
>
> I was thinking that a focus on malicious receivers highlights a point
> about needing to avoid fully trusting the receivers, but maybe it’s a fair
> point that this is not an attack you see in practice with unicast, and one
> of the possible explanations is that it’s useless for attackers. I’m less
> interested in debating that question than in making sure the receiving
> networks have a good way to handle it in general.
>

GS - Sure, but the difference is real. I'm not dismissing your concern, but
I would like to scope the concern so we don't just chase anyone's new
imagined problem before we can get anything out the door. Giving each issue
a severity/priority helps to keep each in context. Otherwise all conceived
issues look equally catastrophic and probable, which is not the case.


> One of the things that bothers me most is that I can’t fix it from the
> sending side, and I’m worried that when a receiving network sees this
> problem, the default reaction will be to just turn multicast back off for
> another 20 years. I want to have a better suggestion to give them that
> mostly keeps traffic flowing for the good receivers and doesn’t make the
> network regret turning it on. (And I do think it’s a “when”, not an “if”,
> conditional on deploying at least one real service with enough channels.)
>

Mcast traffic only traverses a link to a next-hop that requested it. So I
suppose your concern is that IF a host is DOSing itself, you can't detect
this at the source. But this is the case with many other unicast failures
as well. TCP gives us that control in the stack. But how do you prevent a
host from opening a bazillion TCP sockets? And that's just for self-DOSing.
When a host is used in a DDOS attack, the detection is in the network and
the mitigation is often with route updates to blackhole traffic. Pretty
heavy-handed. But what we're talking about here is a tiny fraction of the
damage a unicast DDOS attack can inflict. Again, if I had a hammer with ill
intent I wouldn't be looking to hit myself in the head with it. :-)
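
To put rough numbers on the two per-receiver formulas Jake gives further
down (A*uplink_capacity for unicast, min(N,S)*highest_sending_rate for
mcast), here's a back-of-the-envelope sketch; every number in it is an
illustrative assumption, not a measurement:

# Back-of-the-envelope comparison of per-receiver attack capacity, using
# the formulas quoted below; all of the inputs are assumed values.

# Unicast: sustained induced traffic ~= A * uplink_capacity
A = 10                        # assumed amplification (bits induced per bit sent)
uplink_capacity = 10e6        # assumed 10 Mbps subscriber uplink
unicast_bps = A * uplink_capacity                 # -> 100 Mbps per receiver

# Multicast: sustained induced traffic ~= min(N, S) * highest_sending_rate
N = 20                        # assumed per-receiver join limit
S = 100                       # assumed high-rate channels reachable on the net
highest_sending_rate = 10e6   # assumed 10 Mbps per video channel
multicast_bps = min(N, S) * highest_sending_rate  # -> 200 Mbps per receiver

# Note that neither factor in the mcast product depends on the receiver's
# own uplink, which is the asymmetry the quoted text is pointing at.
print(unicast_bps, multicast_bps)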


> I also think that as an early source of traffic, we’ll generally get
> blamed, since that’s the normal answer with unicast when it turns into a
> problem.
>

GS - This is a real concern, and I've seen it over and over. Blame the new
guy.


> It might be different if they were begging us to run multicast to save
> their networks, but what we’re seeing is it’s more like we want to send
> multicast, and we need to work to convince them it’s possibly worthwhile to
> accept multicast traffic from us.
>

GS - Yup. And this has been an issue all along. It's the top of the tree
that benefits most from mcast, but it won't work as a service unless the
leaves all turn it on, for no revenue.


> 4. unicast experience proves fear of future congestion isn’t the problem:
> I’m surprised at this claim. Unicast experience has shown the dangers of
> deploying systems that don’t solve congestion, and so now everybody knows
> it’s a bad idea to turn on a service that ignores the issue.
>
> You seem to be suggesting that we might as well wait until we’ve had a few
> multicast congestion collapses before we bother thinking about it, so that
> we understand the actual issues better?
>

GS - I don't think Lenny meant 'congestion collapse' when he said
congestion. A successful service should expect to experience congestion and
have solutions to adapt. TCP is there for unicast. Application/service
solutions are there for multicast. Mother's Day events will happen
regardless of the traffic mix, and your network should be engineered to
address these events, as should well-behaved applications.


>
> 5. conclusion:
> I’m arguing that this topic makes a suitable informational RFC because
> although the overjoining concept is mentioned in passing in a few
> documents, I don’t have a good reference I can point to when proposing
> experiments on a few possible solutions, or explaining the issue to people
> who don’t know IGMP/MLD and PIM already.
>

GS - I support this.


> I don’t need a solution rolled out everywhere, but I do think I need to be
> able to point to reasons to believe that solutions (short of “turn off
> multicast”) will be available by the time it becomes a problem, and I
> thought a published problem statement would be a good first step.
>
> I’m not sure I understand the argument against this position. Is it saying
> that writing this up as an issue would make multicast deployment harder or
> less likely to move forward? Or that the exercise is pointless? Or that it
> doesn’t belong under mboned?
>

GS - My opinion, as stated above; be very clear as to the severity of each
case.


> From my side, I think having such a document would give me something I can
> point to as evidence of progress when people bring it up as a problem that
> needs solving, which they have done.
>
> We’re looking at probably a couple of years before we’re ready to start
> using our multicast senders as widely as possible, and I’m trying to have
> some workarounds documented and runnable in about that same couple of years
> that address the known problems that don’t yet have a known solution. At
> this point overjoining is the only such problem I have (though some known
> solutions are not trivial, hence the couple of years).
>
> From prior discussions, I’m expecting pushback on a wider rollout if I
> can’t either get traction on some kind of solution or a compelling response
> as to why it can’t turn into a real problem if there’s enough traffic. This
> is just an attempt to get things moving to get out in front of that.
>

GS - I applaud your work, and your participation here. Let's get the draft
going and we as a WG (and/or list of volunteer contributors) can ensure the
right information is addressed and conveyed in the proper scale.

Thanks,
Greg


>
> Regards,
> Jake
>
>
> On 12/6/16, 9:01 AM, "Leonard Giuliano" <lenny@juniper.net> wrote:
>
>     Lots there, but I do not find these arguments particularly
>     convincing, and do not see the "overjoining" threat as unique to
>     mcast, but rather as just one in a general category of data plane
>     risks inherent to IP (or at least non-TCP).  That is, if you replace
>     "multicast receivers" with "unicast senders," just about all of these
>     arguments apply to some degree.  Or at least they apply in the
>     opposite direction- a mcast receiver can saturate his own links with
>     traffic magnification by overjoining multiple sources just as an
>     attacked unicast destination can have its links saturated with
>     traffic magnification of multiple unicast senders.
>
>     And it is reassuring to know that all that stands in the way of
>     unicast non-TCP attacks across the Internet is a recommendation of
>     unsuitability in an RFC.  How quaint.
>
>     I also find it difficult to believe that the threat of too much mcast
>     traffic on the Internet is somehow a barrier to deploying mcast in
>     the first place.  But if it is, I would love to have this problem to
>     solve!  After spending the better part of the past 2 decades dealing
>     with the problem of not enough mcast traffic on the Internet, I would
>     welcome the opportunity to deal with an Internet with too much mcast
>     traffic.
>
>     There is also a great risk of overanticipating narrow risks that
>     exist in theory and not yet in practice: premature optimization.  A
>     while back (maybe +10 years ago), I remember a great line that Dave
>     Meyer had about fears of mcast state explosion.  To paraphrase, he
>     said something like "For years we worried so much about multicast
>     state explosion that we built protocols that are so complex that no
>     one uses them.  So we solved the multicast state explosion problem by
>     not having anyone use multicast.  From now on, I'd like to see a
>     multicast state problem first before worrying how to solve it."  The
>     mid-90's fear of state when memory was expensive and scarce has given
>     us suboptimal (and unnecessary) mechanisms in PIM-SM that we remain
>     stuck with to this day.
>
>     Does this mean we shouldn't worry about data plane congestion caused
>     by mcast on the Internet?  No, but if the Internet ever does have
>     problems with too much mcast, I suspect those problems will be vastly
>     different than what we can imagine today.  I am also confident we'll
>     be able to fix them more swiftly and effectively ex post than through
>     guesswork ex ante.  In the meantime, it's just easier to send streams
>     with different bitrates on different groups and let the receiver join
>     the one which suits himself the best.  As for malicious attackers,
>     they are probably too smart to be completely thwarted by solutions we
>     come up with in anticipation of their maliciousness.  Solutions which
>     probably open the door to other DoS vectors while adding the
>     collateral damage to the well-behaved of far more network complexity
>     to a delivery mechanism that is way too complex already and that we
>     are trying to simplify.  And again, I just don't buy the argument
>     that fear of too much mcast traffic in the future prevents deploying
>     it in the first place.  Experience with unicast has shown otherwise.
>
>     Simply put, we are far better at solving actual problems that do
>     occur in the real world than anticipating and prematurely solving the
>     ones we can currently conceive of possibly happening that have never
>     occurred.
>
>
>     -Lenny
>
>     On Fri, 2 Dec 2016, Holland, Jake wrote:
>
>     |
>     | On 11/16/16, 3:36 PM, "Greg Shepherd" <gjshep@gmail.com> wrote:
>     |
>     | Inline GS:
>     |
>     | On Tue, Nov 15, 2016 at 7:41 PM, Holland, Jake <jholland@akamai.com>
> wrote:
>     |
>     |       However, I also believe the problem it’s trying to address is
> a clear and present danger that is a significant barrier to a multicast
> backbone deployment, and even to interdomain multicast
>     |       deployment in general.
>     |
>     | GS - How is this different from Unicast? An ill-behaved unicast
> agent can open any number of TCP sessions to DOS itself, and/or any of the
> links between itself and the source. The difference being it's UDP and
>     | won't back off on its own. But a full pipe is a full pipe.
>     |
>     |
>     |
>     | JH - That’s a great question. So much so that I think the answers
> should become part of the informational doc. So you can consider this
> answer a first draft of a section I’ll plan to add (and comment is
>     | invited accordingly). And I apologize for its length.
>     |
>     |
>     |
>     |
>     |
>     | Differences between Unicast and Multicast, in the context of
> congestion attacks against the receiver’s network:
>     |
>     |
>     |
>     | 1. Multicast congestion control is specifically called out in BCP 41
> security section as a particular concern:
>     |
>     | https://tools.ietf.org/html/rfc2914#section-10
>     |
>     |    For example, individual congestion control mechanisms should be as
>     |
>     |    robust as possible to the attempts of individual end-nodes to
> subvert
>     |
>     |    end-to-end congestion control [SCWA99].  This is a particular
> concern
>     |
>     |    in multicast congestion control, because of the far-reaching
>     |
>     |    distribution of the traffic and the greater opportunities for
>     |
>     |    individual receivers to fail to report congestion.
>     |
>     |
>     |
>     | Since this still represents the consensus of the IETF, good evidence
> (and maybe a standards action) would be required before considering it safe
> to assume otherwise.
>     |
>     |
>     |
>     |
>     |
>     | 2. attack capacity per receiver:
>     |
>     | This item expands on the “greater opportunities for individual
> receivers to fail to report congestion” point from the prior reference.
>     |
>     |
>     |
>     | 2.1. Unicast:
>     |
>     | In both unicast and multicast, a single attacking endpoint’s
> capability to oversubscribe a network is limited to the traffic a receiver
> can induce a set of senders to send to him. However, there are some
>     | important differences between unicast and multicast in the nature of
> the factors that constitute that limit.
>     |
>     |
>     |
>     | With unicast, publicly reachable senders are almost universally
> using TCP or another protocol with a congestion control mechanism that
> requires continued acknowledgements from the receiver in order to continue
>     | sending (such as QUIC or SCTP), and which backs off quickly in their
> absence. Although some systems and protocol configurations do exist which
> are non-responsive (such as some RTP profiles, as described in
>     | Section 10 of [RFC 3550]), BCP 145 section 3.1 recommends they are
> only suitable for deployment in certain restricted environments, and NOT on
> the general internet.
>     |
>     |
>     |
>     | As a result, the sustained traffic that can be induced by a single
> attacking unicast receiver is usually limited by the ACKs (or similar UDP
> feedback) that the receiver can transmit which the sender will
>     | accept. The sustained attack capacity against the receiving network
> per receiver can be expressed by A*uplink_capacity for some value A,
> representing the maximum number of bits of sending traffic that can be
>     | induced per bit of receiving traffic and uplink_capacity,
> representing the number of outbound bits a receiver can send. (Note also
> that according to this model, the common practice of asymmetric bandwidth
>     | limits serves as a mitigating factor against this kind of attack.)
>     |
>     |
>     |
>     | 2.2. Multicast:
>     |
>     | In multicast, the key difference is that a receiver does not need to
> send any feedback in order for the sender to continue sending traffic into
> the receiver’s network. Although multicast transport systems
>     | using a feedback-based congestion control would require at least 1
> receiver to be responding with proper ACK-like feedback, the sender would
> not respond to receivers which do not send feedback, even though
>     | traffic sent by that sender might enter those receivers’ networks.
> For transport systems using a receiver-driven congestion control, there is
> no feedback to the sender at all, and any receiver can subscribe to
>     | any available traffic.
>     |
>     |
>     |
>     | As a result, the sustained traffic that can be induced by a single
> attacking multicast receiver is limited by the aggregate total bandwidth of
> all the groups that the receiver can join successfully at the same
>     | time. By contrast with unicast, this is not limited to some multiple
> of the upload speed, or necessarily by any aspect of the receiving network.
> A receiver on a 56kbps link could issue a multicast join that
>     | reaches a sender sending at 10gbps, as long as such a group is
> somewhere reachable on the internet. The sustained attack capacity against
> the receiving network per receiver can be expressed by
>     | min(N,S)*highest_sending_rate, where N is the maximum number of
> groups a receiver can join, and S is the total number of senders on the
> internet sending at highest_sending_rate.
>     |
>     |
>     |
>     | Although today the number S is low, successful deployment of a
> multicast backbone would presumably result in an increase to that number,
> and absent any mitigating effects (such as for instance new IETF
>     | guidance about per-channel bandwidth limits), highest_sending_rate
> would tend to depend linearly on the capacity of high-speed networks and/or
> the bit-rate of high-bandwidth video.
>     |
>     |
>     |
>     | 2.3. Comparison:
>     |
>     | The existence of DDoS attacks over unicast-only networks proves that
> congestion-based attacks are viable for unicast as well, but executing such
> an attack requires an attacker to take over many receiving
>     | endpoints (and/or to find attack multipliers that have not been
> mitigated).
>     |
>     |
>     |
>     | In multicast, the barrier is the (current) lack of sources and the
> lack of a multicast network available for receivers to reach those senders,
> but there is no requirement to control many receivers in order
>     | to oversubscribe a receiving network. If enough high-bandwidth
> sending channels were deployed (even with senders that properly implement
> one of the congestion control techniques from
>     | https://tools.ietf.org/html/draft-ietf-tsvwg-rfc5405bis-19#section-4.1.1),
> congestion-based attacks would be available to
> attackers when they control much fewer receiving nodes than would be
> necessary with
>     | unicast.
>     |
>     |
>     |
>     | Limiting the total number of multicast channels that a receiver can
> join would mitigate the problem by limiting N (requiring control of more
> receivers, given a high enough S), but does not limit
>     | highest_sending_rate.
>     |
>     |
>     |
>     | It is also (probably) less useful for an attacker to disrupt himself
> (or his own botnet node) just to hit the network paths he shares with
> others, and the prevalence of attacks in the wild against a
>     | server-side service has historically been much higher compared to
> attacks against the receiving networks, even though such attacks are
> possible to a degree in unicast as well.
>     |
>     |
>     |
>     | However, it's not clear whether we haven't seen such attacks because
> it’s entirely useless for attackers, or whether it’s because attackers have
> a better option when in possession of a botnet sufficient to
>     | execute such an attack. If it’s the latter, then wide deployment of
> multicast connectivity and services could reasonably hand attackers a new
> and cheaper option for disrupting networks which, while not a
>     | preferred option in a unicast-only world, might become an option
> that’s more easily available to attackers in a multicast-connected world
> without deployed countermeasures.
>     |
>     |
>     |
>     | Today we know more about the nature and existence of malicious end
> users than was known in earlier days of unicast IP deployment, and it has
> become accordingly more difficult to argue that an attackable scarce
>     | resource would NOT be attacked. Where such resources exist
> un-attacked today, the default explanation is that all capable adversaries’
> resources are better spent elsewhere.
>     |
>     |
>     |
>     |
>     |
>     | 3. Deployment status of countermeasures:
>     |
>     | There currently exists a body of research and deployed services
> regarding detection and mitigation of unicast DDOS attacks [1.
> https://en.wikipedia.org/wiki/Prolexic_Technologies#DDoS_mitigation, 2.
>     | http://projects.laas.fr/METROSEC/Security_and_DoS.pdf, 3. BCP 38,
> 4. RFC 4987, more good refs?].
>     |
>     |
>     |
>     | It is true that there have been few observed multicast congestion
> attacks in the wild so far. However, multicast has seen less deployment and
> would not today be received in most end user networks, and so it’s
>     | unsurprising that we haven’t seen the same prevalence of attacks.
>     |
>     |
>     |
>     | Research on multicast congestion attacks and countermeasures are
> accordingly less well-developed and less well-deployed. However, they are
> still relevant and necessary for successful deployment of a multicast
>     | backbone and services that rely on it.
>     |
>     |
>     |
>     |
>     |
>     | 4. Traffic duration after receiver black hole:
>     |
>     | In multicast, if a receiver vanishes from the network (closed
> laptop, etc.), his igmp/mld group membership would linger by default for up
> to 260 seconds (https://tools.ietf.org/html/rfc3376#section-8:
>     | Robustness (2) * Query Interval (125s) + Query Response Interval
> (10s)).
>     |
>     |
>     |
>     | By contrast, the longest delays before traffic stops or slows
> substantially for unicast congestion control systems would typically be
> well under 2 seconds.
>     |
>     |
>     |
>     |
>     |
>     | Maybe there are other differences, I’m not sure. That’s the first
> ones that occur to me. If anybody has more to add, please do. :)
>     |
>     |
>     |
>     |
>     |
>
>