Re: Blocking packets from suspicious ports

Willy Tarreau <w@1wt.eu> Tue, 03 May 2022 03:23 UTC

Date: Tue, 03 May 2022 05:23:35 +0200
From: Willy Tarreau <w@1wt.eu>
To: Paul Vixie <paul=40redbarn.org@dmarc.ietf.org>
Cc: Christian Huitema <huitema@huitema.net>, IETF QUIC WG <quic@ietf.org>
Subject: Re: Blocking packets from suspicious ports
Message-ID: <20220503032335.GB20878@1wt.eu>
References: <6830cf87-e1b6-14bb-7b10-9341fdb6d941@huitema.net> <1b686f1e-912d-5c02-cf5f-a8afbdd924bb@redbarn.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1b686f1e-912d-5c02-cf5f-a8afbdd924bb@redbarn.org>
User-Agent: Mutt/1.10.1 (2018-07-13)
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/15DeR0ynfM3bqWoG_OvBWCNrPk0>

On Mon, May 02, 2022 at 02:27:15PM -0700, Paul Vixie wrote:
> a partially managed (not fully transparent) network (public or private) can
> be expected to implement port-based inbound UDP blocking of the kind you
> describe. the set of ports will be dynamic, updated during attacks. not
> something to be hardcoded or "set and forget".

I think that's the most important aspect, and having participated many
times in fighting DDoS attacks I agree with this. There are obvious
common sanity rules that are often applied based on what is
possible/available (a rough sketch of the resulting port policy follows
the list):
  - have a (set of) local DNS and NTP server(s) that are the only ones
    expected to receive UDP from privileged ports (< 1024) and have
    all other servers use DNS and NTP from there
  - have the edge components (routers, L3 switches) block all inbound
    UDP from privileged and well-known source ports to regular servers;
    this list may evolve during attacks
  - have all servers locally refine their blocking list at the kernel
    level (netfilter, BPF, PF, etc.), preventing cross-protocol
    communication (53->123, 123->53, etc.)
  - when possible, refine filtering by source/destination ports based
    on the expected protocol (e.g. verify that traffic to port 53 looks
    like a DNS request, from port 53 looks like a DNS response, from
    123 looks like an NTP response, with special cases for 53->53 or
    123->123)
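
To illustrate the last two points, here is a rough sketch in C of the
kind of source/destination-port policy such edge or kernel rules would
encode. The port numbers, the to_resolver flag and the allow/deny
choices are only examples, not a recommendation:

  /* Rough sketch only: the port policy described above, written as a
   * plain C predicate for readability. In practice this would be
   * expressed as netfilter/BPF/PF rules, and the port set must remain
   * configurable.
   */
  #include <stdbool.h>
  #include <stdint.h>

  /* decide whether an inbound-from-outside UDP packet should pass;
   * to_resolver is true when the destination is one of the designated
   * local DNS/NTP servers */
  bool allow_inbound_udp(uint16_t sport, uint16_t dport, bool to_resolver)
  {
      /* same-protocol exchanges (53->53, 123->123) are only expected
       * by the designated local DNS/NTP servers */
      if (sport == 53 || sport == 123)
          return to_resolver && sport == dport;

      /* other privileged or well-known listening-service source ports
       * are never expected on regular servers (example set only) */
      if (sport < 1024 || sport == 1900 || sport == 11211)
          return false;

      return true;
  }

The point is mostly that the whole decision fits in a handful of
comparisons, so it can live very early in the path.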

If the wrong packet is delivered to the userspace application, you lose
anyway, as most of the harm (the cost of a full network stack traversal)
is already done by the time the packet reaches it. Worse, once a packet
passes through, it's often trivial for the attacker to repeat it and
flood the application. A good rule of thumb is to count on 1 million
packets per second delivered to the application, per CPU core. Of course
it will vary a lot between systems and the filtering in place, but the
order of magnitude is there. This means that a 100G NIC running at line
rate with minimum-size packets can keep about 148 cores busy just
delivering bad packets to be dropped by the application.
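
For reference, the 148 figure simply comes from the line rate of a 100G
link with minimum-size Ethernet frames (64 bytes plus 20 bytes of
preamble and inter-frame gap), divided by the 1 Mpps/core rule of thumb
above:

  /* back-of-the-envelope check of the "148 cores" figure above */
  #include <stdio.h>

  int main(void)
  {
      double link_bps   = 100e9;       /* 100G NIC */
      double wire_bytes = 64 + 20;     /* min frame + preamble/IFG */
      double pps        = link_bps / (wire_bytes * 8);  /* ~148.8 Mpps */
      double per_core   = 1e6;         /* ~1 Mpps per core */

      printf("%.1f Mpps -> ~%ld cores\n", pps / 1e6, (long)(pps / per_core));
      return 0;
  }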

I would suggest that port filtering at the application layer be used
only to decide whether or not to respond (i.e. should I send a retry
for a packet that parses correctly). For example, packets coming from
suspicious but otherwise valid source ports could be double-checked
with a retry in order to preserve local resources. But that will not
save CPU anyway.
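
As a rough illustration of what "only used to decide whether to
respond" could look like in a server, here is one possible shape; the
port set and the function names are made up for the example:

  #include <stdbool.h>
  #include <stdint.h>

  enum action { PROCESS, SEND_RETRY };

  /* example set only: privileged ports plus a couple of well-known
   * amplification sources; it must stay configurable in practice */
  static bool suspicious_src_port(uint16_t sport)
  {
      return sport < 1024 || sport == 1900 || sport == 11211;
  }

  /* called only for Initial packets that already parsed correctly,
   * so this protects state and amplification budget, not CPU */
  enum action classify_initial(uint16_t sport, bool has_valid_token)
  {
      if (has_valid_token)
          return PROCESS;       /* address already validated */
      if (suspicious_src_port(sport))
          return SEND_RETRY;    /* double-check before committing resources */
      return PROCESS;
  }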

Maintaining a public list of well-known suspicious ports can be both
helpful and dangerous. It's helpful in that it shows implementers that
some ranges should never appear as source ports since they're normally
reserved for listening services, that some numbers are well known and
widely deployed, and that some less common ones can appear anywhere,
indicating that a single range is not sufficient. This should help
design a flexible filtering mechanism. But it can also be dangerous if
applied as-is: such a list cannot remain static, as new entries may have
to be added within a few minutes to hours, each addition will cause
extra breakage, and some older entries will likely be dropped once the
corresponding service stops being widely abused. I.e. the list in
question likely needs to be adjustable by configuration in the field and
not hard-coded.
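
One possible way (among many) to keep such a list adjustable in the
field rather than hard-coded is a plain bitmap reloaded from a
configuration file; the file format and helper names below are only an
example:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  static uint8_t blocked[65536 / 8];

  static void block_port(unsigned p) { blocked[p >> 3] |= 1U << (p & 7); }
  int port_is_blocked(unsigned p)    { return blocked[p >> 3] & (1U << (p & 7)); }

  /* reload the list, e.g. from a SIGHUP handler or an admin socket;
   * the file contains one port ("11211") or range ("0-1023") per line */
  int reload_blocklist(const char *path)
  {
      FILE *f = fopen(path, "r");
      unsigned lo, hi;
      char line[64];

      if (!f)
          return -1;
      memset(blocked, 0, sizeof(blocked));
      while (fgets(line, sizeof(line), f)) {
          if (sscanf(line, "%u-%u", &lo, &hi) == 2 && lo <= hi && hi < 65536) {
              while (lo <= hi)
                  block_port(lo++);
          } else if (sscanf(line, "%u", &lo) == 1 && lo < 65536) {
              block_port(lo);
          }
      }
      fclose(f);
      return 0;
  }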

Other approaches could work fairly well, such as keeping a rate counter
of incoming packets per source port (e.g. unparsable or Initial packets)
which, above a moderate value, would trigger a retry, and above a
critical value, could be used to decide to block the port (possibly
earlier in the chain). This remains reasonably cheap to implement,
though it may end up causing some flapping: the rate will fall once the
port is blocked upstream, which may lead to it being reopened before the
attack is finished.
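
A minimal sketch of such a counter, assuming coarse one-second buckets
and purely illustrative thresholds:

  #include <stdint.h>
  #include <time.h>

  #define RATE_RETRY   1000   /* pkt/s: start forcing address validation */
  #define RATE_BLOCK  50000   /* pkt/s: candidate for blocking upstream */

  enum verdict { OK, FORCE_RETRY, BLOCK_PORT };

  /* one counter per source port, reset every second (~1 MB of state) */
  static struct { uint32_t count; time_t window; } stats[65536];

  enum verdict account_packet(uint16_t sport)
  {
      time_t now = time(NULL);

      if (stats[sport].window != now) {
          stats[sport].window = now;
          stats[sport].count = 0;
      }
      uint32_t c = ++stats[sport].count;

      if (c >= RATE_BLOCK)
          return BLOCK_PORT;   /* worth pushing a block upstream */
      if (c >= RATE_RETRY)
          return FORCE_RETRY;  /* answer with a retry only */
      return OK;
  }

Some hysteresis (e.g. keeping an upstream block in place for a minimum
time) would be needed to limit the flapping mentioned above.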

I think we'll discover new fun things over time and will learn new
attacks and workarounds as deployments grow.

Willy