Re: [Shutup] [ietf-smtp] Levels of proposals

Chris Lewis <> Fri, 04 December 2015 13:27 UTC

Return-Path: <>
Received: from localhost ( []) by (Postfix) with ESMTP id 290C61B317E; Fri, 4 Dec 2015 05:27:00 -0800 (PST)
X-Virus-Scanned: amavisd-new at
X-Spam-Flag: NO
X-Spam-Score: 1.364
X-Spam-Level: *
X-Spam-Status: No, score=1.364 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, FH_RELAY_NODNS=1.451, MISSING_HEADERS=1.021, RDNS_NONE=0.793, SPF_PASS=-0.001] autolearn=no
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id WBmFdDESbOg7; Fri, 4 Dec 2015 05:26:59 -0800 (PST)
Received: from (unknown []) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by (Postfix) with ESMTPS id 1167E1B317D; Fri, 4 Dec 2015 05:26:58 -0800 (PST)
Received: from [] ( []) (authenticated bits=0) by (8.14.4/8.14.4/Debian-4.1ubuntu1) with ESMTP id tB4DQveQ023726 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NOT); Fri, 4 Dec 2015 08:26:57 -0500
References: <> <> <> <> <> <>
From: Chris Lewis <>
X-Enigmail-Draft-Status: N1110
Message-ID: <>
Date: Fri, 4 Dec 2015 08:26:57 -0500
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv: Gecko/20090812 Thunderbird/ Mnenhy/
MIME-Version: 1.0
In-Reply-To: <>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Archived-At: <>
Subject: Re: [Shutup] [ietf-smtp] Levels of proposals
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: SMTP Headers Unhealthy To User Privacy <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>
X-List-Received-Date: Fri, 04 Dec 2015 13:27:00 -0000

On 12/03/2015 10:48 PM, Russ Allbery wrote:
> Ted Lemon <> writes:
>> I am still a bit puzzled: how does increasing the number of attackers
>> help to bypass the throttling mechanism?  Why isn't the throttle per
>> id/password pair, rather than per ip-address/password/id triple?

> So what the attacker does instead is use their botnet of a million
> compromised personal computers (that's sadly not really an exaggeration),
> and has each one of those hosts try 100 combinations and then disconnect.
> This is now below (or at least near) the threshold for an actual customer
> typoing their address or password or something, and rate limiting becomes
> fairly useless as a defense.
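The evasion Russ describes is easy to demonstrate. Here is a minimal sketch (all names and thresholds are hypothetical) of a naive fixed-window counter keyed either per source IP or per target account, showing why the per-IP form misses a distributed low-rate attack:

```python
from collections import defaultdict

class RateLimiter:
    """Naive fixed-window counter; purely illustrative."""
    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, key):
        """Count an attempt; return False once the key exceeds its limit."""
        self.counts[key] += 1
        return self.counts[key] <= self.limit

per_ip = RateLimiter(limit=100)        # one bucket per source address
per_account = RateLimiter(limit=100)   # one bucket per target account

# A botnet spreads 10,000 guesses at one account across 1,000 hosts,
# 10 guesses each -- comfortably under any plausible per-IP threshold.
blocked_by_ip = blocked_by_account = 0
for bot in range(1000):
    ip = "10.%d.%d.%d" % (bot // 65536, (bot // 256) % 256, bot % 256)
    for _ in range(10):
        if not per_ip.allow(ip):
            blocked_by_ip += 1
        if not per_account.allow("victim"):
            blocked_by_account += 1

# The per-IP limiter blocks nothing (10 tries per IP is far below 100);
# the per-account limiter stops everything past its threshold.
```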

Hi Russ, it's been a long time since we last were in the same place 
(physically or otherwise ;-)

One of the most depressing indicators I've seen is watching thousands of 
different IPs each attempt only one or two submissions and never try 
again, EVEN THOUGH both submissions succeeded and none failed.

Secondly, we have full instrumentation, and the passwords they're trying 
are pretty obviously not from "common password" dictionaries or 
brute-forcing.

Clearly they're not lacking in compromised accounts.
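A hypothetical detector matching that observation: since per-IP volume is tiny and nothing fails, count how many *distinct* IPs successfully log in to each account instead of counting attempts. (This is an illustrative heuristic, not the instrumentation described above; the function name, threshold, and event shape are all assumptions.)

```python
from collections import defaultdict

def flag_stolen_credentials(events, max_ips=5):
    """events: iterable of (account, source_ip, succeeded) tuples.

    Hypothetical heuristic: when many distinct IPs all log in to one
    account successfully, the credentials were likely compromised,
    even if each individual IP stays far below any rate limit.
    """
    ips = defaultdict(set)
    for account, source_ip, succeeded in events:
        if succeeded:
            ips[account].add(source_ip)
    return {acct for acct, seen in ips.items() if len(seen) > max_ips}

# Eight different hosts, one successful submission each, zero failures:
events = [("alice", "10.0.0.%d" % n, True) for n in range(8)]
```

Calling `flag_stolen_credentials(events)` flags "alice" even though no single IP made more than one attempt and every attempt succeeded.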

So, the problem is much larger than "brute force or common password" 
attacks.

>> Secondarily, if distributed processing makes throttling per id/password
>> pair difficult, why is it hard to do the botnet IP address matching at
>> the authentication point?  This seems like it would avoid a _lot_ of
>> extra processing.

> Chris addressed this quite well in his message.  I don't really have much
> to add to what he already said.

Yeah, it's obvious that the authentication point is the best place to do 
it, but the reality is that most of them don't (we're working on that).

> The TLDR in case something about that message was confusing is that only
> the authentication point can block the IP addresses at the authentication
> point, but you can analyze Received headers to do a bunch of other things,
> such as determine compromised botnet IP addresses that someone else
> *didn't* block but that you *do* want to block for *your* service.  It
> improves the scale and flexibility of what you can do by basically giving
> you more threat intelligence.

You have to do it for them so that _you_ can block what they should have 
blocked and didn't.
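The Received-header analysis Russ describes can be sketched roughly as below. This is only a sketch: real Received headers vary enormously, and a production parser needs far more care (IPv6 literals, hostnames without bracketed addresses, and forged trace headers injected below your own trusted hops).

```python
import email
import re

# Bracketed IPv4 literal as it appears in Received trace headers,
# e.g. "from host ([192.0.2.1]) by relay ...".
IP_LITERAL = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def received_ips(raw_message):
    """Extract candidate client IPs from Received headers, topmost
    (most recent hop) first, to feed a local blocklist or shared
    reputation data."""
    msg = email.message_from_string(raw_message)
    found = []
    for header in msg.get_all("Received", []):
        match = IP_LITERAL.search(header)
        if match:
            found.append(match.group(1))
    return found
```

Only the Received headers added by your own trusted relays are reliable; anything below them may have been forged by the submitting client, so a real analyzer walks the trace from the top and stops at the first untrusted hop.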

Furthermore, intelligent botnet operators will aim their cannons away 
from the sites that are successful at MSA-blocking and toward the ones 
that aren't.