Re: [Asrg] Countering Botnets to Reduce Spam

Chris Lewis <> Fri, 14 December 2012 05:44 UTC

Return-Path: <>
Received: from localhost (localhost []) by (Postfix) with ESMTP id 310A121F89DF for <>; Thu, 13 Dec 2012 21:44:27 -0800 (PST)
X-Virus-Scanned: amavisd-new at
X-Spam-Flag: NO
X-Spam-Score: -0.603
X-Spam-Status: No, score=-0.603 tagged_above=-999 required=5 tests=[AWL=0.445, BAYES_00=-2.599, FH_RELAY_NODNS=1.451, RDNS_NONE=0.1]
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id KskvVkFaqjOy for <>; Thu, 13 Dec 2012 21:44:26 -0800 (PST)
Received: from (unknown []) by (Postfix) with ESMTP id CB88121F89D0 for <>; Thu, 13 Dec 2012 21:44:25 -0800 (PST)
Received: from [] ( []) (authenticated bits=0) by (8.14.4/8.14.4/Debian-2ubuntu2) with ESMTP id qBE5iKl6022876 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NOT) for <>; Fri, 14 Dec 2012 00:44:21 -0500
Message-ID: <>
Date: Fri, 14 Dec 2012 00:44:20 -0500
From: Chris Lewis <>
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv: Gecko/20090812 Thunderbird/ Mnenhy/
MIME-Version: 1.0
References: <SNT002-W143FB9A867C92FA80D90E04C54E0@phx.gbl>, <>, <SNT002-W1393526B62C0940EF697B2C54E0@phx.gbl>, <>, <>, <>, <>, <>, <SNT002-W117523E9206C73F54784577C54D0@phx.gbl>
In-Reply-To: <SNT002-W117523E9206C73F54784577C54D0@phx.gbl>
X-Enigmail-Version: 1.4.6
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Subject: Re: [Asrg] Countering Botnets to Reduce Spam
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: Anti-Spam Research Group - IRTF <>
List-Id: Anti-Spam Research Group - IRTF <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>
X-List-Received-Date: Fri, 14 Dec 2012 05:44:27 -0000

On 12-12-13 11:49 PM, Adam Sobieski wrote:
> Internet Research Task Force,
> Anti-Spam Research Group,
> I have an idea to defend computers from botnets to reduce spam. On a
> computer security topic, what do you think about the idea of utilizing
> one or more P2P DHT's and the hashes of each file, or each important
> file, on computers? Based upon the hardware specifications, platform,
> compiler, and compiler version, the hashes of compiled item or
> downloaded binary items can be compared to the hashes of the files on
> other Linux servers. That is an example of how P2P technologies can
> enhance Linux servers.

Obviously, it can't be all files.  Otherwise, all computers would be
identical ;-)  Then you have to consider all the versions of the code.
Then the highly idiosyncratic mix of other software versions that may be
on the machine in unusual places.  Etc.

This is more-or-less a distributed version of Tripwire (which dates back
to the early 1990s, IIRC, and was covered in an early edition of Gene
Spafford's UNIX security book from O'Reilly), or, for somewhat newer
tooling, consider rkhunter & "rkhunter --propupd".
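The tripwire idea boils down to: hash a set of files once as a trusted baseline, then rehash later and diff. A minimal sketch in Python (the file set and baseline storage are placeholders, not how tripwire or rkhunter actually store things):

```python
import hashlib
import os

def sha256_of(path):
    """Hash a file in chunks so large binaries aren't read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(paths):
    """Record current hashes for a set of files (the 'rkhunter --propupd' step)."""
    return {p: sha256_of(p) for p in paths if os.path.isfile(p)}

def compare(baseline, current):
    """Report modified, missing, and new files relative to the baseline."""
    modified = [p for p in baseline if p in current and current[p] != baseline[p]]
    missing = [p for p in baseline if p not in current]
    new = [p for p in current if p not in baseline]
    return modified, missing, new
```

Note that the "new" bucket only exists if you inventory whole directories, not just known files - which is exactly where the malware-as-new-files problem below bites.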

I worked for a company in the mid 80's that did this in a
semi-distributed fashion.

While I'm not intimately familiar with all versions of Linux spamware,
you have the following considerations:

- As far as I am aware, Linux spam compromises relatively seldom involve
replacing existing programs.  The malware consists of entirely new files,
often in unusual places.  You're unlikely to have anything to compare
checksums against.  Then what?

- A lot of compromises involve changed config files.  What does a
checksum comparison of a config file to other machines mean?  Nothing.
You can tripwire them, but in busy multi-hosting environments, you'll
get flooded with false positives.

- A large class of compromises is based around programs you _can't_
find on disk.  Each spam run begins with: download the program, start
it, the program removes its own files, start spamming.  There's nothing
to checksum for more than a few seconds.
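One thing you can still look for in that case, on Linux at least, is a running process whose backing executable has been unlinked: /proc/PID/exe is a symlink whose target the kernel suffixes with " (deleted)" once the file is gone. A rough sketch (Linux-only, and you need enough privilege to read other users' /proc entries; a legitimate upgraded-in-place daemon will also show up, so hits are leads, not verdicts):

```python
import os

def find_deleted_executables(proc="/proc"):
    """List (pid, target) for processes whose executable was deleted.

    On Linux, /proc/<pid>/exe is a symlink; the kernel appends
    ' (deleted)' to its target once the file has been unlinked.
    Returns [] on systems without procfs.
    """
    suspects = []
    if not os.path.isdir(proc):
        return suspects
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue  # not a PID directory
        try:
            target = os.readlink(os.path.join(proc, entry, "exe"))
        except OSError:
            continue  # kernel thread, permission denied, or process exited
        if target.endswith(" (deleted)"):
            suspects.append((int(entry), target))
    return suspects
```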

- Many hosting environments can have multiple versions of the same code
(especially packages like WordPress or Joomla) operating simultaneously.
How does the checker know which version to compare checksums against?
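One way around the which-version problem would be to key the shared store by the content hash itself, so a file is "known" if its bytes match any released version of any package - path and version then don't matter. A toy sketch, with an in-memory dict standing in for the P2P DHT and invented example entries:

```python
import hashlib

# Stand-in for a P2P DHT: content hash -> (package, version) pairs that
# ship a file with exactly those bytes.  These entries are invented
# placeholders, not real release hashes.
KNOWN_GOOD = {
    hashlib.sha256(b"<?php /* wp-load 5.1 */").hexdigest(): [("wordpress", "5.1")],
    hashlib.sha256(b"<?php /* wp-load 5.2 */").hexdigest(): [("wordpress", "5.2")],
}

def classify(content, store=KNOWN_GOOD):
    """Look a file's bytes up by hash.

    Returns the (package, version) pairs that ship this exact file, or []
    for a file no known release ships - which is exactly the
    'entirely new file in an unusual place' case above.
    """
    return store.get(hashlib.sha256(content).hexdigest(), [])
```

The earlier objections still apply, though: an attacker's dropper simply never appears in the store, so an empty result means "unknown", not "malicious", and locally edited config files are all "unknown" by construction.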

Such techniques sound promising, but once you try to run one of them at
a large enough scale to do something useful, you find out it's a lot
harder than it looks, and not nearly as effective as you'd like.

I run rkhunter, mainly to see whether I can tell people to use it to
find compromises.  I keep planting infections on the machine (but don't
start them).  It hasn't found any of them...  Sigh.

[rkhunter has an explicit "rootkit finder" module in addition to its
tripwire capability.  I don't know how the rootkit finder works (they
don't say ;-) - I'm sure I could find out, but...), and it's not finding
the darkmailer and r57shell and ... tidbits I'm laying around as bait.
So...]

> Another security procedure, extending from that one, could be to remove
> the disks, the hard drives, from computers, periodically, and to scan
> the file systems and other disk sectors, using other computing devices,
> to obtain the hashes of each file and to then utilize some resource,
> e.g. a P2P DHT, to compare the hashes of those files to the hashes of
> the files on other computers.

That'd go over really well in large-scale production multi-hosting
environments.... ;-)

You don't have to go that far.  Boot from CD.  Or see how tripwire gets
around this.
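Part of how tripwire "gets around this" without pulling disks is protecting the baseline itself: keep the database on read-only media, or cryptographically seal it so an intruder on the box can't quietly regenerate it after tampering. A minimal sketch of the sealing idea using an HMAC, assuming the key lives off-host (this illustrates the concept, not tripwire's actual database format):

```python
import hashlib
import hmac

TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def seal(baseline_bytes, key):
    """Append an HMAC-SHA256 tag computed with an off-host key."""
    return baseline_bytes + hmac.new(key, baseline_bytes, hashlib.sha256).digest()

def verify(sealed, key):
    """Return the baseline if the tag checks out, else raise ValueError."""
    body, tag = sealed[:-TAG_LEN], sealed[-TAG_LEN:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("baseline has been tampered with")
    return body
```

An attacker who roots the machine can rewrite the baseline file, but without the key can't forge a valid tag - the checker just has to run from somewhere the attacker doesn't control, which is the hard part booting from CD solves.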