[Greylist-users] Some more data points
Evan Harris
eharris at puremagic.com
Tue Jul 1 12:50:17 PDT 2003
On Tue, 1 Jul 2003, Scott Nelson wrote:
> Results of my testing so far:
> Out of 1102 attempts, 515 succeeded, roughly 50%.
> Most of those successes seemed to come near the end of my trial.
Over what period? How many of those attempts were unique triplets? How
many of the successes were unique? How many destination accounts were
there? Is this using greylisting by itself, or were you also using
spamassassin and/or rbl lists? Does the succeeded number you quote count
only actual passed emails, or is this before other mail checks, like invalid
recipients?
> Either I've goofed up somehow, or some spammers have already
> adapted to greylisting.
> Has anyone else noticed a sudden increase?
You probably got spammed at the end by one spammer who was using some type
of real mailer that retried, which threw off your numbers. A thousand
attempts over about a week (I'm assuming the timeframe) isn't a very large
sample size.
> A couple of notes:
>
> In other tests I ran, there was a marked difference in success
> rates when tempfailing after RCPT rather than after DATA.
> Eyeballing my logs, I notice a lot of instant retries from different
> IPs after a failure, usually three times.
That's why I tend to favor reporting by unique triplets. That removes these
types of accounting errors, even though I didn't see that many of them.
Keep in mind that if you have several MX servers for your domain(s) and are
using the same db for all of them, you'll often see an increment to the
blocked count for each MX host, since many legit servers (and spammers) will
try all the MX hosts for a domain one after another when they receive a
tempfail from one.
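For illustration, here is a rough sketch (Python, with made-up field names --
this is not the actual relaydelay code or its schema) of what I mean by
reporting on unique triplets instead of raw attempts, so that the same
message hammering every MX host only gets counted once:

    # Hypothetical example: tally blocked/passed by unique
    # (ip, sender, recipient) triplet rather than by raw attempt,
    # so a message that is tempfailed and then retried against every
    # MX host sharing the db only increments the blocked count once.
    def summarize(log_entries):
        # log_entries: iterable of dicts with 'ip', 'sender',
        # 'recipient' and 'passed' keys (field names are assumptions).
        seen = set()
        blocked = passed = 0
        for entry in log_entries:
            triplet = (entry['ip'], entry['sender'], entry['recipient'])
            if triplet in seen:
                continue    # retry of a triplet we've already counted
            seen.add(triplet)
            if entry['passed']:
                passed += 1
            else:
                blocked += 1
        return blocked, passed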
> It occurs to me that an unscrupulous anti-spam company could improve
> their spam catching /percentages/ by spamming themselves,
That's entirely true. Or, they could "seed" their domain with fake
addresses on the web for spammers to harvest, and count those toward their
totals too.
> If I do any future testing, I plan to compare results against
> a control group: comparing the total amount of spam actually received
> at addresses that use whatever anti-spam technique to spam received
> at addresses that do not. It's more work, but I think it's necessary.
That's an excellent idea. Unfortunately, unless you have a very large test
site and a large control site, your comparisons will probably have a large
error: spamming by its nature fluctuates pretty widely, and you can't be
sure that both sites are on all the same spammer lists.
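To show what I mean, here is a toy version of that comparison (Python; the
per-address spam counts below are invented purely for illustration, not real
data):

    # Hypothetical sketch of the test-vs-control comparison.
    def spam_per_address(counts):
        # counts: list of spam messages received per address in the group
        return float(sum(counts)) / len(counts)

    greylisted = [3, 1, 0, 2]       # placeholder: spam per protected address
    control    = [41, 37, 52, 29]   # placeholder: spam per control address

    reduction = 1.0 - spam_per_address(greylisted) / spam_per_address(control)
    print("apparent reduction: %.0f%%" % (reduction * 100))

With only a handful of addresses per group, a single spammer hitting (or
missing) one of the groups can swing that percentage a long way, which is
the error I'm worried about.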
Evan