Re: Slicing and dicing

Cheryl Madson <> Fri, 12 September 1997 19:13 UTC

Received: (from majordom@localhost) by (8.8.2/8.8.2) id PAA09348 for ipsec-outgoing; Fri, 12 Sep 1997 15:13:15 -0400 (EDT)
From: Cheryl Madson <>
Message-Id: <>
Subject: Re: Slicing and dicing
To: Dan.McDonald@Eng.Sun.Com
Date: Fri, 12 Sep 1997 12:21:56 -0700
In-Reply-To: <> from "Dan McDonald" at Sep 12, 97 11:28:35 am
X-Mailer: ELM [version 2.4 PL25]
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Precedence: bulk

We've managed to come full circle. The initial proposal was to
fail the SA and kick ISAKMP to establish a new one. Then various
re-hashing strategies came about, so we wouldn't have to take on
the overhead of asking for a new SA. Then it was taking the next
"hunk" of keymat (either change your pointer offset by one byte or 8 
bytes and try again). Now back to failing the SA.
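The "next hunk of keymat" strategies mentioned above could be sketched roughly as follows (all names are hypothetical, not from any draft; the 1-byte vs. 8-byte `step` is exactly the offset choice the thread debates):

```python
def derive_key(keymat: bytes, is_weak, key_len: int = 8, step: int = 1):
    """Hypothetical sketch of the re-offset idea: carve a key out of
    KEYMAT, and if it is weak, slide the window forward by `step`
    bytes (1 or 8 were both proposed) and try again.

    Returns a non-weak key, or None when KEYMAT is exhausted -- at
    which point the SA would have to fail anyway.
    """
    offset = 0
    while offset + key_len <= len(keymat):
        candidate = keymat[offset:offset + key_len]
        if not is_weak(candidate):
            return candidate
        offset += step
    return None  # out of keymat: fail the SA / ask key mgmt for more
```

Even in this toy form, the algorithm-dependence the author objects to shows through: `key_len`, `step`, and the weakness predicate all vary per cipher.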

I also didn't catch *what* would be the definitive reference 
for listing weak/semi-weak/possibly-weak keys, since some folks 
claim that Schneier is in error.  Saying that we MUST check and 
reject weak keys only makes sense if we all use the same list.

If said weak key list were available in reasonably-obtainable 
documentation, I could be reconvinced to keep the weak key check. 
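For concreteness, a minimal check against the four DES weak keys as they are commonly listed (e.g. in FIPS 74 and Schneier's Applied Cryptography) might look like the sketch below. Note the thread's caveat applies: the definitive list is disputed, and a complete check would also cover the twelve semi-weak keys, which are omitted here.

```python
# The four commonly listed DES weak keys, written with odd parity.
# This is NOT offered as the definitive list the thread asks for.
DES_WEAK_KEYS = {
    bytes.fromhex("0101010101010101"),
    bytes.fromhex("FEFEFEFEFEFEFEFE"),
    bytes.fromhex("E0E0E0E0F1F1F1F1"),
    bytes.fromhex("1F1F1F1F0E0E0E0E"),
}

def is_des_weak_key(key: bytes) -> bool:
    """Return True if an 8-byte candidate DES key is on the weak list."""
    if len(key) != 8:
        raise ValueError("DES keys are 8 bytes (parity bits included)")
    return key in DES_WEAK_KEYS
```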

I did interpret the recent comments to be along the lines of "even
worrying about using a weak key for DES isn't worth it, given the
varying IVs". Still, I do agree that a general strategy should be
developed to handle such a case, so that future cipher writers
won't have to invent this. 

My own druthers in that case would be to simply kill the SA and ask 
for a new one. Why? Putting on my writer hat again: it's behavior that
can be applied regardless of the algorithm in question. We can 
simply state in some general document that this is what should 
be done. 
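That algorithm-independent rule is simple enough to state in one hypothetical hook (names invented for illustration; this is not the PF_KEY API, though the quoted text below notes PF_KEY's SADB_ADD/SADB_UPDATE behave analogously):

```python
def install_sa(key: bytes, is_weak, request_new_sa) -> bool:
    """Sketch of the general strategy: if the negotiated key is weak,
    don't massage it -- fail the SA and kick key management (ISAKMP)
    to negotiate a fresh one. Works for any cipher given only its
    weak-key predicate."""
    if is_weak(key):
        request_new_sa()  # e.g. signal ISAKMP to start over
        return False      # SA establishment fails
    return True
```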

[I wasn't excited about the "simply move one byte over and
try again" approach, as it ends up being fairly algorithm-dependent,
and would require lots of explanation in both the cipher document
and any "general" document. Step back and envision writing the next
cipher draft which had weak key checks: you'd get to analyze what the
"correct" offset to move over would be.]

So, (1) we should develop a general strategy which would be applicable
to any algorithm with weak key checks in the context of AH/ESP, 
(2) we should decide if we even care in the case of DES, and (3) if 
so, *what* is the definitive list of weak keys. 

But, let's *decide* and be done with it. 

- C

> >   Given this, I'd say forget about handling it.
> Quick question, do you mean key mgmt. failing?  If so, I agree, and you state
> the perfect reasons why below...
> >   The world isn't just DES, though. The question is what to do with weak
> > keys in general. Are weak keys in other algorithms equally improbable?
> I dunno about other algorithms, but you can't discount that possibility.
> >   Given the difficulty in even test code to replace the weak keys with
> > other keys, I'd prefer to simply fail the SA, and cause ISAKMP to start
> > over again. I think even my vic-20 can afford to do this once every
> > (86400/300 * 365)/(2* 10**-52) years.
> Pardon the small plug, but PF_KEY has, since its inception, and at the
> insistence of many, been REQUIRED to return errors when an algorithm's key is
> deemed weak.  This means either SADB_ADD, or SADB_UPDATE will fail miserably
> when/if a weak key is fed down.
> I agree with Michael, in that the SA should fail, and ISAKMP should kick-in
> again.
> Just my $0.02.
> Dan