Re: [Syslog] I-D Action:draft-ietf-syslog-transport-tls-12.txt

"Rainer Gerhards" <rgerhards@hq.adiscon.com> Mon, 12 May 2008 20:05 UTC

Return-Path: <syslog-bounces@ietf.org>
X-Original-To: syslog-archive@megatron.ietf.org
Delivered-To: ietfarch-syslog-archive@core3.amsl.com
Received: from [127.0.0.1] (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id E5BB13A67C0; Mon, 12 May 2008 13:05:32 -0700 (PDT)
X-Original-To: syslog@core3.amsl.com
Delivered-To: syslog@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 813373A67A2 for <syslog@core3.amsl.com>; Mon, 12 May 2008 13:05:30 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level:
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UYRyic18UpEF for <syslog@core3.amsl.com>; Mon, 12 May 2008 13:05:27 -0700 (PDT)
Received: from mailin.adiscon.com (hetzner.adiscon.com [85.10.198.18]) by core3.amsl.com (Postfix) with ESMTP id BFFEB3A67C0 for <syslog@ietf.org>; Mon, 12 May 2008 13:05:25 -0700 (PDT)
Received: from localhost (localhost [127.0.0.1]) by mailin.adiscon.com (Postfix) with ESMTP id 5ED4C7AD8C0; Mon, 12 May 2008 22:01:08 +0200 (CEST)
Received: from mailin.adiscon.com ([127.0.0.1]) by localhost (localhost [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RpjMzfQwJVZD; Mon, 12 May 2008 22:01:08 +0200 (CEST)
Received: from grfint2.intern.adiscon.com (p50989a7c.dip0.t-ipconnect.de [80.152.154.124]) by mailin.adiscon.com (Postfix) with ESMTP id CF82B7AD667; Mon, 12 May 2008 22:01:07 +0200 (CEST)
Content-class: urn:content-classes:message
MIME-Version: 1.0
X-MimeOLE: Produced By Microsoft Exchange V6.5
Date: Mon, 12 May 2008 22:05:18 +0200
Message-ID: <577465F99B41C842AAFBE9ED71E70ABA308FC3@grfint2.intern.adiscon.com>
In-Reply-To: <AC1CFD94F59A264488DC2BEC3E890DE505C95869@xmb-sjc-225.amer.cisco.com>
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Thread-Topic: [Syslog] I-D Action:draft-ietf-syslog-transport-tls-12.txt
Thread-Index: AcixJ3ChaVnUq5oITfGwAnnHJ7WqPwAfO5owABfCe0AAmYDUoA==
References: <20080507150001.D3CB428C65B@core3.amsl.com> <OF13490747.F0126D34-ON85257443.00540976-85257443.00574A09@agfa.com> <577465F99B41C842AAFBE9ED71E70ABA308FB3@grfint2.intern.adiscon.com> <AC1CFD94F59A264488DC2BEC3E890DE505C95869@xmb-sjc-225.amer.cisco.com>
From: Rainer Gerhards <rgerhards@hq.adiscon.com>
To: "Joseph Salowey (jsalowey)" <jsalowey@cisco.com>, robert.horn@agfa.com, syslog@ietf.org
Subject: Re: [Syslog] I-D Action:draft-ietf-syslog-transport-tls-12.txt
X-BeenThere: syslog@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Security Issues in Network Event Logging <syslog.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/syslog>, <mailto:syslog-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/pipermail/syslog>
List-Post: <mailto:syslog@ietf.org>
List-Help: <mailto:syslog-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/syslog>, <mailto:syslog-request@ietf.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: syslog-bounces@ietf.org
Errors-To: syslog-bounces@ietf.org

Hi Joe,

Comments inline below....
> -----Original Message-----
> From: Joseph Salowey (jsalowey) [mailto:jsalowey@cisco.com] 
> Sent: Monday, May 12, 2008 5:40 AM
> To: Rainer Gerhards; robert.horn@agfa.com; syslog@ietf.org
> Cc: Pasi.Eronen@nokia.com
> Subject: RE: [Syslog] I-D 
> Action:draft-ietf-syslog-transport-tls-12.txt
> 
> > -----Original Message-----
> > From: Rainer Gerhards [mailto:rgerhards@hq.adiscon.com] 
> > Sent: Friday, May 09, 2008 1:36 AM
> > To: robert.horn@agfa.com; Joseph Salowey (jsalowey); syslog@ietf.org
> > Cc: Pasi.Eronen@nokia.com
> > Subject: RE: [Syslog] I-D 
> > Action:draft-ietf-syslog-transport-tls-12.txt
> > 
> > Hi all,
> > 
> > I agree with Robert: policy decisions need to be separated. I 
> > CC Pasi because my comment is directly related to IESG 
> > requirements, which IMHO cannot be delivered by *any* syslog 
> > TLS document without compromise [comments directly related to 
> > IESG are somewhat later, I need to level ground first].
> > 
> > Let me tell the story from my implementor's POV. This is 
> > necessarily tied to rsyslog, but I still think there is a lot 
> > of general truth in it. So I consider it useful as an example.
> > 
> > I took some time yesterday to include the rules laid out in 
> > 4.2 into rsyslog design. I quickly came to the conclusion 
> > that 4.2 is talking about at least two things:
> > 
> > a) low-level handshake validation
> > b) access control
> > 
> [Joe] It is possible for the document to separate out
> authentication/cert validation from authorization.  I would note that
> there tend to be a lot of linkages between the two, so this will tend
> to increase the amount of text needed.  If the working group wants to do
> this I can work on it. 
> 
> > In a) we deal with the session setup. Here, I see certificate 
> > exchange and basic certificate validation (for example 
> > checking the validity dates). In my current POV, this phase 
> > ends when the remote peer can positively be identified.
> > 
> > Once we have positive identification, b) kicks in. In that 
> > phase, we need to check (via ACLs) if the remote peer is 
> > permitted to talk to us (or we are permitted to talk to it). 
> > Please note that from an architectural POV, this can be 
> > abstracted to a higher layer (and in rsyslog it probably 
> > will). For that layer, it is quite irrelevant if the remote 
> > peer's identity was obtained via a certificate (in the case 
> > of transport-tls), a simple reverse lookup (UDP syslog), SASL 
> > (RFC 3195) or whatever. What matters is that the ACL engine 
> > got a trusted identity from the transport layer and verifies 
> > that identity [level of trust varies, obviously]. Most policy 
> > decisions happen on that level.
> > 
> > There is some gray area between a) and b). For example, I can 
> > envision that if there is a syslog.conf rule (forward everything to
> > server.example.net)
> > 
> > *.* @@server.example.net
> > 
> > The certificate name check for server.example.net (using dNSName
> > extension) could probably be part of a) - others may think it 
> > is part of b).
> > 
> > Also, even doing a) places some burden onto the system, like 
> > the need to have trust anchors configured in order to do the 
> > validation. This hints at at least another sub-layer.
> > 
> > I think it would be useful to spell out these different 
> > entities in the draft.
> > 
> [Joe] What do you mean by entities?  We can define two separate
> processes that need to happen, but I don't think we want to 
> specify how
> an implementor must build this. 

I meant

a) low-level handshake validation
b) access control

I still think there is a fundamental difference between verifying the
name that is given in the config file and verifying higher-level access
control. But, of course, we do not/cannot define application design. I
just thought it would be useful to spell out that there is a difference.
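
To make that difference concrete, here is a minimal client-side sketch
of how phase a) and phase b) could be kept apart (illustrative only:
the file names, the port number and the ACL below are my own example
choices, not anything the draft specifies):

    import socket
    import ssl

    # --- phase a): low-level handshake validation --------------------
    # The TLS library checks the certificate chain and validity dates
    # against the configured trust anchors; the name/ACL decision is
    # deferred to phase b).
    ctx = ssl.create_default_context(cafile="trust-anchors.pem")
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_REQUIRED

    raw = socket.create_connection(("server.example.net", 6514))  # example port
    conn = ctx.wrap_socket(raw)

    # --- phase b): access control on the transport-provided identity -
    # The ACL engine only sees an identity string; it does not care
    # whether it came from a certificate, a reverse lookup or SASL.
    cert = conn.getpeercert()
    peer_names = [v for (k, v) in cert.get("subjectAltName", ()) if k == "DNS"]

    ALLOWED_PEERS = {"server.example.net"}  # hypothetical ACL from the config
    if not ALLOWED_PEERS.intersection(peer_names):
        conn.close()
        raise PermissionError("peer identity not permitted by ACL")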

> > Coming back to policy decisions, one must keep in mind that 
> > the IESG explicitly asked for those inside the document. This 
> > was done based on the -correct- assumption that today's 
> > Internet is no longer a friendly place. So the IESG would 
> > like to see a default policy implemented that provides at 
> > least a minimum acceptable security standard. 
> 
> [Joe] I believe this is part of the goal.  I think there is also the
> desire that implementations support some basic level of policy that
> allows interoperability.  Implementations can support other policies,
> deployers can deploy other policies. 
> 
> > Unfortunately, 
> > this is not easy to do in the world of syslog. For the home 
> > users, we cannot rely on any ability to configure something. 
> > For the enterprise folks, we need to have defaults that do 
> > not get into their way of doing things [aka "can be easily 
> > turned off"]. There is obviously much in between these poles, 
> > so it largely depends on the use case. I have begun a wiki 
> > page with use cases and hope people will contribute to it. It 
> > could lead us to a much better understanding of the needs 
> > (and the design decisions that need to be made to deliver 
> > these). It is available at
> > 
> > http://wiki.rsyslog.com/index.php/TLS_for_syslog_use_cases
> > 
> > After close consideration, I think the draft currently fails 
> > to address the two use cases defined above properly. Partly 
> > it fails because it is not possible under the current IESG 
> > requirement to be safe by default. We cannot be fully safe by 
> > default without configuration, so whatever we specify will 
> > fail for the home user.
> > 
> > A compromise may be to provide "good enough" security in the 
> > default policy. I see two ways of doing that: one is to NOT 
> > address the Masquerade and Modification threats in the 
> > default policy, just the Disclosure threat. That leads us to 
> > unauthenticated syslog being the default (contrary to what is 
> > currently implemented) [Disclosure is addressed in this 
> > scenario as long as the client configs are not compromised, 
> > which I find sufficient - someone who can compromise 
> > the client config can find other ways to get hold of the 
> > syslog message content].
> > 
> [Joe] If you don't address the relevant threats I'm not sure you can
> call security "good enough".

I can do this because, from a practical perspective, what most people
are concerned with is confidentiality. Let me ask a question: how can
we say HTTPS is secure? After all, the HTTPS client is almost never
authenticated to the server. From my practical perspective, HTTPS-like
security, easily enabled by default even for the unskilled user, is much
better than "full" security that only exists in theory - because people
turn it off. Security is only as good as the humans using it...

> 
> > An alternative is to use the way HTTPS works: we only 
> > authenticate the server. To authenticate, we need to have a 
> > trusted certificate inside the server. As we can see in 
> > HTTPS, this doesn't really require PKI. It is sufficient to 
> > have the server cert signed by one of a few globally trusted 
> > CAs and have these root certificates distributed with all 
> > client installations as part of their setup procedure. This 
> > is quite doable. In that scenario, a client can verify a 
> > server's identity and the above sample (*.* 
> > @server.example.net) could be verified with sufficient trust. 
> > The client, however, is still not authenticated. However, the 
> > threats we intended to address are almost all addressed, 
> > except for the access control issue which is defined as part 
> > of the Masquerade threat (which I think is even a different 
> > beast and deserves its own threat definition now that I think 
> > about it). In short we just have an access control issue in 
> > that scenario. Nothing else.
> > 
> [Joe] I think the threat model in the document describes masquerade of
> the client.  

IMHO the document MUST describe masquerade of both client and server.
Server masquerade is very serious.

>Perhaps access control is not the only way to deal with
> this, perhaps just being able to associate the authenticated identity
> with the messages is enough, I don't know at this point. 

See my comment above.

>
> > The problem, however, is that the server still needs a 
> > certificate and now even one that, for a home user, is 
> > prohibitively expensive. The end result will be that people 
> > turn off TLS, because they neither know how to obtain the 
> > certificate nor are willing to trade in a weekend vacation 
> > for a certificate ;) In the end result, even that mode will 
> > be less useful than anonymous authentication.
> > 
> > The fingerprint idea is probably a smart solution to the 
> > problem. It depends on the ability to auto-generate a 
> > certificate [I expressed that I don't like that idea 
> > yesterday, but my thinking has evolved ;)] OR to ship every 
> > device/syslogd with a unique certificate. In this case, only 
> > minimal interaction is required. The idea obviously is like 
> > with SSH: if the remote peer is unknown, the user is queried 
> > if the connection request is permitted and if the certificate 
> > should be accepted in the future. If so, it is added 
> > permanently to the valid certificate store and used in the 
> > future to authenticate requests from the same peer. This 
> > limits the security weakness to the first session. HOWEVER, 
> > the problem with syslog is that the user typically cannot be 
> > prompted when the initial connection happens (everything is 
> > background activity). So the request must actually be logged 
> > and an interface be developed that provides for user 
> > notification and the ability to authorize the request.
> > 
> > This requires some kind of "unapproved certificate store" 
> > plus a management interface for it. Well done, this may 
> > indeed enable a home user to gain protection from all three 
> > threats without even knowing what he really does. It "just" 
> > requires some care in approving new fingerprints, but that's 
> > a general problem with human nature that we may tackle by 
> > good user interface design but can't solve from a protocol 
> > point of view.
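
[To make the above concrete, a minimal sketch of such a fingerprint
check with an "unapproved certificate store"; the hash, the file name
and the function are my own example choices, nothing the draft
mandates:]

    import hashlib

    APPROVED_FINGERPRINTS = set()              # fingerprints already authorized
    UNAPPROVED_STORE = "unapproved-certs.txt"  # hypothetical "pending" store

    def fingerprint_permits(der_cert: bytes) -> bool:
        """Accept the peer only if its certificate fingerprint was approved.

        Unknown fingerprints are recorded for later review instead of
        prompting, because syslog runs in the background and nobody is
        there to answer.  der_cert is the peer certificate in DER form,
        e.g. from conn.getpeercert(binary_form=True).
        """
        fp = hashlib.sha256(der_cert).hexdigest()
        if fp in APPROVED_FINGERPRINTS:
            return True
        with open(UNAPPROVED_STORE, "a") as pending:
            pending.write(fp + "\n")   # a management interface approves from here
        return False
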
> > 
> > The bad thing is that it requires much more change to 
> > existing syslogd technology. That, I fear, reduces acceptance 
> > rate. Keep in mind that we already have a technically good 
> > solution (RFC 3195) which miserably failed in practice due to 
> > the fact it required too much change.
> > 
> > If I look at *nix implementations, syslogd implementers are 
> > probably tempted to "just" log a message saying "could not 
> > accept remote connection due to invalid fingerprint 
> > xx:xx:..." and leave it to the user to add it to syslog.conf. 
> > However, I fear that for most home setups even that would be 
> > too much. So, in the end, in order to avoid user 
> > hassle, most vendors would probably default back to UDP 
> > syslog and enable TLS only on user request.
> > 
> > From my practical perspective this sounds even reasonable 
> > (given the needs and imperfections of the real world...). If 
> > that assessment is true, we would probably be better off by 
> > using anonymous TLS as the default policy, with the next 
> > priority on fingerprint authentication as laid out above. A 
> > single big switch could change between these two in actual 
> > implementations. Those users that "just want to get it running"
> > would never find that switch but still be somewhat protected while 
> > the (little) more technically aware can turn it to fingerprint 
> > authentication and then will hopefully be able to do the 
> > remaining few configuration steps. Another policy is the 
> > certificate chain based policy, where using public CAs would 
> > make sense to me.
> > 
> > To wrap it up: 
> > 
> > 1. I propose to lower the default level of security 
> >    for the reasons given.
> >    My humble view is that lower default security will result in 
> >    higher overall security.
> > 
> [Joe] I'm not convinced of this.  If you use TLS without authentication
> and authorization you don't really gain any security benefit from it; it
> is pretty much the equivalent of not running security.

Again, this leads to the conclusion that HTTPS is not running security.

> It might be
> reasonable to relax the requirement on being able to do access control
> on the server, but I think the threat section would need to discuss
> this.

As the discussion shows, the threat section seems to be imprecise. It
should discuss the threats in the context of who is protected - client
or server. Obviously, client masquerade is different from server
masquerade. Again, I don't think that server masquerade is less serious.
Just think of hosts-file or DNS spoofing attacks fooling the client into
sending data to the wrong server.
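
To illustrate what defeats that attack on the client side (again only a
sketch; the trust anchor file and the port are assumptions of mine):
the certificate presented by whatever host the DNS answer points to
must chain to a configured trust anchor AND carry a dNSName matching
the configured server name, so a spoofed address alone does not help
the attacker:

    import socket
    import ssl

    ctx = ssl.create_default_context(cafile="trust-anchors.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True   # dNSName must match server_hostname below

    with socket.create_connection(("server.example.net", 6514)) as raw:  # example port
        with ctx.wrap_socket(raw, server_hostname="server.example.net") as tls:
            tls.sendall(b"<165>1 ... syslog data ...")  # placeholder payload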

Rainer 
> 
> > 2. We should split authentication policies from the protocol itself
> >    ... just as suggested by Robert and John. We should define a core
> >    set of policies (I think I described the most relevant simple
> >    cases above, Robert described some complex ones) and leave it to
> >    others to define additional policies based on their demand.
> > 
> [Joe] We can split the policies, but I don't think it will necessarily
> be as clean as one might hope.  I believe the document does have to
> specify a minimum set of policies that can
> 
> A) enable interoperable deployment 
> B) provide mitigation against the listed threats
> 
> Additional policies can be supported and defined (in this document or a
> separate one).
> 
> > Policies should go either into their own section OR into 
> > their own documents. I strongly favor putting them 
> > into their own documents if that enables us to finally 
> > finish/publish -transport-tls and the new syslog RFC series. 
> > If that is not an option, I'd prefer to spend some more work 
> > on -transport-tls, even if it delays things further, instead 
> > of producing something that does not meet the needs found 
> > in practice.
> > 
> 
> 
> > Rainer
> > 
> > 
> > > -----Original Message-----
> > > From: syslog-bounces@ietf.org [mailto:syslog-bounces@ietf.org] On 
> > > Behalf Of robert.horn@agfa.com
> > > Sent: Thursday, May 08, 2008 5:53 PM
> > > To: Joseph Salowey (jsalowey); syslog@ietf.org
> > > Subject: Re: [Syslog] I-D
> > Action:draft-ietf-syslog-transport-tls-12.txt
> > > 
> > > Section 4.2 is better, but it still needs work to separate the policy
> > > decisions from the protocol definition.  Policy decisions are driven
> > > by risk analysis of the assets, threats, and environment (among other
> > > things).  These are not uniform over all uses of syslog.  That makes
> > > it important to separate the policy from the protocol, in both the
> > > specifications and in the products.
> > > 
> > > In the healthcare environment we use TLS to protect many of our
> > > connections.  This is both an authentication protection and a
> > > confidentiality protection.  The policy decisions regarding key
> > > management and verification will be very similar for a healthcare use
> > > of syslog.  Some healthcare sites would reach the same policy decision
> > > as is in 4.2, but here are three other policy decisions that are also
> > > appropriate:
> > > 
> > > Policy A:
> > >    The clients are provided with their private keys and the public
> > > certificates for their authorized servers by means of physical media,
> > > delivered by hand from the security office to the client machine
> > > support staff.  (The media is often CD-R because it's cheap, easy to
> > > create, easy to destroy, and easy to use.)  During TLS establishment
> > > the clients use their assigned private key and the server confirms
> > > that the connection is from a machine with one of the assigned
> > > private keys.  The client confirms that the server matches one of the
> > > provided public certificates by direct matching.  This is similar to
> > > the fingerprint method, but not the same.  My most recent experience
> > > was with an installation using this method.  We had two hours to
> > > install over 100 systems, including the network facilities.  This can
> > > only be done by removing as many installation schedule dependencies
> > > as possible.  The media method removed the certificate management
> > > dependencies.
> > > 
> > > Policy B:
> > >   These client systems require safety and functional certification
> > > before they are made operational.  This is done by inspection by an
> > > acceptance team.  The acceptance team has a "CA on a laptop".  After
> > > accepting safety and function, they establish a direct isolated
> > > physical connection between the client and the laptop.  Then using
> > > standard key management tools, the client generates a private key and
> > > has the corresponding public certificate generated and signed by the
> > > laptop.  The client is also provided with a public certificate for
> > > the CA that must sign the certs for all incoming connections.
> > > 
> > > During a connection setup the client confirms that the server key has
> > > been signed by that CA.  This is similar to a trusted anchor, but not
> > > the same.  There is no chain of trust permitted.  The key must have
> > > been directly signed by the CA.  During connection setup the server
> > > confirms that the client cert was signed by the "CA on a laptop".
> > > Again, no chain of trust is permitted.  This policy is incorporating
> > > the extra aspect of "has been inspected by the acceptance team" as
> > > part of the authentication meaning.  They decided on a policy-risk
> > > basis that there was not a need to confirm re-inspection, but the
> > > "CA on a laptop" did have a revocation server that was kept available
> > > to the servers, so that the acceptance team could revoke at will.
> > > 
> > > Policy C:
> > >   This system was for a server that accepted connections from
> > > several independent organizations.  Each organization managed
> > > certificates differently, but ensured that the organization-CA had
> > > signed all certs used for external communications by that
> > > organization.  All of the client machines were provided with the
> > > certs for the shared servers (by a method similar to the fingerprint
> > > method).  During TLS connection the clients confirmed that the server
> > > cert matched one of the certs on their list.  The server confirmed
> > > that the client cert had been signed by the CA responsible for that
> > > IP subnet.  The server was configured with a list of organization CA
> > > certs and their corresponding IP subnets.
> > > 
> > > I do not expect any single policy choice to be appropriate for all
> > > syslog uses.  I think it will be better to encourage a separation of
> > > function in products.  There is more likely to be a commonality of
> > > configuration needs for all users of TLS on a particular system than
> > > to find a commonality of needs for all users of syslog.  The policy
> > > decisions implicit in section 4.2 make good sense for many uses.
> > > They are not a complete set.  So a phrasing that explains the kinds
> > > of maintenance and verification needs that are likely is more
> > > appropriate.  The mandatory verifications can be separated from the
> > > key management system and kept as part of the protocol definition.
> > > The policy decisions should be left as important examples.
> > > 
> > > Kind Regards,
> > > 
> > > Robert Horn | Agfa HealthCare
> > > Research Scientist | HE/Technology Office T  +1 978 897 4860
> > > 
> > > Agfa HealthCare Corporation, 100 Challenger Road, Ridgefield Park,
> > > NJ, 07660-2199, United States http://www.agfa.com/healthcare/
> > > Click on link to read important disclaimer:
> > > http://www.agfa.com/healthcare/maildisclaimer
> > > _______________________________________________
> > > Syslog mailing list
> > > Syslog@ietf.org
> > > https://www.ietf.org/mailman/listinfo/syslog
> > 
> 
_______________________________________________
Syslog mailing list
Syslog@ietf.org
https://www.ietf.org/mailman/listinfo/syslog