[Strint-attendees] Lazy Key Validation, Comments on Papers

Phill <hallam@gmail.com> Thu, 27 February 2014 21:33 UTC


New ideas from reading the papers

* Lazy Key Validation as a way to do better than opportunistic encryption at the same cost.

* MAC addresses and the link layer are bogus. We can replace them and dramatically improve security. This would require new middleboxen for the new protocol, but that is practical because there would be a usability advantage. And it would address privacy issues that some have already found very costly, and that others will in time.


I am not commenting on every single paper, just the ones that raise issues I have not yet seen in the earlier ones.

One thing that does make me optimistic is that there is a great deal of consensus on the approaches, although where implementations have been created they naturally differ.


Papers 1, 8: End-to-end encryption (also see 22, 33, 40)

The idea of applying a CT approach to email credentials appears to have multiple parties proposing it (see also the right key list).
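
To make that concrete, here is a minimal sketch of the append-only property that makes a CT-style log useful for email credentials. It uses a simple hash chain rather than CT's Merkle tree, and every name and fingerprint in it is invented for illustration:

    import hashlib
    import json

    # Append-only log of address-to-key bindings. Each head commits to the
    # whole history, so retroactively swapping the key recorded for an
    # address changes every later head and is detectable by anyone who
    # remembers an earlier head.
    log = []
    head = b"\x00" * 32

    def append(address, key_fingerprint):
        global head
        entry = json.dumps({"addr": address, "key": key_fingerprint},
                           sort_keys=True).encode()
        head = hashlib.sha256(head + entry).digest()
        log.append((entry, head))
        return head  # publish/gossip the head so clients can cross-check

    append("alice@example.com", "sha256:...")  # placeholder fingerprints
    append("bob@example.com", "sha256:...")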

Paper 1 (mine) proposes (1) a framework for developing and deploying new approaches to email encryption that meets the usability criteria and supports one direct trust model plus multiple indirect trust models, and (2) one particular trust model.

Paper 8 proposes ConfiMail, which is mostly a novel trust model. We should see if it is possible to leverage (1) to implement (8).

Paper 22: Any PKI that credentials end users and works for email should work for synchronous protocols as well.

Paper 33: Some similar arguments

Paper 34: Is very relevant

Paper 40 proposes a DNS/DNSSEC-based approach, which could be one of the trust models supported by common plumbing. But that assumes we are trying to establish trust between domains rather than between users or organizations.

Paper 59 essentially proposes meeting the same objective using canned scripts rather than introducing a new protocol to fuse PGP and S/MIME. If we generalize the approach it looks very similar to the 'profiles' suggested later on. It would be nice if I could run a script from a trustworthy source that would lock down Apache, SAMBA, etc.

Paper 65 proposes a new trust model that might be applied in the email context but it is directed at a lower layer.

Paper 2: Opportunistic Encryption in MPLS networks (see also papers 7, 12, 27, 32, 38)

One nit in the paper is that it talks about the MPLS control plane but only about encryption. Surely authentication is the prime concern for the control plane? I don't much care about authenticating the data plane, as I can do that end to end. But I certainly want the control plane fully authenticated.

Useful objective. But maybe the term opportunistic encryption is limiting. The protocol does not pass a credential for the key in band; indeed, no credential might exist. But the attacker might not be able to tell that. Alternatively, the credential might be used without verification initially and verified afterwards. This will not prevent an attack, but it will reveal an attack made in the past.

[The JSON formats I am playing about with for key endorsement under PPE might be used to do a trial]

What I am thinking is the following (a rough sketch in code follows the list):

1) Routers establish session keys using DH exchange.

2) One or both routers register the keys (e.g. via Omnibroker, XKMS, etc.)

3) If there has been an attack, the keys won't match and we can raise an alert.
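
A minimal sketch of steps 1-3, assuming the Python 'cryptography' package for the DH exchange; the registration service is a plain dictionary standing in for Omnibroker or XKMS, whose APIs are not specified here:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Step 1: the routers establish a session key with a DH exchange.
    a_priv = X25519PrivateKey.generate()
    b_priv = X25519PrivateKey.generate()
    a_shared = a_priv.exchange(b_priv.public_key())  # router A's view of the key
    b_shared = b_priv.exchange(a_priv.public_key())  # router B's view of the key

    # Step 2: one or both routers register a fingerprint of their view of
    # the key with an external service. (In practice you would hash a
    # handshake transcript rather than the raw secret.)
    registry = {
        "router-a": hashlib.sha256(a_shared).hexdigest(),
        "router-b": hashlib.sha256(b_shared).hexdigest(),
    }

    # Step 3: a later audit compares the two views. A MITM has to substitute
    # keys during the exchange, so the fingerprints cannot match afterwards.
    if registry["router-a"] != registry["router-b"]:
        print("ALERT: fingerprints disagree; a past MITM attack is revealed")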

This lazy key validation is not going to prevent an attack, but it can detect one. And the NSA tell me that they are very averse to being caught.

The advantage is that lazy keying provides a deterrent even if no check is made: the attacker cannot know whether their attack will be checked or not. So if they are averse to discovery, they should not attempt any MITM attacks. Moreover, were MITM attacks to occur and be detected with regularity, this would encourage a rapid move to strong keying.

Paper 27: Points out that opportunistic encryption could be used as a substitute for good encryption and does not provide authentication. So if OE is pushed too aggressively we could end up weakening security overall. (And note the recent vendor bug that effectively downgraded all SSL encryption to opportunistic by not checking certs.)

Paper 32: Gives the rationale for tcpcrypt, which is TLS at the TCP layer.

Paper 38: Makes a very similar proposal to the trust model proposed for PPE. 

Paper 51: A variant on the same scheme, but with a different twist on key discovery, which is delayed rather than random.

Paper 66: a very comprehensive treatment of opportunistic encryption.


Paper 6 (also 29, 34, 37, 43)

The paper does not address the biggest problem we have, which is enabling a non-tech-savvy customer (i.e. the majority) to tell whether their provider is in compliance. For that we need more than standards; we need someone to:

1) Define the security criteria to be met (paper does this as do others)
2) Develop profiles of existing standards that meet known criteria
3) Communicate, as defect reports, the deficiencies in standards that prevent the criteria from being met.

The existence of such profiles then enables providers to offer services that meet known profiles, and other parties to audit them; cf. what goes on with Mastercard and Visa PCI compliance.
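
As a purely illustrative sketch (every name and field below is invented), a profile could be a small machine-readable document that an auditor checks mechanically:

    # Hypothetical machine-readable profile; a real one would be curated by
    # the operations group proposed below, not by the standards authors.
    PROFILE = {
        "name": "mail-baseline",
        "version": 1,
        "requirements": {
            "smtp": {"tls": "required"},
            "imap": {"tls": "required", "plaintext_auth": "forbidden"},
        },
    }

    def audit(observed, profile):
        """Return the requirements the observed configuration fails to meet."""
        failures = []
        for service, reqs in profile["requirements"].items():
            for key, wanted in reqs.items():
                if observed.get(service, {}).get(key) != wanted:
                    failures.append((service, key, wanted))
        return failures

    print(audit({"smtp": {"tls": "required"}, "imap": {"tls": "required"}},
                PROFILE))  # -> [('imap', 'plaintext_auth', 'forbidden')]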

When I did control engineering I learned that every system is characterized by its feedback loops. If you want to change a system, change or add feedback. So this is a new feedback loop that connects the operators to the standards process and gives them a share of the formalized decision-making power.


Paper 29 describes an example of one profile.

Paper 34 describes the same requirement. However, the institutional proposal is different to the one I suggest. I don't see how the IETF can be both the proposer of the standards and their curator, not least because the consensus-based approach tends to encourage feature proliferation and the IETF is already bandwidth constrained; getting into a profile would merely become a new step in the standards process. Better to have one group of architects propose specs and another group of operations specialists curate them.

Another important point to make here is that one of the reasons PKIX is so complex is that people insisted on simplicity. Necessary features were left out of the original proposal as unnecessary, and then added in as completely different mechanisms rather than as one mechanism with multiple features. OCSP, CRLs and SCVP have very similar functions, yet they are completely different protocols. Where simplification has taken place, such as the introduction of formats to support CSRs instead of the original PKIX mechanism, the result has been four mechanisms: the deployed CSR mechanism, an IETF revision, the PKIX mechanism, and a combined mechanism to support both approaches.

Paper 43: Describes the STREWS project, which has many similarities to the mostly-US OWASP user group and could be a basis for defining profiles (although the final institutional arrangements would have to be emergent).


Paper 10

The takeaway here is that SIP security is borked and we should try again for RTCWeb rather than chase sunk costs. I agree.


Paper 14 (also 15, 17)

Another cut at a threat model for PPE. This is clearly a work item that needs to be addressed, and it really needs material condensed from multiple sources into a coherent whole. I agree with each paper individually, but they need to be harmonized and made complete.


Paper 19

The idea seems bigger than the space available in the paper, and I can't download the referenced papers here.

While I agree that self-bootstrapping is possible, why build up from the domain name system as the primary? In my view, keys are the long-term anchors for security. I can have a key that is valid for 20-30 years. But I have 30+ domains as a private individual and those come and go (e.g. I just dropped RomneyPalin2012.com).


Paper 20

The use of SRV records and other DNS records besides A has always suffered from the problem that much of the Internet infrastructure blocks them. This paper finds that only 3% of locations suffer this problem, which is consistent with other measurements. However, it also notes that for Opportunistic Encryption this is acceptable: while a 3% failure rate is completely unacceptable if it prevents use of the application, it is acceptable if the cost of the SRV records being blocked is not a loss of functionality.

The problem here is downgrade-attack vulnerability: a client that cannot abort when a security policy record is missing, because it cannot tell a blocked record from an absent one, is vulnerable to a downgrade attack.
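
The vulnerability is visible in a single branch of control flow. A hypothetical sketch (lookup_policy and the exception are invented names):

    # Soft-fail versus hard-fail on a security-policy lookup. All names are
    # hypothetical; lookup_policy() stands for an SRV/TXT policy query.
    class LookupBlocked(Exception):
        """The query was dropped in transit; no answer arrived either way."""

    def connect(host, lookup_policy, hard_fail):
        try:
            return lookup_policy(host)
        except LookupBlocked:
            if hard_fail:
                # No downgrade, but the application breaks on the ~3% of
                # paths that block the record.
                raise ConnectionError("policy unreachable; refusing to connect")
            # Soft-fail: proceed with no policy. An attacker who can block
            # a single DNS lookup has just stripped the security policy.
            return None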

But why are we so committed to demanding support for modern DNS on the client-resolver hop? Isn't the better strategy to do what is suggested for SIP and just cut our losses? We can still use DNS for the publication protocol that the resolver consumes, but the client-resolver protocol can be something like Omnibroker, which is designed to provide a reliable last hop that a client really can abort on if it is blocked.


Paper 28:

Points out that embedded devices have significant security requirements and that it is unsatisfactory for these to be dealt with in 'the cloud'. Other papers observe that there is also a control issue: the user is turned into a serf with no ability to control their own systems if those systems will only interact with a cloud service via encrypted protocols. It is not even possible for users to see what private information is being disclosed.

Personal note: regarded properly, home automation is a SCADA system. While my experience of control engineering predates the Internet, I cannot imagine a situation where a site would ever rely on a SCADA system operated in 'the cloud' by a third-party vendor. Yet this already happens for HVAC with Nest, and some vendors would have me do it for light switches.


Paper 30:

Makes one of my favorite security points: the endpoints of a communication are not necessarily the same as the endpoints of the trust relationship. This was one of the principles around which XKMS was designed. If Alice is sending email to Bob in their personal capacities, the users are the endpoints. But if Alice is a bank employee giving Bob information about his account, then the trust context is between the bank and Bob; what Alice considers trustworthy is irrelevant.

Since the modern fashion is for JSON, I redesigned XKMS in JSON, together with a transport that works over UDP and can even be shoehorned over DNS TXT record lookups; the result is Omnibroker.
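
No wire format is given in this note, so purely as an illustration of the shape of such a protocol, a JSON rendering of an XKMS-style Locate request might look like this (all field names invented):

    import json

    # Hypothetical JSON form of an XKMS-style Locate query: "give me the
    # key(s) to use for this subject and protocol". Field names invented.
    query = {
        "Locate": {
            "protocol": "smtp",
            "subject": "bob@example.com",
            "respond": ["KeyValue", "TrustPath"],
        }
    }
    wire = json.dumps(query, separators=(",", ":")).encode()

    # Compact enough for a single UDP datagram, or for chunking into DNS
    # TXT records when UDP to the agent is blocked.
    assert len(wire) < 512
    print(wire.decode())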


Paper 35: 

My question: Why do we need link layer addresses at all? The only time a device identifier is needed is during initial setup of a communication. The stack should be IP down to the metal. 

Get rid of OUIs and MAC addresses. Instead, define a hailing protocol to replace DHCP that is based on a machine identifier generated from a public key. For privacy protection, the public key may be wrapped in an ephemeral key.
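
A minimal sketch of the identifier side, assuming the Python 'cryptography' package; the ephemeral wrapping is indicated only in a comment, since no protocol details are given here:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # The device's long-term key takes the place of the OUI/MAC identifier.
    device_key = Ed25519PrivateKey.generate()
    pub = device_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    # Machine identifier = fingerprint of the public key: self-assigned,
    # collision-resistant, and ownership is provable by signing a challenge,
    # none of which is true of a MAC address.
    machine_id = hashlib.sha256(pub).hexdigest()[:32]
    print("machine identifier:", machine_id)

    # For privacy, the hailing protocol would not broadcast machine_id in
    # the clear: the device would send pub encrypted under a fresh ephemeral
    # key agreed with the network, so a passive observer sees a new identity
    # on every join instead of a trackable hardware address.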

The whole WiFi connect scheme is rubbish and improves very slowly. 

Paper 65 is asking a very similar question.


Paper 36:

I get the feeling that the point the author is making is obscured by being too polite.


Paper 41: 

Application-level tracking ('fingerprinting'); an anonymous author, by the looks of it. Satoshi, is that you?


Paper 42 (also 46):

Proposes getting rid of bearer tokens (passwords, static cookies in the channel), which is of course a no-brainer that we have been trying to do for decades now (DIGEST-MD5, anyone?). I think that if this is going to happen we need a forcing factor such as the security-profiles mechanism proposed above. The way it would work is that the profile authors realize they cannot meet the security criteria with bearer tokens, which raises a defect report and a demand for an alternative. It would of course be open to the curators simply to choose one if the process proved insufficiently quick.
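
The alternative to a bearer token is proof of possession. A hedged sketch of the difference, using a bare HMAC challenge-response in the spirit of DIGEST-MD5 (but with SHA-256):

    import hashlib
    import hmac
    import os

    # A bearer token is replayable by anyone who observes it. With proof of
    # possession, the secret never crosses the wire: the server sends a
    # fresh challenge and the client answers with a MAC over it.
    secret = os.urandom(32)              # credential shared out of band

    challenge = os.urandom(16)           # server: fresh per authentication
    response = hmac.new(secret, challenge, hashlib.sha256).digest()  # client

    # Server verification; an eavesdropper who saw (challenge, response)
    # learns nothing useful for the next, different challenge.
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    assert hmac.compare_digest(response, expected)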

Paper 46 is another take on the same problem; same response.


Paper 44:

How many papers is Cullen on?


Paper 46:

Raises the issue of clickthroughs and asks how we should deal with them.

I think the answer might lie in the feedback loops. The CISO at a large company decides to go for NewOrg Gold profile compliance across every IT system. Now the broken Web site is a security-compliance issue and is dealt with by someone who has the authority to shut the site down until it complies.


Paper 50:

Suggests the use of IPv6 addresses as leverage to address privacy issues, treating them as disposable in order to break up long-term associations.

I am not sure how effective this is for privacy, as the observer can just take the same approach the routers do and ignore the lower 64 bits. As far as the Internet is concerned, IPv6 is a 64-bit address space plus an 80-bit port space for the use of the local network.
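
For instance, the observer's counter-move is a single mask operation:

    import ipaddress

    # Two "disposable" addresses from the same host, differing only in the
    # interface identifier (the lower 64 bits).
    a = ipaddress.ip_address("2001:db8:1:2:aaaa:bbbb:cccc:1")
    b = ipaddress.ip_address("2001:db8:1:2:1111:2222:3333:4")

    def routing_prefix(ip):
        # Do what the routers do: keep the /64, discard the rest.
        return ipaddress.IPv6Network((int(ip) & ~((1 << 64) - 1), 64))

    # Both disposable addresses collapse to the same long-term identifier.
    assert routing_prefix(a) == routing_prefix(b)
    print(routing_prefix(a))   # 2001:db8:1:2::/64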

I have looked into the use of this sort of scheme in Omnibroker and ws/connect as a denial of service mitigation scheme.


Paper 55:

Raises some interesting privacy issues in reporting bad certs.

One of the reasons I see the need for a client-side agent is to allow these to be addressed. If Alice is visiting a porn site with a bad cert, a report from her browser can identify Alice, and there is also a risk that a misconfigured site is drowned in security reports. Both issues are mitigated if the client reports the issue to an agent it has selected, which forwards the notification.


Paper 58:

Another variant of the active attack suggested is one where, instead of trying to break the connection, the attacker merely wants to force disclosure of metadata.

I bet I could manipulate a lot of information in a TLS handshake without getting caught, by issuing reset commands and the like at the right moments.

Moving credential discovery out to a separate protocol mediated by a trusted agent, as in Omnibroker, might allow the privacy issues raised in the paper to be resolved by responding from cached credentials. While the trusted agent would still be a privacy liability, this is much better than being exposed to the open net.


Note: I wrote this on a MacBook Air 11” on the flight from Boston to London. The machine was on for most of the 5-hour flight and still has 70% battery. So in-seat power will become more or less ubiquitous at about the same time that the machines get good enough not to need it.