Overall: -------- 0. Brevity is good. The document's not brief enough:-) 1. I'd suggest that this document be recast as something that describes a snapshot of the DTNRG view on bundling security, rather than (as currently seems to be the case) a requirements-type document. The main reason is that it's already very useful as-is, but we don't necessarily need to finalise every detail before putting this to bed. Also, revisiting this to take account of the final protocol would be a pain, and not very worthwhile. Where necessary and useful, text from this could be inserted into the protocol document. Getting this to the state where everyone's happy to decide any controversy by referring back to this might be counterproductive. If this were agreed, I'd suggest a quick tidy up, followed by submission to the RFC editor (maybe this'll be the 1st DTNRG RFC!). 2. The document seems (understandably given previous discussion) to focus too much on cryptographic protection. I'd like to see more emphasis on specific authorisation mechanisms. For example, it'd be nice to say why using SAML+XACML at each bundle agent to decide whether or not to forward the bundle is good or bad. Even if that combination isn't ideal (which'd take a bit of thought to establish), if it, or something similar, could work, then the requirement for source authentication may be easier to meet. In particular, if we assume that the SAML assertions consumed by the policy enforcement point (PEP) are "fresh" then we no longer need to bother thinking about source-credential revocation within the network. A successful case of kicking-to-touch/punting! Basically, maybe I'm saying we ought to try to think along the lines of rfc2906/7/8 about PDPs and PEPs and DTN-AAA, rather than take what I could term the current DTNSec type model. (Ok, I confess - I'm a co-author on those RFCs, but it'd be interesting to see the extent to which they, or the general PDP/PEP model, might usefully apply to DTNs!) 3.
I'm not entirely comfortable with the "protect the infrastructure" argument (e.g. as stated at the start of section 2). While I don't disagree with lots of the conclusions, I do believe that infrastructure is only provided for applications, and infrastructure security likewise, so ultimately we do want to protect application data. I'm not sure how I'd suggest restating this, but I do think that we want to provide security tools for application data (e.g. key-mgmt for end2end integ). 4. The security goals, as stated, are open to being attacked on a number of fronts (as happened in Minneapolis!). For example, the ability to disable a compromised agent probably implies (giving someone) the ability to prevent an authorized agent from forwarding bundles, so items 2 and 6 (from the list at the start of section 2) are somewhat contradictory. Similarly, we saw at the meeting that not detecting payload modification or replay is in conflict with item 3 from the above list. I think that recasting DTN security along the lines of the PDP/PEP model may help out here, but I also think that we do need to be able to provide tools/techniques that do address e.g. payload modification and replay. 5. Other than being a bit silly, is there a specific reason not to provide mechanisms that can be used to make it harder to do traffic analysis? If all it amounted to was defining a "pad" bundle payload, and allowing agents to send that whenever they want, then I can't see a good reason not to do this (even if it's not a compelling thing to be doing). I would agree that defining DTN security so that it can provide something like a mix network with nymity is not a serious goal (OTOH, if someone had a great idea for doing that, would it be bad? Note: I don't have such an idea:-) 6. Security is (correctly) stated as being optional. Therefore use of BAH is optional. Therefore an authorized agent can simply send bundles which do not contain a BAH.
Therefore agents which support incoming BAH MUST have a policy DB (or rules, whatever) which determines whether a given DTN peer is, or is not, allowed to send me this bundle, with this content and set of headers, now. That's a lot to get right. We might be able to tie this down a bit better later on, say if BAH is turned on iff PSH is. 7. This is almost a nit, but... Both IPSec and TLS provide for some ability to do payload padding in order to disguise payloads where the payload length determines the payload content, e.g. if the only options are "yes" and "no" then length=2=>"no". If we're supporting confidentiality, then I'd be for allowing the same thing here. Mind you, no-one ever uses this since there's no sensible way to provide an API that anyone'd want to call. 8. I disagree with the "security policy router is a special agent" idea. Basically, every agent has to make these decisions (by integrating a PEP), and anything after that is a matter of local configuration (unless and until we provide ways to signal in-band about some agent being a bottleneck for a region or something). 9. Replay. I may or may not convince you, but I believe we do have to provide replay detection mechanisms, and maybe the Wed morning discussion in MSP went some way towards us agreeing that. In any case, I don't think that this document should be so deterministic (if you accept #1 above that is), so it's ok to say "replay: we're not sure yet" :-) 10. I don't know, but we may need to do more to support routing. Say if a new route is announced, then a) that might be a sort-of broadcast message and hence might e.g. have a different set of immutable headers (it'd certainly have to use signing and not MACing) and b) an agent who's "installing" that route might have to make some kind of DTN connection to a security server if he's never before been in touch with one of the regions in the announcement, for example.
At that stage we'd maybe have to define additional security rules, say matching values from the announced route and some of the stuff from the security interchange. I know this is all very vague, but basically, the point is that we might have to define specific routing security, which'd best be done in a routing document and not here. For example, I'm not sure the discussion of compromised routers and (presumably) MACing keys is in the right place (all DTN agents are routers really, aren't they?) 11. Traffic padding. While it may be silly in many DTN contexts to want to do traffic padding, there will be contexts where it may make sense. For example, if a DTN bundle traverses the terrestrial Internet (which is highly likely for many DTNs), then there may be reasons to disguise the existence of real traffic. For example, if the "sensor" end of that DTN were of the presence-detecting kind and takes a lot of trouble (e.g. via spread-spectrum) to try to hide a presence-detection, then it'd be a pity to leak the detection to someone who only has to watch the SNMP counts on some bundle agent or router. As section 4.1 implies, being able to support traffic padding requires payload confidentiality but also requires confidentiality for some header fields (mainly the destination address and port equivalents). I believe we ought to make this possible, but I don't believe that we particularly need a fully worked out, standard way to actually do traffic padding (e.g. I don't think we need to specify behaviour for using delays for bundles just to disguise routing behaviour). If we can figure out how to provide basic support for this, then it could be appropriate to include it in an appendix; it certainly oughtn't be anywhere near mandatory. 12. I believe that section 4.2 reflects a slightly unusual use of terminology. In my mind at least, every bundle agent is making a network access control choice whenever it decides to forward or deliver a bundle.
So, of course I don't buy the security policy router special category. 13. Fragmentation is a pain, that's clear. Combining fragmentation with almost any other service causes grief (e.g. aside from security, CoS/QoS stuff has to get confusing). So why don't we define a don't-fragment bit in the header, just like in IP? That way, at least we can avoid the nasty cases, where network provisioning allows. 14. Fragmentation again. I see a note from some "disruption" folks which implies that maybe we've been thinking about only a subset of the types of disruption they're considering. In particular, they seem to envisage more but shorter outages. I think it's fair to say (at least in my case) that I've generally been thinking that if the link went down, then it'd be down for the remainder of the contact. So maybe we ought to try to figure out a scheme which nicely handles mid-bundle outages. This changes the requirement for digesting the payload from one where our ideal mechanism is "Verify MAC on first N bytes, for any N", to one where we're after a scheme which allows "Verify MAC on entire object minus some (probably contiguous) bytes". These are perhaps sufficiently different problems that different approaches should be taken. I don't know. 15. Section 4.4 points out the correct fact that successfully verifying a MAC means that either the claimed sender or else the verifier created the MAC. While it might be impossible (given the protocol context) to get these confused, it's also considered good practice to ensure at the crypto-mechanism level that this cannot occur. There are two ways to do this: either derive keys based on the direction of communication, or else ensure that the to-be-MACed data includes the sender-id. The former is the preferred way to achieve this, since the latter is more vulnerable to evolutionary changes to the protocol, or other re-uses of the MACing keys which don't identify the direction of communication.
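To illustrate the per-direction derivation, here's a minimal sketch (the function name, labels and HMAC-based KDF are just my own illustration, not a proposed mechanism):

```python
import hashlib
import hmac

def derive_direction_keys(master_secret: bytes, sender: bytes, receiver: bytes):
    """Derive a distinct MAC key for each direction of communication."""
    def kdf(label: bytes) -> bytes:
        # key the derivation with the shared master secret; the label
        # binds the derived key to one direction only
        return hmac.new(master_secret, b"dtn-mac|" + label, hashlib.sha256).digest()
    return kdf(sender + b"->" + receiver), kdf(receiver + b"->" + sender)

k_ab, k_ba = derive_direction_keys(b"shared-master", b"agentA", b"agentB")
assert k_ab != k_ba  # a MAC made by A can never verify as one made by B
```

With distinct keys per direction, a verifier's ability to create MACs under its own receive-key tells it nothing about the sender's key, which is the confusion we want to rule out.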
In this context, where we probably cannot mandate a single key derivation scheme for all DTNs, maybe the best we can do is to impose a requirement that all key distribution schemes derive different keys for each direction of communication. (If/when we specify key management schemes we should really ensure that they all have this property of using different keys for each purpose in each direction. This is done in TLS and IPSec and many other reputable protocols and has little or no overhead.) To give another example, and one that could easily happen, it would be potentially very bad to use the same symmetric key to authenticate both a BAH and a PSH. 15. The DoS discussion focuses on whether senders are legitimate or not. Generally the more interesting distinction is whether the originator is on-path or off-path. On-path DoS attacks are harder to make hard, whereas using random values in counters etc. is quite effective in reducing the probability of successful off-path attacks. At one stage it is also stated that replays are difficult to mount; well, that may be true or false, depending on the network in question, but the point is that once you can send in a single packet it's easy to arrange to send in a million, and replaying is your first target if there's a way for recipients to turn on authentication checks! 16. DoS and replay again. My main concern with the bundling protocol is to make it harder to mount a battery depletion attack via replays (or more generally via an adaptive attack). The problem is that many, many DTNs will have nodes which are vulnerable to battery depletion so we just have to do our best to prevent those nodes being flooded. If bundles can be inserted into the DTN at some other point(s) with a high probability that they'll eventually cause traffic (and not even direct traffic, it could be NACKs or equivalent) to be routed via the target node, then we've got a live DoS vulnerability which will often partition an entire network.
(Note that this is another strong argument to integrity check the entire payload even hop-by-hop, so that application layer cheating cannot result in the aimed-for misbehaviour.) 17. Application layer services. We seem to be agreed on the provision of data integrity, origin authentication and application layer anti-replay protection, all of which can be provided using the same mechanism, i.e. digital signature/MAC, most likely using a signature due to the ability to validate the signature at multiple locations. All good so far, but let's see the consequences. First, to be able to use MACing end-to-end (which is quite useful, even if end-to-middle MACing isn't so useful), you need a key distribution scheme. The examples of IPSec, Bluetooth and WEP show that failing to provide such a key distribution scheme limits the scope for deploying networks (ok, this can be argued since the proximate causes differ in each case, but I believe it's substantially true). That means that we need (at some point) to define one or more ways to distribute symmetric keys end-to-end. Now we could punt on this and only allow that to occur out of band (e.g. during node provisioning), but then we'd face the issue of key rollover which really has to be done in-band. If we do provide in-band key rollover, then we absolutely require (an equivalent to) an end-to-end confidentiality mechanism to be provided by the DTN, and not by the applications. Remember that all of this is to maintain end-to-end MACing to support data integrity, origin authentication and application layer anti-replay, so the lesson is: if you provide end-to-end MACing and want the network to scale, then you require an end-to-end data confidentiality mechanism (note, a mechanism, not necessarily a service for applications). Ok, someone who doesn't like this will say that this isn't needed to support digital signatures. There are, however, cases where an equivalent mechanism is required.
For example, not all nodes can generate key pairs, and when one of those nodes requires a new private key (required by good cryptographic practice) that must be provided to such a node using a confidentiality mechanism; if a private key must be moved from one node to another, say when node hardware is being upgraded, then again a confidentiality mechanism is required. If an IBC scheme were to be used, then again the decryptor would have to deal with the security server and may require a confidentiality mechanism. There are also some schemes which make use of symmetric keys in order to distribute new trusted public key information. Standard public key management protocols like CMP, CMC and XKMS all do include such confidentiality mechanisms. Certainly, this does not mean that every DTN node must include an implementation of a confidentiality mechanism, nor does it mean that the confidentiality mechanism must use the bundling protocol, nor the same routing, nor need the key distribution protocols necessarily be delay or disruption tolerant. However, it does require either that the secret keys required are exposed between the endpoints in question, or else that an end-to-end standard set of key management data structures is required, which provides a fairly generic confidentiality mechanism, at least for small, infrequently sent amounts of data. The first option was tried by WAP (using TLS and then WTLS) but was one of the main things which (it was claimed) caused reluctance amongst e.g. banks to deploy secure banking via WAP (it may be more to the point that network operators and banks have different business models, but nonetheless the claims were made). And traditionally, end-to-end proponents (most Internet developers) would argue strongly not to expose keying material in mid-network.
So, it appears that at least some, and arguably many, DTN nodes require an end-to-end confidentiality mechanism, though not necessarily an end-to-end confidentiality service for applications. The question to ask though is: given the above, is it really hard to extend this to provide key distribution for such an end-to-end confidentiality service? 18. Combining end-to-end integrity and confidentiality. The document currently allows for a flag indicating that the payload is encrypted. It also says that the PSH digest is calculated over the cleartext payload. This means that whenever the payload is encrypted, no middle-host can validate the PSH, e.g. none of the "security policy routers" get to do their job. There are good arguments for both ways of doing this (i.e. signing cleartext or signing ciphertext), but signing the cleartext, then encrypting and including the signature in clear is uncommon and has weaknesses. For example, if the cleartext is guessable, then the signature allows for checking of guesses. As it stands (i.e. if we reject my #17 above on an end-to-end confidentiality service), probably best is for the PSH to contain a signature over ciphertext and for data-origin-authentication to be provided in the same way as confidentiality. (Note that the PSH as reformulated here doesn't provide origin-authentication for the cleartext payload, since it's possible that the signer has stolen the ciphertext if the key management scheme allows this.) 19. Keys expire in any reasonable setting. Bundles expire too. Therefore one of the things we have to do is define how these expiries relate. For example: is it ok for a bundle expiry to succeed the expiry of the key with which you're about to validate the bundle, so long as the key was valid when the bundle was protected? (SF's answer: probably.) There are a couple of these timing issues that are fairly easy, but need addressing.
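For concreteness, here's a sketch (entirely my own formulation, and just one possible policy) of the sort of rule I have in mind for #19:

```python
from dataclasses import dataclass

@dataclass
class KeyLifetime:
    valid_from: int  # epoch seconds
    valid_to: int

def bundle_validatable(key: KeyLifetime, protected_at: int,
                       bundle_expiry: int, now: int) -> bool:
    """Accept iff the key was valid when the bundle was protected and the
    bundle itself hasn't yet expired - even if the key has since expired
    (the "SF's answer: probably" position)."""
    key_valid_at_protection = key.valid_from <= protected_at <= key.valid_to
    return key_valid_at_protection and now <= bundle_expiry

k = KeyLifetime(valid_from=100, valid_to=200)
assert bundle_validatable(k, protected_at=150, bundle_expiry=500, now=300)      # key expired since; ok
assert not bundle_validatable(k, protected_at=250, bundle_expiry=500, now=300)  # key wasn't valid then
```

The same kind of rule would need stating for each place a key is used, which is really the point of #19.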
Basically, everywhere a key is used (BAH, PSH and application layer confid or whatever), you have to define how the key lifecycle and bundle expiry match up. 20. The security policy router as sinkhole threat is very serious. If I could arrange (or know) that fragmentation will occur, as currently formulated this probably makes it pretty easy to DoS such a router by simply binning the last fragment from all bundles that pass you by. (And the "you" here may not be a DTN router, but some intra-regional node.) I'm not sure of the best approach to take here, since it'll depend on the network, but I believe that if we abandon the idea of "special" security policy routers and allow every router to have a policy enforcement point, then we may make the problem more tractable. OTOH, this is probably just a hard problem and hard cases will always exist regardless of how a particular network looks (unless you totally turn off fragmentation). Maybe the right thing (TM) here is to go back to the requirement and momentarily forget about this solution. I believe that fragmentation and ingress controls were largely motivated by earthstations, where the Earth's rotation means you have to use >1 station to upload some data to a spacecraft, but you really want strict ingress controls on the long haul links involved. If you take the PEP model, then that requirement could be met by having a single PDP for all of the earthstations, which is stateful, and where each of the PEPs contacts the PDP to check if sending this fragment is ok or not. I'm not entirely sure this is the way to go, but it may well offer benefits compared to the current "get all the fragments back together" approach. 21. Combining algorithms. We support either signatures or MACs. Could it happen that an originator mightn't know which to use for a certain destination? Usually there are a few corner cases where it's hard to code this up correctly. In that case, one might want the option to include both a signature and a MAC.
Personally though I wouldn't do that; it's probably better to either stick to one or else allow any number of independent authenticators (e.g. including two MACs with successive keys can be useful during rollover periods, and then a signature for middle boxes to check en-route). For now, I'd be ok with just a single authenticator in a PSH and this can probably be ignored for BAH (though if the delay is significant enough, the key rollover problem might hit MACs occasionally). 22. Using region-native security instead of BAH. This needs to be done a bit differently. Firstly, to be useful, the interface would have to expose something about how the region secured the bundle - there's a big difference between WEP and IPSec! There may also be a need to expose some naming information too, e.g. if regional security used SSL, then it may help to know the name or other stuff from the client certificate if one was used. Secondly, 10.2 states that either the bundle has a good BAH, or else it MUST have been secured by the region, or else it MUST be discarded. This is wrong, even with a "NULL security is ok for this region" modifier. Basically, BAH should be optional, as would regional security, and the router's PEP has to decide if the presence or absence of these is required or not in each case. Finally, IPSec VPNs unfortunately do not offer this information, esp. if the convergence layer isn't in the kernel, so there's a big class of region security that can be used but cannot really be asserted to have been used at this layer. (The fix here is to push up the IP addresses & ports and for the PEP to have a synched-up view of the IPSec rules. Ugly, but there ya go.) 23. As an input for section 11: Some DTNs will have a bunch of really challenged nodes at their edges (say sensors) and it's only when you get off that network that the next DTN router ("Alice") will have sufficient CPU etc. to be able to add a PSH or BAH.
I'd like this to be allowed, but currently the PSH case seems to be prevented. There was some mention of treating this as a VPN tunnel in Minneapolis, i.e. running bundling in bundling and then having Alice add a PSH to the outer layer of bundling, but that'd mean there needs to be another router ("Bob") who's expecting Alice to do this and who's going to act as the other end of the tunnel. I don't think this works! What I'd like is for Alice to be able to add a PSH and for the middleboxes and final destination to be able to decide (via their PEPs) whether that's ok or not for this bundle. There may also be cases where PSH validation would similarly be done at a middlebox (say if revocation support is turned on) and the final destination is in the sensor network. This might also be a useful mechanism to support some security for sort-of broadcast bundles, where we separately secure the bundle up to the equivalent of the mail-list exploder and thereafter. Itsy-bitsy things: ----------------- 1. LTP and Bundling. I think this document is just addressing bundling. It ought to be clear that LTP is out of scope here. Having said that, at some point we ought to consider common features, esp. wrt key distribution - I see no reason for that to differ between LTP and bundling - the inclusion of some sort of "key purpose" field should be enough to allow handling both. 2. p4, mainly the last para. X.509 certificates already expire; I think you're saying that short-lived certificates without revocation support may be a better option than long-lived certificates + revocation. 3. s1.1, para1. I don't think the weakest link argument is that good for networks these days, given the way we use firewalls/VPN-gateways and other middleboxes. Nor is that text really needed - I'd rephrase the para. to say that DTN security depends on region security which depends on intra-region security which depends on host security... 4. s1.3, end. Is secure-DTN-multicast a bridge-too-far? Seems v.
ambitious for this round of work, though perhaps you need it for DARPA purposes. 5. s1.4, if this is a standards track document, then this section ought to be using MUST/MAY/SHOULD, but if it's a snapshot, must etc. are fine. 6. s1.4, definition of "source": is this deliberately silent as to the distinction between the "proper" source and a potential spoofer? If not, then "source" needs to be re-defined to be the originator of the bundle when first transmitted, or similar. I think I'd prefer that definition. Should we also give a name to an attacking network entity? 7. s1.4: "refers to [3] and [8]" I prefer not to use this style myself, rather "refers to the bundle protocol specification [3] and the (still TBD) bundle protocol security specification [8]." 8. s1.4, last para: This seems to be somewhat circular, since we're motivating security here by referring to the PDUs which are motivated by this. Maybe this text really belongs in the TBD security protocol spec. 9. s2, 1st sentence: I'm a bit uneasy with language like this that implies that we "include the capability to...prevent access by unauthorized applications" - basically, since this is, in general, impossible! Restating this and others along the lines of "include services and/or mechanisms which...aid in preventing or detecting access by unauthorized applications" would be better IMO. 10. s3: why are we providing an overview of services that are not provided? 11. s3.1: Arguably, there is no such security service as "protection against DoS" - the argument being that you cannot prevent or really protect against all DoSes. Instead, we should aim to provide tools which can mitigate attempted DoS. So item 4 in the list at the start of 3.1 isn't really right. 12. s3.1.1, 3rd list item: bundle integrity allows detection of modifications which are not supposed to occur, not those which are. 13. s3.2: The name of this section is odd. Sticking to the "goals" too closely perhaps. 14.
s3.3, 1st sentence: Regardless of what we end up doing, *some* DTNs might well include a security service for replay detection, so the sentence, as currently written, is false. 15. Crypto key terminology. The terms "public" and "private" should IMO only be used wrt asymmetric algorithms like RSA; for symmetric algorithms like AES, using "secret" is better. This is pretty much the normal usage. 15. Bundle integrity. The term as currently used is somewhat misleading. I thought that it did include the payload even for the BAH. However, I now realise it doesn't. If we keep this kind of protection, then we ought not refer to it as bundle integrity but rather as bundle header integrity. 16. Section 4.3 says: "Therefore, if the private keys of all of the nodes along the path from source to destination have not been compromised, then the bundle (except its payload) sent along this path from source to destination can be verified by the destination node not to have been modified while in-transit on the path from source to destination." This is false. The missing clause is something like "and if all the bundle agents are operating correctly, and without cheating". Note that software integrity is not the only issue, since sometimes people do include malicious code in the original software, either accidentally, or perhaps more commonly, intentionally (e.g. spyware of the various commercial or governmental types:-). 17. Section 4.3, last para: This restates the fallacy from nit #16. 18. Section 4.4.1 and elsewhere. The use of the phrase "decrypt the hash value" isn't quite correct; for example, with HMAC the key is prepended to the to-be-MACed data, and CBC-residue schemes use the key for each cipher-block-sized chunk of to-be-MACed data. In both cases, there is no stage of calculation where there's an unkeyed digest. This isn't a problem for someone used to MACs, signatures etc., but could mislead a naive reader. Better to talk of MACing/signing and MAC/signature checking.
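To show what I mean, a small sketch using Python's stdlib hmac module - note that checking is recompute-and-compare, and at no point is anything decrypted:

```python
import hashlib
import hmac

def make_mac(key: bytes, data: bytes) -> bytes:
    # HMAC keys the digest computation itself - there is never an
    # unkeyed hash value lying around to be "decrypted"
    return hmac.new(key, data, hashlib.sha256).digest()

def check_mac(key: bytes, data: bytes, received_mac: bytes) -> bool:
    # checking recomputes the MAC and compares; nothing is decrypted
    return hmac.compare_digest(make_mac(key, data), received_mac)

tag = make_mac(b"secret-key", b"immutable bundle header bytes")
assert check_mac(b"secret-key", b"immutable bundle header bytes", tag)
assert not check_mac(b"secret-key", b"modified header bytes", tag)
```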
If you simply explain the first time you say "sign" that you include both signatures and MACs then that's ok (it's also done in the xmldsig specifications for example). 19. Section 4.4, "If private key cryptography...", this is an example of where you really ought to say "If secret key cryptography..." 20. Section 4.5 correctly says that you cannot prevent DoS, but goes wrong in some of the discussion. For example, stating that: "Without valid, current keys, a secure network can be brought to a standstill" is misguided. In actual fact, the use of cryptography enables new DoSes (flipping one bit kills the entire protected block of bytes). I think the right thing to do is just state that DoS cannot be prevented, but can be ameliorated, and certainly that protocols can be designed to be more or less DoS resistant. There can be no reason to argue that a DTN should be a priori less DoS resistant. (Ok, I can't resist: "Despite how futile..." is probably worse than even my English:-) 21. Section 4.5.3, 5th list element, says that signatures are more easily replayed, which is true. However, this could be prevented by including (a hash of) the recipient tuple within the signature. (Though note that this isn't standard PKCS#1 padding, it has been done in the past.) Basically, most signatures (esp RSA) involve values significantly larger than the hash-OID+hash-value which PKCS#1 block type 1 uses and therefore contain a number of random bytes (from memory about 90+ bytes of random data). There's plenty of room there to include an additional digest or two! 22. Section 4.5.3, 6th list element. It's not really good to argue that DTNs are both more replay resistant and harder-to-protect due to the extended lifetimes of bundles. In particular, each replayed bundle has a magnified effect since it'll perhaps persist in the network much longer than in non-DTN cases. 23. Section 4.5.3, 6th list element. There are purely local ways in which a node can protect itself (and not the DTN in general!)
against replays. Simply keep a replay cache of the digests of the inbound bytes, perhaps with additional counts and timing information (though that's not usually done). The node can then tailor its response based on the replay status of the incoming message. (Note that the bytes input to digesting can be everything received, or else some sensible subset, or even multiple subsets so that different partial replays can be spotted and handled as such). The cost of doing this is memory and CPU for digesting, which is normally not that bad. The hard part here is knowing when to flush the replay cache entries. There are a range of options, e.g., simply bounding the cache by size, or else more complex schemes which flush entries based on age and the number of replay occurrences, or whatever. (Since this is not an interoperability issue, programmers can more or less do what they like here.) Finally, if a number of such nodes can share information about their individual replay caches then the network as a whole can be given relatively good protection. Again this can be done without impacting the basic protocol, except that the rules for what is and is not an unexpected replay need to be figured out. (This one's probably not really a nit is it;-) 24. Section 4.5.3.1.1. Each bundle/fragment in the network is uniquely identifiable - by running its bytes through a digest. See #23 above. 25. Section 4.5.3.1.2. This only works if the same keying material is in place allowing the BAH to be validated in both destinations. While it might happen for signatures, I'd expect MACing to be far more common for BAH. There's no good reason to have the same symmetric key usable for validation at two different routers. 25.
Section 5, 1st sentence: As before (#16) this also requires that the DTN nodes on the path don't cheat. 26. Section 5.2.1. The description here is a little wrong. The main problem with originator key compromise is that you don't know where copies of the key are now available, and therefore you don't get origin authentication. Modifying the bundle in-flight is also possible, but less likely. 27. Section 5.2.2. I think it's not really necessary to say things like this. 28. Section 5.3.1. Again, too many words. This could all have been said in section 5.2. 29. Section 5.4. This has all been said already! 30. Section 6. Again I think this has all been said already. 31. Page 40 says: "A source may use the Source Routing Header to ensure that a given bundle, and therefore all fragments that may result from that bundle, will pass through one or more specified security policy routers en route to its destination." The problem with this is that the "will" is really a "should", since the header is asking for something and not enforcing it. So if there's a bad node somewhere on the (DTN or intra-regional) path, it'll still cheat, maybe by deleting both this header and the BAH, and it may still be able to arrange a DoS on the "security policy router." 32. Page 40, last para: what's this doing in a section about security policy routers? 33. Section 7. This was probably changed after Minneapolis. I'm not sure if it's worthwhile trying to keep it up-to-date or not. Maybe it'd be ok to move the 2nd para to an earlier section and then delete the section entirely. 34. Section 10.1. The list of algorithms isn't quite right. Using HMAC means only having one algorithm for #2 and #3, and when using signatures, given that the pkcs#1 padding scheme includes the hashing OID in the signature block, it's usual to call e.g. rsaWithSha1 a signature algorithm, rather than saying we're using sha-1 and we're using rsa.
Main thing though is that with MACing constructs you usually can't separate #2 from #3 (nor #4 from #5).