[OPSEC] Roman Danyliw's No Objection on draft-ietf-opsec-indicators-of-compromise-03: (with COMMENT)

Roman Danyliw via Datatracker <noreply@ietf.org> Thu, 19 January 2023 03:46 UTC

Return-Path: <noreply@ietf.org>
X-Original-To: opsec@ietf.org
Delivered-To: opsec@ietfa.amsl.com
Received: from ietfa.amsl.com (localhost [IPv6:::1]) by ietfa.amsl.com (Postfix) with ESMTP id 4A5A6C1522D5; Wed, 18 Jan 2023 19:46:03 -0800 (PST)
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
From: Roman Danyliw via Datatracker <noreply@ietf.org>
To: The IESG <iesg@ietf.org>
Cc: draft-ietf-opsec-indicators-of-compromise@ietf.org, opsec-chairs@ietf.org, opsec@ietf.org, furry13@gmail.com, furry13@gmail.com
X-Test-IDTracker: no
X-IETF-IDTracker: 9.5.0
Auto-Submitted: auto-generated
Precedence: bulk
Reply-To: Roman Danyliw <rdd@cert.org>
Message-ID: <167409996329.55748.13265375207386339799@ietfa.amsl.com>
Date: Wed, 18 Jan 2023 19:46:03 -0800
Archived-At: <https://mailarchive.ietf.org/arch/msg/opsec/XFKxT-cOZ0oGg_S-r8NJAeBrrKo>
Subject: [OPSEC] Roman Danyliw's No Objection on draft-ietf-opsec-indicators-of-compromise-03: (with COMMENT)
X-BeenThere: opsec@ietf.org
X-Mailman-Version: 2.1.39
List-Id: opsec wg mailing list <opsec.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/opsec>, <mailto:opsec-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/opsec/>
List-Post: <mailto:opsec@ietf.org>
List-Help: <mailto:opsec-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/opsec>, <mailto:opsec-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Jan 2023 03:46:03 -0000

Roman Danyliw has entered the following ballot position for
draft-ietf-opsec-indicators-of-compromise-03: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)


Please refer to https://www.ietf.org/about/groups/iesg/statements/handling-ballot-positions/ 
for more information about how to handle DISCUSS and COMMENT positions.


The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-opsec-indicators-of-compromise/



----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

Thank you to Kathleen Moriarty for the SECDIR review.

** Abstract
.  It
   highlights the need for IoCs to be detectable in implementations of
   Internet protocols, tools, and technologies - both for the IoCs'
   initial discovery and their use in detection - and provides a
   foundation for new approaches to operational challenges in network
   security.

What “new approaches” are being suggested?  It wasn't clear from the body of
the text.

** Section 1.
   intrusion set (a
   collection of indicators for a specific attack)

This definition is not consistent with the use of the term as I know it.  In my
experience an intrusion set is a set of activity attributed to an actor.  It may
entail multiple campaigns by a threat actor, and consist of many attacks, TTPs
and intrusions.  APT33 is an example of an intrusion set.

** Section 1.  Editorial. s/amount intelligence practitioners/cyber
intelligence practitioners/

** Section 2.  Editorial.
   used in malware strains to
   generate domain names periodically.  Adversaries may use DGAs to
   dynamically identify a destination for C2 traffic, rather than
   relying on a list of static IP addresses or domains that can be
   blocked more easily.

-- Isn’t the key idea that these domain names are algorithmically generated on
a periodic basis?
-- Don’t adversaries compute, rather than identify, the C2 destination?
-- Be clearer on the value proposition of dynamic generation vs hard-coded IPs.

NEW
used in malware strains to periodically generate domain names algorithmically.
This malware uses a DGA to compute a destination for C2 traffic, rather than
relying on a pre-assigned list of static IP addresses or domains that can be
blocked more easily if extracted from the malware.
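To illustrate the value proposition of the suggested NEW text, here is a minimal sketch of a date-seeded DGA. The algorithm, domain count, and ".example" TLD are hypothetical, not drawn from the draft or any real malware family; the point is only that both the malware and its operator can derive the same rendezvous domains independently, so no static list exists to extract and block.

```python
import hashlib
from datetime import date

def generate_domains(seed_date, count=5, tld=".example"):
    """Derive a deterministic list of candidate C2 domains from a date.

    Malware and operator run this independently, so no static domain
    list needs to be embedded in (or extracted from) the binary.
    """
    domains = []
    for i in range(count):
        material = f"{seed_date.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

# The same date always yields the same candidate domains.
today = generate_domains(date(2023, 1, 18))
assert today == generate_domains(date(2023, 1, 18))
# A different date yields a different set, rotating the rendezvous points.
assert today != generate_domains(date(2023, 1, 19))
```

Defenders who recover the algorithm can pre-compute future domains as IoCs, which is why DGA reverse engineering feeds blocklists.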

** Section 2.  Kill chains need not be restricted to the seven phases defined
in the original Lockheed model.

** Section 3.2.1  Editorial.
   IoCs are often discovered initially through manual investigation or
   automated analysis.

Aren't manual and automated the only two options?  Perhaps s/IoCs are often
discovered/IoCs are discovered/

** Section 3.2.1.
   They can be discovered in a range of sources,
   including in networks and at endpoints

What is “in networks” in this context?  Does it mean by monitoring the network?

** Section 3.2.1.
   Identifying a particular protocol run related to an
   attack
What is a “protocol run”? Is that a given session of a given protocol?

** Section 3.2.1

   Identifying a particular protocol run related to an
   attack is of limited benefit if indicators cannot be extracted and
   subsequently associated with a later related run of the same, or a
   different, protocol.

-- Is this text assuming that the indicators to identify the flow need to come
from the network?  Couldn’t one have reverse engineered a malware sample, with
that being the basis of the IoC to watch for?

-- Wouldn’t there be some residual value in identifying known attack traffic as
a one-off, if nothing more than to timestamp the activity of the threat actor?

** Section 3.2.3.  In addition to ISACs, the term ISAO is also used (at least
in the US).
OLD
   often
   dubbed Information Sharing and Analysis Centres (ISACs)
NEW
   often
   dubbed Information Sharing and Analysis Centres (ISACs) or Information
   Sharing and Analysis Organizations (ISAOs)

** Section 3.2.3.  s/intel feeds/intelligence feeds/

** Section 3.2.3. s/international Computer Emergency Response Teams
(CERTs)/internal Computer Security Incident Response Teams (CSIRTs)/

** Section 3.2.3
   Whomever
   they are, sharers commonly indicate the extent to which receivers may
   further distribute IoCs using the Traffic Light Protocol [TLP].

Perhaps weaken the claim that TLP is the common way to pass redistribution
guidance, unless there is a strong citation to support it.

** Section 3.2.4
   For IoCs to provide defence-in-depth (see Section 6.1), which is one
   of their key strengths, and so cope with different points of failure,
   they should be deployed in controls monitoring networks and endpoints
   through solutions that have sufficient privilege to act on them.

I’m having trouble unpacking this sentence.

-- Even with the text in Section 6.1, I don’t follow how IoCs provide defense
in depth.  It’s the underlying technology/controls performing mitigation that
provide this defense.

-- What are “controls monitoring networks”?

-- Could more be said about the referenced “solutions”?

** Section 3.2.4
   While IoCs may be manually assessed after
   discovery or receipt, significant advantage may be gained by
   automatically ingesting, processing, assessing, and deploying IoCs
   from logs or intel feeds to the appropriate security controls.

True in certain cases.  Section 3.2.2. appropriately warned that IoCs are of
different quality and that one might need to ascribe different confidence to
them.  Recommend propagating or citing that caution.
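To make the suggested caution concrete, here is a sketch of automatic IoC ingestion that propagates the Section 3.2.2 point: each indicator carries a confidence score, and only those above a threshold are deployed automatically, while the rest are queued for manual review. The feed format, field names, and threshold are hypothetical, not taken from the draft.

```python
# Hypothetical intelligence-feed entries; "confidence" reflects the
# Section 3.2.2 observation that IoCs vary in quality.
FEED = [
    {"type": "domain", "value": "c2.example.net", "confidence": 0.95},
    {"type": "ip", "value": "192.0.2.10", "confidence": 0.40},
]

AUTO_DEPLOY_THRESHOLD = 0.8  # assumed policy, not from the draft

def triage(feed, threshold=AUTO_DEPLOY_THRESHOLD):
    """Split a feed into auto-deployable IoCs and ones needing review."""
    deploy, review = [], []
    for ioc in feed:
        (deploy if ioc["confidence"] >= threshold else review).append(ioc)
    return deploy, review

deploy, review = triage(FEED)
# High-confidence domain deploys automatically; low-confidence IP is held.
assert [i["value"] for i in deploy] == ["c2.example.net"]
assert [i["value"] for i in review] == ["192.0.2.10"]
```

The design choice is simply that automation and caution are not mutually exclusive: the confidence gate preserves the speed advantage for trusted indicators without blindly deploying everything.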

** Section 3.2.4.

   IoCs can be particularly effective when deployed in security controls
   with the broadest impact.

-- Could this principle be further explained?  What I got from the subsequent
text was that a managed configuration by a vendor (instead of the end-user) is
particularly effective.

-- It would be useful to explicitly say the obvious which is that “IoC can be
particularly effective _at mitigating malicious activity_”

** Section 3.2.5.

   Security controls with deployed IoCs monitor their relevant control
   space and trigger a generic or specific reaction upon detection of
   the IoC in monitored logs.

Is it just “logs” being monitored by security controls?  Couldn’t a network
tap/interface be used too?

** Section 4.1.1.  Editorial. This section has significant similarity with
Section 6.1.  Consider if this related material can be integrated or
streamlined.

** Section 4.1.1.  Editorial.

   Anti-Virus (AV) and Endpoint Detection and
   Response (EDR) products deploy IoCs via catalogues or libraries to
   all supported client endpoints

Is it “all supported client endpoints” or “client endpoints”?  What does “all”
add?

** Section 4.1.1.

   Some types of IoC may be present
   across all those controls while others may be deployed only in
   certain layers.

What is a layer?  Is that layer in a protocol stack or a "defense in depth"
layer?

** Section 4.1.1.  I don’t understand how the two examples in this section
illuminate the thesis of the opening paragraph that almost all modern cyber
defense tools rely on indicators.

** Section 4.1.1.  What is “estate-wide patching”?  Is that the same as
“enterprise-wide”?

** Section 4.1.2.  With respect, the thesis of this section is rather
simplistic and fails to capture the complexity and expertise required to field
IoCs.  No argument that a small manufacturer may be a target.  However, there
is a degree of expertise and time required to be able to load and curate these
IoCs.  In particular, I am challenged by the following sentence, “IoCs are
inexpensive, scalable, and easy to deploy, making their use particularly
beneficial for smaller entities ...”  My experience is that small businesses
struggle even with these activities.

IMO, the thesis (mentioned later in the text) should be that the development of
IoCs can be left to better-resourced organizations.  Organizations without the
ability to do so could still benefit from the shared threat intelligence.

Additionally:
   One reason for this is that use of IoCs does not require the same
   intensive training as needed for more subjective controls, such as
   those based on manual analysis of machine learning events which
   require further manual analysis to verify if malicious.

-- What are “subjective controls”?  Is the provided example of a “machine
learning event” the output of such a system?

** Section 4.1.4.  This section has high overlap with Section 3.2.3.

-- Can they be streamlined?

-- Can the standards for sharing indicators be made consistent?

-- (author conflict of interest) Consider if you want to list IETF’s own
indicator sharing format, RFC7970/RFC8727

** Section 4.1.4

   Quick and easy sharing of IoCs gives blanket coverage for
   organisations and allows widespread mitigation in a timely fashion -
   they can be shared with systems administrators, from small to large
   organisations and from large teams to single individuals, allowing
   them all to implement defences on their networks.

Isn’t this text conveying the same idea as was said in the section right before
it (Section 4.1.3)?

** Section 4.1.5  Isn’t the thesis of automatic deployment of indicators
already stated in Section 3.2.4?

** Section 4.1.5

   While it is still necessary to invest effort both to enable efficient
   IoC deployment, and to eliminate false positives when widely
   deploying IoCs, the cost and effort involved can be far smaller than
   the work entailed in reliably manually updating all endpoint and
   network devices.

What is the false positive being referenced here?  Is it false positive matches
against the IoC?  If so, how is that related to manually updated endpoints?

** Section 4.1.7.  No disagreement on the need for context.  However, I’m
confused about how this text is an “opportunity” and the new material it is
adding.  In my experience with the classes of organizations named as
distributing IoCs in Section 3.2.3. (i.e., ISACs, ISAO, CSIRTS, national cyber
centers), context is “table stakes” for sharing.  How does a receiving party
know how to act on the IoC otherwise?

** Section 5.1.1

   Malicious IP addresses and domain names can also be
   changed between campaigns, but this happens less frequently due to
   the greater pain of managing infrastructure compared to altering
   files, and so IP addresses and domain names provide a less fragile
   detection capability.

Please soften this claim or cite a reference.  How often infrastructure
changes between campaigns can vary widely between threat actors.

** Section 5.1.2
   To be used in attack defence, IoCs must first be discovered through
   proactive hunting or reactive investigation.

Couldn’t they also be shared with an organization?

** Section 5.3.

   Self-censoring by sharers appears more prevalent and more extensive
   when sharing IoCs into groups with more members, into groups with a
   broader range of perceived member expertise (particularly the further
   the lower bound extends below the sharer's perceived own expertise),
   and into groups that do not maintain strong intermember trust.

Is there a citable basis for these assertions?

** Section 5.3.

   Research
   opportunities exist to determine how IoC sharing groups' requirements
   for trust and members' interaction strategies vary and whether
   sharing can be optimised or incentivised, such as by using game
   theoretic approaches.

IMO, this seems asymmetric to call out.  In almost every section there would be
the opportunity for research.

** Section 5.4.

   The adoption of automation can also enable faster and easier
   correlation of IoC detections across log sources, time, and space.

-- Does “log sources” also mean network monitoring?
-- What is “space” in this context?  Is it the same part of the network?
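The questions above suggest the quoted draft text would benefit from a concrete picture of correlation. Here is a sketch that groups detections of the same IoC across sources and time; the record layout, sources, and ten-minute window are hypothetical illustrations, with "source" deliberately covering both log files and network monitoring points.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical detection records: (timestamp, source, ioc).
detections = [
    (datetime(2023, 1, 18, 9, 0), "dns-logs", "bad.example.com"),
    (datetime(2023, 1, 18, 9, 2), "proxy-logs", "bad.example.com"),
    (datetime(2023, 1, 18, 14, 0), "edr-host-17", "bad.example.com"),
]

def correlate(events, window=timedelta(minutes=10)):
    """Cluster detections of the same IoC that occur within `window`."""
    by_ioc = defaultdict(list)
    for ts, source, ioc in sorted(events):
        by_ioc[ioc].append((ts, source))
    clusters = []
    for ioc, hits in by_ioc.items():
        cluster = [hits[0]]
        for ts, source in hits[1:]:
            if ts - cluster[-1][0] <= window:
                cluster.append((ts, source))
            else:
                clusters.append((ioc, cluster))
                cluster = [(ts, source)]
        clusters.append((ioc, cluster))
    return clusters

clusters = correlate(detections)
# The 09:00 DNS and 09:02 proxy hits correlate; the 14:00 EDR hit stands alone.
assert len(clusters) == 2
```

This is the sense in which automation helps: the same IoC seen from a DNS log and a proxy log minutes apart is one incident, not two, and that inference is tedious to make by hand.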

** Section 6.1.  The new “best practice” in this section isn’t clear. 
“Defense-in-Depth” has been previously mentioned.

** Section 6.1.  Editorial.

   If an attack happens, then you hope an endpoint solution will pick it
   up.

Consider less colloquial language.

** Section 6.1.  It isn’t clear to me how the example of NCSC’s PDNS service
demonstrated defense in depth.  What I read into it was a successful, managed
security offering.  Where was the “depth”?

** Section 6.1.

  but if the IoC is on PDNS, a consistent defence is
   maintained. This offers protection, regardless of whether the
   context is a BYOD environment

In a BYOD environment, why is consistent defense ensured?  There is no
assurance that the device will be using PDNS.

** Section 6.2.  It seems odd to nest the Security Considerations under best
practices, especially since it is recommending speculative, not-yet-performed
research.  Additionally, per the “privacy-preserving” research, the privacy
concerns noted in Section 5.3 don’t seem clear enough to action.