Re: [MMUSIC] Clarification questions for rfc8838 ?

Justin Uberti <juberti@google.com> Fri, 09 April 2021 02:07 UTC

From: Justin Uberti <juberti@google.com>
Date: Thu, 08 Apr 2021 19:07:09 -0700
Message-ID: <CAOJ7v-0cVQbhJHO7ZV+gvbPCtbO-1d6C9omyNtfSMhVeDuExKA@mail.gmail.com>
To: Michael Jones <Michael.Jones@genesys.com>
Cc: "mmusic@ietf.org" <mmusic@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/mmusic/N4Wh6FbaGptEDWc6-2bTUDgkHkc>
Subject: Re: [MMUSIC] Clarification questions for rfc8838 ?

There are a lot of questions here. You might want to break this up into a
few threads for easier discussion.

Some answers to some of the initial questions inline.

On Wed, Apr 7, 2021 at 3:11 PM Michael Jones <Michael.Jones@genesys.com>
wrote:

> Quote from https://tools.ietf.org/html/rfc8838#section-9
>
>    After Trickle ICE agents have conveyed initial ICE descriptions and
>    initial ICE responses, they will most likely continue gathering new
>    local candidates as STUN, TURN, and other non-host candidate
>    gathering mechanisms begin to yield results.  Whenever an agent
>    discovers such a new candidate, it will compute its priority, type,
>    foundation, and component ID according to regular ICE procedures.
>
> Are peer-reflexive candidates discovered through connectivity checks with
> the remote party included in this gathering of “other non-host candidate
> gathering mechanisms”, and supposed to be communicated via trickle?
>

Peer-reflexive (prflx) candidates don't need to be communicated via trickle,
as the remote side is already aware of them.
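
Roughly, the trickling decision reduces to something like this (a
non-normative sketch, not from the RFC; the field names are made up):

    # Decide whether a newly discovered local candidate should be trickled.
    # A peer-reflexive candidate was learned from the remote party's own
    # connectivity check, so the remote side already knows that transport
    # address and trickling it would convey nothing new.
    def should_trickle(candidate) -> bool:
        if candidate.type == "prflx":
            return False   # remote side discovered this address itself
        return True        # host/srflx/relay candidates get trickled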

>
>
>
>
> Quote from https://tools.ietf.org/html/rfc8838#section-9
>
>    The new candidate is then checked for redundancy against the existing
>    list of local candidates.  If its transport address and base match
>    those of an existing candidate, it will be considered redundant and
>    will be ignored.  This would often happen for server-reflexive
>    candidates that match the host addresses they were obtained from
>    (e.g., when the latter are public IPv4 addresses).  Contrary to
>    regular ICE, Trickle ICE agents will consider the new candidate
>    redundant regardless of its priority.
>
> Ignored in what aspect? Ignored entirely in every way? Or only ignored for
> purposes of communicating the discovered candidate to the remote party, and
> then otherwise handled according to the procedures in RFC 8445?
>

Discarded. There's already a local candidate with the same address and
base, so the 8445 handling would be identical.
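
In code terms, the Section 9 check amounts to something like this (sketch
only; "addr" and "base" are invented field names):

    # A new local candidate is redundant if both its transport address and
    # its base match those of an existing local candidate, regardless of
    # priority. A redundant candidate is simply dropped: it is never
    # trickled and never paired.
    def is_redundant(new_cand, local_candidates) -> bool:
        return any(new_cand.addr == c.addr and new_cand.base == c.base
                   for c in local_candidates)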


>
>
> It’s possible for a local agent to discover a local peer-reflexive
> candidate through connectivity checks prior to STUN server harvesting
> completing. In that situation, should the peer-reflexive candidate be kept,
> and the server-reflexive candidate discarded? The peer-reflexive candidate
> *does* have a higher priority, if the recommended type preferences from
> https://tools.ietf.org/html/rfc8445#section-5.1.2.2 are used.
>

That is a corner case, but yes, I agree the srflx candidate should be
discarded here.
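
Concretely, plugging the recommended type preferences (prflx 110, srflx 100)
into the RFC 8445 Section 5.1.2.1 formula, the peer-reflexive candidate
always sorts higher for the same local preference and component:

    # RFC 8445 Section 5.1.2.1:
    #   priority = (2^24)*(type pref) + (2^8)*(local pref) + (256 - component ID)
    def candidate_priority(type_pref: int, local_pref: int, component_id: int) -> int:
        return (type_pref << 24) + (local_pref << 8) + (256 - component_id)

    prflx = candidate_priority(110, 65535, 1)
    srflx = candidate_priority(100, 65535, 1)
    assert prflx > srflx   # so it's the srflx candidate that gets discarded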

>
>
>
>
> Quote from https://tools.ietf.org/html/rfc8838#section-9
>
>    When candidates are trickled, the using protocol MUST deliver each
>    candidate (and any end-of-candidates indication as described in
>    Section 13 <https://tools.ietf.org/html/rfc8838#section-13>) to the
>    receiving Trickle ICE implementation exactly once and in the same
>    order it was conveyed.  If the using protocol provides any candidate
>    retransmissions, they need to be hidden from the ICE implementation.
>
>
>
> Why must candidates be delivered in the order conveyed and exactly once?
>
>
>
> Connectivity checks are not performed based on the order that candidates
> are received unless no higher priority candidate pairs are available to
> perform checks against.
>
>
>
> Redundant candidates, and redundant candidate pairs, are already pruned on
> both sides of the session regardless of whether the candidates are exact
> duplicates or are merely redundant.
>
>
>
> I can’t find any justification for this in RFC 8838, and I can’t think of
> any way for delivery of duplicate or out-of-order candidates to break an
> implementation if it’s following the RFC 8445 rules for handling redundant
> or duplicate candidate pairs.
>

Well, you certainly wouldn't want to get an end-of-candidates indication
prior to an actual candidate, right? Yes, this might be overly strict when
just looking at individual candidates, but if your messaging transport does
not preserve order, you're going to have many subtle problems.
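
As a toy illustration (not from the spec) of what goes wrong when the
transport can reorder:

    # If the signaling layer can reorder messages, an end-of-candidates
    # indication may overtake a candidate; the receiver then believes
    # gathering is complete, and the late candidate is effectively lost.
    class TrickleReceiver:
        def __init__(self):
            self.remote_candidates = []
            self.done = False

        def on_candidate(self, cand):
            if self.done:
                # Only reachable with a reordering or duplicating transport.
                raise RuntimeError("candidate arrived after end-of-candidates")
            self.remote_candidates.append(cand)

        def on_end_of_candidates(self):
            self.done = True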


>
>
>
>
> Quote from https://tools.ietf.org/html/rfc8838#section-10
>
>    As a Trickle ICE agent gathers local candidates, it needs to form
>    candidate pairs; this works as described in the ICE specification
>    [RFC8445 <https://tools.ietf.org/html/rfc8445>], with the following
>    provisos:
>
>    1.  A Trickle ICE agent MUST NOT pair a local candidate until it has
>        been trickled to the remote party.
>
>
>
> What does this mean in practice?
>
>
>
> Must an agent wait until it’s confirmed, via some signaling path specific
> mechanism, that the remote party has received the trickled candidate?
>
>
>
> If it does not mean that, why is it specified as a MUST, or even at all?
>
>
>
> When dealing with asynchronous events like this, there’s little difference
> between trickling the candidate via the signaling path before or after
> pairing it locally. From the remote party’s perspective, it’s unknowable in
> which order the local agent performed the operations.
>
>
>
>
>
> Quote from https://tools.ietf.org/html/rfc8838#section-11
>
>    3.  If a newly formed pair has a local candidate whose type is
>        server-reflexive, the agent MUST replace the local candidate with
>        its base before completing the redundancy check in the next step.
>
>    4.  The agent prunes redundant pairs as described below but checks
>        existing pairs only if they have a state of Waiting or Frozen;
>        this avoids removal of pairs for which connectivity checks are in
>        flight (a state of In-Progress) or for which connectivity checks
>        have already yielded a definitive result (a state of Succeeded or
>        Failed).
>
>        A.  If the agent finds a redundancy between two pairs and one of
>            those pairs contains a newly received remote candidate whose
>            type is peer-reflexive, the agent SHOULD discard the pair
>            containing that candidate, set the priority of the existing
>            pair to the priority of the discarded pair, and re-sort the
>            checklist.  (This policy helps to eliminate problems with
>            remote peer-reflexive candidates for which a STUN Binding
>            request is received before signaling of the candidate is
>            trickled to the receiving agent, such as a different view of
>            pair priorities between the local agent and the remote agent,
>            because the same candidate could be perceived as peer-
>            reflexive by one agent and as server-reflexive by the other
>            agent.)
>
>        B.  The agent then applies the rules defined in Section 6.1.2.4 of
>            [RFC8445] <https://tools.ietf.org/html/rfc8445#section-6.1.2.4>.
>
>
>
>
>
> Paragraph 4.A is difficult to understand.
>
> It appears to read as:
>
> After forming pairs from the newly trickled remote candidate and replacing
> any server-reflexive local candidates with their base, the agent compares
> each of the newly created pairs with existing pairs that are in the Waiting
> or Frozen states.
>
> Then the agent performs the following pseudocode operation:
>
>     for each redundancy of $newPair <=> $existingPair:
>         if $newPair.remoteCandidate.type == peer-reflexive
>         then
>             $existingPair.priority = $newPair.priority
>             discard $newPair
>         fi
>
>
>
> Under what conditions could a remote party send the local agent a
> peer-reflexive candidate that the local agent thinks is the remote party’s
> server-reflexive candidate?
>
>
>
> In section 9, we’re told that trickled candidates must be delivered to the
> remote party in the order they are sent, so this can’t be a situation where
> the remote party sends peer-reflexive and then server-reflexive, but they
> are delivered as server-reflexive before peer-reflexive.
>
>
>
> If it’s the peer discovering it has a peer-reflexive candidate because it
> sent a connectivity check to the local agent before its STUN server
> harvesting finished, then the local agent would not know about the remote
> party having a server-reflexive candidate in the first place (as the peer
> doesn’t know it yet either).
>
>
>
> Is this paragraph trying to say that in the situation where the existing
> pair has a remote peer-reflexive candidate (discovered by the local agent
> receiving an incoming connectivity check from the remote party), and the
> new pair has a remote host/server-reflexive/relay candidate (which we’re
> told of via trickle), the agent should replace the peer-reflexive candidate
> with the new candidate? If so, the wording of the paragraph is, I think,
> wrong, or if not wrong, then too vague. Perhaps something similar to:
>
>
>
> If the agent finds a redundancy between two pairs and one of
> those pairs contains a remote candidate with a peer-reflexive
> type, then the pair with the peer-reflexive remote candidate
> should be discarded, and the checklist should be re-sorted.
>
> This policy helps to eliminate problems with different pair
> priorities between the local agent and remote party when
> the local agent discovers a remote peer-reflexive candidate
> via the procedures in https://tools.ietf.org/html/rfc8445#section-7.3.1.3
> and the remote party later sends a non-peer-reflexive candidate
> with the same transport address as the peer-reflexive candidate.
> If the local agent did not discard the peer-reflexive candidate
> in favor of the newly trickled candidate, the local and remote
> agents would have a different view of pair priorities, which can
> cause slowdowns in the overall ICE negotiation.
>
>
>
> Note that in my suggested wording, I don’t try to differentiate between an
> existing pair and a new pair. That’s not the point of the comparison. The
> point of the comparison is that remote peer-reflexive candidates should be
> replaced with their non-peer-reflexive equivalents whenever possible.
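
FWIW, here is a rough, non-normative sketch of how I read the 4.A pruning
step; the redundancy test is simplified from RFC 8445 Section 6.1.2.4, and
all helper and field names are invented:

    # Simplified redundancy test: the local candidates share a base and the
    # remote candidates have the same transport address.
    def redundant(a, b) -> bool:
        return a.local.base == b.local.base and a.remote.addr == b.remote.addr

    # One reading of 4.A: when a newly formed pair is redundant with an
    # existing Waiting/Frozen pair and the new pair's remote candidate is
    # peer-reflexive, drop the new pair, let the existing pair inherit its
    # priority, and re-sort the checklist so both agents order it the same
    # way. Other redundancies fall through to the normal RFC 8445
    # Section 6.1.2.4 pruning (not shown).
    def add_pair(new_pair, checklist):
        for existing in checklist:
            if existing.state not in ("Waiting", "Frozen"):
                continue   # checks in flight or finished are never pruned here
            if not redundant(new_pair, existing):
                continue
            if new_pair.remote.type == "prflx":
                existing.priority = new_pair.priority
                checklist.sort(key=lambda p: p.priority, reverse=True)
                return     # new_pair is discarded
        checklist.append(new_pair)
        checklist.sort(key=lambda p: p.priority, reverse=True)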
>
>
>
>
>
> Quote from https://tools.ietf.org/html/rfc8838#section-12
>
>    After a local agent has trickled a candidate and formed a candidate
>    pair from that local candidate (Section 9
>    <https://tools.ietf.org/html/rfc8838#section-9>), or after a remote
>    agent has received a trickled candidate and formed a candidate pair
>    from that remote candidate (Section 11
>    <https://tools.ietf.org/html/rfc8838#section-11>), a Trickle ICE agent
>    adds the new candidate pair to a checklist as defined in this section.
>
>
>
> Does this mean:
>
>    After an ICE agent has discovered a new local candidate, or has
>    received a new remote candidate trickled from its remote peer,
>    and has formed one or more candidate pairs from it, a Trickle ICE
>    agent adds the new candidate pair to a checklist as defined in
>    this section.
>
> ?
>
>
>
> Specifically, the wording
>
>    or after a remote agent has received a trickled candidate and formed
>    a candidate pair from that remote candidate
>
> is ambiguous as to whether the local agent is supposed to do something
> when the “remote agent has received a trickled candidate”, or if the local
> agent is the one doing the receiving.
>
>
>
>
>
> Quote from https://tools.ietf.org/html/rfc8838#section-17
>
>    To achieve this, when trickling candidates, agents SHOULD respect the
>    order of components as reflected by their component IDs; that is,
>    candidates for a given component SHOULD NOT be conveyed prior to
>    candidates for a component with a lower ID number within the same
>    foundation.  In addition, candidates SHOULD be paired, following the
>    procedures in Section 12
>    <https://tools.ietf.org/html/rfc8838#section-12>, in the same order
>    they are conveyed.
>
>
>
> Is this saying that the results of harvesting local candidates
> (server-reflexive, relay) should not be communicated to the remote party
> until all anticipated candidates for lower component IDs have finished
> harvesting?
>
>
>
> How does that play out with packet loss?
>
>
>
> For example: if the harvesting transaction for component ID 1 has all its
> outgoing packets dropped, including retries, but the harvesting transaction
> for component ID 2 succeeds right away, then component ID 2 should not be
> communicated to the remote peer until component ID 1 finishes? That’s a
> minimum of 500 ms of delay for component ID 2, potentially much more if
> multiple retries are attempted (and fail) for component ID 1.
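
Read literally, the SHOULD above does imply per-foundation buffering along
these lines (a rough sketch with invented names, assuming one candidate per
component per foundation), which is exactly where the delay described here
would come from:

    from collections import defaultdict

    # Hold back a candidate for component N until candidates for all
    # lower-numbered components of the same foundation have been conveyed.
    class ComponentOrderer:
        def __init__(self, num_components, send):
            self.num_components = num_components
            self.send = send                        # callback that actually trickles
            self.pending = defaultdict(dict)        # foundation -> {component_id: cand}
            self.next_comp = defaultdict(lambda: 1) # next component to convey

        def on_local_candidate(self, cand):
            self.pending[cand.foundation][cand.component_id] = cand
            self._flush(cand.foundation)

        def _flush(self, foundation):
            # Convey candidates strictly in component-ID order; component 2
            # waits here for as long as component 1 has nothing to send.
            while self.next_comp[foundation] <= self.num_components:
                cand = self.pending[foundation].pop(self.next_comp[foundation], None)
                if cand is None:
                    return
                self.send(cand)
                self.next_comp[foundation] += 1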
>
>
>
>
>
>