Re: [Dime] Ongoing Throttling Information in request messages

Ben Campbell <ben@nostrum.com> Tue, 26 November 2013 22:33 UTC

From: Ben Campbell <ben@nostrum.com>
To: "Wiehe, Ulrich (NSN - DE/Munich)" <ulrich.wiehe@nsn.com>
Cc: "dime@ietf.org" <dime@ietf.org>
Subject: Re: [Dime] Ongoing Throttling Information in request messages

Hi,

I do not object to marking requests that survived throttling in general, although I think we might need some simulations to see if it really helps. However, I have reservations about the use of the timestamp.

Your argument for the timestamp seems to be based on an assumption that it is expensive to resend an existing OLR, or at least that it is more expensive to resend it than to check timestamps to determine if you need to send it. I note that Nirav and Steve both questioned that assumption. So do I.

I think in most cases, the execution of logic to determine if a node needs to resend an OLR is likely to be more expensive than just sending it.

We're talking about a reporting node that's already in overload. That means it already has insufficient resources to handle the offered load. There's a fair chance that one of those constrained resources is CPU cycles. It's _not_ particularly likely that the constrained resource is the network link between the reporting node and its reacting nodes. If those links were overloaded, the reporting node would probably be seeing less traffic in the first place. Inserting an already (or mostly already) constructed OLR into a Diameter answer should be cheap compared to evaluating, per request, whether that OLR needs to be sent at all.

Again, this is probably something better tested in simulation than just assumed--I'm just going on my own instincts and experience.

One approach to this would be to _allow_ a reporting node to use the timestamp to avoid sending extra copies of an OLR, but not _require_ it. That is, treat it as a hint that the server may use if doing so helps it.
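
To make that concrete, here's a minimal sketch of what I mean by treating the timestamp as a hint (Python-ish pseudocode; the function and field names are mine and purely illustrative, not proposed text):

  # Illustrative only: the reporting node MAY use the timestamp echoed in the
  # request to skip resending an OLR, but can always just resend it instead.

  def build_answer(answer, request, current_olr, use_hint=False):
      """current_olr is a dict like {"timestamp": t, "reduction": 10},
      or None when the node is not overloaded."""
      if current_olr is None:
          return answer                              # not overloaded, nothing to report
      if use_hint:
          echoed = request.get("OC-Ongoing-Throttling-Information")
          if echoed and echoed.get("TimeStamp") == current_olr["timestamp"]:
              return answer                          # reacting node already has this OLR
      answer["OC-OLR"] = current_olr                 # default: simply (re)send the OLR
      return answer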

Thanks!

Ben.

On Nov 20, 2013, at 9:24 AM, Wiehe, Ulrich (NSN - DE/Munich) <ulrich.wiehe@nsn.com> wrote:

> Nirav,
> 
> thank you for your confirmation.
> In order to select the best solution let me try to start a comparison:
> 
> 1) A1 can be compared with B1 as both require inserting an AVP in every message (answer/request, while overloaded/while performing throttling): A1 adds (resource-consuming) functionality to the (overloaded) server/reporting node while B1 adds it to the client/reacting node. Furthermore, the AVP to be inserted in A1 (OC-OLR) is the only OC-specific AVP to be inserted into the answer, whereas the AVP to be inserted in B1 (OC-Ongoing-Throttling-Information) would be in addition to the OC-Feature-Vector, which in any case needs to be added to every request (inserting OC-specific info into requests is needed anyway). Furthermore, A1 seems to violate REQ 13 of RFC 7068 while B1 does not. All this is clearly a pro for option B.
> 
> 2) A2 can be compared with B2: A2 either requires additional processing at the server/reporting node or relies on timeouts (violating REQ 10), while B2 does not require any corresponding functionality. This is also a pro for option B.
> 
> 3) A3 can be compared with B3 as both recommend checking for the presence of new OC-specific info in all received messages: A3 adds the recommended functionality to the (overloaded) server/reporting node while B3 adds it to the client/reacting node. This is a pro for option A.
> 
> 4) Option B adds a feedback channel from client to server making the solution more robust. It also allows the server to give some priority to already throttled traffic (see REQ 18) and it allows agents to ensure that already throttled traffic (by a downstream node) is not throttled again. This is a pro for option B.
> 
> 5) In Option A it is not clear how the client/reacting node should behave when it receives an answer without OC-OLR AVP while performing throttling. This is a con for option A.
> 
> 
> In summary my preference is for option B.
> 
> Comments are welcome.
> 
> Ulrich
> 
> 
> 
> 
> 
> -----Original Message-----
> From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com] 
> Sent: Thursday, November 14, 2013 3:54 PM
> To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Ulrich,
> 
> I confirm the summary below of having two valid options.
> 
> Just wanted to highlight one aspect, in general.
> The current definition of the OC-OLR AVP suggests the inclusion of the TimeStamp and ValidityDuration AVPs. And these AVPs are applicable/valid/needed in both of the options below.
> Additionally, if we decide to go for option B, we would need to define a new OC-Ongoing-Throttling-Information AVP.
> 
> Regards,
> Nirav.
> 
> -----Original Message-----
> From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com] 
> Sent: Wednesday, November 13, 2013 9:03 PM
> To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Nirav,
> 
> in summary it seems that we have two valid options:
> 
> 
> Option A: 
> A1: the server (or reporting node), while overloaded, must insert the OC-OLR AVP in every answer message.
> A2: the server (or reporting node), after leaving the overload state, must continue inserting the OC-OLR AVP (indicating the end of the throttling request) for some time (how long needs to be calculated by the server), or the server must rely on expiry of outdated overload reports.
> A3: the reacting node must/should check the timeStamp in every OC-OLR AVP received in answer messages in order not to miss an update.
> 
> Option B:
> B1: the reacting node, while performing throttling, must insert the OC-Ongoing-Throttling-Information in every request message.
> B2: void (the reacting node, while no longer throttling, simply does not insert OC-Ongoing-Throttling-Information)
> B3: the reporting node must/should check OC-Ongoing-Throttling-Information received in a request in order to decide whether or not OC-OLR must be sent back. 
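> 
> A rough sketch of the check in B3 (Python-ish pseudocode; the names are mine and only illustrate the idea, they are not proposed normative text):
> 
>   def needs_olr(request, current_olr):
>       """B3: decide whether the answer must carry an OC-OLR.
>       current_olr is the server's current report, or None when not overloaded."""
>       if "OC-Feature-Vector" not in request:
>           return False                     # no reacting node on the path
>       echoed = request.get("OC-Ongoing-Throttling-Information")
>       if current_olr is None:
>           return echoed is not None        # still throttling although overload has ended
>       if echoed is None:
>           return True                      # reacting node is not throttling yet
>       return echoed["TimeStamp"] != current_olr["timestamp"]   # throttling based on an old OLR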
> 
> Ulrich
> 
> 
> 
> 
> -----Original Message-----
> From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com] 
> Sent: Thursday, November 07, 2013 5:29 PM
> To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Ulrich,
> 
> I am not sure of the significance of REQ 10 here, but I do not agree with forcing the server to include an overload-report (indicating a non-zero or zero overload condition) in every message.
> Even in your example, the overload condition will end at some point - e.g. after 1000 sec. - and then the server can stop including the overload-report.
> Besides, I am still not convinced that "during the overload condition, using some logic to avoid inclusion of the overload-report is less resource consuming than simply including the same overload-report".
> 
> Yes, the reason behind defining timestamp is to allow the reacting node to discard the overload-report if the same was already received from the reporting node.
> 
> Regards,
> Nirav.
> 
> -----Original Message-----
> From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com] 
> Sent: Thursday, November 07, 2013 3:53 PM
> To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Nirav,
> 
> please see inline.
> 
> Best regards
> Ulrich
> 
> -----Original Message-----
> From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
> Sent: Thursday, November 07, 2013 10:15 AM
> To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Ulrich,
> 
> Please read my responses inline.
> 
> Regards,
> Nirav.
> 
> -----Original Message-----
> From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
> Sent: Thursday, November 07, 2013 1:54 PM
> To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Nirav,
> 
> thank you for the explanation. This may be a solution, but it adds more complexity to the reporting node: The reporting node must remember the maximum not-yet-expired fraction of the duration values it sent when leaving the overload state and must continue reporting "end of overload condition" until expiry. (There is no corresponding complexity at the reacting node in my proposal.) [Nirav] This is not always true, e.g. as I had indicated, if the node has advertised a 10% reduction-percentage for 10 sec., it need not bother to advertise a no-overload condition since the validity-period was very small.
> [Ulrich] Also not always true, e.g. if the reporting node has requested a 20% reduction for 1000 sec at t1 and then at t1 + 10 sec sends an update to request a 10% reduction for 10 sec: although the validity-period is small (10 sec), there may still be a reacting node that did not get the update and keeps on throttling by 20% until t1 + 1000 sec. Furthermore, you seem not to take REQ 10 seriously. My understanding was that timeout is a last resort, not the normal way.
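> 
> Roughly, the extra bookkeeping I mean for A2 would look like this (Python-ish pseudocode, my own illustrative names):
> 
>   import time
> 
>   def attach_olr(answer, state):
>       """state holds: "olr" (the current overload report, or None once recovered),
>       "latest_expiry" (latest absolute time until which a previously sent OLR may
>       still be acted upon), and "end_olr" (an OLR announcing end of throttling)."""
>       now = time.time()
>       if state["olr"] is not None:
>           # A1: while overloaded, include the OLR in every answer and remember
>           # how long the advertised validity can outlive the overload state.
>           state["latest_expiry"] = max(state["latest_expiry"],
>                                        now + state["olr"]["validity"])
>           answer["OC-OLR"] = state["olr"]
>       elif now < state["latest_expiry"]:
>           # A2: after recovery, keep announcing the end of throttling until every
>           # previously advertised validity period has expired.
>           answer["OC-OLR"] = state["end_olr"]
>       return answer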
> 
> Another question: While performing throttling, what is the expected behaviour of the reacting node when receiving an answer message without an OC-OLR AVP? (stop/continue throttling?) (There is no corresponding question in my proposal, since not sending the OC-Ongoing-Throttling-Information clearly means that throttling is not in place.) [Nirav] The reacting node should continue to apply the earlier received OC-OLR. 
> " (Note: We seem to
>   have consensus that a server MAY repeat OLRs in subsequent messages,
>   but is not required to do so, based on local policy.)"
> [Ulrich] This needs to be reconsidered. See the following example:
> Non supporting client C1 sends a request via supporting agent A1 to Server S.
> S returns a 10% throttling request. 
> C1 sends a new request via supporting agent A2 to S. 
> S decides not to repeat the 10% throttling request (hence A2 is not aware of the throttling request). 
> C1 sends a new request via A1. 
> S decides to repeat the throttling request (redundant). 
> Supporting client C2 sends a request via A2 to S.
> S decides not to repeat....
> To avoid this mess we either need to mandate sending OC-OLR in every answer (while in overload) or let the Server keep track of which agent and which client is updated with the newest info (or consider my proposal).
> 
> Another question is for the reacting node: What is the expected behaviour when receiving lots of redundant OC-OLR AVPs in answer messages? The reacting node needs to check every single OC-OLR AVP in order to find out whether it contains an update. Is that the correct understanding? (this seems to be equivalent to checking for redundant OC-Ongoing-Throttling-Information AVPs in every request by the reporting node in my proposal) [Nirav] Please refer to Timestamp AVP within OC-OLR.
> The TimeStamp AVP indicates when the original OC-OLR AVP with the
>   current content was created.  It is possible to replay the same OC-
>   OLR AVP multiple times between the overload endpoints, however, when
>   the OC-OLR AVP content changes or the other information sending
>   endpoint wants the receiving endpoint to update its overload control
>   information, then the TimeStamp AVP MUST contain a new value.
> [Ulrich] This does not explicitly say that the reacting node must check every TimeStamp received within OC-OLR AVPs. But I guess this is the consequence. Can you please confirm.
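> 
> For illustration, the per-answer check I understand to be required at the reacting node would be roughly the following (my own names, Python-ish pseudocode):
> 
>   def on_answer(answer, state):
>       """state maps the reporting node's Origin-Host to the last accepted OC-OLR."""
>       olr = answer.get("OC-OLR")
>       if olr is None:
>           return                           # nothing new, keep the current behaviour
>       origin = answer["Origin-Host"]
>       stored = state.get(origin)
>       if stored is None or olr["TimeStamp"] > stored["TimeStamp"]:
>           state[origin] = olr              # new content: adopt the update
>       # otherwise it is a replay of an already known OLR and can be ignored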
> 
> 
> Adding dime-list again.
> 
> Ulrich
> 
> 
> From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
> Sent: Wednesday, November 06, 2013 4:58 PM
> To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Ulrich,
> 
> During "no overload condition", why should reporting node include overload-information in all the answer messages?
> e.g. if the reporting node has advertised an overload-report with period-of-validity=10 sec., it knows that after that period all the reacting nodes will automatically stop applying any overload mitigation action. And hence it does not need to explicitly send any overload-report indicating "no overload condition".
> 
> In general, I assume that the overload period would be much shorter compared to the "no overload" period. And hence I do not see a reason to always include the overload-report.
> 
> Regards,
> Nirav.
> 
> From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
> Sent: Wednesday, November 06, 2013 9:12 PM
> To: Nirav Salot (nsalot); doc-dt@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Nirav,
> 
> "During the overload" yes I agree, but "when no longer in overload" all answer messages would contain the information "no longer in overload" while only few request messages would contain the Ongoing-Throttling-Information.
> 
> Removing dime-list which bounces.
> 
> Best regards
> Ulrich
> From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
> Sent: Wednesday, November 06, 2013 4:02 PM
> To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Ulrich,
> 
> I might be missing something here so please bear with me.
> 
> Number of answer messages sent by server = number of request messages received by the server.
> During the overload, the server would receive only those requests which survived throttling.
> And then the server would send answer messages to only those request messages.
> So "every" and "some" are actually equal in the below equation. No?
> 
> Regards,
> Nirav.
> 
> From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
> Sent: Wednesday, November 06, 2013 8:24 PM
> To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Nirav,
> 
> Not quite.
> 
> The question is:
> "is sending an overload-report in every answer message that corresponds to request with OC-Feature-Vector present more resource consuming than receiving Ongoing-Throttling-Information in some request messages (only those that survived a throttling)?"
> 
> Best regards
> Ulrich
> 
> 
> 
> From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
> Sent: Wednesday, November 06, 2013 3:15 PM
> To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Ulrich,
> 
> Thanks for clarification.
> 
> Then the question would be
> "is sending overload-report in every answer message is more resource consuming than the solution below - i.e. receiving OC-Ongoing-Throttling-Information in all request message and analyzing the timestamp and then deciding if the overload-report should be included or not."
> I am not sure.
> 
> Regards,
> Nirav.
> 
> From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
> Sent: Wednesday, November 06, 2013 7:08 PM
> To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Nirav,
> thank you for your comments.
> 
> The comparison is between:
> Current way: "send OC specific information (Feature-Vector) in every request message and in every corresponding answer message"
> My proposal: "send OC specific information (Feature-Vector and in some cases Ongoing-Throttling-Info) in every request message and in corresponding answer messages only when required".
> 
> Sending OC-specific information is regarded as a resource-consuming action and we should not put this action on the (overloaded) server where avoidable. See REQ 13.
> 
> Best regards
> Ulrich
> 
> 
> 
> 
> From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
> Sent: Wednesday, November 06, 2013 12:04 PM
> To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
> Subject: RE: Ongoing Throttling Information in request messages
> 
> Ulrich,
> 
> Thanks for the detailed explanation of your proposal.
> 
> Just to verify my understanding,
> Your proposal would allow the reporting node to avoid inclusion of the "same" overload-report in the answer message since the request message indicates that the path contains at least one reacting node which already has the latest overload-report.
> In other words, the reporting node need not include overload-report in every message, blindly.
> 
> To achieve the above objective, the solution below suggests that the reacting node include the new OC-Ongoing-Throttling-Information AVP in every request message which survives throttling.
> So the net result is that the inclusion of the overload-report is prevented in every answer message, since the OC-Ongoing-Throttling-Information is included in every request message.
> [Wiehe, Ulrich (NSN - DE/Munich)] no: only in every request message that survived a throttling.
> And hence I am not sure if the solution has a sound benefit from the point of view of avoiding redundant information.
> 
> In summary, I think the solution suggested below works as explained.
> But I fail to see the benefit of using this solution compared to a solution in which the reporting node includes an overload-report in every answer message.
> 
> Regards,
> Nirav.
> 
> From: doc-dt-bounces@ietf.org [mailto:doc-dt-bounces@ietf.org] On Behalf Of Wiehe, Ulrich (NSN - DE/Munich)
> Sent: Tuesday, November 05, 2013 9:36 PM
> To: doc-dt@ietf.org; dime@ietf.org
> Subject: [doc-dt] Ongoing Throttling Information in request messages
> 
> Hi,
> draft-docdt-dime-ovli-01
> in Appendix B, Table 1, REQ 13 says:
>         .. Another way
>         is to let the request sender (reacting node) insert
>         information in the request to say whether a
>         throttling is actually performed.  The reporting node
>         then can base its decision on information received in
>         the request; no need for keeping state to record who
>         has received overload reports.  
>  
> And in Appendix B, Table 1, REQ 18:
>         There has been a proposal to mark
>         messages that survived overload throttling as one
>         method for an overloaded node to address fairness but
>         this proposal is not yet part of the solution.  
>  
> I would like to come back to this proposal. 
> A new AVP OC-Ongoing-Throttling-Information inserted in request messages would indicate that the request message survived a throttling. Furthermore, information within the AVP indicates the TimeStamp as received in the previous OC-OLR AVP, according to which the ongoing throttling (which was survived) is performed.
>  
> OC-Ongoing-Throttling-Information ::= < AVP Header: TBD9 >
>               < TimeStamp >
>             * [ AVP ]
>  
> Supporting Clients would insert the OC-Ongoing-Throttling-Information AVP into request messages that survived a throttling performed at that client.
> Supporting Agents, when receiving a request message that contains an OC-Ongoing-Throttling-Information AVP, would not perform additional throttling (since the client or a downstream agent already performed the throttling), while, when receiving a request that does not contain the OC-Ongoing-Throttling-Information AVP, they would perform throttling (on behalf of the client) according to a previously received and stored OC-OLR; if that throttling is survived, the agent would insert the OC-Ongoing-Throttling-Information AVP into the request before sending it further upstream.
> Servers (or, in general, reporting nodes) would check the presence and content of the OC-Ongoing-Throttling-Information AVP in received request messages and could detect (together with a check of the presence of the OC-Feature-Vector AVP) whether inserting an OC-OLR AVP in the corresponding answer message is needed in order to update the reacting node with a new OC-OLR.
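> 
> As a rough illustration of the agent behaviour described above (Python-ish pseudocode; the names and the simplistic random-drop throttle are mine, purely illustrative):
> 
>   import random
> 
>   def agent_forward(request, stored_olr):
>       """Return the (possibly marked) request to send upstream, or None if throttled.
>       stored_olr is a dict like {"timestamp": t1, "reduction": 10} or None."""
>       if "OC-Ongoing-Throttling-Information" in request:
>           return request                   # a downstream node already throttled; do not throttle again
>       if stored_olr is None:
>           return request                   # no valid OLR stored; nothing to do
>       if random.random() < stored_olr["reduction"] / 100.0:
>           return None                      # throttled here on behalf of the client
>       request["OC-Ongoing-Throttling-Information"] = {
>           "TimeStamp": stored_olr["timestamp"]    # mark the survivor before forwarding
>       }
>       return request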
>  
> This proposal especially addresses use cases like the following:
>  
> Architecture:
>  
>                         +----------------+
>                         | supporting     |
>                        /| agent A1       |\
>    +----------------+ / +----------------+ \
>    | non supporting |/                      \
>    | client C       |\                       \
>    +----------------+ \ +----------------+    \ +------------+    +---------+
>                        \| non supporting |     \| supporting |    | Server  |
>                         |  agent A2      |------| agent A3   |----|  S      |
>                         +----------------+      +------------+    +---------+
>  
> 1. A request is sent from C via A2 and A3 to S. A3 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A3 has no valid OLR from S stored and therefore does not perform throttling and does not insert an OC-Ongoing-Throttling-Information AVP.
> 2. S recognizes that there is a reacting node downstream which is actually not performing a throttling. S returns a 10% throttling request (TimeStamp=t1, duration=d) within OC-OLR in the answer which goes back via A3 and A2 to C.
> 3. A3 stores the 10% throttling request.
> 4. A new request is sent from C via A2 and A3 to S. A3 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A3 has a valid OLR from S stored and performs a 10% throttling. The request survives and A3 inserts an OC-Ongoing-Throttling-Information AVP with timeStamp=t1.
> 5. S recognizes that correct throttling (correct time stamp) is in place and therefore does not return OC-OLR in the answer.
> 6. A new request is sent from C via A1 and A3 to S. A1 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A1 has no valid OLR from S stored and therefore does not perform throttling and does not insert an OC-Ongoing-Throttling-Information AVP. A3 recognizes that there is a reacting node downstream (OC-Feature-Vector received) and therefore does not take the role of the reacting node.
> 7. S recognizes that there is a reacting node downstream which is actually not performing a throttling. S returns a 10% throttling request (TimeStamp=t1, duration=d) within OC-OLR in the answer which goes back via A3 and A1 to C.
> 8. A1 stores the 10% throttling request.
> 9. A new request is sent from C via A1 and A3 to S. A1 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A1 has a valid OLR from S stored and therefore performs a 10% throttling. The request survives and A1 inserts an OC-Ongoing-Throttling-Information AVP with timeStamp=t1. A3 recognizes that there is a reacting node downstream (OC-Feature-Vector received) and therefore does not take the role of the reacting node.
> 10. S recognizes that correct throttling (correct time stamp) is in place and therefore does not return OC-OLR in the answer.
> 11. Requests sent from C via A1 and A3 to S undergo a 10% throttling at A1, requests sent from C via A2 and A3 to S undergo a 10% throttling at A3.
> 12. Overload situation in S for some reason gets worse, S decides to request 20 % reduction.
> 13. A new request is sent from C via A1 and A3 to S. A1 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A1 has a valid OLR from S stored and therefore performs a 10% throttling. The request survives and A1 inserts an OC-Ongoing-Throttling-Information AVP with timeStamp=t1. A3 recognizes that there is a reacting node downstream (OC-Feature-Vector received) and therefore does not take the role of the reacting node.
> 14. S recognizes that incorrect throttling (wrong time stamp) is in place and therefore S returns a 20% throttling request (TimeStamp=t2, duration=x) within OC-OLR in the answer which goes back via A3 and A1 to C.
> 15. A3 (not taking the role of the reacting node) may, and A1 must, store the 20% throttling request (replacing the 10% request).
> 16. A new request is sent from C via A2 and A3 to S. A3 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A3 has a valid OLR from S stored and performs a 10% throttling. The request survives and A3 inserts an OC-Ongoing-Throttling-Information AVP with timeStamp=t1 (assuming A3 did not store the 20% request at step 14).
> 17. S recognizes that incorrect throttling (wrong time stamp) is in place and therefore returns a 20% throttling request (TimeStamp=t2, duration=x) within OC-OLR in the answer, which goes back via A3 and A2 to C.
> 18. A3 stores the 20% throttling request (replacing the 10% request).
> 19. Requests sent from C via A1 and A3 to S undergo a 20% throttling at A1, requests sent from C via A2 and A3 to S undergo a 20% throttling at A3.
>  
>  
> Comments are welcome.
>  
> Best regards
> Ulrich
>  
>  
> _______________________________________________
> DiME mailing list
> DiME@ietf.org
> https://www.ietf.org/mailman/listinfo/dime