Re: [Dime] Ongoing Throttling Information in request messages

"Wiehe, Ulrich (NSN - DE/Munich)" <ulrich.wiehe@nsn.com> Tue, 03 December 2013 12:21 UTC

From: "Wiehe, Ulrich (NSN - DE/Munich)" <ulrich.wiehe@nsn.com>
To: ext Maria Cruz Bartolome <maria.cruz.bartolome@ericsson.com>, "dime@ietf.org" <dime@ietf.org>
Date: Tue, 03 Dec 2013 12:21:02 +0000
Subject: Re: [Dime] Ongoing Throttling Information in request messages

Hi MCruz,

thank you for your comments.

What is your interpretation of REQ10? Which interpretation other than "when overload condition ends at the server, client must be able to detect this" is reasonable?

With regard to robustness (see 4) below):
If, for any reason (an attacker, misconfiguration, ...), the client performs incorrect throttling, the server is able to immediately detect and correct this in option B.

With regard to the trade-off between REQ 18 and REQ 13 (see 4) below):
Option B leaves that decision to implementations (even dynamically), while option A makes a blanket decision against REQ 18. Also note that in option B it is not only the (overloaded) server but also the (not at all overloaded) agent that could fulfill REQ 18 without contravening REQ 13.

Best regards
Ulrich

 



-----Original Message-----
From: DiME [mailto:dime-bounces@ietf.org] On Behalf Of ext Maria Cruz Bartolome
Sent: Tuesday, December 03, 2013 12:19 PM
To: dime@ietf.org
Subject: Re: [Dime] Ongoing Throttling Information in request messages

Hello,

See some comments below
Best regards
/MCruz


-----Original Message-----
From: DiME [mailto:dime-bounces@ietf.org] On Behalf Of Wiehe, Ulrich (NSN - DE/Munich)
Sent: Wednesday, 20 November 2013 16:25
To: ext Nirav Salot (nsalot); dime@ietf.org
Subject: Re: [Dime] Ongoing Throttling Information in request messages

Nirav,

thank you for your confirmation.
In order to select the best solution, let me try to start a comparison:

1) A1 can be compared with B1, as both require inserting an AVP in every message (answer/request, while overloaded/while performing throttling): A1 adds (resource-consuming) functionality to the (overloaded) server/reporting node, while B1 adds it to the client/reacting node. Furthermore, the AVP to be inserted in A1 (OC-OLR) is the only OC-specific AVP to be inserted into the answer, whereas the AVP to be inserted in B1 (OC-Ongoing-Throttling-Information) would be in addition to the OC-Feature-Vector, which needs to be added to every request anyway (inserting OC-specific info into requests is needed in any case). Furthermore, A1 seems to violate REQ 13 of RFC 7068 while B1 does not. All of this is clearly a pro for option B.

2) A2 can be compared with B2: A2 either requires additional processing at the server/reporting node or relies on timeouts (violating REQ 10), while B2 does not require any corresponding functionality. This is also a pro for option B.

[MCruz] I think whether REQ 10 is violated may be subject to interpretation (though I would say the requirement text could be improved...). So for this comparison I am not sure this is a key factor.

3) A3 can be compared with B3, as both recommend checking for the presence of new OC-specific info in all received messages: A3 adds the recommended functionality to the (overloaded) server/reporting node while B3 adds it to the client/reacting node. This is a pro for option A.

[MCruz] I presume you meant "_B3_ adds the recommended functionality to the (overloaded) server/reporting node while _A3_ adds it to the client/reacting node".
Here we can question whether option B violates REQ 13 of RFC 7068 while A does not. The key question, then, is how much work each option implies for an overloaded server. It is not easy to reach a conclusion, since it could depend a lot on the implementation.


4) Option B adds a feedback channel from client to server making the solution more robust. 
[MCruz] More robust? In which sense?

It also allows the server to give some priority to already-throttled traffic (see REQ 18), and it allows agents to ensure that traffic already throttled by a downstream node is not throttled again. This is a pro for option B.
[MCruz] In my opinion this is the best advantage of solution B over A. But we also need to highlight that if the server needs, for example, to throttle traffic that is not yet being throttled, this helps to fulfill REQ 18 but works against REQ 13. So if we need to fulfill REQ 13 as much as possible, this advantage vanishes.


5) In option A it is not clear how the client/reacting node should behave when, while performing throttling, it receives an answer without an OC-OLR AVP. This is a con for option A.


In summary my preference is for option B.

Comments are welcome.

Ulrich





-----Original Message-----
From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com] 
Sent: Thursday, November 14, 2013 3:54 PM
To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Ulrich,

I confirm the summary below of having two valid options.

Just wanted to highlight one aspect, in general.
The current definition of the OC-OLR AVP suggests the inclusion of the TimeStamp and ValidityDuration AVPs. These AVPs are applicable/valid/needed in both options below.
Additionally, if we decide to go for option B, we would need to define a new OC-Ongoing-Throttling-Information AVP.

Regards,
Nirav.

-----Original Message-----
From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com] 
Sent: Wednesday, November 13, 2013 9:03 PM
To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Nirav,

in summary it seems that we have two valid options:


Option A: 
A1: the server (or reporting node), while overloaded, must insert the OC-OLR AVP in every answer message.
A2: the server (or reporting node), after leaving the overload state, must continue inserting the OC-OLR AVP (indicating the end of the throttling request) for some time (how long needs to be calculated by the server), or the server must rely on the expiry of outdated overload reports.
A3: the reacting node must/should check the TimeStamp in every OC-OLR AVP received in answer messages in order not to miss an update.

Option B:
B1: the reacting node, while performing throttling, must insert the OC-Ongoing-Throttling-Information AVP in every request message.
B2: void (the reacting node, while no longer throttling, simply does not insert the OC-Ongoing-Throttling-Information AVP).
B3: the reporting node must/should check the OC-Ongoing-Throttling-Information received in a request in order to decide whether or not an OC-OLR must be sent back (a rough sketch of the per-message logic in both options follows below).
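
To make that comparison concrete, here is a minimal sketch (illustrative only, not part of the proposal; the dictionary keys such as "Reduction-Percentage" and the overall structures are assumptions) of what the reporting node would do per answer under each option:

    # Illustrative sketch only: AVP names and structures are assumptions.
    import time

    def build_answer_option_a(overloaded, current_olr, answer):
        # A1: while overloaded, insert the OC-OLR AVP into every answer.
        if overloaded:
            answer["OC-OLR"] = current_olr
        return answer

    def build_answer_option_b(overloaded, current_olr, request, answer):
        # B3: inspect OC-Ongoing-Throttling-Information in the request and
        # return OC-OLR only if no (or outdated) throttling is reported.
        reported = request.get("OC-Ongoing-Throttling-Information")
        if overloaded:
            if reported is None or reported["TimeStamp"] != current_olr["TimeStamp"]:
                answer["OC-OLR"] = current_olr
        elif reported is not None:
            # No longer overloaded but the reacting node still throttles:
            # correct it immediately (cf. the REQ 10 discussion).
            answer["OC-OLR"] = {"TimeStamp": time.time(), "Reduction-Percentage": 0}
        return answer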

Ulrich




-----Original Message-----
From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com] 
Sent: Thursday, November 07, 2013 5:29 PM
To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Ulrich,

I am not sure of the significance of REQ 10 here, but I do not agree with forcing the server to include an overload-report (indicating a non-zero or zero overload condition) in every message.
Even in your example, the overload condition will end at some point - e.g. after 1000 sec. - and then the server can stop including the overload-report.
Besides, I am still not convinced that "during the overload condition, using some logic to avoid inclusion of the overload-report is less resource consuming than simply including the same overload-report".

Yes, the reason behind defining the timestamp is to allow the reacting node to discard the overload-report if the same one was already received from the reporting node.

Regards,
Nirav.

-----Original Message-----
From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com] 
Sent: Thursday, November 07, 2013 3:53 PM
To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Nirav,

please see inline.

Best regards
Ulrich

-----Original Message-----
From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
Sent: Thursday, November 07, 2013 10:15 AM
To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Ulrich,

Please read my responses inline.

Regards,
Nirav.

-----Original Message-----
From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
Sent: Thursday, November 07, 2013 1:54 PM
To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Nirav,

thank you for the explanation. This may be a solution, but it adds more complexity to the reporting node: when leaving the overload state, the reporting node must remember the longest not-yet-expired remainder of the duration values it has sent and must continue reporting "end of overload condition" until that expires. (There is no corresponding complexity at the reacting node in my proposal.) [Nirav] This is not always true; e.g., as I had indicated, if the node has advertised a 10% reduction-percentage for 10 sec., it need not bother to advertise the no-overload condition since the validity-period was very small.
[Ulrich] Also not always true, e.g. if the reporting node has requested a 20% reduction for 1000 sec at t1 and then at t1 + 10 sec sends an update to request a 10% reduction for 10 sec. Although the validity-period is small (10 sec), there may still be a reacting node that did not get the update and keeps on throttling by 20% until t1 + 1000 sec. Furthermore, you seem not to take REQ 10 seriously. My understanding was that timeout is a last resort, not the normal way.

Another question: while performing throttling, what is the expected behaviour of the reacting node when receiving an answer message without an OC-OLR AVP? (Stop or continue throttling?) (There is no corresponding question in my proposal, since not sending the OC-Ongoing-Throttling-Information clearly means that throttling is not in place.) [Nirav] The reacting node should continue to apply the earlier received OC-OLR. 
" (Note: We seem to
   have consensus that a server MAY repeat OLRs in subsequent messages,
   but is not required to do so, based on local policy.)"
[Ulrich] This needs to be reconsidered. See the following example:
Non supporting client C1 sends a request via supporting agent A1 to Server S.
S returns a 10% throttling request. 
C1 sends a new request via supporting agent A2 to S. 
S decides not to repeat the 10% throttling request (hence A2 is not aware of the throttling request). 
C1 sends a new request via A1. 
S decides to repeat the throttling request (redundant). 
Supporting client C2 sends a request via A2 to S.
S decides not to repeat....
To avoid this mess we either need to mandate sending OC-OLR in every answer (while in overload), or let the server keep track of which agent and which client has been updated with the newest info (or consider my proposal).

Another question is for the reacting node: what is the expected behaviour when receiving lots of redundant OC-OLR AVPs in answer messages? The reacting node needs to check every single OC-OLR AVP in order to find out whether it contains an update. Is that the correct understanding? (This seems to be equivalent to the reporting node checking for redundant OC-Ongoing-Throttling-Information AVPs in every request in my proposal.) [Nirav] Please refer to the TimeStamp AVP within OC-OLR.
The TimeStamp AVP indicates when the original OC-OLR AVP with the
   current content was created.  It is possible to replay the same OC-
   OLR AVP multiple times between the overload endpoints, however, when
   the OC-OLR AVP content changes or the other information sending
   endpoint wants the receiving endpoint to update its overload control
   information, then the TimeStamp AVP MUST contain a new value.
[Ulrich] This does not explicitly say that the reacting node must check every TimeStamp received within OC-OLR AVPs, but I guess this is the consequence. Can you please confirm?
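
For illustration, the per-answer check implied here at the reacting node would be roughly the following (a sketch assuming the reacting node keeps the TimeStamp of the last accepted report; all names and structures are illustrative, not taken from the draft):

    # Illustrative sketch of the reacting-node check discussed above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReactingState:
        last_timestamp: Optional[float] = None
        current_olr: Optional[dict] = None

    def on_answer(answer, state):
        olr = answer.get("OC-OLR")
        if olr is None:
            return                                  # no report in this answer
        if state.last_timestamp is not None and olr["TimeStamp"] <= state.last_timestamp:
            return                                  # replay of an already known report
        state.last_timestamp = olr["TimeStamp"]     # newer report: update throttling state
        state.current_olr = olr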


Adding dime-list again.

Ulrich


From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
Sent: Wednesday, November 06, 2013 4:58 PM
To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Ulrich,

During "no overload condition", why should reporting node include overload-information in all the answer messages?
e.g. if the reporting node has advertised an overload-report with period-of-validity=10 sec., it knows that after that period all the reacting nodes will automatically stop applying any overload mitigation action. And hence it does not need to explicitly send any overload-report indicating "no overload condition".

In general, I assume that the overload period would be much shorter compared to the "no overload" period. And hence I do not see a reason to always include the overload-report.

Regards,
Nirav.

From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
Sent: Wednesday, November 06, 2013 9:12 PM
To: Nirav Salot (nsalot); doc-dt@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Nirav,

"During the overload" yes I agree, but "when no longer in overload" all answer messages would contain the information "no longer in overload" while only few request messages would contain the Ongoing-Throttling-Information.

Removing dime-list which bounces.

Best regards
Ulrich
From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
Sent: Wednesday, November 06, 2013 4:02 PM
To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Ulrich,

I might be missing something here so please bear with me.

Number of answer messages sent by server = number of request messages received by the server.
During the overload, the server would receive only those requests which survived throttling.
And then the server would send answer messages to only those request messages.
So "every" and "some" are actually equal in the below equation. No?

Regards,
Nirav.

From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
Sent: Wednesday, November 06, 2013 8:24 PM
To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Nirav,

Not quite.

The question is:
"is sending an overload-report in every answer message that corresponds to request with OC-Feature-Vector present more resource consuming than receiving Ongoing-Throttling-Information in some request messages (only those that survived a throttling)?"

Best regards
Ulrich



From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
Sent: Wednesday, November 06, 2013 3:15 PM
To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Ulrich,

Thanks for clarification.

Then the question would be
"is sending overload-report in every answer message is more resource consuming than the solution below - i.e. receiving OC-Ongoing-Throttling-Information in all request message and analyzing the timestamp and then deciding if the overload-report should be included or not."
I am not sure.

Regards,
Nirav.

From: Wiehe, Ulrich (NSN - DE/Munich) [mailto:ulrich.wiehe@nsn.com]
Sent: Wednesday, November 06, 2013 7:08 PM
To: Nirav Salot (nsalot); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Nirav,
thank you for your comments.

The comparison is between:
Current way: "send OC specific information (Feature-Vector) in every request message and in every corresponding answer message"
My proposal: "send OC specific information (Feature-Vector and in some cases Ongoing-Throttling-Info) in every request message and in corresponding answer messages only when required".

Sending OC-specific information is regarded as a resource-consuming action, and we should not place this action on the (overloaded) server where avoidable. See REQ 13.

Best regards
Ulrich




From: ext Nirav Salot (nsalot) [mailto:nsalot@cisco.com]
Sent: Wednesday, November 06, 2013 12:04 PM
To: Wiehe, Ulrich (NSN - DE/Munich); doc-dt@ietf.org; dime@ietf.org
Subject: RE: Ongoing Throttling Information in request messages

Ulrich,

Thanks for the detailed explanation of your proposal.

Just to verify my understanding,
Your proposal would allow the reporting node to avoid inclusion of the "same" overload-report in the answer message since the request message indicates that the path contains at least one reacting node which already has the latest overload-report.
In other words, the reporting node need not include overload-report in every message, blindly.

To achieve the above objective, the solution below suggests that the reacting node include the new OC-Ongoing-Throttling-Information AVP in every request message which survives throttling.
So the net result is that the inclusion of the overload-report is prevented in every answer message, since the OC-Ongoing-Throttling-Information is included in every request message.
[Wiehe, Ulrich (NSN - DE/Munich)] no - in every request message that survived a throttling.
And hence I am not sure if the solution has a sound benefit from the redundant-information point of view.

In summary, I think the solution suggested below works as explained.
But I fail to see the benefit of using this solution compared to a solution in which the reporting node includes the overload-report in every answer message.

Regards,
Nirav.

From: doc-dt-bounces@ietf.org [mailto:doc-dt-bounces@ietf.org] On Behalf Of Wiehe, Ulrich (NSN - DE/Munich)
Sent: Tuesday, November 05, 2013 9:36 PM
To: doc-dt@ietf.org; dime@ietf.org
Subject: [doc-dt] Ongoing Throttling Information in request messages

Hi,
draft-docdt-dime-ovli-01
in Appendix B, Table 1, REQ 13 says:
        .. Another way
        is to let the request sender (reacting node) insert
        information in the request to say whether a
        throttling is actually performed.  The reporting node
        then can base its decision on information received in
        the request; no need for keeping state to record who
         has received overload reports.  
 
And in Appendix B, Table 1, REQ 18:
        There has been a proposal to mark
        messages that survived overload throttling as one
        method for an overloaded node to address fairness but
        this proposal is not yet part of the solution.  
 
I would like to come back to this proposal. 
A new AVP OC-Ongoing-Throttling-Information inserted in request messages would indicate that the request message survived a throttling. Furthermore, information within the AVP indicates the TimeStamp as received in the previous OC-OLR AVP, according to which the ongoing throttling (which was survived) is performed.
 
OC-Ongoing-Throttling-Information ::= < AVP Header: TBD9 >
                                      < TimeStamp >
                                    * [ AVP ]
 
Supporting clients would insert the OC-Ongoing-Throttling-Information AVP into request messages that survived a throttling performed at that client.
Supporting agents, when receiving a request message that contains an OC-Ongoing-Throttling-Information AVP, would not perform additional throttling (since the client or a downstream agent already performed the throttling). When receiving a request that does not contain an OC-Ongoing-Throttling-Information AVP, they would perform throttling (on behalf of the client) according to a previously received and stored OC-OLR, and if that throttling is survived, the agent would insert the OC-Ongoing-Throttling-Information AVP in the request before sending it further upstream.
Servers (or, in general, reporting nodes) would check the presence and content of the OC-Ongoing-Throttling-Information AVP in received request messages and could detect (together with a check of the presence of the OC-Feature-Vector AVP) whether inserting an OC-OLR AVP in the corresponding answer message is needed in order to update the reacting node with a new OC-OLR.
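
As an illustration of the agent behaviour described above (OC-Feature-Vector handling omitted; the AVP dictionary keys, the expiry helper and the random-drop throttling model are assumptions, not part of the proposal text):

    # Illustrative sketch only.
    import random
    import time

    def olr_expired(olr):
        # Assumed validity check based on the report's TimeStamp plus duration.
        return time.time() > olr["TimeStamp"] + olr["Duration"]

    def agent_handle_request(request, stored_olr):
        if "OC-Ongoing-Throttling-Information" in request:
            return request          # already throttled downstream: forward unchanged
        if stored_olr is not None and not olr_expired(stored_olr):
            # Throttle on behalf of the (non-supporting) client.
            if random.random() * 100 < stored_olr["Reduction-Percentage"]:
                return None         # request is throttled
            # Request survived: record the TimeStamp of the applied OC-OLR.
            request["OC-Ongoing-Throttling-Information"] = {
                "TimeStamp": stored_olr["TimeStamp"],
            }
        return request              # send further upstream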
 
This proposal especially addresses use cases like the following:
 
Architecture:
 
                        +----------------+
                        | supporting     |
                       /| agent A1       |\
  +----------------+ /  +----------------+ \
  | non supporting |/                       \
  | client C       |\                        \
  +----------------+ \  +----------------+    \ +------------+    +---------+
                       \| non supporting |     \| supporting |    | Server  |
                        |  agent A2      |------| agent A3   |----|  S      |
                        +----------------+      +------------+    +---------+
 
1. A request is sent from C via A2 and A3 to S. A3 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A3 has no valid OLR from S stored and therefore does not perform throttling and does not insert an OC-Ongoing-Throttling-Information AVP.
2. S recognizes that there is a reacting node downstream which is actually not performing a throttling. S returns a 10% throttling request (TimeStamp=t1, duration=d) within OC-OLR in the answer which goes back via A3 and A2 to C.
3. A3 stores the 10% throttling request.
4. A new request is sent from C via A2 and A3 to S. A3 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A3 has a valid OLR from S stored and performs a 10% throttling. The request survives and A3 inserts an OC-Ongoing-Throttling-Information AVP with TimeStamp=t1.
5. S recognizes that correct throttling (correct time stamp) is in place and therefore does not return OC-OLR in the answer.
6. A new request is sent from C via A1 and A3 to S. A1 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A1 has no valid OLR from S stored and therefore does not perform throttling and does not insert an OC-Ongoing-Throttling-Information AVP. A3 recognizes that there is a reacting node downstream (OC-Feature-Vector received) and therefore does not take the role of the reacting node.
7. S recognizes that there is a reacting node downstream which is actually not performing a throttling. S returns a 10% throttling request (TimeStamp=t1, duration=d) within OC-OLR in the answer which goes back via A3 and A1 to C.
8. A1 stores the 10% throttling request.
9. A new request is sent from C via A1 and A3 to S. A1 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A1 has a valid OLR from S stored and therefore performs a 10% throttling. The request survives and A1 inserts an OC-Ongoing-Throttling-Information AVP with TimeStamp=t1. A3 recognizes that there is a reacting node downstream (OC-Feature-Vector received) and therefore does not take the role of the reacting node.
10. S recognizes that correct throttling (correct time stamp) is in place and therefore does not return OC-OLR in the answer.
11. Requests sent from C via A1 and A3 to S undergo a 10% throttling at A1, requests sent from C via A2 and A3 to S undergo a 10% throttling at A3.
12. The overload situation in S gets worse for some reason; S decides to request a 20% reduction.
13. A new request is sent from C via A1 and A3 to S. A1 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A1 has a valid OLR from S stored and therefore performs a 10% throttling. The request survives and A1 inserts an OC-Ongoing-Throttling-Information AVP with TimeStamp=t1. A3 recognizes that there is a reacting node downstream (OC-Feature-Vector received) and therefore does not take the role of the reacting node.
14. S recognizes that incorrect throttling (wrong time stamp) is in place and therefore S returns a 20% throttling request (TimeStamp=t2, duration=x) within OC-OLR in the answer which goes back via A3 and A1 to C.
15. A3 (not taking the role of the reacting node) may, and A1 must, store the 20% throttling request (replacing the 10% request).
16. A new request is sent from C via A2 and A3 to S. A3 recognizes that there is no reacting node downstream (no OC-Feature-Vector received) and therefore takes the role of the reacting node and inserts an OC-Feature-Vector AVP. A3 has a valid OLR from S stored and performs a 10% throttling. The request survives and A3 inserts an OC-Ongoing-Throttling-Information AVP with TimeStamp=t1 (assuming A3 did not store the 20% request at step 14).
17. S recognizes that incorrect throttling (wrong time stamp) is in place and therefore S returns a 20% throttling request (TimeStamp=t2, duration=x) within OC-OLR in the answer, which goes back via A3 and A2 to C (a sketch of this server-side check follows after step 19).
18. A3 stores the 20% throttling request (replacing the 10% request).
19. Requests sent from C via A1 and A3 to S undergo a 20% throttling at A1, requests sent from C via A2 and A3 to S undergo a 20% throttling at A3.
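
To make the server-side checks in steps 5, 10, 14 and 17 concrete, here is a minimal sketch of the TimeStamp comparison (illustrative only; the dictionary-based AVP representation is an assumption):

    # Illustrative sketch only.
    def server_needs_new_olr(request, current_olr):
        info = request.get("OC-Ongoing-Throttling-Information")
        if info is None:
            return True                                        # steps 2 and 7: no throttling reported
        return info["TimeStamp"] != current_olr["TimeStamp"]   # steps 14 and 17: stale report

    # Steps 5 and 10: request reports TimeStamp=t1, current OLR is (t1, 10%)  -> no OC-OLR returned.
    # Steps 14 and 17: request reports TimeStamp=t1, current OLR is (t2, 20%) -> return the new OC-OLR.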
 
 
Comments are welcome.
 
Best regards
Ulrich
 
 
_______________________________________________
DiME mailing list
DiME@ietf.org
https://www.ietf.org/mailman/listinfo/dime