Re: [rtcweb] Traffic should be encrypted. (Re: Let's define the purpose of WebRTC)

Neil Stratford <> Mon, 14 November 2011 11:48 UTC


On Mon, Nov 14, 2011 at 11:07 AM, Justin Uberti <> wrote:

>>>>> If two peers have negotiated a data channel, it doesn't make sense to
>>>>> use DTMF at all.
>>>> If a WebRTC capable media server is relaying media to the PSTN you may
>>>> still want to signal some kind of DTMF between the client and the media
>>>> server for onward relay. If the data channel is available, why not use it
>>>> for DTMF?
>>> Why represent the DTMF in an alternate format, when the only thing that
>>> cares about it wants it in RFC 4733?
>> Only a SIP destination endpoint would want it in RFC 4733, a PSTN
>> endpoint is going to want it in-band in the media stream, which the
>> relaying media server from WebRTC to PSTN would likely do. (No SIP here.)
> This is a RTP thing, not a SIP thing. The PSTN gateway will take RTP as
> input, including RFC 4733 telephone-event.

In my example the terminating WebRTC media server *is* the PSTN gateway.
There is no RTP beyond the gateway, just ISDN etc. In this case, to get the
most reliable DTMF transport from the client to the PSTN, I'd have to roll
my own DTMF transport over a DataStream, carry it over the signalling
channel, or accept the lossy RTP DTMF channel. In many cases this DTMF RTP
will be the only RTP sent in that direction - I'm often asked for the
ability to send DTMF without requesting microphone access permission, for
information-only IVR use cases.
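For reference, the "lossy RTP DTMF channel" above is the RFC 4733
telephone-event payload. A minimal sketch of packing one such payload (field
layout per RFC 4733: 8-bit event code, E-bit plus 6-bit volume, 16-bit
duration); the function name and defaults are mine, not from any API:

```javascript
// Pack an RFC 4733 telephone-event payload into its 4-byte wire format.
// Event codes 0-15 map to the DTMF digits below.
const DTMF_EVENTS = '0123456789*#ABCD';

function encodeTelephoneEvent(digit, { end = false, volume = 10, duration = 1600 } = {}) {
  const event = DTMF_EVENTS.indexOf(digit);
  if (event < 0) throw new Error(`not a DTMF digit: ${digit}`);
  const buf = Buffer.alloc(4);
  buf.writeUInt8(event, 0);                        // event code
  buf.writeUInt8((end ? 0x80 : 0) | (volume & 0x3f), 1); // E-bit + volume
  buf.writeUInt16BE(duration, 2);                  // duration in timestamp units
  return buf;
}
```

The lossiness comes from this riding in RTP packets, not from the encoding
itself - hence the appeal of a reliable channel.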

I'd prefer we didn't have a DTMF API and instead leave it up to the
developer to send over the signalling channel (the usual sync arguments
don't apply here if you are sending from an API), but if we do have an API
it makes sense to use the most reliable transport available to carry it.
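To illustrate what "leave it up to the developer" could look like: a trivial
app-level DTMF framing sent over a reliable channel (a data channel or the
signalling path). The JSON message shape here is invented purely for
illustration, not any standard:

```javascript
// Hypothetical DTMF framing for a reliable channel. The gateway would
// parse these and regenerate in-band tones on the PSTN leg.
function makeDtmfMessage(digit, durationMs = 100) {
  if (!/^[0-9A-D*#]$/.test(digit)) throw new Error(`invalid DTMF digit: ${digit}`);
  return JSON.stringify({ type: 'dtmf', digit, durationMs });
}

// Receiver side: dispatch DTMF messages to a handler.
function handleMessage(json, onDtmf) {
  const msg = JSON.parse(json);
  if (msg.type === 'dtmf') onDtmf(msg.digit, msg.durationMs);
  return msg;
}
```

The sender would just do something like `channel.send(makeDtmfMessage('5'))`,
which is why no dedicated browser API seems strictly necessary.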

>> How today am I notified of a quality or resolution change in the decoded
>> incoming video stream? What I want to avoid is the use of an external
>> signalling mechanism to tell me this, so I can avoid the glitchy
>> resize-before-ready - I want to do it when the data really is at the new
>> resolution.
> Agreed. The <video/> element has videoWidth/videoHeight properties, but it
> doesn't appear that a callback exists to indicate changes, so we'll
> probably need to make a new API here.
> Note though that no signaling is required for this case.

I agree that no signalling is required, but I'm not sure that a simple
callback with width/height is enough to enable notification of quality
changes etc. I still think that to build telepresence/broadcast quality
systems we will need closer access to both codec parameters and codec state.
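In the absence of a change callback, the only option today would be to poll.
A sketch of that workaround (the helper is mine; `video` is anything exposing
the standard videoWidth/videoHeight properties, e.g. a <video> element):

```javascript
// Fire onChange only when the decoded size actually changes, so the app
// can resize its UI when the data really is at the new resolution.
function watchResolution(video, onChange) {
  let w = 0, h = 0;
  return function check() { // call on each poll tick or 'timeupdate' event
    if (video.videoWidth !== w || video.videoHeight !== h) {
      w = video.videoWidth;
      h = video.videoHeight;
      onChange(w, h);
    }
  };
}
```

This still says nothing about quality changes at a fixed resolution, which is
exactly where the codec-state access above would be needed.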