Re: [Jmap] new JMAP server for prototyping

Neil Jenkins <neilj@fastmailteam.com> Mon, 24 May 2021 05:00 UTC

Return-Path: <neilj@fastmailteam.com>
X-Original-To: jmap@ietfa.amsl.com
Delivered-To: jmap@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E7EA83A1794 for <jmap@ietfa.amsl.com>; Sun, 23 May 2021 22:00:04 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.797
X-Spam-Level:
X-Spam-Status: No, score=-2.797 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H4=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=fastmailteam.com header.b=q0fjXw/V; dkim=pass (2048-bit key) header.d=messagingengine.com header.b=m6Z/8Fur
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id GFqLgTELYk24 for <jmap@ietfa.amsl.com>; Sun, 23 May 2021 21:59:58 -0700 (PDT)
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com [66.111.4.29]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 918793A1793 for <jmap@ietf.org>; Sun, 23 May 2021 21:59:58 -0700 (PDT)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by mailout.nyi.internal (Postfix) with ESMTP id 86B935C0112 for <jmap@ietf.org>; Mon, 24 May 2021 00:59:55 -0400 (EDT)
Received: from imap42 ([10.202.2.92]) by compute5.internal (MEProxy); Mon, 24 May 2021 00:59:55 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= fastmailteam.com; h=mime-version:message-id:in-reply-to :references:date:from:to:subject:content-type; s=fm2; bh=KGnR8i9 7aHuDyzhsDwpfL296S504TqM5bomVtLs7Rww=; b=q0fjXw/VgABPiryqyKlCi1R U4FFlV2xPas5+CnPQs3IfFtmoACtORz40u7OLJbODV5w1uU2vgkSzeR3yvIxKzvj oLl29zzf3MNQ1247FvYIh4g8TwEYp94UTt6hJwYZOcBTz1p/ZEfo9/ed5GWz/dER GKBmpdjpvwIXjxvOFdcehMfbIuYbshV6lscuJYftZsf530vAo/4hgZI99MDVfen8 36v96H9XtZRfflubOSoeh3H8WbEaEnwRFkkYwM2TKutuY6pY6bXewA8XUQTsXKuc BT+MtHdQFT6rttBB5UV6x08kN353XpimP67xIG5JL/rE2AlfGnUTIHLfAok+Hvg= =
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to:x-me-proxy :x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=KGnR8i 97aHuDyzhsDwpfL296S504TqM5bomVtLs7Rww=; b=m6Z/8FurEHae/lJKo3Uiu4 utX8PXGQvTERS1WDGwe/1WfdRWc7d6mfdGZsTPOMTaGessto0H4C7rVDG4STIa8b AI/HsDFI/+yRBIrW3NGStM/fH+2MZ9OarPpL0c5yxoD9vHx4OJ0+1mjnibA53OLm IEFlaxtqrbIV1bo44e659/llvE+DI6GRB1ukM+wHZ4nEzrVpA7FBgFEJacg/8M2h Uk7yOvys6luM9LOuKbT05YZzTaodAO2MEffetp7wfdlbCpknRtPGYqScPcamGXxr a/fEeORLdycfwdHg/f18ru1yGeZ1svE7YiHsiSLkOGXn8K04xOi5dlhEclRBbdcQ ==
X-ME-Sender: <xms:yzKrYOhkKEgUBgoL1xJSAcAghBBfAHBewL20Q2FafzL-JkWS7NiM5A> <xme:yzKrYPCCsYGLInQ_l1bJzpokqb-Nbi6QLLiKjnpFo9Lw0kcNbWwZhA--A_f7Ls7LY KJT4d7KX0VXig>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdejkedgiedtucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne cujfgurhepofgfggfkjghffffhvffutgesrgdtreerreerjeenucfhrhhomhepfdfpvghi lhculfgvnhhkihhnshdfuceonhgvihhljhesfhgrshhtmhgrihhlthgvrghmrdgtohhmqe enucggtffrrghtthgvrhhnpeehuefhudejtdeiveekvdfhfffgleeflefhfeekhefhkeel kefhfeeufeevffejieenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrih hlfhhrohhmpehnvghilhhjsehfrghsthhmrghilhhtvggrmhdrtghomh
X-ME-Proxy: <xmx:yzKrYGG42ID6Y9hL049kEPd4B9zZAj1YV3SoOxL-ownJNTCNcyuTEw> <xmx:yzKrYHQTPe7onHfOMiVloQrXBVp4nSSMZ_bf8YZs1ZBgb4GJqDpNdA> <xmx:yzKrYLxErxIZ1XkrKkKJ3Nfz4lXPJLXBctDBmt1NQy1Hcsecj3YdoQ> <xmx:yzKrYK-EuZ0Q0BZyrteDOEolzAdbdu4XNOq_R-Rm7ku8zqcgP87RCA>
Received: by mailuser.nyi.internal (Postfix, from userid 501) id 134C6310005F; Mon, 24 May 2021 00:59:55 -0400 (EDT)
X-Mailer: MessagingEngine.com Webmail Interface
User-Agent: Cyrus-JMAP/3.5.0-alpha0-701-g78bd539edf-fm-ubox-20210517.001-g78bd539e
Mime-Version: 1.0
Message-Id: <2fc30015-792a-4545-be1b-49257fec6481@beta.fastmail.com>
In-Reply-To: <20210521202012.GC7261@eh>
References: <20210521202012.GC7261@eh>
Date: Mon, 24 May 2021 14:59:02 +1000
From: "Neil Jenkins" <neilj@fastmailteam.com>
To: "IETF JMAP Mailing List" <jmap@ietf.org>
Content-Type: multipart/alternative; boundary=279631519b3240a58ca8bb7442d1a66b
Archived-At: <https://mailarchive.ietf.org/arch/msg/jmap/DRKI0LaVhztd86mHBSf5wKU4qak>
Subject: Re: [Jmap] new JMAP server for prototyping
X-BeenThere: jmap@ietf.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: JSON Message Access Protocol <jmap.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/jmap>, <mailto:jmap-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/jmap/>
List-Post: <mailto:jmap@ietf.org>
List-Help: <mailto:jmap-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/jmap>, <mailto:jmap-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 24 May 2021 05:00:05 -0000

On Sat, 22 May 2021, at 06:20, Jamey Sharp wrote:
> Looking at a different bit of the text:
> 
>     In the case of records with references to the same type, the server 
>     MUST order the creates and updates within a single method call so 
>     that creates happen before their creation ids are referenced by 
>     another create/update/destroy in the same call.
> 
> I think this wording is a little strange because I don't think destroys 
> can reference creation IDs at all, right?

Technically it can destroy a record created earlier in the same request (it's a weird edge case, but it was worded that way deliberately); I know we actually came across this scenario at some point, although I can't remember why just now.

> But more importantly: I think a strict reading would prohibit circular 
> references in creates, since there is no ordering satisfying this MUST 
> in that case.

Yes, I would read that the same; I think this paragraph does implicitly rule out creating circular references directly.
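
To illustrate, a server could satisfy that MUST with a simple dependency sort over the creates, which also surfaces the circular case. This is just a sketch; `order_creates` and the way it detects "#id" references are my own illustration, not spec text:

```python
def order_creates(creates):
    """Order /set creates so each record is processed before its creation
    id is referenced by another create (illustrative helper, not from the
    spec). `creates` maps creation id -> record dict; a reference is a
    string property value of the form "#<creationId>"."""

    def refs(record):
        # Collect creation ids referenced by this record's string properties.
        return {v[1:] for v in record.values()
                if isinstance(v, str) and v.startswith("#") and v[1:] in creates}

    ordered, done = [], set()

    def visit(cid, path):
        if cid in done:
            return
        if cid in path:
            # No valid ordering exists: creates reference each other.
            raise ValueError("circular creation-id reference: " + cid)
        for dep in refs(creates[cid]):
            visit(dep, path | {cid})
        done.add(cid)
        ordered.append(cid)

    for cid in creates:
        visit(cid, set())
    return ordered

# E.g. a child mailbox referencing its parent's creation id is reordered
# so the parent is created first, regardless of the order sent.
mailboxes = {"b": {"name": "Child", "parentId": "#a"},
             "a": {"name": "Parent"}}
```

Running `order_creates(mailboxes)` puts "a" before "b"; a pair of creates that reference each other raises instead, matching the reading that the paragraph implicitly forbids direct circular references.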

> It seems easy enough for a client to use random creation Ids which are 
> sufficiently large that they never collide with each other or with 
> user-provided data. So if it's intended that clients may use strings 
> which match their own creation Ids, I might just decide this 
> implementation doesn't interoperate with clients that do that, which 
> seems okay in a prototyping setting.

Hmm, I mean, sure, you can do whatever you want for prototyping, but this is definitely not spec compliant. (My client, for example, just uses an incrementing integer for creation ids, which avoids reuse as per the spec but is definitely not guaranteed not to appear by coincidence in other properties.)
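
For what it's worth, a client that wants collision-resistant creation ids can just use random tokens; a sketch (the spec doesn't mandate any particular scheme, and `new_creation_id` is my own name):

```python
import secrets

def new_creation_id():
    # A random 128-bit hex token: a collision with another creation id,
    # or an accidental match against user-provided data, is vanishingly
    # unlikely. (Illustrative only; the spec just requires the client
    # not to reuse a creation id within the same API request.)
    return secrets.token_hex(16)
```

An incrementing integer is simpler and equally valid per spec, but as noted above it can collide by coincidence with strings in other properties.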

> "Most recent success" means a client which violates this 
> "SHOULD" can't predict which object it's going to be referencing, and 
> the target could even be of a different type than expected. But after 
> all, you did tell them they shouldn't do that…

It's a SHOULD because the server behaviour is (in theory) well defined, and in theory you could do something crazy (e.g. try creating three different items and then set a reference to the last one that succeeded…).

> >   The definition of "position" in the response is
> 
> I agree it shouldn't matter, but I think it might be helpful to define 
> it any of these equivalent ways:
> 
> - the position immediately after the last item in the list
> - the length of the list
> - the minimum value of position for which you'd get the same response

Sure, I think this value is a reasonable choice. It does mean there are a few optimisations you can't make (because it's essentially the same as forcing the server to calculate the total), although they are unlikely to be important ones.

> >   Close. In this situation you have to invalidate any results in your 
> >   sparse query after the first gap, but you can still keep the ones 
> >   before.
> 
> Ohhhh. That makes sense, thanks. The spec is wrong here though, I think:
> 
>     The result of this is that if the client has a cached sparse array of
>     Foo ids corresponding to the results in the old state, then:
> 
>     fooIds = [ "id1", "id2", null, null, "id3", "id4", null, null, null ]
> 
>     If it *splices out* all ids in the removed array that it has in its
>     cached results, then:
> 
>        removed = [ "id2", "id31", ... ];
>        fooIds => [ "id1", null, null, "id3", "id4", null, null, null ]
> 
>     and *splices in* (one by one in order, starting with the lowest
>     index) all of the ids in the added array:
> 
>    added = [{ id: "id5", index: 0, ... }];
>    fooIds => [ "id5", "id1", null, null, "id3", "id4", null, null, null ]
> 
> Here, "id31" is shown as removed, which wasn't in fooIds, but "id3" and 
> "id4" come after a gap and weren't removed.

Hey damn, you're right! I think my brain glossed over this when proofreading because the line below starts **and truncates**, and in my code the flag that gets set when you have an unknown destroyed id is called `truncateAtFirstGap`. But the spec is actually just describing the final step of adjusting the query results length, not removing the ids after the gap.
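
In code, the corrected client-side behaviour looks roughly like this (a sketch; the function name and argument shapes are illustrative, not spec text):

```python
def apply_query_changes(foo_ids, removed, added, new_total):
    """Apply a Foo/queryChanges response to a cached sparse id list.
    foo_ids: sparse list of ids, with None marking uncached positions.
    removed: list of removed ids.
    added: list of {"id": ..., "index": ...} entries.
    new_total: the new total number of results reported by the server."""
    ids = list(foo_ids)
    truncate_at_first_gap = False
    for rid in removed:
        if rid in ids:
            ids.remove(rid)  # splice out: later entries shift left
        else:
            # Unknown removed id: positions after the first gap can no
            # longer be trusted, so invalidate everything from there on.
            truncate_at_first_gap = True
    if truncate_at_first_gap and None in ids:
        del ids[ids.index(None):]
    for item in sorted(added, key=lambda a: a["index"]):
        ids.insert(item["index"], item["id"])  # splice in, lowest index first
    # Final step: adjust to the new query length, padding with gaps.
    ids = ids[:new_total]
    ids += [None] * (new_total - len(ids))
    return ids
```

With the spec's example input (removed = ["id2"] only), this reproduces the sparse array shown; with the erroneous "id31" included, the unknown id triggers truncation at the first gap, keeping "id1" but dropping "id3" and "id4".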

Would you like to submit the errata? Or I'm happy to do it.

> >   I'm not quite sure what you mean by "to the same length", but yes if 
> >   you have the old query cached you could optimise the case where the 
> >   upToId is not in the new results a bit further. But again, the spec 
> >   is written to allow implementers to calculate query changes without 
> >   caching the old query state.
> 
> I meant that if the client has cached the first 30 items, then upToId 
> refers to the 30th item they knew about; a delta which gets them back to 
> 30 cached items seems like a reasonable thing to want.
> […]
> I can't quite figure out what client UX the upToId feature supports. I 
> can imagine trying to keep the ten threads starting at #23 visible on 
> the screen, or keep the thread list centered around the thread which is 
> currently being displayed, but there's no reason to expect this will let 
> me do either of those things, right?

It's purely an optimisation, both in what's sent over the wire and (for certain implementations) in how much work the server does. In most scenarios, a client will have just the start of a list cached. As an extreme scenario, suppose we have the first 3 ids cached from a list of 500 results:

[ id1, id2, id3, null4 … null500 ]

The goal of queryChanges is to ensure that the state of the client after applying the changes is an accurate representation of the new state on the server, while sending a minimal delta. Suppose the last 100 items are removed, and a further 50 items are inserted at the end instead. Simply truncating the client's sparse array to the new length (450) means it is now a valid representation of the new state: every id that's there is in the correct position. This is what upToId does: instead of sending 50 "added" and 100 "removed" ids you can just send the new total instead.
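
The client-side step is trivial; roughly (names are illustrative, not from the spec):

```python
def resize_cached_query(cached_ids, new_total):
    """After a queryChanges call with upToId set to the last cached id,
    the client only has to resize its sparse list to the new total:
    every id it still holds remains in the correct position. (Sketch
    under the scenario above; not spec text.)"""
    ids = cached_ids[:new_total]
    ids += [None] * (new_total - len(ids))
    return ids

# Client caches the first 3 of 500 results and sends upToId = "id3".
cached = ["id1", "id2", "id3"] + [None] * 497
# Server: last 100 removed, 50 appended -> new total 450, and no
# added/removed entries are needed at or before upToId.
new_cache = resize_cached_query(cached, 450)
```

The cached prefix survives untouched, and the delta on the wire is essentially just the new total.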

Cheers,
Neil.